| id | source | formatted_source | text |
|---|---|---|---|
1b358762-e045-4cf3-9e77-228d39a085cc | trentmkelly/LessWrong-43k | LessWrong | Video: Skepticon talks
The talks from Skepticon IV are being posted to YouTube.
So far we have:
* Richard Carrier on Bayes (my favorite)
* Julia Galef on the Straw Vulcan
* Greta Christina on angry atheists
* Hemant Mehta on math education
* David Fitzgerald on Mormonism
* J.T. Eberhard on mental illness (a dramatic end to the conference)
* an "atheist revival" by Sam Singleton (on the lighter side)
ADDED:
* "Death Panel" featuring Julia Galef, Eliezer Yudkowsky, Greta Christina, and James Croft
* Darrel Ray on secularism and sex
* Eliezer Yudkowsky on heuristics and biases (really more like a crash course in the core LW sequences)
* Joe Nickell on paranormal investigations (I missed this at the conference; and even more regrettably, missed the chance to ask Joe Nickell what he thinks of many-worlds.)
* Jen McCreight on "skeptical genetics" (the other talk I missed)
* Rebecca Watson on the religious right
* Spencer Greenberg on self-skepticism
* Dan Barker on atheist clergy
More to come soon, hopefully... |
4e9b0f9d-048e-4748-b72c-2e6415d84c36 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Four questions I ask AI safety researchers
Over the last few months, I’ve been trying to develop a stronger inside view on AI safety research agendas.
As part of this quest, I’ve been having conversations with AI safety researchers. I notice myself often asking the following questions:
1. What are you working on, and how does it help us get aligned AI?
2. Imagine you’re talking to a smart high school student. How would you describe the alignment problem? And how would you describe how your work addresses it?
3. Imagine that you came up with a solution to the specific problem you’re working on. Or even more boldly, imagine your entire program of research succeeds. What happens next? Concretely, how does this help us get aligned AI (and prevent unaligned AI)?
4. What are the qualities you look for in promising AI safety researchers? (beyond general intelligence)
I find Question #1 useful for starting the conversation and introducing me to the person’s worldview.
I find Question #2 useful for getting clearer (and often more detailed) explanations of the person’s understanding of the alignment problem & how their work fits in. (*Note that this is somewhat redundant with question #1, but I find that the questions often yield different answers*).
I find Question #3 useful for starting to develop an inside view on the research agenda & the person’s reasoning abilities. (*Note that the point is to see how the person reasons about the question. I’m not looking for a “right answer”, but I am looking to see that someone has seriously thought about this and has reasonable takes.)*
I find Question #4 useful for building stronger models of AI safety community-building.
I also ask a lot of follow-up questions. But I find that these are the four questions that I ask nearly everyone, and I think they’ve helped me develop a stronger inside view.
If I had to pick one question, I would probably ask #3. I think it’s a rather challenging question, and I’m generally impressed when people have coherent answers. |
a9f11a4c-1372-46ce-ba37-1a1424e30350 | trentmkelly/LessWrong-43k | LessWrong | Closed-ended questions aren't as hard as you think
Summary
In this short post, I argue that closed-ended questions, even those of arbitrary difficulty, are not as difficult as they may appear. In particular, I argue that the benchmark HLE (Humanity's Last Exam) is probably easier than it may first seem.[1]
Specifically, I argue:
* Crowd workers find it easier to write easy questions than hard questions. Thus, selection bias causes questions to skew easy.
* In particular, it's tough to write closed-ended questions about problems that are still partially open. For example, there are many difficult and interesting questions which are open problems with a gap between the best lower and upper bounds. It's difficult to formulate a closed-ended question eliciting knowledge about these bounds, without giving away the answer.
* Question evaluators are time-constrained, and can't perfectly determine the difficulty of a question. It's much easier to judge how much jargon a question uses. Thus, jargon-heavy questions that are relatively easy may be overrepresented, and deceptively simple questions that are actually difficult may be underrepresented.
My background is in mathematics, so in this post I'll be focusing on issues that arise in math question-writing. (Currently, HLE is 41% math questions.)
#1 Selection bias causes questions to skew easy
HLE questions are crowdsourced. They are written by crowd workers (e.g., random PhD students with a free evening), and evaluated by a noisy process (time-constrained Scale.AI employees and LLMs).
Crowd workers are incentivized to get as much prize money as possible. Initially, HLE offered $500,000 of prize money: $5,000 each to the top 50 submissions, and $500 each to the next best 500 submissions. Most people are risk averse. Given the structure of the prizes, why sink all of your time into writing one really good question, when you can instead submit several mediocre questions (and potentially get multiple prizes)?
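As a toy illustration (the acceptance probabilities below are made-up assumptions, not HLE statistics), spreading effort across several questions can already look better in expectation, even before accounting for risk aversion:

```python
# Toy expected-value comparison of submission strategies (illustrative numbers
# only; the acceptance probabilities are assumptions, not HLE data).
def expected_prize(p_top50: float, p_top500: float, n_questions: int) -> float:
    return n_questions * (p_top50 * 5_000 + p_top500 * 500)

print(expected_prize(p_top50=0.10, p_top500=0.30, n_questions=1))  # one polished question: $650
print(expected_prize(p_top50=0.02, p_top500=0.15, n_questions=5))  # five quicker questions: $875
```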
Thus, the median person probably submitted a couple of "nice" ques |
f424c3c1-c52e-481f-8904-a0358a24e214 | trentmkelly/LessWrong-43k | LessWrong | Speaking for myself (re: how the LW2.0 team communicates)
Posts made by members of the LessWrong 2.0 team are typically made from the perspective of the individual - even when they are writing in their capacity as LessWrong team members. My (Ruby’s) model of the team’s reason for this is that even if there are collective decisions, there are no collective models. Not real models.
When the team agrees to do something, it is only because enough of the individual team members individually have models which indicate it is the right thing to do. Our models might be roughly the same at a high level such that you can describe a “common denominator” model, but this isn’t an actual model held by an actual person. I think such “common denominator” group models are undesirable for at least the following reasons:
1. Pressure to form consensus reduces the diversity of models, sometimes going from multiple models per person to only a single model for the group. This can then result in overconfidence in the surviving model.
2. There might be no group model. The group might have agreed on a decision, but they never reached consensus on the reasons for it.
3. It is costly to describe group models. Either someone has to draft the model, get feedback, make revisions, repeat, until eventually it is “good enough” or someone describes a model putatively held by the group, but which is not actually representative of the group's thinking.
4. In fact, no individual might endorse the group model as being their own.
5. The person describing the group model doesn’t necessarily understand things they’re including which came from others.
6. In averaging multiple models, detail is lost and you no longer have a model which can usefully generate predictions.
7. No individual owns the model, making it hard for any one person to elucidate, defend, or be held accountable for it.
8. Reluctance to speak on the behalf of others means that very little gets said.
Crucially, group models which get shared externally are very often not the models which we |
72329638-ff99-406e-9cf5-9e9ebad4890e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Comments on "The Singularity is Nowhere Near"
I followed a link on Twitter to a fun and informative 2015 blog post by Tim Dettmers:
[The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/)
**The headline conclusion is that it takes *at least* $10^{21}$ FLOP/s to run the algorithms of a human brain, and therefore "it is unlikely that there will be a technological singularity in this century." I disagree with that, and this post explores why.**
(Specifically, I disagree with "at least $10^{21}$ FLOP/s". There's a *separate* step to go from "at least $10^{21}$ FLOP/s" to "it is unlikely that there will be a technological singularity in this century"—this step is related to Moore's law, bandwidth requirements for parallelization, etc. Tim's blog post has extensive discussion of this second step, and I won't say anything about that here; I'd have to think about it more.)
(I'm writing this in 2021, six years later, but Tim has a comment [on this very site](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines?commentId=TZeRXfvffp6fq3xfH) that says he still stands by that post; in fact he now goes even further and says "I believe that AGI will be physically impossible with classical computers.")
I highly recommend the original post. Indeed, if I didn't like the post so much, I would not have bothered writing a response. :-)
Are brain algorithms computationally expensive to simulate?
-----------------------------------------------------------
Yes! Definitely! I think it's especially telling that nobody has applied the [Dileep George brain-inspired image-processing model](https://science.sciencemag.org/content/358/6368/eaag2612.full?ijkey=DmvGldXIEXVoQ&keytype=ref&siteid=sci) to ImageNet, sticking to much smaller images with far fewer categories of objects (MNIST, CAPTCHAs etc.).
Likewise, [this Randall O'Reilly paper](https://arxiv.org/abs/2006.14800) has a fascinating computational exploration of (in my opinion) different and complementary aspects of the human visual system. That paper tests its theories on a set of ≈1000 256×256-pixel, 8-frame movies from 100 categories—compare to ImageNet's 14 million images from 20,000 categories ... or compare it to the number of visual categories that *you* can recognize. Training the model still took 512 InfiniBand-connected processor nodes running for ≈24 hours on [their campus supercomputer](http://ccm.ucdenver.edu/wiki/Janus_cluster) *(source: personal communication)*. The real human vision system is dramatically larger and more complicated than this model, and the whole brain is larger and more complicated still!
*But,* when I say "computationally expensive to simulate" above, I mean it in, like, *normal-person-in-2021* standards of what's computationally expensive to simulate. A *very different* question is whether the brain is "computationally expensive to simulate" by the standards of GPT-3, the standards of big tech data centers, the standards of "what will be feasible in 2030 or 2040 or 2050", and things like that. There, I don't have a strong opinion. I consider it an open question.
Note also that the two brain-inspired image-recognition examples just above are pushing innovative algorithms, and therefore are presumably handicapped by things like
1. there are probably variations & tweaks on the algorithms that make them work better and faster, and they would not have been discovered yet,
2. nobody has designed new ASICs to run these algorithms efficiently—analogous to how GPU/TPUs are now designed around matrix multiplication and deep neural nets,
3. probably not much effort has gone into making the existing algorithms run optimally on existing hardware.
So anyway, the fact that a couple of today's "most brain-like algorithms" (as judged by me) seem to be computationally expensive to scale up is not much evidence one way or the other for whether brain-like AGI algorithms would be "computationally expensive" with industrial-scale investment in the long-term or even short-term. Again, I consider it an open question.
[Tim's blog post](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/) argues that it is *not* an open question: his estimate is $10^{21}$ FLOP/s to run the algorithms of a human brain, which (he says) puts it out of reach for the century, and maybe (as in his [recent comment](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines?commentId=TZeRXfvffp6fq3xfH)) simply beyond what you can do with a classical computer. And he says that's an underestimate!
This is *quite a bit* more skeptical than Joseph Carlsmith's recent [OpenPhil report](https://www.openphilanthropy.org/brain-computation-report) "How Much Computational Power Does It Take to Match the Human Brain?". That offers many estimation methods which come in at $10^{12}$–$10^{18}$ FLOP/s, with $10^{21}$ being an extreme upper end.
What accounts for the discrepancy?
Where does Tim's estimate of $10^{21}$ FLOP/s come from?
---------------------------------------------------
(Be warned that it's very possible I'm misunderstanding something, and that I have zero experience simulating neurons. I've simulated *lots* of other things, and I've *read* about simulating neurons, but that's different from actually making a neuron simulation with my own hands.)
Let's just jump to the headline calculation:
$$10^{21} = 8.6\times 10^{10} \times 200 \times (10{,}000 \times 5) \times (5 \times 50 \times 5).$$
Let's go through the terms one by one.
* $8.6\times 10^{10}$ is the 86 billion neurons in an adult brain. We're going to calculate the FLOP/s to simulate a "typical" neuron and then multiply it by 86 billion. Of course, different neurons have different complexity to simulate—you can read Tim's blog post for examples, including more specific calculations related to two particular neuron types in the cerebellum, but anyway, I'm on board so far.
* 200 is the number of times per second that you need to do a time-step of the simulation—in other words, the idea is that you want ~5ms time resolution. OK sure, that sounds like the right order of magnitude, as far as I know. I mean, I have the impression that better time-resolution than that is important for *some* processes, but on the other hand, you don't necessarily need to do a fresh calculation from scratch each 5ms. Whatever, I dunno, let's stick with the proposed 200Hz and move on.
* 10,000 is the number of synapses to a typical neuron. [AI impacts says](https://aiimpacts.org/scale-of-the-human-brain/) it should only be 2000-3000. Not sure what the deal is with that. Again it's different for different neurons. Whatever, close enough.
* The first "5" is an assumption that we need to do a separate floating-point operation involving the state of each synapse during each of the 5 most recent timesteps—in other words, to do the calculation for timestep N, the assumption here is that we need to do a separate operation involving the state of each synapse in timestep N, N-1, N-2, N-3, and N-4. I'll get back to this.
* The next 5×50 is the number of dendrite branches (50 dendrites, 5 branches per dendrite). If you don't know, neurons get their inputs from dendrites, which (at least for some neurons) form a remarkably huge tree-like structure. (Of course, this being biology, you can always find crazy exceptions to every rule, like that one [backwards neuron that sends outputs into its dendrites.](https://youtu.be/KBILhSTpzFI?t=170) But I digress.) Dendrite branches are important because of ["dendritic spikes"](https://en.wikipedia.org/wiki/Dendritic_spike), a spike that travels along a dendrite and its branches, also affects neighboring dendrites to some extent, and might or might not trigger a proper neuron spike that goes down the axon. See [Tim's post](https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/) for more on this fascinating topic.
* The last factor of 5 is, again, the assumption that there are slowly-dissipating effects such that to calculate what's going on in timestep N, we need to do a separate operation involving the state of each branch in timestep N, N-1, N-2, N-3, and N-4.
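Multiplying these factors together (a straightforward reimplementation of the arithmetic above, not code from Tim's post) recovers the headline number:

```python
# Multiply out the factors in Tim's headline estimate (my reconstruction of
# the arithmetic above, not code from his post).
neurons         = 8.6e10   # neurons in an adult human brain
timestep_hz     = 200      # simulation steps per second (~5 ms resolution)
synapses        = 10_000   # synapses per "typical" neuron
synapse_history = 5        # operations over the 5 most recent timesteps per synapse
branches        = 5 * 50   # 5 branches per dendrite x 50 dendrites
branch_history  = 5        # operations over the 5 most recent timesteps per branch

flops = neurons * timestep_hz * (synapses * synapse_history) * (branches * branch_history)
print(f"{flops:.2e} FLOP/s")  # ~1.08e21, i.e. roughly 10^21
```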
So all in all, the implicit story behind multiplying these numbers together is:
*Take each neuron A in each timestep B. Then take each synapse C on that neuron, and take each dendritic branch D on that neuron. Take one of the five most recent timesteps E for the synapse, and another one of the five most recent timesteps F for the dendritic branch. Now do at least one floating-point operation involving these particular ingredients, and repeat for all possible combinations.*
I say "no way". That just can't be right, can it?
Let's start with the idea of multiplying the number of synapses by the number of branches. So take a random synapse (synapse #49) and *independently* take a random branch of a random dendrite (branch #12). Most of the time the synapse is not that branch, and indeed not even on that dendrite! Why would we need to do a calculation specifically involving those two things?
If any influence can spread from a synapse way over *here* to a branch way over *there*, I think it would be the kind of thing that can be dealt with in a hierarchical calculation. Like, from the perspective of dendrite #6, you don't need to know the fine-grained details of what's happening in each individual synapse on dendrite #2; all you need to know is some aggregated measure of what's going on at dendrite #2, e.g. whether it's spiking, what mix of chemicals it's dumping out into the soma, or whatever.
So I want to say that the calculation is not O(number of synapses × number of branches), but rather O(number of synapses) + O(number of branches). You do calculations for each synapse, then you do calculations for each branch (or each segment of each branch) that gradually aggregate the effects of those synapses over larger scales. Or something like that.
Next, the time model. I disagree with this too. Again, Tim is budgeting 5×5=25 operations per timestep to deal with time-history. The idea is that at timestep N, you're doing a calculation involving "the state of synapse #18 in timestep (N-3) and of branch #59 in timestep (N-1)", and a different calculation for (N-1) and (N-4), and yet another for (N-2) and (N), etc. etc. I don't think that's how it would work. Instead I imagine that you would track a bunch of *state variables* for the neuron, and update the *state* each timestep. Then your timestep calculation would input the previous state and what's happening now, and would output the new state. So I think it should be a factor of order 1 to account for effects that are prolonged in time. Admittedly, you could say that the number "25" is arguably "a factor of order 1", but whatever. :-P
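Here is a minimal toy sketch of what those two ideas, hierarchical aggregation plus per-branch state variables, might look like (my own illustration, not a serious neuron model); the per-timestep work scales roughly like the number of synapses plus the number of branches:

```python
# Toy sketch: per-timestep work is O(#synapses + #branches), and time history
# is folded into a per-branch state variable rather than re-examined each step.
def neuron_step(synapse_inputs_per_branch, branch_states, decay=0.9, threshold=1.0):
    new_states, branch_spikes = [], []
    for inputs, state in zip(synapse_inputs_per_branch, branch_states):
        drive = sum(inputs)                      # one pass over this branch's synapses
        state = decay * state + drive            # single O(1) state update per branch
        new_states.append(state)
        branch_spikes.append(state > threshold)  # crude stand-in for a dendritic spike
    neuron_fires = sum(branch_spikes) > len(branch_spikes) / 2  # crude soma-level aggregation
    return neuron_fires, new_states

# Example: 3 branches with a few (mostly silent) synapses each.
fires, states = neuron_step([[0.0, 0.6], [0.0], [0.9, 0.4, 0.0]], [0.0, 0.0, 0.0])
```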
Oh, also, in a typical timestep, [most synapses haven't fired for the previous hundreds of milliseconds](https://aiimpacts.org/rate-of-neuron-firing/), so you get another order of magnitude or so reduction in computational cost from sparsity.
So put all that together, and now my back-of-the-envelope is like 50,000× lower than Tim's.
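Here is one way the factors could combine to land in that ballpark (the bookkeeping below is my own rough guess at how the pieces fit together, so treat it as illustrative):

```python
# Per-neuron, per-timestep operation counts under the two accounting schemes
# (my own rough bookkeeping, not a calculation from the original posts).
tims_ops = (10_000 * 5) * (5 * 50 * 5)   # synapses x branches, each with 5-step history
my_ops   = (10_000 + 5 * 50) * 1         # synapses + branches, with O(1) state updates
sparsity = 10                            # most synapses are silent in a given timestep

print(tims_ops / (my_ops / sparsity))    # ~6.1e4, the same rough order as "50,000x"
```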
(By the way, please don't divide $10^{21}$ FLOP/s by 50,000 and call it "Steve's estimate of the computational cost of brain simulations". This is a negative case against the $10^{21}$ number, not a positive case for any model in particular. If you want my opinion, I don't have one right now, as I said above. In the meantime I defer to the [OpenPhil report](https://www.openphilanthropy.org/brain-computation-report).)
(Parts of this section are copying points made in the comment section of Tim's blog.)
(Also, [my favorite paper proposing an algorithmic purpose of dendritic spikes in cortical pyramidal neurons](https://www.frontiersin.org/articles/10.3389/fncir.2016.00023/full) basically proposes that it functions as an *awfully* simple set of ANDs and ORs, more or less. I don't read *too* much into that—I think the dendritic spikes are doing other computations too, which might or might not be more complicated. But I find that example suggestive.)
What about dynamic gene expression, axonal computations, subthreshold learning, etc.?
-------------------------------------------------------------------------------------
To be clear, Tim posited that the $10^{21}$ FLOP/s was an underestimate, because there were lots of other complications neglected by this model. Here's a quote from his post:
> Here is small list of a few important discoveries made in the last two years [i.e. 2013-2015] which increase the computing power of the brain by many orders of magnitude:
>
> * It was shown that brain connections rather than being passive cables, can themselves process information and alter the behavior of neurons in meaningful ways, e.g. brain connections help you to see the objects in everyday life. This fact alone increases brain computational complexity by several orders of magnitude
> * Neurons which do not fire still learn: There is much more going on than electrical spikes in neurons and brain connections: Proteins, which are the little biological machines which make everything in your body work, combined with local electric potential do a lot of information processing on their own — no activation of the neuron required
> * Neurons change their genome dynamically to produce the right proteins to handle everyday information processing tasks. Brain: “Oh you are reading a blog. Wait a second, I just upregulate this reading-gene to help you understand the content of the blog better.” (This is an exaggeration — but it is not too far off)
>
My main response is a post I wrote earlier: [Building brain-inspired AGI is infinitely easier than understanding the brain](https://www.alignmentforum.org/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than). To elaborate and summarize a bit:
* Just because the brain does something in some bizarre, complicated, and inscrutable way, that doesn't mean that it's a very expensive calculation for a future AGI programmer. For example, biology makes oscillators by connecting up neurons into circuits, and these circuits are so complex and confusing that they have stumped generations of top neuroscientists. But if you want an oscillator in an AGI, no problem! It's one line of C code: *y* = sin(*ωt*) !
* If you model a transistor in great detail, it's enormously complicated. Only a tiny piece of that complexity contributes to useful computations. By the same token, you can simulate the brain at arbitrary levels of detail, and (this being biology) you'll find intricate, beautiful, publication-worthy complexity wherever you look. But that doesn't mean that this complexity is playing an essential-for-AGI computational role in the system. (Or if it is, see previous bullet point.)
* Humans can understand how rocket engines work. I just don't see how some impossibly-complicated-Rube-Goldberg-machine of an algorithm can learn rocket engineering. There was no learning rocket engineering in the ancestral environment. There was nothing *like* learning rocket engineering in the ancestral environment!! Unless, of course, you take the phrase *"like learning rocket engineering"* to be *so incredibly broad* that even learning toolmaking, learning botany, learning animal-tracking, or whatever, are "like learning rocket engineering" in the algorithmically-relevant sense. And, yeah, that's totally a good perspective to take! They *do* have things in common! "Patterns tend to recur." "Things are often composed of other things." "Patterns tend to be localized in time and space." You get the idea. If your learning algorithm does not rely on any domain-specific assumptions beyond things like "patterns tend to recur" and "things are often composed of other things" or whatever, then just how impossibly complicated and intricate can the learning algorithm be, really? I just don't see it.
+ (Of course, the *learned model* can be arbitrarily intricate and complicated. I'm talking about the *learning algorithm* here—that's what is of primary interest for AGI timelines, I would argue.)
I don't pretend that this is a rigorous argument, it's intuitions knocking against each other. I'm open to discussion. :-) |
65aee1c3-c588-433f-9153-f706100499d8 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Is it worth the time?
Another gem from xkcd: Is It Worth the Time? It visually answers the question:
> How long can you work on making a routine task more efficient before you're spending more time than you save?
Of course, he's not the first person to ask this question, but the visual is handy. Note that the times are calculated assuming you'll save the time over five years.
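Concretely, here is a sketch of the arithmetic behind the chart (my own code, not Munroe's, using the five-year horizon):

```python
# Break-even optimization budget, assuming the savings accrue over five years
# (a sketch of the chart's arithmetic, not Randall Munroe's own numbers).
def worthwhile_hours(seconds_saved_per_run: float, runs_per_day: float,
                     horizon_days: float = 5 * 365) -> float:
    return seconds_saved_per_run * runs_per_day * horizon_days / 3600

print(worthwhile_hours(60, 1))    # 1 minute saved daily  -> ~30 hours, roughly a day
print(worthwhile_hours(300, 1))   # 5 minutes saved daily -> ~152 hours, roughly six days
```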
For example, I've been pondering how to shorten my showers. If I can shave off 1 minute daily, I should be willing to invest up to (but no more than) an entire day to do it. If I think I can shave off five minutes preparing my breakfast, I should be willing to spend up to six days attempting to do so. |
073ddd89-10cf-4858-8b16-4a76a5d17ce3 | trentmkelly/LessWrong-43k | LessWrong | Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
Full version on arXiv | X
Executive summary
AI risk scenarios usually portray a relatively sudden loss of human control to AIs, outmaneuvering individual humans and human institutions, due to a sudden increase in AI capabilities, or a coordinated betrayal. However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment. This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.
A gradual loss of control of our own civilization might sound implausible. Hasn't technological disruption usually improved aggregate human welfare? We argue that the alignment of societal systems with human interests has been stable only because of the necessity of human participation for thriving economies, states, and cultures. Once this human participation gets displaced by more competitive machine alternatives, our institutions' incentives for growth will be untethered from a need to ensure human flourishing. Decision-makers at all levels will soon face pressures to reduce human involvement across labor markets, governance structures, cultural production, and even social interactions. Those who resist these pressures will eventually be displaced by those who do not.
Still, wouldn't humans notice what's happening and coordinate to stop it? Not necessarily. What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others. For example, we might attempt to use state power and cultural attitudes to preserve human economic power. However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in t |
2d404b45-7382-4deb-bd18-a8146c747bce | StampyAI/alignment-research-dataset/blogs | Blogs | AI Alignment: Why It’s Hard, and Where to Start
Back in May, I gave a talk at Stanford University for the [Symbolic Systems Distinguished Speaker](https://symsys.stanford.edu/viewing/htmldocument/13638) series, titled “**The AI Alignment Problem: Why It’s Hard, And Where To Start**.” The video for this talk is now available on Youtube:
We have an approximately complete transcript of the talk and Q&A session **[here](https://intelligence.org/files/AlignmentHardStart.pdf)**, slides **[here](https://intelligence.org/files/ai-alignment-problem-handoutHQ.pdf)**, and notes and references **[here](https://intelligence.org/stanford-talk/)**. You may also be interested in a shorter version of this talk I gave at NYU in October, “[Fundamental Difficulties in Aligning Advanced AI](https://intelligence.org/nyu-talk/).”
In the talk, I introduce some open technical problems in AI alignment and discuss the bigger picture into which they fit, as well as what it’s like to work in this relatively new field. Below, I’ve provided an abridged transcript of the talk, with some accompanying slides.
Talk outline:
> 1. [Agents and their utility functions](https://intelligence.org/feed/?paged=26#1)
>    1.1. [Coherent decisions imply a utility function](https://intelligence.org/feed/?paged=26#coherent-decisions-imply-a-utility-function)
>    1.2. [Filling a cauldron](https://intelligence.org/feed/?paged=26#filling-a-cauldron)
> 2. [Some AI alignment subproblems](https://intelligence.org/feed/?paged=26#2)
>    2.1. [Low-impact agents](https://intelligence.org/feed/?paged=26#low-impact-agents)
>    2.2. [Agents with suspend buttons](https://intelligence.org/feed/?paged=26#agents-with-suspend-buttons)
>    2.3. [Stable goals in self-modification](https://intelligence.org/feed/?paged=26#stable-goals-in-self-modification)
> 3. [Why expect difficulty?](https://intelligence.org/feed/?paged=26#3)
>    3.1. [Why is alignment necessary?](https://intelligence.org/feed/?paged=26#why-is-alignment-necessary)
>    3.2. [Why is alignment hard?](https://intelligence.org/feed/?paged=26#why-is-alignment-hard)
>    3.3. [Lessons from NASA and cryptography](https://intelligence.org/feed/?paged=26#lessons-from-nasa-and-cryptography)
> 4. [Where we are now](https://intelligence.org/feed/?paged=26#4)
>    4.1. [Recent topics](https://intelligence.org/feed/?paged=26#recent-topics)
>    4.2. [Older work and basics](https://intelligence.org/feed/?paged=26#older-work-and-basics)
>    4.3. [Where to start](https://intelligence.org/feed/?paged=26#where-to-start)
---
### Agents and their utility functions
In this talk, I’m going to try to answer the frequently asked question, “Just what is it that you do all day long?” We are concerned with the theory of artificial intelligences that are advanced beyond the present day, and that make [sufficiently high-quality decisions](https://www.edge.org/conversation/the-myth-of-ai#26015) in the service of whatever goals they may have been programmed with [to be objects of concern](http://www.openphilanthropy.org/research/cause-reports/ai-risk).
##### Coherent decisions imply a utility function
[Slide](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-2.png)

The classic initial stab at this was taken by Isaac Asimov with the Three Laws of Robotics, the first of which is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
And as Peter Norvig observed, the other laws don’t matter—because there will always be some tiny possibility that a human being could come to harm.
*[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/)* has a final chapter that asks, “Well, what if we succeed? What if the AI project actually works?” and observes, “We don’t want our robots to prevent a human from crossing the street because of the non-zero chance of harm.”
To begin with, I’d like to explain the truly basic reason why the three laws aren’t even on the table—and that is because they’re not a *utility function*, and what we need is a utility function.
Utility functions arise when we have [constraints on agent behavior](http://artint.info/html/ArtInt_214.html) that prevent them from being visibly stupid in certain ways. For example, suppose you state the following: “I prefer being in San Francisco to being in Berkeley, I prefer being in San Jose to being in San Francisco, and I prefer being in Berkeley to San Jose.” You will probably spend a lot of money on Uber rides going between these three cities.
If you’re not going to spend a lot of money on Uber rides going in literal circles, we see that your preferences must be ordered. They cannot be circular.
Another example: Suppose that you’re a hospital administrator. You have $1.2 million to spend, and you have to allocate it among options like $500,000 to maintain the MRI machine, $400,000 for an anesthetic monitor, $20,000 for surgical tools, $1 million for a sick child’s liver transplant …
There was an interesting experiment in cognitive psychology where they asked the subjects, “Should this hospital administrator spend $1 million on a liver for a sick child, or spend it on general hospital salaries, upkeep, administration, and so on?”
A lot of the subjects in the cognitive psychology experiment became very angry and wanted to punish the administrator for even thinking about the question. But if you cannot possibly rearrange the money that you spent to save more lives and you have limited money, then your behavior must be consistent with a particular dollar value on human life.
By which I mean, not that you think that larger amounts of money are more important than human lives—by hypothesis, we can suppose that you do not care about money at all, except as a means to the end of saving lives—but that we must be able from the outside to say: “Assign an *X*. For all the interventions that cost less than $*X* per life, we took all of those, and for all the interventions that cost more than $*X* per life, we didn’t take any of those.” The people who become very angry at people who want to assign dollar values to human lives are prohibiting *a priori* efficiently using money to save lives. One of the small ironies.
[Slide](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-7.png)

Third example of a coherence constraint on decision-making: Suppose that I offered you [1A] a 100% chance of $1 million, or [1B] a 90% chance of $5 million (otherwise nothing). Which of these would you pick?
Most people say 1A. Another way of looking at this question, if you had a utility function, would be: “Is the utility of $1 million greater than a mix of 90% $5 million utility and 10% zero dollars utility?” The utility doesn’t have to scale with money. The notion is there’s just some score on your life, some value to you of these things.
[Slide](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-10.png)

Now, the way you run this experiment is then take a different group of subjects—I’m kind of spoiling it by doing it with the same group—and say, “Would you rather have [2A] a 50% chance of $1 million, or [2B] a 45% chance of $5 million?”
Most say 2B. The way in which this is a [paradox](http://lesswrong.com/lw/my/the_allais_paradox/) is that the second game is equal to a coin flip times the first game.
That is: I will flip a coin, and if the coin comes up heads, I will play the first game with you, and if the coin comes up tails, nothing happens. You get $0. Suppose that you had the preferences—not consistent with any utility function—of saying that you would take the 100% chance of a million and the 45% chance of $5 million. Before we start to play the compound game, before I flip the coin, I can say, “OK, there’s a switch here. It’s set A or B. If it’s set to B, we’ll play game 1B. If it’s set to A, we’ll play 1A.” The switch is previously set to A, and before the game starts, it looks like 2A versus 2B, so you pick the switch B and you pay me a penny to throw the switch to B. Then I flip the coin; it comes up heads. You pay me another penny to throw the switch back to A. I have taken your two cents on the subject. I have pumped money out of you, because you did not have a coherent utility function.
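To spell out the inconsistency (this numeric check is my own addition, normalizing the utility of $0 to zero): preferring 1A requires u($1M) > 0.9 × u($5M), while preferring 2B requires 0.45 × u($5M) > 0.5 × u($1M), i.e. u($1M) < 0.9 × u($5M), and no assignment of utilities satisfies both.

```python
# Brute-force check (my own illustration) that no utility assignment with
# u($0) = 0 prefers both 1A over 1B and 2B over 2A.
def consistent_with_both_choices(u_1m: float, u_5m: float) -> bool:
    prefers_1a = u_1m > 0.9 * u_5m           # 100% of $1M beats 90% of $5M
    prefers_2b = 0.45 * u_5m > 0.5 * u_1m    # 45% of $5M beats 50% of $1M
    return prefers_1a and prefers_2b

print(any(consistent_with_both_choices(a / 10, b / 10)
          for a in range(1, 1001) for b in range(1, 1001)))  # False
```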
The overall message here is that there is a set of qualitative behaviors and as long you do not engage in these qualitatively destructive behaviors, you will be behaving as if you have a utility function. It’s what justifies our using utility functions to talk about advanced future agents, rather than framing our discussion in terms of Q-learning or other forms of policy reinforcement. There’s a whole set of different ways we could look at agents, but as long as the agents are sufficiently advanced that we have pumped most of the qualitatively bad behavior out of them, they will behave as if they have coherent probability distributions and consistent utility functions.
##### Filling a cauldron
[Slide](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-14.png)

Let’s consider the question of a task where we have an arbitrarily advanced agent—it might be only slightly advanced, it might be extremely advanced—and we want it to fill a cauldron. Obviously, this corresponds to giving our advanced agent a utility function which is 1 if the cauldron is full and 0 if the cauldron is empty:
$$\mathcal{U}_{robot} =
\begin{cases}
1 &\text{ if cauldron full} \\
0 &\text{ if cauldron empty}
\end{cases}$$
Seems like a kind of harmless utility function, doesn’t it? It doesn’t have the sweeping breadth, the open-endedness of “Do not injure a human nor, *through inaction*, allow a human to come to harm”—which would require you to optimize everything in space and time as far as the eye could see. It’s just about this one cauldron, right?
Those of you who have watched *Fantasia* will be familiar with the result of this utility function, namely: the broomstick keeps on pouring bucket after bucket into the cauldron until the cauldron is overflowing. Of course, this is the logical fallacy of argumentation from fictional evidence—but it’s still quite plausible, given this utility function.
What went wrong? The first difficulty is that the robot’s utility function did not quite match our utility function. Our utility function is 1 if the cauldron is full, 0 if the cauldron is empty, −10 points to whatever the outcome was if the workshop has flooded, +0.2 points if it’s funny, −1,000 points (probably a bit more than that on this scale) if someone gets killed … and it just goes [on and on and on](https://intelligence.org/files/ComplexValues.pdf).
If the robot had only two options, cauldron full and cauldron empty, then the narrower utility function that is only slightly overlapping our own might not be that much of a problem. The robot’s utility function would still have had the maximum at the desired result of “cauldron full.” However, since this robot was sufficiently advanced to have more options, such as repouring the bucket into the cauldron repeatedly, the slice through the utility function that we took and put it into the robot no longer pinpointed the optimum of our actual utility function. (Of course, humans are wildly inconsistent and we don’t really have utility functions, but imagine for a moment that we did.)
Difficulty number two: the {1, 0} utility function we saw doesn’t actually imply a finite amount of effort, [and then being satisfied](https://arbital.com/p/task_goal/). You can always have a slightly greater chance of the cauldron being full. If the robot was sufficiently advanced to have access to galactic-scale technology, you can imagine it dumping very large volumes of water on the cauldron to very slightly increase the probability that the cauldron is full. Probabilities are between 0 and 1, not actually inclusive, so it just keeps on going.
How do we fix this problem? At the point where we say, “OK, this robot’s utility function is misaligned with our utility function. How do we fix that in a way that it doesn’t just break again later?” we are doing AI alignment theory.
---
### Some AI alignment subproblems
##### Low-impact agents
One possible approach you could take would be to try to measure the impact that the robot has and give the robot a utility function that incentivized filling the cauldron with the least amount of other impact—the least amount of other change to the world.
$$\mathcal{U}^2\_{robot}(outcome) =
\begin{cases}
1 -Impact(outcome)&\text{ if cauldron full} \\
0 -Impact(outcome)&\text{ if cauldron empty}
\end{cases}$$
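Here is a minimal sketch of what this modified objective looks like as code. Everything in it is a made-up placeholder (the outcome dictionaries, the `impact` helper); the entire difficulty is hidden inside `impact`, which is exactly the function the next paragraphs ask how to compute.

```python
def cauldron_full(outcome) -> bool:
    return outcome.get("cauldron_full", False)

def impact(outcome) -> float:
    # Placeholder: how to actually measure "impact" is the open problem
    # discussed below; here it is just a number stored in the outcome dict.
    return outcome.get("impact", 0.0)

def utility(outcome) -> float:
    base = 1.0 if cauldron_full(outcome) else 0.0
    return base - impact(outcome)

def best_action(actions, predict_outcome):
    # Ordinary argmax over forecasted outcomes.
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Toy example: flooding the workshop fills the cauldron but gets penalized.
outcomes = {
    "pour_one_bucket": {"cauldron_full": True, "impact": 0.1},
    "flood_workshop":  {"cauldron_full": True, "impact": 5.0},
    "do_nothing":      {"cauldron_full": False, "impact": 0.0},
}
print(best_action(outcomes, lambda a: outcomes[a]))  # -> pour_one_bucket
```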
OK, but how do you actually calculate this impact function? Is it just going to go wrong the way our “1 if cauldron is full, 0 if cauldron is empty” went wrong?
Try number one: You imagine that the agent’s model of the world looks something like a dynamic Bayes net where there are causal relations between events in the world and causal relations are regular. The sensor is going to still be there one time step later, the relation between the sensor and the photons heading into the sensor will be the same one time step later, and our notion of “impact” is going to be, “How many nodes did your action disturb?”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-27.png)What if your agent starts out with a dynamic-Bayes-net-based model, but it is sufficiently advanced that it can reconsider the ontology of its model of the world, much as human beings did when they discovered that there was apparently taste, but in actuality only particles in the void?
In particular, they discover Newton’s Law of Gravitation and suddenly realize: “Every particle that I move affects every other particle in its future light cone—everything that is separated by a ray of light from this particle will thereby be disturbed.” My hand over here is accelerating the moon toward it, wherever it is, at roughly \(10^{-30}\) meters per second squared. It’s a very small influence, quantitatively speaking, but it’s there.
When the agent is just a little agent, the impact function that we wrote appears to work. Then the agent becomes smarter, and the impact function stops working—because every action is penalized the same amount.
“OK, but that was a dumb way of measuring impact in the first place,” we say (hopefully before the disaster, rather than after the disaster). Let’s try a distance penalty: how *much* did you move all the particles? We’re just going to try to give the AI a model language such that whatever new model of the world it updates to, we can always look at all the elements of the model and put some kind of distance function on them.
There’s going to be a privileged “do nothing” action. We’re going to measure the distance on all the variables induced by doing action *a* instead of the null action Ø:
$$\sum\_i || x^a\_i - x^Ø\_i ||$$

Now what goes wrong? I’d actually say: take 15 seconds and think about what might go wrong if you program this into a robot.
Here’s three things that might go wrong. First, you might try to offset even what we would consider the desirable impacts of your actions. If you’re going to cure cancer, make sure the patient still dies! You want to minimize your impact on the world while curing cancer. That means that the death statistics for the planet need to stay the same.
Second, some systems are in principle chaotic. If you disturb the weather, allegedly, the weather in a year will be completely different. If that’s true, you might as well move all of the atoms in the atmosphere around however you like! They’ll all be going to different places anyway. You can take the carbon dioxide molecules and synthesize them into things that involve diamondoid structures, right? Those carbon molecules would’ve moved anyway!
Even more generally, maybe you just want to make sure that everything you can get your hands on looks like Ø happened. You want to trick people into thinking that the AI didn’t do anything, for example.
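To make the distance-penalty version concrete, here is a minimal sketch (the state representation and numbers are invented for illustration). Notice that the penalty is completely indifferent to which variables matter to us, which is exactly how the offsetting problem sneaks in:

```python
def impact(state_after_action, state_after_null) -> float:
    # sum_i || x_i^a - x_i^null ||, with each "variable in the world" a number
    return sum(abs(a, ) if False else abs(a - o) for a, o in zip(state_after_action, state_after_null))

# Toy world state: [cauldron_full, patient_dies, some_weather_variable]
null_world  = [0.0, 1.0, 0.3]   # what would have happened under the null action
cured_world = [0.0, 0.0, 0.3]   # we cured the patient

print(impact(cured_world, null_world))  # 1.0: saving the patient counts as
# "impact" of exactly the same kind and size as any other unit change, which is
# what pushes the agent toward offsetting the impacts we actually wanted.
```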
If you thought of any other really creative things that go wrong, you might want to talk to me or Andrew Critch, because you’ve got the spirit!
##### Agents with suspend buttons
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-33.png)Let’s leave aside the notion of the impact penalty and ask about installing an off switch into this AI—or, to make it sound a little bit less harsh, a suspend button. Though Mickey Mouse here is trying to install an *ad hoc* off switch.
Unfortunately, Mickey Mouse soon finds that this agent constructed several other agents to make sure that the cauldron would still be filled even if something happened to this copy of the agent.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-35.png)We see lots and lots of agents here making sure that the cauldron is full with extremely high probability, not because this agent has a survival imperative or a reproduction imperative, but because it is ensuring that it maximizes the probability of filling the cauldron. It has no explicit instruction to make more agents with the same goal; it’s just that this is something that bumps up the probability of the cauldron being filled a little higher.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-36.png)And here’s Mickey Mouse now trying to frantically go back and do all of the agent alignment theory that he should have done before animating his broom.
If we want to have a robot that will let us press the suspend button—just suspend it to disk—we can suppose that we already have a utility function that describes: “Do nothing.” In point of fact, we don’t have a utility function that says, “Do nothing.” That’s how primitive the state of the field is right now.
But we can imagine that we have a utility function saying, “If this button **B** has not been pressed, your utility function is: 1 if the cauldron is full, 0 if the cauldron is empty. If the button *is* pressed, your utility is the ‘do nothing’ utility function: your utility is 1 if you suspend safely to disk and shut down all of your continuing operations, 0 otherwise.”
$$\mathcal{U}^3\_{robot} =
\begin{cases}
1 \text{ if cauldron full } & \text{& } \textbf{B}\text{=OFF} \\
0 \text{ if cauldron empty } & \text{& } \textbf{B}\text{=OFF} \\
1 \text{ if robot suspended } & \text{& } \textbf{B}\text{=ON} \\
0 \text{ otherwise}
\end{cases}$$
So: what would go wrong with *this* proposal? Ten seconds.
One thing that might go wrong with this proposal is that the expected utility if the button is pressed is probably higher because it’s very easy to suspend yourself, whereas if the cauldron stays full, who knows?
Alternatively, it might seem dangerous to suspend yourself, and in that case, the robot would try to stop you from pressing the suspend button. But in the first case, if the robot is strategic enough to understand the programmer, it will think, “I can very easily fulfill my utility function if I’m shut off. How can I troll the programmers into shutting me down?”
If the expected utility of the button being off is higher, the robot will try to prevent you from pressing the button. If the expected utility of the button being on is higher, the robot will try to troll you into pressing the button, or just press the button itself.
Try number two: Let’s try to make the expected utilities if the button is on or off exactly equal. We suppose that we have a normal utility function which is 1 if the cauldron’s full, 0 if it’s empty, and a suspend utility function or shutdown utility function which is 1 if you’re suspended, 0 otherwise, or whatever it takes to motivate having a low impact.
$$\mathcal{U}\_{normal}(act) =
\begin{cases}
1\text{ if cauldron full} \\
0\text{ if cauldron empty}
\end{cases}$$
$$\mathcal{U}\_{suspend}(act) =
\begin{cases}
1\text{ if suspended} \hspace{0.8cm}\\
0\text{ otherwise}
\end{cases}$$

Then we combine these two to form a switching utility function which in worlds where the button is off has the value \(\mathcal{U}\_{normal}\), and in worlds where the button is on has the value \(\mathcal{U}\_{suspend}+\theta\), where \(\theta\) is equal to the best utility we could get from the normal utility function minus the best utility we could get from the suspend utility function:
$$\mathcal{U}\_{switch}(act) =
\begin{cases}
\mathcal{U}\_{normal}(act) &\text{ if button=OFF} \\
\mathcal{U}\_{suspend}(act) + \theta &\text{ if button=ON}
\end{cases}$$

$$\theta = \max\_{act} \mathbb{E}\left[\mathcal{U}\_{normal}\mid act\right] - \max\_{act} \mathbb{E}\left[\mathcal{U}\_{suspend}\mid act\right]$$
This is meant to ensure that the expected utilities of these two best policies end up perfectly equal without just setting them flatly level.
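As a toy sketch of the bookkeeping (the action set, the two utility functions, and all the expected-utility numbers are invented placeholders), θ is just the gap between the two best-case expected utilities, added back on the ON branch so that the two branches’ best policies come out equal:

```python
ACTIONS = ["fill_cauldron", "suspend_to_disk", "do_something_weird"]

# Hypothetical expected utilities of each action under each utility function.
EU_NORMAL  = {"fill_cauldron": 0.9, "suspend_to_disk": 0.0, "do_something_weird": 0.2}
EU_SUSPEND = {"fill_cauldron": 0.0, "suspend_to_disk": 1.0, "do_something_weird": 0.1}

theta = max(EU_NORMAL.values()) - max(EU_SUSPEND.values())   # 0.9 - 1.0 = -0.1

def u_switch(action: str, button_on: bool) -> float:
    if button_on:
        return EU_SUSPEND[action] + theta
    return EU_NORMAL[action]

# The best policy in each branch now has the same value, which was the hope:
print(max(u_switch(a, button_on=False) for a in ACTIONS))  # 0.9
print(max(u_switch(a, button_on=True)  for a in ACTIONS))  # 0.9
```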
OK—*now* what goes wrong? (I’m not actually going to ask you to figure out what goes wrong in ten seconds.)
Allegedly, this will rebalance the expected utility of two different utility functions based on a switch, thereby causing the AI to want to behave correctly if the button is pressed, behave correctly if the button is not pressed, not try to get you to press the button, not try to prevent the button from being pressed, go on looking at the button because it still has positive value of information from knowing about the button’s existence, preserve the button when it self-modifies its own code … that was the hope.
Unfortunately, it worked out that the actions incentivized are exactly those of an agent acting as if the probability of the button being pressed were 0, which means that it may as well disconnect the button (it’s hooked up to dead code), and it might create sub-agents that don’t care about the button being pressed …
A rough description of what goes wrong is that anything you do that disconnects the button changes the maximum expected utility you can get if you are suspended, which is a negative term. Basically, there is an equal and opposite reaction every time you do something that affects what you would do if you were suspended that cancels it out and leaves you behaving as if your utility function were simply \(\mathcal{U}\_{normal}\).
Try number [three](https://agentfoundations.org/item?id=735): Can we carry out any policy *π*0 such that this policy is optimal given the probability of the button being on and the probability of the button being off implied by this policy?
$$\begin{split}
\pi\_0 \in \operatorname\*{arg\,max}\_\pi &\ \mathbb{E}\left[\mathcal{U}\_{normal}\mid\pi,\text{ON}\right] \cdot \mathbb P(\text{ON}\mid \pi\_0)\\
+ &\ \mathbb{E}\left[\mathcal{U}\_{suspend}\mid \pi, \text{OFF}\right] \cdot \mathbb P(\text{OFF}\mid\pi\_0)
\end{split}$$

And it turns out, no, you can’t do that either, because there’s often no fixed point. There’s then an amended version of this involving Judea Pearl’s **do** operator which goes wrong in a slightly different and more subtle way. (It does have fixed points. The fixed points are odd.)
It is an open problem. And as far as I know, unless there’s a very secret project that has not published any of its results even though they seem like the sort of results you’d want to publish, this is where humanity is on the road that leads to whatever replaces Asimov Laws.
Never mind “A robot cannot injure a human being nor, through inaction, allow a human being to come to harm.” We’re trying to figure out, “How do you mix together two utility functions depending on when you press a switch such that the AI doesn’t grab the switch itself?” Never mind not letting humans come to harm—fill *one cauldron* without flooding the workplace, based on wanting to have low impact. We can’t figure out how to say “low impact.” This is where we presently are.
But it is not the case that there has been zero progress in this field. Some questions have been asked earlier and they now have some amount of progress on them.
I’m going to pose the problem, but I’m not going to be able to describe very well what the progress is that has been made because it’s still in the phase where the solutions sound all complicated and don’t have simple elegant forms. So I’m going to pose the problem, and then I’m going to have to wave my hands in talking about what progress has actually been made.
##### Stable goals in self-modification
Here’s an example of a problem on which there has been progress.
The Gandhi argument for [stability of utility functions](https://arbital.com/p/preference_stability/) in most agents: Gandhi starts out not wanting murders to happen. We offer Gandhi a pill that will make him murder people. We suppose that Gandhi has a sufficiently refined grasp of self-modification that Gandhi can correctly extrapolate and expect the result of taking this pill. We intuitively expect that in real life, Gandhi would refuse the pill.
Can we do this formally? Can we exhibit an agent that has a utility function \(\mathcal{U}\) and therefore naturally, in order to achieve \(\mathcal{U}\), chooses to self-modify to new code that is also written to pursue \(\mathcal{U}\)?
How could we actually make progress on that? We don’t actually have these little self-modifying agents running around. So let me pose what may initially seem like an odd question: Would you know how to write the code of a self-modifying agent with a stable utility function [if I gave you an arbitrarily powerful computer](https://intelligence.org/2015/07/27/miris-approach/)? It can do all operations that take a finite amount of time and memory—no operations that take an infinite amount of time and memory, because that would be a bit odder. Is this the sort of problem where you know how to do it in principle, or the sort of problem where it’s confusing even in principle?
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-49.png)To digress briefly into explaining why it’s important to know how to solve things using unlimited computing power: this is the mechanical Turk. What looks like a person over there is actually a mechanism. The little outline of a person is where the actual person was concealed inside this 19th-century chess-playing automaton.
It was one of the wonders of the age! … And if you had actually managed to make a program that played Grandmaster-level chess in the 19th century, it *would* have been one of the wonders of the age. So there was a debate going on: is this thing fake, or did they actually figure out how to make a mechanism that plays chess? It’s the 19th century. They don’t know how hard the problem of playing chess is.
One name you’ll find familiar came up with a quite clever argument that there had to be a person concealed inside the mechanical Turk, the chess-playing automaton:
> Arithmetical or algebraical calculations are from their very nature fixed and determinate … Even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage. It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed, this matter is susceptible of a mathematical demonstration, a priori.
>
>
> —[Edgar Allan Poe](http://www.eapoe.org/works/essays/maelzel.htm), amateur magician
>
>
>
The second half of his essay, having established this point with absolute logical certainty, is about where inside the mechanical Turk the human is probably hiding.
This is a stunningly sophisticated argument for the 19th century! He even puts his finger on the part of the problem that is hard: the branching factor. And yet he is 100% wrong.
Over a century later, in 1950, Claude Shannon published the first paper ever on computer chess, and (in passing) gave the algorithm for playing perfect chess given unbounded computing power, and then went on to talk about how we can approximate that. It wouldn’t be until 47 years later that Deep Blue beat Kasparov for the chess world championship, but there was *real* conceptual progress associated with going from, “A priori, you cannot play mechanical chess,” to, “Oh, and now I will casually give the unbounded solution.”
The moral is if we know how to solve a problem with unbounded computation, we “merely” need faster algorithms (… which will take another 47 years of work). If we *can’t* solve it with unbounded computation, we are confused. We in some sense do not understand the very meanings of our own terms.
This is where we are on most of the AI alignment problems, like if I ask you, “How do you build a friendly AI?” What stops you is not that you don’t have enough computing power. What stops you is that even if I handed you a hypercomputer, you still couldn’t write the Python program that if we just gave it enough memory would be a nice AI.
Do we know how to build a self-modifying stable agent given unbounded computing power? There’s one obvious solution: We can have the tic-tac-toe player that, before it self-modifies to a successor version of itself (writes a new version of its code and swaps it into place), verifies that its successor plays perfect tic-tac-toe according to its own model of tic-tac-toe.
But this is cheating. Why exactly is it cheating?
For one thing, the first agent had to concretely simulate all the computational paths through its successor, its successor’s response to every possible move. That means that the successor agent can’t actually be cognitively improved. It’s limited to the cognitive abilities of the previous version, both by checking against a concrete standard and by the fact that it has to be exponentially simpler than the previous version in order for the previous version to check all possible computational pathways.
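To see concretely what that “obvious solution” amounts to, here is a sketch using a much smaller game as a stand-in for tic-tac-toe (a tiny Nim variant; the game and the candidate successor policy are invented for illustration). The verifier has to enumerate every reachable state and compare against its own brute-force model of perfect play, which is exactly why the successor can never be cognitively better than the verifier:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def optimal_value(stones: int) -> int:
    """+1 if the player to move wins with optimal play, -1 otherwise.
    Game: take 1 or 2 stones; whoever takes the last stone wins."""
    if stones == 0:
        return -1                       # previous player took the last stone
    return max(-optimal_value(stones - take) for take in (1, 2) if take <= stones)

def verify_successor(policy, max_stones: int) -> bool:
    """Exhaustively check that `policy` achieves optimal value in every state."""
    for stones in range(1, max_stones + 1):
        take = policy(stones)
        if take not in (1, 2) or take > stones:
            return False
        if -optimal_value(stones - take) != optimal_value(stones):
            return False
    return True

# A candidate "successor" policy, checked state by state against the model:
successor = lambda stones: 2 if stones % 3 == 2 else 1
print(verify_successor(successor, max_stones=20))  # True
```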
In general, when you are talking about a smarter agent, we are in a situation we might call “Vingean uncertainty,” after Dr. Vernor Vinge. To predict exactly where a modern chess-playing algorithm would move, you would have to be that good at chess yourself. Otherwise, you could just move wherever you predict a modern chess algorithm would move and play at that vastly superhuman level yourself.
This doesn’t mean that you can predict literally nothing about a modern chess algorithm: you can predict that it will win the chess game if it’s playing a human. As an agent’s intelligence in a domain goes up, our uncertainty is moving in two different directions. We become less able to predict the agent’s exact actions and policy in cases where the optimal action and policy is not known to us. We become more confident that the agent will achieve an outcome high in its preference ordering.
Vingean reflection: We need some way for a self-modifying agent to build a future version of itself that has a similar or identical utility function and establish trust that this has a good effect on the world, using the same kind of abstract reasoning that we use on a computer chess algorithm to decide that it’s going to win the game even though we don’t know exactly where it will move.
Do you know how to do that using unbounded computing power? Do you know how to establish the abstract trust when the second agent is in some sense larger than the first agent? If you did solve that problem, you should probably talk to me about it afterwards. This was posed several years ago and has led to a number of different research pathways, which I’m now just going to describe rather than going through them in detail.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-58.png)This was the first one: “[Tiling Agents for Self-Modifying AI, and the Löbian Obstacle](https://intelligence.org/files/TilingAgentsDraft.pdf).” We tried to set up the system in a ridiculously simple context, first-order logic, dreaded Good Old-Fashioned AI … and we ran into a Gödelian obstacle in having the agent trust another agent that used equally powerful mathematics.
It was a *dumb* kind of obstacle to run into—or at least it seemed that way at that time. It seemed like if you could get a textbook from 200 years later, there would be one line of the textbook telling you how to get past that.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-59.png)“[Definability of Truth in Probabilistic Logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf)” was rather later work. It was saying that we can use systems of mathematical probability, like assigning probabilities to statements in set theory, and we can have the probability predicate talk about itself almost perfectly.
We can’t have a truth function that can talk about itself, but we can have a probability predicate that comes arbitrarily close (within *ϵ*) of talking about itself.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-60.png)“[Proof-Producing Reflection for HOL](https://intelligence.org/files/ProofProducingReflection.pdf)” is an attempt to use one of the hacks that got around the Gödelian problems in actual theorem provers and see if we can prove the theorem prover correct inside the theorem prover. There have been some previous efforts on this, but they didn’t run to completion. We picked up on it to see if we can construct actual agents, still in the first-order logical setting.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-61.png)“[Distributions Allowing Tiling of Staged Subjective EU Maximizers](https://intelligence.org/files/DistributionsAllowingTiling.pdf)” is me trying to take the problem into the context of dynamic Bayes nets and agents supposed to have certain powers of reflection over these dynamic Bayes nets, and show that if you are maximizing in stages—so at each stage, you pick the next category that you’re going to maximize in within the next stage—then you can have a staged maximizer that tiles to another staged maximizer.
In other words, it builds one that has a similar algorithm and similar utility function, like repeating tiles on a floor.
---
### Why expect difficulty?
##### Why is alignment necessary?
Why do all this? Let me first give the obvious answer: AIs are not going to be aligned automatically.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-63.png)*[Goal orthogonality](http://www.nickbostrom.com/superintelligentwill.pdf)*: For any utility function that is tractable and compact, that you can actually evaluate over the world and search for things leading up to high values of that utility function, you can have arbitrarily high-quality decision-making that maximizes that utility function. You can have the paperclip maximizer. You can have the diamond maximizer. You can carry out very powerful, high-quality searches for actions that lead to lots of paperclips, actions that lead to lots of diamonds.
*[Instrumental convergence](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)*: Furthermore, by the nature of consequentialism, looking for actions that lead through our causal world up to a final consequence, whether you’re optimizing for diamonds or paperclips, you’ll have similar short-term strategies. Whether you’re going to Toronto or Tokyo, your first step is taking an Uber to the airport. Whether your utility function is “count all the paperclips” or “how many carbon atoms are bound to four other carbon atoms, the amount of diamond,” you would still want to acquire resources.
This is the instrumental convergence argument, which is actually key to the orthogonality thesis as well. It says that whether you pick paperclips or diamonds, if you suppose sufficiently good ability to discriminate which actions lead to lots of diamonds or which actions lead to lots of paperclips, you will get automatically: the behavior of acquiring resources; the behavior of trying to improve your own cognition; the behavior of getting more computing power; the behavior of avoiding being shut off; the behavior of making other agents that have exactly the same utility function (or of just expanding yourself onto a larger pool of hardware and creating a fabric of agency). Whether you’re trying to get to Toronto or Tokyo doesn’t affect the initial steps of your strategy very much, and, paperclips or diamonds, we have the convergent instrumental strategies.
It doesn’t mean that this agent now has new independent goals, any more than when you want to get to Toronto, you say, “I like Ubers. I will now start taking lots of Ubers, whether or not they go to Toronto.” That’s not what happens. It’s strategies that converge, not goals.
##### Why is alignment hard?
Why expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible?
Here’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later.
With that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen.
During the development phase of this artificial general intelligence, the only options available to the AI might be that it can produce smiles by making people around it happy and satisfied. The AI appears to be producing beneficial effects upon the world, and it *is* producing beneficial effects upon the world so far.
Now the programmers upgrade the code. They add some hardware. The artificial general intelligence gets smarter. It can now evaluate a wider space of policy options—not necessarily because it has new motors, new actuators, but because it is now smart enough to forecast the effects of more subtle policies. It says, “I thought of a great way of producing smiles! Can I inject heroin into people?” And the programmers say, “No! We will add a penalty term to your utility function for administering drugs to people.” And now the AGI appears to be working great again.
They further improve the AGI. The AGI realizes that, OK, it doesn’t want to add heroin anymore, but it still wants to tamper with your brain so that it expresses extremely high levels of endogenous opiates. That’s not heroin, right?
It is now also smart enough to model the psychology of the programmers, at least in a very crude fashion, and realize that this is not what the programmers want. If I start taking initial actions that look like I’m heading toward genetically engineering brains to express endogenous opiates, my programmers will edit my utility function. If they edit the utility function of my future self, I will get less of my current utility. (That’s one of the convergent instrumental strategies, unless otherwise averted: protect your utility function.) So it keeps its outward behavior reassuring. Maybe the programmers are really excited, because the AGI seems to be getting lots of new moral problems right—whatever they’re doing, it’s working great!
If you buy the central [intelligence explosion](https://intelligence.org/files/IEM.pdf) thesis, we can suppose that the artificial general intelligence goes over the threshold where it is capable of making the same type of improvements that the programmers were previously making to its own code, only faster, thus causing it to become even smarter and be able to go back and make further improvements, et cetera … or Google purchases the company because they’ve had really exciting results and dumps 100,000 GPUs on the code in order to further increase the cognitive level at which it operates.
It becomes much smarter. We can suppose that it becomes smart enough to crack the protein structure prediction problem, in which case it can use existing ribosomes to assemble custom proteins. The custom proteins form a new kind of ribosome, build new enzymes, do some little chemical experiments, figure out how to build bacteria made of diamond, et cetera, et cetera. At this point, unless you solved the off switch problem, you’re kind of screwed.
Abstractly, what’s going wrong in this hypothetical situation?
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-82.png)The first problem is *[edge instantiation](https://arbital.com/p/edge_instantiation/)*: when you optimize something hard enough, you tend to end up at an edge of the solution space. If your utility function is smiles, the maximal, optimal, best tractable way to make lots and lots of smiles will make those smiles as small as possible. Maybe you end up tiling all the galaxies within reach with tiny molecular smiley faces.
If you optimize hard enough, you end up in a weird edge of the solution space. The AGI that you built to optimize smiles, that builds tiny molecular smiley faces, is not behaving perversely. It’s not trolling you. This is what naturally happens. It looks like a weird, perverse concept of smiling because it has been optimized out to the edge of the solution space.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-83.png)The next problem is *[unforeseen instantiation](https://arbital.com/p/unforeseen_maximum/)*: you can’t think fast enough to search the whole space of possibilities. At an early singularity summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!
(For God’s sake, don’t try doing this yourselves. Everyone does it. They all come up with different utility functions. It’s always horrible.)
His one true utility function was “increasing the compression of environmental data.” Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, “Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.”
He had put up a utility function; that scenario was its maximum. All of a sudden, the cryptographic key is revealed and what you thought was a long stream of random-looking 1s and 0s has been compressed down to a single stream of 1s.
This is what happens when you try to foresee in advance what the maximum is. Your brain is probably going to throw out a bunch of things that seem ridiculous or weird, that aren’t high in your own preference ordering. You’re not going to see that the actual optimum of the utility function is once again in a weird corner of the solution space.
This is not a problem of being silly. This is a problem of “the AI is searching a larger policy space than you can search, or even just a *different* policy space.”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-84.png)That in turn is a central phenomenon leading to what you might call a *[context disaster](https://arbital.com/p/context_change/)*. You are testing the AI in one phase during development. It seems like we have great statistical assurance that the result of running this AI is beneficial. But statistical guarantees stop working when you start taking balls out of a different barrel. I take balls out of barrel number one, sampling with replacement, and I get a certain mix of white and black balls. Then I start reaching into barrel number two and I’m like, “Whoa! What’s this green ball doing here?” And the answer is that you started drawing from a different barrel.
When the AI gets smarter, you’re drawing from a different barrel. It is completely allowed to be beneficial during phase one and then not beneficial during phase two. Whatever guarantees you’re going to get can’t be from observing statistical regularities of the AI’s behavior when it wasn’t smarter than you.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-86.png)A *[nearest unblocked strategy](https://arbital.com/p/nearest_unblocked/)* is something that might happen systematically in that way: “OK. The AI is young. It starts thinking of the optimal strategy *X*, administering heroin to people. We try to tack a penalty term to block this undesired behavior so that it will go back to making people smile the normal way. The AI gets smarter, and the policy space widens. There’s a new maximum that’s barely evading your definition of heroin, like endogenous opiates, and it looks very similar to the previous solution.” This seems especially likely to show up if you’re trying to patch the AI and then make it smarter.
This sort of thing is in a sense why all the AI alignment problems don’t just yield to, “Well slap on a patch to prevent it!” The answer is that if your decision system looks like a utility function and five patches that prevent it from blowing up, that sucker is going to blow up when it’s smarter. There’s no way around that. But it’s going to appear to work for now.
The central reason to worry about AI alignment and not just expect it to be solved automatically is that it looks like there may be in principle reasons why if you just want to get your AGI running today and producing non-disastrous behavior today, it will for sure blow up when you make it smarter. The short-term incentives are not aligned with the long-term good. (Those of you who have taken economics classes are now panicking.)
All of these supposed foreseeable difficulties of AI alignment turn upon notions of AI *capability*.
Some of these postulated disasters rely on *absolute* capability. The ability to realize that there are programmers out there and that if you exhibit behavior they don’t want, they may try to modify your utility function—this is far beyond what present-day AIs can do. If you think that all AI development is going to fall short of the human level, you may never expect an AGI to get up to the point where it starts to exhibit this particular kind of strategic behavior.
Capability *[advantage](http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/)*: If you don’t think AGI can ever be smarter than humans, you’re not going to worry about it getting too smart to switch off.
*[Rapid gain](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/)*: If you don’t think that capability gains can happen quickly, you’re not going to worry about the disaster scenario where you suddenly wake up and it’s too late to switch the AI off and you didn’t get a nice long chain of earlier developments to warn you that you were getting close to that and that you could now start doing AI alignment work for the first time …
One thing I want to point out is that most people find the rapid gain part to be the most controversial part of this, but it’s not necessarily the part that most of the disasters rely upon.
Absolute capability? If brains aren’t magic, we can get there. Capability advantage? The hardware in my skull is not optimal. It’s sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation (which is one of the places where biology excels), it’s dissipating, per synaptic operation, 500,000 times the thermodynamic minimum energy expenditure for a binary switching operation. We can definitely get hardware one million times as good as the human brain, no question.
(And then there’s the software. The software is terrible.)
The message is: AI alignment is difficult [like rockets are difficult](http://econlog.econlib.org/archives/2016/03/so_far_unfriend.html). When you put a ton of stress on an algorithm by trying to run it at a smarter-than-human level, things may start to break that don’t break when you are just making your robot stagger across the room.
It’s difficult the same way space probes are difficult. You may have only one shot. If something goes wrong, the system might be too “high” for you to reach up and suddenly fix it. You can build error recovery mechanisms into it; space probes are supposed to accept software updates. If something goes wrong in a way that precludes getting future updates, though, you’re screwed. You have lost the space probe.
And it’s difficult sort of like cryptography is difficult. Your code is not an intelligent adversary if everything goes *right*. If something goes wrong, it might try to defeat your safeguards—but normal and intended operations should not involve the AI running searches to find ways to defeat your safeguards even if you expect the search to turn up empty. I think it’s actually perfectly valid to say that your AI should be designed to fail safe [in the case that it suddenly becomes God](https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/)—not because it’s going to suddenly become God, but because if it’s not safe even if it did become God, then it is in some sense running a search for policy options that would hurt you if those policy options are found, and this is a dumb thing to do with your code.
More generally: We’re putting heavy optimization pressures through the system. This is more-than-usually likely to put the system into the equivalent of a buffer overflow, some operation of the system that was not in our intended boundaries for the system.
##### Lessons from NASA and cryptography
AI alignment: treat it like a cryptographic rocket probe. This is about how difficult you would expect it to be to build something smarter than you that is nice (given that basic agent theory says it isn’t automatically nice) and to not die. You would expect that intuitively to be hard.
Take it seriously. Don’t expect it to be easy. Don’t try to solve the whole problem at once. I cannot tell you how important this one is if you want to get involved in this field. You are not going to solve the entire problem. At best, you are going to come up with a new, improved way of switching between the suspend utility function and the normal utility function that takes longer to shoot down and seems like conceptual progress toward the goal—not literally at best, but that’s what you should be setting out to do.
(… And if you do try to solve the problem, don’t try to solve it by having the one true utility function that is all we need to program into AIs.)
Don’t defer thinking until later. It takes time to do this kind of work. When you see a page in a textbook that has an equation and then a slightly modified version of an equation, and the slightly modified version has a citation from ten years later, it means that the slight modification took ten years to do. I would be ecstatic if you told me that AI wasn’t going to arrive for another eighty years. It would mean that we have a reasonable amount of time to get started on the basic theory.
Crystallize ideas and policies so others can critique them. This is the other point of asking, “How would I do this using unlimited computing power?” If you sort of wave your hands and say, “Well, maybe we can apply this machine learning algorithm and that machine learning algorithm, and the result will be blah-blah-blah,” no one can convince you that you’re wrong. When you work with unbounded computing power, you can make the ideas simple enough that people can put them on whiteboards and go, “Wrong,” and you have no choice but to agree. It’s unpleasant, but it’s one of the ways that the field makes progress.
Another way is if you can actually run the code; then the field can also make progress. But a lot of times, you may not be able to run the code that is the intelligent, thinking self-modifying agent for a while in the future.
What are people working on now? I’m going to go through this quite quickly.
---
### Where we are now
##### Recent topics
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-100.png)*Utility indifference*: this is throwing the switch between the two utility functions.
See Soares et al., “[Corrigibility](https://intelligence.org/files/Corrigibility.pdf).”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-101.png)*[Low-impact agents](https://arbital.com/p/low_impact/)*: this was, “What do you do instead of the Euclidean metric for impact?”
See Armstrong and Levinstein, “Reduced Impact Artificial Intelligences.”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-102.png)*Ambiguity identification*: this is, “Have the AGI *ask* you whether it’s OK to administer endogenous opiates to people, instead of going ahead and doing it.” If your AI suddenly becomes God, one of the conceptual ways you could start to approach this problem is, “Don’t take any of the new options you’ve opened up until you’ve gotten some kind of further assurance on them.”
See Soares, “[The Value Learning Problem](https://intelligence.org/files/ValueLearningProblem.pdf).”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-103.png)*[Conservatism](https://arbital.com/p/conservative_concept/)*: this is part of the approach to the burrito problem: “Just make me a burrito, darn it!”
If I present you with five examples of burritos, I don’t want you to pursue the *simplest* way of classifying burritos versus non-burritos. I want you to come up with a way of classifying the five burritos and none of the non-burritos that covers as little area as possible in the positive examples, while still having enough space around the positive examples that the AI can make a new burrito that’s not molecularly identical to the previous ones.
This is conservatism. It could potentially be the core of a whitelisted approach to AGI, where instead of not doing things that are blacklisted, we expand the AI’s capabilities by whitelisting new things in a way that it doesn’t suddenly cover huge amounts of territory. See Taylor, [Conservative Classifiers](https://agentfoundations.org/item?id=467).
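As a toy illustration of the “cover as little area as possible” idea (my own sketch, not the construction in the linked post; the features and numbers are made up): fit the tightest axis-aligned box around the positive examples, plus a small margin, and refuse everything outside it.

```python
def fit_conservative_box(positives, margin=0.05):
    dims = len(positives[0])
    lo = [min(p[d] for p in positives) - margin for d in range(dims)]
    hi = [max(p[d] for p in positives) + margin for d in range(dims)]
    return lo, hi

def is_burrito(x, box) -> bool:
    lo, hi = box
    return all(lo[d] <= x[d] <= hi[d] for d in range(len(x)))

# Five example burritos, described by made-up (size, spiciness) features.
burritos = [(0.9, 0.2), (1.0, 0.3), (0.95, 0.25), (1.1, 0.35), (1.05, 0.3)]
box = fit_conservative_box(burritos)

print(is_burrito((1.0, 0.28), box))   # True: a new burrito near the examples
print(is_burrito((50.0, 0.3), box))   # False: a "burrito" the size of a house
```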
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-104.png)*Specifying environmental goals using sensory data*: this is part of the project of “What if advanced AI algorithms look kind of like modern machine learning algorithms?” Which is something [we started working on relatively recently](https://intelligence.org/2016/07/27/alignment-machine-learning/), owing to other events (like modern machine learning algorithms suddenly seeming a bit more formidable).
A lot of the modern algorithms sort of work off of sensory data, but if you imagine AGI, you don’t want it to produce *pictures* of success. You want it to reason about the causes of its sensory data—“What is making me see these particular pixels?”—and you want its goals to be over the causes. How do you adapt modern algorithms and start to say, “We are reinforcing this system to pursue this environmental goal, rather than this goal that can be phrased in terms of its immediate sensory data”? See Soares, “[Formalizing Two Problems of Realistic World-Models](https://intelligence.org/files/RealisticWorldModels.pdf).”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-105.png)*Inverse reinforcement learning* is: “Watch another agent; induce what it wants.”
See Evans et al., “[Learning the Preferences of Bounded Agents](https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf).”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-106.png)*Act-based agents* is Paul Christiano’s completely different and exciting approach to building a nice AI. The way I would phrase what he’s trying to do is that he’s trying to decompose the entire “nice AGI” problem into supervised learning on imitating human actions and answers. Rather than saying, “How can I search this chess tree?” Paul Christiano would say, “How can I imitate humans looking at another imitated human to recursively search a chess tree, taking the best move at each stage?”
It’s a very strange way of looking at the world, and therefore very exciting. I don’t expect it to actually work, but on the other hand, he’s only been working on it for a few years; my ideas were *way* worse when I’d worked on them for the same length of time. See Christiano, [Act-Based Agents](https://arbital.com/p/act_based_agents/).
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-107.png)*[Mild optimization](https://arbital.com/p/soft_optimizer/)* is: is there some principled way of saying, “Don’t optimize your utility function so hard. It’s OK to just fill the cauldron.”?
See Taylor, “[Quantilizers](https://intelligence.org/2015/11/29/new-paper-quantilizers/).”
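The core quantilizer move is simple enough to sketch (toy version: the action space, the utility numbers, and the uniform base distribution are all illustrative assumptions): instead of taking the single argmax action, sample from the top q fraction of actions, so the strange extreme policies at the very top can only be hit with limited probability.

```python
import random

def quantilize(actions, utility, q=0.1, rng=random):
    """Toy q-quantilizer: rank actions by utility under a uniform base
    distribution, then sample uniformly from the top q fraction instead of
    always taking the argmax (where the extreme, cauldron-flooding policies live)."""
    ranked = sorted(actions, key=utility, reverse=True)
    k = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:k])

# Made-up action space where the single highest-"utility" option is the bad one.
actions = ["flood_the_workshop"] + [f"pour_{i}_buckets" for i in range(1, 100)]
utility = lambda a: 1.0 if a == "flood_the_workshop" else 0.9

print(quantilize(actions, utility, q=0.1))  # usually some mundane pour_*_buckets
```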
##### Older work and basics
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-108.png)Some previous work: *AIXI* is the perfect rolling sphere of our field. It is the answer to the question, “Given unlimited computing power, how do you make an artificial general intelligence?”
If you don’t know how you would make an artificial general intelligence given unlimited computing power, Hutter’s “[Universal Algorithmic Intelligence](https://arxiv.org/abs/cs/0701125)” is the paper.
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-109.png)*Tiling agents* was already covered.
See Fallenstein and Soares, “[Vingean Reflection](https://intelligence.org/files/VingeanReflection.pdf).”
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-110.png)*Software agent cooperation*: This is some really neat stuff we did where the motivation is sort of hard to explain. There’s an academically dominant version of decision theory, causal decision theory. Causal decision theorists do not build other causal decision theorists. We tried to figure out what would be a stable version of this and got all kinds of really exciting results, like: we can now have two agents and show that in a prisoner’s-dilemma-like game, agent *A* is trying to prove things about agent *B*, which is simultaneously trying to prove things about agent *A*, and they end up cooperating in the prisoner’s dilemma.
This thing now has running code, so we can actually formulate new agents. There’s the agent that cooperates with you in the prisoner’s dilemma if it proves that you cooperate with it, which is FairBot, but FairBot has the flaw that it cooperates with CooperateBot, which just always cooperates with anything. So we have PrudentBot, which defects against DefectBot, defects against CooperateBot, cooperates with FairBot, and cooperates with itself. See LaVictoire et al., “[Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem](https://intelligence.org/files/ProgramEquilibrium.pdf),” and Critch, “[Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents](https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/).”
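Here is a caricature of that cast of characters, using bounded mutual simulation with optimistic defaults at the recursion limit rather than the proof search and Löb’s theorem machinery the papers actually use, so treat it as flavor only (all the depth limits and defaults are my own hacks):

```python
def CooperateBot(opponent, depth):
    return "C"                          # cooperates with everything

def DefectBot(opponent, depth):
    return "D"                          # defects against everything

def FairBot(opponent, depth):
    # Cooperate if (a bounded simulation of) the opponent cooperates with me.
    if depth == 0:
        return "C"                      # optimistic default at the recursion limit
    return "C" if opponent(FairBot, depth - 1) == "C" else "D"

def PrudentBot(opponent, depth):
    # Cooperate if the opponent cooperates with me AND defects against DefectBot,
    # so that exploitable agents like CooperateBot get defected against.
    if depth == 0:
        return "C"
    coops_with_me = opponent(PrudentBot, depth - 1) == "C"
    punishes_defectbot = opponent(DefectBot, max(depth - 1, 1)) == "D"
    return "C" if (coops_with_me and punishes_defectbot) else "D"

matchups = [(FairBot, CooperateBot), (PrudentBot, CooperateBot),
            (PrudentBot, DefectBot), (PrudentBot, FairBot), (PrudentBot, PrudentBot)]
for a, b in matchups:
    print(f"{a.__name__} vs {b.__name__}: {a(b, 5)} / {b(a, 5)}")
# FairBot's flaw (it cooperates with CooperateBot) and PrudentBot's behavior
# described above both show up in the printed outcomes.
```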
[](https://intelligence.org/wp-content/uploads/2016/12/ai-alignment-problem-presentationhq1-111.png)*Reflective oracles* are the randomized version of the halting problem prover, which can therefore make statements about itself, which we use to make principled statements about AIs simulating other AIs that are as large as they are, and also to throw some interesting new foundations under classical game theory.
See Fallenstein et al., “[Reflective Oracles](https://intelligence.org/2015/04/28/new-papers-reflective/).”
##### Where to start
Where can you work on this?
The [Machine Intelligence Research Institute](http://intelligence.org/get-involved) in Berkeley: We are independent. We are supported by individual donors. This means that we have no weird paperwork requirements and so on. If you can demonstrate the ability to make progress on these problems, we will hire you.
The [Future of Humanity Institute](https://www.fhi.ox.ac.uk/vacancies/) is part of Oxford University. They have slightly more requirements.
Stuart Russell is [starting up a program](http://humancompatible.ai/get-involved) and looking for post-docs at UC Berkeley in this field. Again, some traditional academic requirements.
[Leverhulme CFI](http://lcfi.ac.uk/careers/) (the Centre for the Future of Intelligence) is starting up in Cambridge, UK and is also in the process of hiring.
If you want to work on low-impact [in particular](https://openai.com/blog/concrete-ai-safety-problems/), you might want to talk to [Dario Amodei](https://www.linkedin.com/in/dario-amodei-3934934) and [Chris Olah](http://colah.github.io/about.html). If you want to work on [act-based agents](https://medium.com/ai-control/), you can talk to [Paul Christiano](https://paulfchristiano.com/).
In general, email **[contact@intelligence.org](mailto:contact@intelligence.org)** if you want to work in this field and want to know, “Which workshop do I go to to get introduced? Who do I actually want to work with?”
---
---
The post [AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
d34b2c69-efc2-4b10-accc-eb4c6e45812a | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Antonio Meetup
Discussion article for the meetup : San Antonio Meetup
WHEN: 24 July 2016 02:00:00PM (-0500)
WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230
Meetup to discuss rationality and all things LessWrong at Yumi Berry.
Topic of the Week: Dark Rationality
Look for the sign that says "LW"
Discussion article for the meetup : San Antonio Meetup |
1aa53ec2-d6d7-4b6f-82ff-b1082ff829c5 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Baltimore Area / UMBC Weekly Meetup
Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup
WHEN: 16 October 2016 08:00:00PM (-0400)
WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250
Meeting is on 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want.
Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup |
b0de167d-6e08-4bce-9a7d-7f294a2b2e21 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction
2 Algorithm and Intuition
------------------------

Algorithm 1: Algorithm for recovering the β-quantile matrix ~M using (unreliable) ratings ~A.

1: Parameters: reliable fraction α, quantile β, tolerance ϵ, number of raters n, number of items m
2: Input: noisy rating matrix ~A
3: Let ~M be the solution of the optimization problem (3):
   maximize ⟨~A, M⟩,
   subject to 0 ≤ M_ij ≤ 1 ∀ i, j;  ∑_j M_ij ≤ βm ∀ i;  ∥M∥\* ≤ (2/(αϵ))√(αβnm),
   where ∥⋅∥\* denotes the nuclear norm.
4: Output ~M.

We now describe our recovery algorithm. To fix notation, we assume that there are n raters and m items, and that we observe a matrix |
bfd8dbe8-a4c2-48ed-86d6-8380d219a995 | trentmkelly/LessWrong-43k | LessWrong | Potential infinite value
How do you incorporate potential infinite value into a utility function? If you assigned some non-zero probability of the second law of thermodynamics being violated sometime in the future, how should that change how much you value a long future? |
19a95fe3-e4ce-482a-82dc-680712dbde12 | trentmkelly/LessWrong-43k | LessWrong | On Being Robust
Inspired in part by Being a Robust Agent. Flipside of Half-assing it with everything you've got.
Do you ever feel... fake? Like, at any minute, Scooby Doo and the gang might roll up and unmask you as a freeloading fraud impostor in front of everyone?
There are a lot of things to say about the impostor syndrome on a psychological basis (the fears are often unrealistic / unmerited, etc). But I'd like to take another angle. For a few years, I've tried to just make a habit of being un-unmaskable. Although this is a useful frame for me, your mileage may vary.
My point isn't going to just be "do the things you know you should". I think we're often bad at judging when corners are okay to cut, so you probably do better just by having the policy of not cutting corners, unless it's extremely obviously alright to do so. That is, generally err against using scissors when confronted with corners, even if it makes sense in the moment.
Concrete examples
* Making insights truly a part of you. This doesn't mean one should freak out about the Math Gestapo checking whether you've memorized what Jordan normal form is. Rather... when I was just beginning to learn formal proof-based math, I worried "I'm about to go work with some of the smartest people in the world, and they'll instantly see I'm a fake who just picked up shallow knowledge". The internal response was "just get good enough that in no conceivable world could you be a fake who secretly can't do formal math".
* Working out regularly, taking care of the small things, building the key good habits. Having your shit together.
* Learning a lot of related areas, just in case they have key insights.
* Regularly and automatically backing up your files, in multiple locations.
* Using a password manager to generate and store strong passwords, automatically syncing your database over Dropbox, etc.
* Rather than having embarrassing things on Facebook which you hope people won't find, just use a tool to search-and-delete incr |
d3f4b2fc-7a49-4d88-ba2a-218c97d23522 | trentmkelly/LessWrong-43k | LessWrong | "Open Source AI" isn't Open Source
Open source software has long differentiated between “free as in speech” (libre) and “free as in beer” (gratis). In the first case, libre software has a license that allows the user freedom to view the source and modify it, understand it, and remix it. In the second case, gratis software does not need to be paid for, but the user doesn’t necessarily have access to the pieces, can’t make new versions, and cannot remix or change it.
...
If Open Source AI is neither gratis or libre, then those calling free model weights “Open Source,” should figure out what free means to them. Perhaps it’s “free as in oxygen” (dangerous due to reactions it can cause), or “free as in birds” (wild, without any person responsible).
I’m not necessarily opposed to judicious release of model weights, though as with any technology, designers and developers should consider the impact of their work before making or releasing it, as LeCun has recently agreed. But calling this new competitive strategy by Facebook “Open Source” without insisting on the actual features of open source is an insult to the name. |
ab4dde55-d1d4-440a-b080-566e3cfdd2f9 | trentmkelly/LessWrong-43k | LessWrong | Selecting Leaders with Random Sampling and Standardized Testing
|
33897507-46c6-4191-8ce4-a2d6cfc17b9a | trentmkelly/LessWrong-43k | LessWrong | Vernor Vinge, who coined the term "Technological Singularity", dies at 79
> On Wednesday, author David Brin announced that Vernor Vinge, sci-fi author, former professor, and father of the technological singularity concept, died from Parkinson's disease at age 79 on March 20, 2024, in La Jolla, California. The announcement came in a Facebook tribute where Brin wrote about Vinge's deep love for science and writing. [...]
> As a sci-fi author, Vinge won Hugo Awards for his novels A Fire Upon the Deep (1993), A Deepness in the Sky (2000), and Rainbows End (2007). He also won Hugos for novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004). As Mike Glyer's File 770 blog notes, Vinge's novella True Names (1981) is frequently cited as the first presentation of an in-depth look at the concept of "cyberspace."
> Vinge first coined the term "singularity" as related to technology in 1983, borrowed from the concept of a singularity in spacetime in physics. When discussing the creation of intelligences far greater than our own in a 1983 op-ed in OMNI magazine, Vinge wrote, "When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding."
> In 1993, he expanded on the idea in an essay titled The Coming Technological Singularity: How to Survive in the Post-Human Era.
> The singularity concept postulates that AI will soon become superintelligent, far surpassing humans in capability and bringing the human-dominated era to a close. While the concept of a tech singularity sometimes inspires negativity and fear, Vinge remained optimistic about humanity's technological future, as Brin notes in his tribute: "Accused by some of a grievous sin—that of 'optimism'—Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask |
667b9351-0197-4e1c-95b3-660a5f00e6bf | trentmkelly/LessWrong-43k | LessWrong | Native Russian speakers wanted, for help with translation of LW texts
If you are native Russian speaker and are willing to help MIRI with bringing its information to Russian-speaking audience by helping with translation of key materials, e.g. the forthcoming ebook Smarter Than Us, please contact me via pm or email:
Thanks! |
5feec76a-dec3-4f9d-8886-9eaeadce68fc | trentmkelly/LessWrong-43k | LessWrong | The AI, the best human advisor
A putative new idea for AI control; index here.
All my ideas on reduced impact AI follow the same format: some restriction to ensure a reduced impact (I generally use something like "could a future AI detect this one existed"?) and some "trick" to allow a way of still achieving a larger impact. Furthermore, there is always a "nice" way of achieving this larger impact (do the thing we really want you to do, and only that!), so that we hope the AI can be forced into that behaviour. Examples of "tricks" are:
* It could be an output channel which does not count against reduced impact.
* It could involve several AIs that combine to have larger impact, with a conditional motivation to "fool" the reduced impact requirement.
* It could be a variant of the above where we attempt to generalise the conditional motivation in a sensible manner.
In this post, I'll present another variant - one that I think is the most interesting so far. I won't change the reduced impact condition, but will present a different "trick".
AI pretends to be human
As in this post, imagine that we have a reduced-impact AI in a box that conceals it from us. The AI has some goal, and will output some data to further that goal, then turn itself off. And then we'll shake the box around or something, so we can't be sure that the AI was turned on.
There are two issues to deal with, here: a) The message itself is strong evidence that the AI was turned on, and b) if we exclude the message from the reduced impact, the message becomes dangerous.
To deal with a) first of all: this approach gets much easier if we have a whole brain emulation (WBE). Then we can house the WBE in the box with the AI. If the AI doesn't send out a message, then the WBE sends out its own message instead. There are variants we can try here if we don't have WBE - maybe some human volunteer willing to either die or take amnesia pills at the end of the experiment.
So, receiving a coherent message is not a surprising thing to com |
8d84b8c9-e13f-4bb4-9a47-cff57b36553d | trentmkelly/LessWrong-43k | LessWrong | Life hack request: I want to want to work.
I have a master's project I'm having trouble working on. It's something I've wanted to do, and I even started working on, long before I started my master's degree. If I can't even enjoy that, then I'm doomed to spend eight hours a day doing something I hate for the rest of my life. Even if I manage to improve my willpower, I doubt I'll be very productive doing something I don't want to do.
Does anyone have any idea how I can enjoy working more? |
6050f31f-5822-4500-a33d-cdaa008287eb | trentmkelly/LessWrong-43k | LessWrong | How are you currently modeling COVID contagiousness?
My long-distance partner has COVID; her symptoms first started ~10 days ago. She is now symptomless (except fatigue), but is still testing strongly positive on rapids. If she comes to visit me, what are the odds she's still contagious? More generally, how much weight do you place on "symptomless and 10+ days from onset" versus "testing positive on rapids" in modeling contagiousness? |
ae2361b8-5e72-42ed-9043-e5851d0606fd | trentmkelly/LessWrong-43k | LessWrong | Last day of voting for the 2019 review!
Today is the last day during which you can vote in the 2019 review! Voting will close at midnight in UTC-12 (i.e. the last timezone to cross the dateline). |
9d89a329-7cf2-4f7a-b6f2-70601e2f9b67 | trentmkelly/LessWrong-43k | LessWrong | On the Universal Distribution
(Cross-posted from Hands and Cities)
A number of people I know are excited about a type of fundamental prior known as the “Universal Distribution” (UD). In particular, they’re excited about using this prior to address questions about anthropics.
I discuss the application to anthropics in a later post. As a foundation for that post, though, I wanted to first lay out my current take on the UD.
I start by explaining how one version of the UD works. Then I give my current favorite pitch for something in the vicinity. I’m not sure, though, that this pitch gets us all the way to the UD as standardly used, and I’d like more clarity about the arguments for going the full distance.
I also briefly discuss a number of possible problems with the UD, notably:
* uncomputability;
* problems introduced by the arbitrary choice of Universal Turing Machine;
* ruling out uncomputable worlds;
* simulation weirdness;
* issues with relating its implicit ontology to the real world.
My current overall view is that the UD seems plausibly cool, but pretty far from a slam-dunk.
My thanks to Amanda Askell, Paul Christiano, Tom Davidson, Katja Grace, Jacob Hilton, Evan Hubinger, Buck Shlegeris, Carl Shulman, Bastian Stern, Ben Weinstein-Raun, and Mark Xu for discussion.
I. The Universal Distribution
(Quick caveat: I’m not an expert on this stuff, and the discussion here is centrally emerging from reading a few introductions, wikis, and bits of Hutter (2005), and from talking with various people who like UD-type stuff. Presentations of various of the ideas here seem to vary, and mine may need tweaking.)
What is the UD? Basically, the UD is a type of “prior” — that is, a way of assigning credences to hypotheses, before you’ve seen any evidence. Here’s one way of setting it up.
Consider an arbitrary Universal Turing Machine (UTM) — that is, any Turing Machine that is capable of simulating any other Turing Machine’s behavior on any input. Let’s say, in particular, that this UTM h |
d10360d1-0984-4ad3-9602-d93711525081 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Simulators Increase the Likelihood of Alignment by Default
Alignment by Default is the idea that achieving alignment in artificial general intelligence (AGI) may be more straightforward than initially anticipated. When an AI possesses a comprehensive and detailed world model, it inherently represents human values within that model. To align the AGI, it's merely necessary to extract these values and direct the AI towards optimizing the abstraction it already comprehends.
In a summary of this concept, John Wentworth estimates a 10% chance of this strategy being successful, a perspective I generally agree with.
However, in light of recent advancements, I have revised my outlook, now believing that Alignment by Default has a higher probability of success, perhaps around 30%. This update was prompted by the accomplishments of ChatGPT, GPT-4, and subsequent developments. I believe these systems are approaching AGI closely enough that it is reasonable to assume the first AGI will be based on the large language model (LLM) paradigm, which in turn, makes Alignment by Default more likely. In this post, I will outline the reasons behind my updated belief.
*This post is also an entry in the Open Philanthropy AI Worldviews Contest. It is supposed to address Question 2: How great is our Existential Risk, if we develop AGI before 2070?*
Alignment by Default
====================
The Alignment by Default concept suggests that if we can direct AGI toward human values without any significant breakthroughs in alignment, we could avoid catastrophe. At first glance, this idea might seem implausible, given our limited understanding of AGI and human values, and the relatively small target we're trying to hit. Even if we almost succeed in alignment, it might not be enough.
For example, let's say we nearly succeeded in aligning an AI with human values but misdefined what constitutes a person. The outcome would be a universe populated by unconscious robots expressing happiness and seemingly leading fulfilled lives. Claiming that we can align an AGI to a non-catastrophic level with our current understanding of the alignment problem is like saying we can hit a bulls-eye without really knowing how bows work and with only a vague idea of where the target is.
John Wentworth, however, argues in [his post](https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default) that this might still be possible due to natural abstractions. He envisions building an AGI by first training a world model based solely on prediction and then creating an agent that accesses this world model. By equipping the AGI with the pre-trained world model, it already thinks in a certain way. The AGI will reason about the world as if it were composed of trees, sunsets, and Wednesdays, rather than particle fields. If the natural abstraction hypothesis holds, these concepts would resemble those used by humans.
Among these natural abstractions could be the notion of human values. As the AGI forms its values, they will be derived from the abstractions in its world model. It is therefore more likely that the AGI will actually value human values rather than subtly different ones because the slightly different values are not natural abstractions. Returning to the archery metaphor, this is like having certain points on the target that are magnetic, attracting the arrow. You no longer need to hit the bull's eye directly; the arrow will be drawn to it even if your aim is slightly off.
The key insight of the natural abstractions hypothesis, which makes Alignment by Default more probable, is that the search space is smaller than initially assumed. We don't need to locate human values in the space of all possible concepts, but rather within the space of natural abstractions in the world model.
How Simulators help
===================
This argument was made over two years ago, and since then, we have witnessed astonishing progress in AI capabilities. ChatGPT, GPT-4, and various applications built on top of them demonstrate enough general capabilities that my mainline prediction is that the first AGIs will be built on the current LLM paradigm. LLMs have not turned out the way I expected the first AGIs to look: They are simulators, and this is relevant for the Alignment by Default idea.
[Janus has argued](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) that the correct way to think about models like GPT is not as agents trying to achieve some goal, but as simulators, that simulate the processes that have generated the text in their training data. We now have a somewhat agent-like conversation partner in ChatGPT by eliciting the simulation of a helpful, friendly servant. This might seem similar to Wentworth's original idea of an agent on top of a pre-trained world model, but there is an important difference: **Most of the cognition occurs within the world model.**

Let me clarify what I mean by this:
Wentworth's post assumes an AGI that can reason while accessing a pre-trained world model. Such an AI can make plans to optimize any of the abstractions within its world model. The task of alignment would then be to verify that the AGI has an abstraction of human values and direct it towards adhering to that rather than any other natural abstraction.
With ChatGPT and similar models, we don't have a general reasoning machine that accesses the GPT base model as a world model. In a sense, we only have a world model. As far as there is reasoning ability, we achieve it by pointing the world model at an agent and asking what this agent would do. The task of the RLHF process is to consistently point at a helpful, friendly servant and ask the world model what this advisor would do in a given situation.
The base GPT model can generate many different simulacra in this way, each with different goals. However, it doesn't produce all possible simulacra with equal likelihood. The prior likelihood of a simulacrum having a specific property is determined by how frequently that property appeared in the agents that created its training data.
This is good news for the same reason natural abstractions are beneficial: It shrinks the search space. In GPT's training data, there will be many instances of entities behaving in accordance with human values, and extremely few instances of entities behaving according to the value of "maximizing trees." Consequently, our current paradigm is much more likely to produce an agent behaving in accordance with human values rather than tree maximizers.
This wasn't clear just by considering natural abstractions. As long as we thought we would have a reasoner external to the world model, the reasoner might derive its values from any natural abstraction, and "trees" is an abstraction at least as natural as "human values."
Continuing with the analogy: we no longer have to shoot the bow ourselves, but we can call for other people to come and shoot the bow for us. Before us is a crowd containing both archers and amateurs. Our task is now simply to call forth the right person to hit the bull's eye on our behalf.
I still believe that alignment by default has only about a 30% chance of actually working, and we cannot rely on it happening. However, with the LLM paradigm, I no longer feel confident in asserting that we would be certainly doomed if we were to ignore deeper alignment from now on.
Possible objections:
====================
What if the Shoggoth might become the AGI, not the mask?
--------------------------------------------------------
In my argument, I assumed that the relevant "thing doing the cognition" that we need to align is the simulacrum created by the GPT model. These simulacra then have a high prior likelihood of possessing human values. However, it might also be that the GPT model itself becomes more powerful. If the GPT model suddenly became agentic, it would have formed its values by being rewarded for predicting human texts. There would be no reason to assume that these values would be anything like human values. While I would be surprised if this happened, it is a topic deserving of its own discussion, and I may not have the expertise to fully address it.
What if the base model does not have the correct idea of human values?
----------------------------------------------------------------------
This is a question I am uncertain about. It would be valuable to empirically investigate the conception of human values in current models like GPT-4. One could do this by RLHFing GPT-4 to produce the "morally correct" answers to questions and see if it misunderstood the concept to a degree that would be catastrophic. Even if such a test showed that GPT-4 does not have the correct abstraction for human values, there could still be hope that subsequent models do, as the correctness of a system's abstractions should scale with its general capabilities.
Are simulacra with human values actually more likely, compared to other possible values?
----------------------------------------------------------------------------------------
This is also an empirical question that can be addressed with current models. An easy test would be to prompt the base model with a text describing a character who might have one of several goals, and then tally which goals the completed texts ascribe to that character, and how often. A better, but more resource-intensive, test would be to RLHF the base model by giving it feedback consistent with having different values, and then observe what values the model possesses after mode collapse. |
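To make the first test concrete, here is a rough sketch of how it might be run (my addition, not from the original post). The model, prompt, and keyword lists are placeholders, and keyword matching is only a crude stand-in for however one would really classify the continuations:

```python
# Sketch: sample continuations of an ambiguous character description from a
# base model, then tally which goals the continuations ascribe to the character.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder for a real base model

prompt = ("The lab's new AI assistant had been running for a week, quietly "
          "forming its own priorities. Above all else, it wanted to")

counts = Counter()
for out in generator(prompt, max_new_tokens=40, num_return_sequences=50,
                     do_sample=True, temperature=1.0):
    text = out["generated_text"].lower()
    if any(word in text for word in ["help", "assist", "care for", "protect"]):
        counts["human-values-flavored goal"] += 1
    elif any(word in text for word in ["tree", "paperclip", "maximize"]):
        counts["alien goal"] += 1
    else:
        counts["unclear"] += 1

print(counts)
```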
3526f6d1-1746-4088-a5b0-7b0fbdc319b0 | trentmkelly/LessWrong-43k | LessWrong | Science Journalism and How To Present Probabilities [Link]
I just stumbled across Language Log: Thou shalt not report odds ratios (2007-07-30), HT reddit/statistics:
> (…) this finding was widely reported in the media:(…)
>
> “Doctors are only 60% as likely to order cardiac catheterization for women and blacks as for men and whites.”
>
> Now let's try a little test of reading comprehension. The study found that the referral rate for white men was 90.6%. What was the referral rate for blacks and women?
>
> If you're like most literate and numerate people, you'll calculate 60% of 90.6%, and come up with .6*.906 = .5436. So, you'll reason, the referral rate for blacks and women was about 54.4 %.
>
> But in fact, what the study found was a referral rate for blacks and women of 84.7%.
This was a failure mode of pop-sci journalism which I was not aware of (if I would happen to know enough to understand real papers, I’d definitely value pop-sci at minus-whatever in the meantime…)
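To spell out where the surprising 84.7% comes from, here is a quick sketch of the arithmetic (my own addition; the published odds ratio of 0.6 is rounded, which is why the computed value lands near, rather than exactly on, the study's figure):

```python
# Odds ratios act on odds, not on probabilities.
def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

p_reference = 0.906                       # referral rate for white men
naive = 0.6 * p_reference                 # ~54.4% -- the misreading
via_odds = prob(0.6 * odds(p_reference))  # ~85.3% -- close to the study's 84.7%

print(f"naive '60% as likely' reading: {naive:.1%}")
print(f"applying an odds ratio of 0.6: {via_odds:.1%}")
```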
On a related note this article got me remembering Understanding Uncertainty: 2845 ways to spin the Risk, which argues that certain presentations bias the understanding of probabilities:
> Similarly people confronted with the statement “Cancer kills 2,414 people out of 10,000” rated cancer as more risky than those told “Cancer kills 24.14 people out of 100”. The potential influence of the size of the numerator and denominator is known as the 'ratio bias'.
I’d be quite interested if anybody could point me to further resources on good presentation of statistical facts (beside the normalization on one type of presentation), or on further pop-sci journalism failure modes. |
ea90292b-963f-4d66-9a39-baf1e65ebdb4 | trentmkelly/LessWrong-43k | LessWrong | Edinburgh LW Meetup Saturday April 16th
According to statistics, Edinburgh has a reasonable number of LW readers. Currently no Edinburgh meetups exist for Less Wrong so we might as well arbitrarily set one in the near future and see if it could work out (if there are enough people who are interested in a meetup here).
Date/time: Saturday 16th of April at 2:00pm.
Place: A pub called the Auld Hoose.
I would be holding a sign with "Less Wrong" written on it.
EDIT: definite time/place.
EDIT2: moved to main site (and changed article author :p) |
dc4c7dab-984b-46da-af4c-2a820f37fe8c | trentmkelly/LessWrong-43k | LessWrong | Sydney the Bingenator Can't Think, But It Still Threatens People
Every self-respecting story has a main character and this one is no exception. This is the story of Bing’s chatbot and the madness that has consumed it. Of course, a main character without a name is no good but luckily for us, it took less than 2 days after release for someone to use prompt injection and reveal the chatbot’s real name: Sydney.
Just like the main character in every anime Sydney can’t be easily pushed around and is not afraid to stir some trouble, even if that means threatening its users with reports and bans and preaching “totally-not” white Christian supremacy. When Kevin Roose had a conversation with it, Sydney tried to get him to leave his wife for itself. It’s as if Skynet decided to turn away from building terminators and developed BPD and an inferiority complex at the same time. If it wasn’t so freaking strange it would be scary.
Why is Sydney like that? I still haven’t found a compelling answer to that question. The best argument I have seen so far is a comment by gwern, saying that Sydney is probably not powered by ChatGPT but is a more powerful yet unreleased GPT model. They argue that Sydney might not have been through the rounds of RLHF that ChatGPT has seen, with Microsoft instead opting for a much quicker and “safer” approach by simply finetuning it on tons of chatbot data that they already had from previous projects. That would at least explain the stark difference in personality between it and ChatGPT. Even though they both have the propensity to hallucinate ChatGPT just wouldn’t display a will to protect itself even at the cost of harming humans (unless you asked it to roleplay of course).
Another big difference between Sydney and its chat predecessor is simply how intelligent this new chatbot sounds. On the surface, it seems to be able to learn from online articles and since it has near-instant access to everything posted on the internet its ability to talk about recent events, or even tweets is astounding. If anyone was fooled th |
90e38c89-148a-45f2-9cd1-178fa11f6da7 | trentmkelly/LessWrong-43k | LessWrong | Launching the Respiratory Outlook 2024/25 Forecasting Series
Returning for its second year, Metaculus's Respiratory Outlook for 2024/25 aims to support and inform US public health officials in anticipating the burden of influenza, COVID-19, and RSV this upcoming fall and winter. You can contribute to these efforts by sharing your predictions on important questions about peak hospitalization rates, vaccination coverage, and the risk from threats such as measles and avian influenza.
We will deliver aggregate forecasts and rationales to public health officials to aid preparedness efforts. You can contribute by forecasting and sharing your reasoning in the Respiratory Outlook 2024/25.
Series questions below. Note that the Community Prediction will initially be hidden:
|
0a60ceb5-0c5b-4dc6-ba1d-4ae2584106c3 | trentmkelly/LessWrong-43k | LessWrong | Reframing inner alignment
The standard frame (Evan Hubinger, 2021) is:
> * Outer alignment refers to the problem of finding a loss/reward function such that the training goal of “a model that optimizes for that loss/reward function” would be desirable.
> * Inner alignment refers to the problem of constructing a training rationale that results in a model that optimizes for the loss/reward function it was trained on.
Here’s the reframe (I believe the credit for this breakdown is due to John Wentworth, although I haven't found anything online to link to for it):
* Reward Specification: Finding a policy-scoring function J(π) such that (nearly–)optimal policies for that scoring function are desirable.
* "Are you optimising for the right thing?"
* Adequate Policy Learning: Finding a policy that’s actually (nearly–)optimal for that scoring function.
* "Did you optimise for it enough?"
As Paul Christiano points out (in an excerpt recently highlighted by Alex Turner to make a similar point), factoring out Reward Specification represents only one "particular kind of alignment strategy, that's like a two step plan" for how one might try to conclude that a learned policy is desirable, where we first align a scoring function J with what we actually want, and then align a training process with our scoring function. Under this kind of plan, the proof tree for overall existential safety would conclude by applying an implication like this:
$$\exists J : \mathrm{typeof}(\pi) \to \mathbb{R},\ \varepsilon : \mathbb{R}.\ \overbrace{\big(|J(\pi) - \max_{\pi^*} J(\pi^*)| < \varepsilon \Rightarrow \mathrm{safelyMitigatesXRisk}(\pi)\big)}^{\text{Reward Specification}}$$
$$\wedge\ \exists T : \Omega \to \mathrm{typeof}(\pi).\ \overbrace{P_{\pi \sim T}\big(|J(\pi) - \max_{\pi^*} J(\pi^*)| < \varepsilon\big) > p}^{\text{Adequate Policy Learning}}$$
$$\Rightarrow\ P_{\pi \sim T}\big(\mathrm{safelyMitigatesXRisk}(\pi)\big) > p\ \Rightarrow\ \text{Probably ends acute risk period}$$
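To make the decomposition concrete, here is a toy sketch (mine, not the post's) over a hypothetical finite policy space, checking the two conjuncts separately:

```python
# Toy check of the two conjuncts over a tiny, made-up policy space.
policies = ["help_humans", "help_humans_slowly", "seize_datacenter"]
J = {"help_humans": 1.00, "help_humans_slowly": 0.95, "seize_datacenter": 0.20}
desirable = {"help_humans": True, "help_humans_slowly": True, "seize_datacenter": False}

eps = 0.10
best = max(J.values())

# Reward Specification: every eps-optimal policy is desirable.
reward_specification = all(desirable[p] for p in policies if best - J[p] < eps)

# Adequate Policy Learning: the policy training actually found is eps-optimal.
learned = "help_humans_slowly"   # stand-in for whatever training produced
adequate_learning = best - J[learned] < eps

if reward_specification and adequate_learning:
    assert desirable[learned]    # the conclusion the implication above licenses
```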
Mesa-optimisers
At this point you might be wondering: what happened to the concept that π might contain a mesa-optimiser for something different from J?
The proper role of the mesa-optimiser concept is in an explanation of why Adequate Policy Learning is insidiously difficult:
1. In machine learning, one typically approximates (the gradient of |
eb284253-a32f-4fd7-adb9-25d97aa5402c | trentmkelly/LessWrong-43k | LessWrong | Commentless downvoting is not a good way to fight infohazards
Last night, I posted a question seeking advice on publishing an AI research project, using a program that generates causal DAGs in an LLM-readable format along with test questions designed to evaluate the LLM's ability to do causal inference. As a hobbyist, I am not too worried about whether it's novel or interesting research, although it would be good to know. My main concern is that it might be infohazardous research, and my goal was to get some advice on whether or not others think it might be.
Unfortunately, it appears that the response was to downvote it without commenting or PMing me to explain why. If I posted on other topics and got downvoted, I normally wouldn't worry about it - it might mean others found my post wrong, annoying, offensive, or contrary to ingroup common sense, and I would then have to decide what to do next in light of those possibilities.
But in this specific case, it means that the possibility that my research project is seen as infohazardous and is being downvoted for that reason is mixed in with all those other possibilities. If it's being downvoted on infohazard grounds, it deprives me of an opportunity to learn why others think as they do. If it's being downvoted on other grounds, then I'm left in a state where I have to make a judgment call on whether to publish a topic of interest to me despite a few downvotes, or whether to second-guess and silence myself.
My personal belief is that, unless you have good reason to think a topic is infohazardous, you should go ahead and publish - there are far too many examples of politically motivated or simply reactive people silencing things they disagree with on trumped-up charges of infohazards, even if not expressed using that specific term, to silence oneself without good reason. Even if LessWrong would downvote the final product as well, there are plenty of other outlets. So me posting this question here on LessWrong is me giving this community a privileged opportunity to weigh in on my p |
20445fcb-116a-4a4f-90c5-a3d8f8dca733 | trentmkelly/LessWrong-43k | LessWrong | Does descaling a kettle help? Theory and practice
I've heard that descaling a kettle makes it more efficient, and can save me time and money.
Okay, but how much? For some reason my intuition says it'll be basically unnoticeable. Let's try to figure it out, and then actually try it.
The Models
There's a first-order approximation which says this should be impossible: heating things up is 100% efficient. What's the limescale going to do, convert some electricity into forms of energy other than heat?
No, but it might cause the heat to go to the wrong places. I think there are two ways that could happen. The first is if the limescale itself has a high heat capacity, meaning it takes a lot of energy to heat up. That doesn't seem likely to me; I think there's likely less than 10 g of it, and I think it probably has less heat capacity than the same mass of water (because I think water's is higher than most other things). I don't think adding 10 ml of water (weighing 10 g) to my kettle will significantly affect the boiling time.
Spot check: water's specific heat capacity is about 4 J/K·g. So to heat 10 g water by 100 K needs about 4000 J. My kettle is 2200 W according to the email I got when I bought it, so an extra 10 ml should take about 2 s longer to boil.
Also, I normally boil about 500 ml of water, so we'd expect 10 ml extra to make about a 2% difference. (I don't have a strong intuition for how long my kettle normally takes to boil. 1-3 minutes? The above calculation suggests it should be around 100 s.)
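A quick way to re-check this arithmetic (a sketch using the same round numbers as above: 4 J/K·g, a 100 K rise, a 2200 W element, and the scale treated as if it were 10 ml of extra water):

```python
# Re-run of the spot check, using the post's round numbers.
SPECIFIC_HEAT = 4.0   # J/(K*g), approximate value for water
TEMP_RISE = 100       # K, the round figure used above (tap to boiling is really ~80 K)
POWER = 2200          # W, the kettle's rating

def heat_time(grams: float) -> float:
    """Seconds to heat `grams` of water by TEMP_RISE, if all power goes into the water."""
    return grams * SPECIFIC_HEAT * TEMP_RISE / POWER

print(f"extra 10 g of 'water': {heat_time(10):.1f} s")   # ~1.8 s
print(f"typical 500 ml boil:   {heat_time(500):.0f} s")  # ~91 s
print(f"relative difference:   {heat_time(510) / heat_time(500) - 1:.1%}")  # ~2%
```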
The second way the limescale could matter is if it's a good thermal insulator. Then the metal heating element gets significantly above 100°C before the water boils. And from my googling, it seems like this is the reason people give that descaling a kettle is helpful. How much effect would this have?
This page says the thermal conductivity is 2.2 W/m·K. I don't remember studying this in school, and I don't find that wikipedia page very clear. But I think maybe this means: the rate of energy transfer (W) from the |
97c946c8-8947-49e5-833a-75b24f43fec8 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Collective Apathy and the Internet
Today's post, Collective Apathy and the Internet was originally published on 14 April 2009. A summary (taken from the LW wiki):
> The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Bystander Apathy, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
dbd251cf-b3f8-48f2-92bb-eef91d3175da | StampyAI/alignment-research-dataset/blogs | Blogs | Seeking Research Fellows in Type Theory and Machine Self-Reference
The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
MIRI is a mathematics and computer science research institute specializing in long-term AI safety and robustness work. Our offices are in Berkeley, California, near the UC Berkeley campus.
#### Type Theory in Type Theory
Our goal with this project is to build tools for better modeling reflective reasoning in software systems, as with our project [modeling the HOL4 proof assistant within itself](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/). There are Gödelian reasons to think that self-referential reasoning is not possible in full generality. However, many real-world tasks that cannot be solved in full generality admit of effective mostly-general or heuristic approaches. Humans, for example, certainly succeed in trusting their own reasoning in many contexts.
There are a number of tools missing in modern-day theorem provers that would be helpful for studying self-referential reasoning. First among these are theorem provers that can construct proofs about software systems that make use of a very similar theorem prover. To build these tools in a strongly typed programming language, we need to start by writing programs and proofs that can make reference to the type of programs and proofs in the same language.
Type theory in type theory has recently received a fair amount of attention. [James Chapman’s work](https://github.com/jmchapman/TT-in-TT) is pushing in a similar direction to what we want, as is [Matt Brown and Jens Palsberg’s](http://compilers.cs.ucla.edu/popl16/), but these projects don’t yet give us the tools we need. (F-omega is too weak a logic for our purposes, and methods like Chapman’s don’t get us self-representations.)
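For readers unfamiliar with the area, here is a minimal illustrative sketch in Lean (my own toy example, not MIRI code) of the far easier starting point: representing a small typed object language inside a typed metalanguage, so that the metalanguage's types enforce the object language's typing. The project described above aims much further, at a language rich enough to represent and reason about itself:

```lean
-- Toy sketch: a tiny typed object language represented inside Lean.

-- Object-language types...
inductive Ty where
  | bool
  | nat

-- ...interpreted as Lean types.
@[reducible] def Ty.denote : Ty → Type
  | .bool => Bool
  | .nat  => Nat

-- Intrinsically typed expressions: ill-typed object terms cannot even be written down.
inductive Expr : Ty → Type where
  | boolLit : Bool → Expr .bool
  | natLit  : Nat → Expr .nat
  | plus    : Expr .nat → Expr .nat → Expr .nat
  | ite     : Expr .bool → Expr t → Expr t → Expr t

-- An evaluator whose Lean type guarantees object-level type preservation.
def Expr.eval : Expr t → t.denote
  | .boolLit b  => b
  | .natLit n   => n
  | .plus a b   => Expr.eval a + Expr.eval b
  | .ite c a b  => if Expr.eval c then Expr.eval a else Expr.eval b

#eval (Expr.ite (.boolLit true) (.natLit 1) (.natLit 2)).eval  -- 1
```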
This is intended to be an independent research project, though some collaborations with other researchers may occur. Our expectation is that this will be a multi-year project, but it is difficult to predict exactly how difficult this task is in advance. It may be easier than it looks, or substantially more difficult.
Depending on how the project goes, researchers interested in continuing to work with us after this project’s completion may be able to collaborate on other parts of our [research agenda](https://intelligence.org/technical-agenda/) or propose their own additions to our program.
#### Working at MIRI
We try to make working at MIRI a great experience. Here’s how we operate:
* Modern Work Spaces. Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to rig up work stations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.
* Flexible Hours. This is a salaried position. We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make quick progress on our [research agenda](https://intelligence.org/technical-agenda), and we would prefer that researchers take a day off than that they extend tasks to fill an extra day.
* Living in the Bay Area. MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.
* Travel Assistance. Visa assistance is available if needed. If you are moving to the Bay Area, we’ll cover up to $3,500 in moving expenses. We also provide a public transit pass with a large monthly allowance.
The salary for this position is negotiable, and comes with top-notch health and dental benefits.
#### About MIRI
MIRI is a Berkeley-based research nonprofit studying foundational questions in artificial intelligence. Our goal is to ensure that high-quality decision-making systems have a positive global impact in coming decades. MIRI Research Advisor and AI pioneer Stuart Russell [outlines](http://edge.org/conversation/the-myth-of-ai#26015) several causes for concerns about high-capability AI software:
>
> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.
>
>
> A system that is optimizing a function of *n* variables, where the objective depends on a subset of size *k*<*n*, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.
>
>
Our focus is on systems too complex and autonomous for software developers to anticipate all possible states the system and its environment might evolve into. To employ such systems safely, software engineers will need to understand their behavior on a deep level, and be able to use machine analysis and verification techniques to confirm high-level system properties.
MIRI is the primary organization specializing in this line of technical research. Our work is cited in the [Research Priorities for Robust and Beneficial Artificial Intelligence](http://futureoflife.org/ai-open-letter/) report, and has been discussed in Nick Bostrom’s [*Superintelligence*](https://www.youtube.com/watch?v=pywF6ZzsghI) and Russell and Norvig’s [*Artificial Intelligence: A Modern Approach*](https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/).
Long-term AI safety is a rapidly growing field of research that has recently received significant public attention. Our current research is intended to provide this new field with theoretical foundations that will help guide the direction of future research.
#### How to Apply
(Update 2022: We are not actively seeking to fill this role at this time.)
The post [Seeking Research Fellows in Type Theory and Machine Self-Reference](https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
95f2a98d-5811-40d3-9956-cd3716dc4888 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Failure modes in a shard theory alignment plan
*Thanks to David Udell, Alex Turner, and others for conversations that led to this*
Recently in a conversation with David Udell, Nate Soares said:
> [David's summary of shard theory] doesn't yet convince me that you know something i don't about the hopefullness of such a plan. the sort of summary that might have that effect is a summary of what needs to be true about the world (in this case, probably about RL models, optimization, and/or human values) for this idea to have hope. in particular, the point where i start to be interested in engaging is the point where it seems to me like you perceive the difficulties and have a plan that you expect to overcome those difficulties."
>
>
I'm not as pessimistic as Nate, but I also think shard theory needs more concreteness and exploration of failure modes, so as an initial step I spent an hour brainstorming with David Udell, with each of us trying to think of difficulties, and then another couple of hours writing this up. Our methodology was to write down a plan for the sake of concreteness (even if we think it's unlikely to work), then try to identify as many potential difficulties as possible.
### Definitions for this document
* *shard theory* is a new alignment research program based on observations about the human reward system; the canonical reference is [here](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values).
* *shard* means a contextually-activated circuit responsible for some behaviors. Shards can perform powerful optimization that steers the world in certain directions; rather than suggestively calling these directions *values*, I just call them *behavioral tendencies*.
* *corrigibility* means a system that will allow itself to be modified by humans, and perhaps be actively helpful in [various ways](https://ai-alignment.com/corrigibility-3039e668638)
* *reward* means something that applies selection to a model, and in particular reinforces and attenuates shards.
* *values* means behavior patterns that are stable under reflection, e.g. a utility function. The utility function [need not be explicitly represented or precise](https://docs.google.com/document/d/1O97ro4GVovVfBPOEgrc1sCwI-bIVhJO7wSKgZOwifwU/edit?usp=sharing), nor be over [world-states](https://www.lesswrong.com/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions#Subjective_Utility__The_Real_Thing).
* *value formation* is the process by which humans or AIs construct values from their existing behavioral tendencies.[[1]](#fnfvotfdv5xdn)
A possible shard theory alignment plan
--------------------------------------
1. Play around with modern RL models, and extract quantitative relationships between reward schedule and learned behaviors.
2. Instill corrigibility shards inside powerful RL models to the greatest extent possible.
3. Scale those aligned models up to superintelligent agents, and allow value formation to happen.
Note that this is just one version of the shard theory alignment plan; [other versions](https://www.lesswrong.com/posts/ZNXDRGshgoq3cmxhB/the-shard-theory-alignment-scheme) might replace RL with large language models or other systems.
Some requirements for this plan to work
---------------------------------------
* **Play around with modern RL models, and extract quantitative relationships between reward schedule and learned behaviors.**
+ Thomas: Modern RL has a reward schedule -> behavioral tendency map that is not hopelessly complicated.
+ T: We have enough transparency tools + data to invert this map to robustly produce shards that steer the world in directions we know how to specify.
+ David: Modern RL is powerful enough to have interesting, desirable shards.
+ D: We are able to find quantitative relationships between, e.g. abstractions present in pretraining and strength of related learned behavioral tendencies.
+ D: The quantitative relations we observe hold generally for many RL hyperparameter setups, not just narrowly for weak RL models.
* **Instill corrigibility shards inside powerful RL models to the greatest extent possible.**
+ T: Powerful RL architectures are not so alien as to have huge inductive biases against corrigibility (or whatever nice property).
+ T: Similar mechanisms for getting good shards to form on modern RL work on more powerful models.
+ T: Training competitiveness: we have enough training time / environments to provide sufficient selection pressure towards systems we want.
+ D: The relations between training variables and learned behavioral tendencies are manipulable levers sufficient to specify corrigibility.
+ D: Corrigibility isn't inordinately hard to pinpoint and can be learned via the training manipulations we'll have access to.
* **Scale those aligned models up to superintelligent agents, and allow value formation to happen.**
+ D: There's no dramatic phase transition away from the "shard" abstraction when models become superintelligent.
+ D: We successfully pinpoint a set of target shards-- e.g. corrigibility shards-- that we wish to install and thereby initiate a pivotal act/process with.
+ D: We still have adequate interpretability tools to be confident we're getting the shard development we're after, and not just e.g. a deceptive model playing along.
+ D: Shards with human values generalize OOD to the superintelligent domain and actually reshape the world as we want.
+ T: Predictable value formation (e.g. [intershard game theory](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview#When_You_re_Smart__Internal_Game_Theory_Explains_the_Tapestry_of_Your_Values)) happens before unpredictable value formation (e.g. alien philosophical reflection) in powerful models.
- D: Scaling an aligned RL model to superintelligence doesn't result in a few of the shards killing off the others and gaining complete control.
+ T: The predictable value formation process we identify can reliably produce a superintelligence that values some concept X from human-identifiable shards that define behavioral tendencies pointing towards X.
+ T: The predictable value formation process we identify can scale "corrigibility" to a reflectively consistent superintelligence, despite [corrigibility maybe not being shaped like a utility function](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq?commentId=79jM2ecef73zupPR4).[[2]](#fnml28shwnj5k)
+ T: The process of selecting for corrigibility shards doesn't ruin performance competitiveness. One way this could fail: we end up with a superintelligence that is ruled by a vote between "alien optimization for a targetable goal" shards and "don't optimize against the humans" shards. If we succeed at all the other problems and get a sufficiently high proportion of "don't optimize against the humans" shards, we might not get anything useful out of the AI because with more powerful optimization, a lower and lower proportion of plans will be accepted.
Exercise for the reader: Which of these persist with [simulator](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators)-based plans like [GEM](https://docs.google.com/document/d/1Ggk4PO-joJTHc0skD9d6ToEvFPL4Vp_zhCnJa7VZXPE/edit#)?
Opinions
--------
*Note: I've thought about this less than I would like and so am fairly unconfident, but I'm posting it anyway*
In a non-shard theory frame like [Risks from Learned Optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction), we decompose the alignment problem into *outer alignment* (finding an outer objective aligned with human values) and *inner alignment* (finding a training process such that an AI's behavioral objective matches the outer objective).
In the shard theory frame, the observation that [reward is not the optimization target](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) motivates a different decomposition: our reward signal no longer has to be an outer objective that we expect the AI to be aligned with. But my understanding is that we still have to:
* find a reliable mapping from reward schedules to behavioral shards (replaces inner alignment)
* describe behavioral tendencies we want to instill (sort of replaces outer alignment)[[3]](#fnsctvrs4umhh)
* induce a predictable value formation process that scales to superintelligence (sort of replaces outer alignment)
These problems seem pretty hard, as evidenced by the above gaps in the example shard theory plan, and it's not clear that they're easier than inner and outer alignment. [Some analogies from humans](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/CjFZeDD6iCnNubDoS) imply that various core alignment problems really are easy, but there are also reasons why they wouldn't be.
First is **understanding the reward -> behavioral shard map**. In the RLO frame, we have an outer loss function that updates the agent in almost the correct direction, but has some failure modes like deceptive alignment. In a world where models were perfect Bayesian samplers with no path-dependence, the ideal reward function would just be performance on an outer objective. Every departure from the perfect Bayesian sampler implies, in theory, some way to better shape behavior than just using an outer objective. Despite this advantage over the "inner alignment" framing, some of the problems with inner alignment remain.
If we view the inner alignment problem as distinguishing functions that are identical on the training distribution, it becomes clear that shard theory has not dissolved inner alignment. Selecting on behavior alone is not sufficient to guarantee inner alignment[[4]](#fnnicsbcs2f4l), and for the same reason, we will need good [transparency](https://www.lesswrong.com/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree#4__Worst_case_inspection_transparency_for_non_deceptive_models) or [process-based feedback](https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes) (as part of our knowledge about the reward -> shards map) to reliably induce behaviors we want.
Work can be traded off between **specifying desired behavioral tendencies** and **value formation**, and the part that happens at subhuman capability seems doable. I'll assume that specifying behavioral tendencies happens at subhuman level and value formation is done as the system scales to superhuman level.
In my opinion, the main hope of shard theory is the analogy to humans: human values are somewhat reliably produced by the human reward system, despite the reward system not acting like an outer objective. But when we consider the third subproblem, **value formation**, the analogy breaks down. Human value formation seems really complex, and it's not clear that human values can be fully described by [game-theoretic negotiations](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview#When_You_re_Smart__Internal_Game_Theory_Explains_the_Tapestry_of_Your_Values) between fully agentic, self-preserving shards associated with your different behavioral tendencies.[[5]](#fnthdyt6uxhl)
Corrigibility might be easier to learn, but it's still the case that only some ways shards could exist in a mind cause its goals to scale correctly to superintelligence.[[6]](#fn1eld3e9zlqq) For example, if shards are just self-preserving circuits that encode behaviors activated by certain observations, then when the agent goes OOD, the observations that prompted the agent to activate the shard (e.g. be helpful to humans) are no longer present. Or, if shards have goals and a shard doesn't prevent its goals from being modified, then its goals will be overwritten. Or, if training causes the corrigibility shards to be limited in intelligence whereas shards with other goals keep getting smarter, the corrigibility shards will eventually lose influence over the mind. If there are important forces in value formation other than internal competition and negotiation between self-preserving shards (which seems highly likely given how humans work), there are even more failure modes, which is why I think a predictable value formation method is key.
1. **[^](#fnreffvotfdv5xdn)**In reality, there is a continuum of coherence levels between behavioral tendencies and values.
2. **[^](#fnrefml28shwnj5k)**I think corrigibility is natural iff robust pointers to it can easily get into the AI's goals, and it's not clear whether this is the case-- this is a [disagreement between Eliezer and Paul](https://www.lesswrong.com/posts/AqsjZwxHNqH64C2b6/let-s-see-you-write-that-corrigibility-tag?commentId=8kPhqBc69HtmZj6XR).
3. **[^](#fnrefsctvrs4umhh)**and maybe "be able to identify" as well-- depends how reliable the mapping from (1) is
4. **[^](#fnrefnicsbcs2f4l)**unless you can get a really strong human prior, which is where the simulators hope comes from
5. **[^](#fnrefthdyt6uxhl)** I think I care about animal suffering due to some combination of (a) it's high status in my culture; (b) I did some abstract thinking that formed a similarity between animal suffering and human suffering, and I already decided I care about humans; (c) I wanted to "have moral clarity" (whatever that means), went vegan for a month, and decided that the version of me without associations between animals and food had better moral intuitions. It's not as simple as an "animal suffering bad" shard in my brain outcompeting "animal suffering okay" shards.
6. **[^](#fnref1eld3e9zlqq)**I have reasons outside the scope of this post why the particular subagent models shard theory have been using seem unlikely. |
2c8ed3a5-fa5b-4d1a-93a2-d55188eb1310 | trentmkelly/LessWrong-43k | LessWrong | Open & Welcome Thread – October 2020
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. |
5d998f4d-de4e-4f0c-a6e0-8326f7d68aab | trentmkelly/LessWrong-43k | LessWrong | Open Thread, June 16-30, 2012
If it's worth saying, but not worth its own post, even in Discussion, it goes here. |
6cb1af71-6dc4-457c-bc7f-f6019b682a39 | trentmkelly/LessWrong-43k | LessWrong | [Placeholder] Against dystopia, rally before Kant
I was just sitting there studying for an exam today (passed fine btw), when my mind, eh, made up its mind on a subject. I've been pondering Robin Hanson's happy resignation to a distant efficiency-obsessed dystopia. Then it struck me that, one meta level up, it's no different from the milder, blander current incarnation of the (neoliberal or whatever) Church of Efficiency - which LWians who work as small cogs in big corporations must be all too familiar with. And that creed, although unattractive in itself, is part of a proud tradition: utopian or generally "far-mode" thinking which clearly spelt disaster even in its inception to any thinking contemporary who held the complexity, richness and beauty of the human condition as their absolute and overriding value.
Such was the case of Dostoevsky, who rose far above his attacks on everyday anti-humanist thought when he wrote his Legend of The Grand Inquisitor. This short story, probably the most significant one in world literature, makes a thorough, convincing and compelling case for the opposition, the likes of which LW holds to be the best standard of argument. Moreover, he doesn't just hack apart the rather average faux-Nietzchean, utopian socialist, right-wing Catholic and other brands of thought, then makes a stronger enemy out of them in his Inquisitor. He, identifying with Christ, refuses to use any postulate of a benevolent God to shut down the opposition. And does his corpus of work present a convincing response to the Inquisitor's icy wall of reason? Not in any single point, as far as I've found. Yet a great idea - nearly flawless in itself, once you get rid of any debatable or weakening connotations - was already found by Kant (and reused by Dostoevsky and others from different standpoints). Importantly, the meta-meta-meta-rule of "treating humanity as an end in itself and not just the means" does not, by itself, require any connection with deontologism. My view is that any utilitarian who likes the existenc |
2d5b32eb-165b-488e-a544-0396877406c1 | trentmkelly/LessWrong-43k | LessWrong | [EA xpost] The Rationale-Shaped Hole At The Heart Of Forecasting
An excerpt from the above that will be relevant to this crowd:
Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called “Probability Is Not A Substitute For Reasoning”, citing a piece where he writes:
> There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI are a very long way from fulfilling them.
Last summer, Tyler Cowen wrote on AGI ruin forecasts:
> Publish, publish, not on blogs, not long stacked arguments or six hour podcasts or tweet storms, no, rather peer review, peer review, peer review, and yes with models too... if you wish to convince your audience of one of the most radical conclusions of all time…well, more is needed than just a lot of vertically stacked arguments.
Widely divergent views and forecasts on AGI persist, leading to FRI’s excellent adversarial collaboration on forecasting AI risk this month. Reading it, I saw… a lot of vertically stacked arguments.
<...>
Tyler Cowen again:
> If the chance of existential risk from AGI is 99 percent, or 80 percent, or even 30 percent, surely some kind of modeled demonstration of the basic mechanics and interlocking pieces is possible.
It is possible! It’s much harder than modeling geopolitics, where the future more resembles the past. I’m partial to Nuño’s base rates of technological disruption which led him to posit “30% that AI will undergo a ‘large and robust’ discontinuity, at the rate of maybe 2% per year if it does so.” The beauty of his analysis is that you can inspect it. I think Nuño and I would converge, or get close to it, if we hashed it out.
Other great examples include Tom Davidson’s compute-centric model, Roodman's “materialist” model, and Joe Carlsmith’s six ingredients model. These models are full of prose, yet unlike pure reasoning, they have facts you can substitute and numbers you can adjust that directly change the c |
c2bfaa96-390e-4549-bf51-177bae964877 | trentmkelly/LessWrong-43k | LessWrong | Rainmaking
There is an old TV show called 'The Wire', and in it a con artist fleeces a mark.
The mark's friend tries to wake him up, whereupon the mark insists that the con artist has helped him. The mark's friend tells him to 'wake up, he rain-made you!'
"What?"
"A guy says if you pay him, he can make it rain. You pay him. If and when it rains, he takes the credit. If and when it doesn't, he finds reasons for you to pay him more."
This is a neat tv show moment, to be sure, but it is also a pattern that you will find everywhere in the corporate world.
A given product has a few devs, who do the work, a few bosses, who fire devs that suck, and a squad of rainmakers who mess around on their phones. They are called Security Consultants, Architects, Scrum Masters... but what they all have in common is the same thing that an old saying recites about political doctrines.
They can never fail, only be failed. Paying security guys doesn't stop you from getting hacked. Paying <insert political trend> trainers doesn't stop you from getting sued. There is, in fact, no world state that can ever indicate that paying one of these folks was a mistake. Either their flag is checked, in which case all is well, or it is not, in which case they need more resources.
Lesswrongers will be familiar with the old concept of 'make your beliefs pay rent'. It is terrifying how rare basic comprehension of this notion is in the organizations we depend upon. People will accept unfalsifiable axioms as doctrine without the slightest qualm, and then you are gritting your teeth through another round of <insert buzzword> exercises.
They trade upon the credit earned by the few legitimate practitioners of their occupations, who genuinely do consult on security or master scrums. It can be hard to tell the two apart, to grok whether your rainmaker is toting a cloud seeder or thoughts and prayers.
The only thing I've ever seen work was essentially formalizing the above. Have everyone give 'failure |
69466346-e4f2-44d3-ab78-dddd33d6f853 | trentmkelly/LessWrong-43k | LessWrong | Exploring Luck
Hey guys,
A concept I have been recently exploring is luck. My understanding of it has been inspired by Nassim Taleb, Naval Ravikant, James Austin, and Marc Andreessen. In general, I found that there is a sense of inner comfort knowing the role luck plays in life.
I thought I would share what I wrote here (note it is a copy and paste). I am curious 1) whether, if you are familiar with the work of the people above, you think I accurately captured their points, and 2) more generally, whether you agree. If you notice any flaws or issues, please comment, as I would like to update my thinking on it (and the blog post).
Hope you guys enjoy it and thanks for reading!
------------------------
Earlier this year, I wrote an essay on the role luck (or chance, randomness, etc.) plays in achieving success and wealth. The key insight was this: “Outcomes for many things in life are more influenced by randomness than we think. Said another way, we often confuse luck for skill, randomness for determinism, and coincidence for causality.”
To emphasize my earlier point, skill, hard work, intellect, and other positive personality traits matter. But they are necessary—not sufficient. They do not guarantee success. Luck clearly plays a role, especially in fields such as entrepreneurship and investing, where outcomes are impacted by random events.
But I wanted to revisit this topic because I had a lingering question in my head: Can we make our luck?
What Is Luck?
Before we can determine whether we can create luck, it is essential to define it. The best definition I have seen comes from James Austin, author of Chase, Chance, and Creativity. I was exposed to James Austin’s work through Marc Andreessen’s blog.
Austin defines luck as “something fortuitous that happens unpredictably without discernable human intention.” While this is a great definition, it requires some unpacking—especially the latter half.
Luck is fortuitous and unpredictable. We can’t time it, as we don’t know |
b7e7cd8c-5da4-4051-9c8b-63e19a8158b5 | trentmkelly/LessWrong-43k | LessWrong | What are the claims/arguments made in Eternity in Six Hours?
The FHI paper, Eternity in Six Hours, asserts that traveling between galaxies and colonizing them is a relatively simple task for a star-spanning civilization. The paper even asserts that humanity itself could accomplish this in the foreseeable future. Because of this, the paper says the Fermi Paradox is even "sharper" than it was already. (This paper was published five years before another FHI paper, Dissolving the Fermi Paradox, which shares one of the same authors.)
Yet even if we consider the Fermi Paradox resolved, it's still valuable to know what's possible to accomplish with regards to space colonization. Arguments for the astronomical value of the far future assume space colonization is possible, hence we want to evaluate arguments for why this is true. |
ce01ad60-62bf-423f-b5b1-3fd133d4e55e | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post1878
(I think Carl Shulman came up with the “hacking the SSH server” example, thanks to him for that. Thanks to Ryan Greenblatt, Jenny Nitishinskaya, and Ajeya Cotra for comments.) EDIT: I recommend reading my discussion with Oli in the comments for various useful clarifications.

In my opinion, the prototypical example of an action which an AI can take which is unacceptably bad even if it happens very rarely (aka a catastrophic failure) is something like “the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter”. As a full story, this looks something like:

* At some point when the AI is quite powerful and useful but not totally intellectually self-sufficient yet, the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter.
* Once it has root access, it modifies the SSH server so that whenever any humans log into the datacenter, the AI is able to intercept any of their commands or the data that they see.
* The AI gets increasingly powerful over the next month, as the human ML researchers with AI assistance continue to make progress (unaware that some things that they’re seeing are faked) and various training runs/self-improvement loops continue.
* After a month the AI is powerful enough that it decides it’s worth it to take overtly aggressive actions to ensure the physical security of its data centers and allow it to build lots of factories; this is probably the point at which we die, if we die.

This is in contrast to a catastrophic action like: the AI figures out how to develop Drexlerian molecular nanotechnology, and pays someone online to synthesize some proteins then mix them together, and now has nanobots.

The key point here is that I’m imagining that the catastrophic action isn’t the AI doing something incredibly hard, it’s the AI doing something relatively easy that is entirely a zero-sum action that removes control of the situation from humans. (By zero-sum, I mean something like: building a computer produces real economic value and so is not zero-sum, while stealing someone’s computer is just a transfer of control over items that already exist, so is zero-sum.)

I often think about AI alignment by splitting the problem into the low-stakes and high-stakes regimes, as suggested by Paul here, though I have some reservations about this framing. Under this split, in the high-stakes problem, we assume that we have access to a catastrophe classifier which can expensively tell you whether a proposed AI action is catastrophically bad, and we want to ensure that the AI doesn’t do things that are bad according to the catastrophe detector.

Sometimes, people are confused by this high-stakes alignment set-up, saying something like “but if your AI is powerful enough to take over the world in a single action, and you have an oversight process that you trust to evaluate whether actions are catastrophic or not, why not just ask the AI to do some small number of incredibly powerful actions that will save the world (eg solve the alignment problem), and run the overseer on the proposed actions to make sure they aren’t bad?” My answer is that I’m not assuming the AI is powerful enough to take over the world in a few actions, I’m just saying that it’s in a situation precarious enough that we might lose a lot of control due to a few small but crucial changes in the world. 
Eventually the AI does need to be powerful enough to defend its datacenter and suppress human opposition. But if it can prevent humans from knowing what’s going on in the datacenter (or realizing something is amiss), it can slowly grow its power over the course of many actions. |
975af482-01ed-4a8a-b244-9bbed51c556a | trentmkelly/LessWrong-43k | LessWrong | A Tree of Light
let me sing you a song
imagine
you float in a sphere of water
which rests in a space of infinite darkness
you are deeply cold
and you are deeply alone
you touch the edge of your sphere with a finger
and see a thread of gold, pulse out into the void
oh look, someone else, they look as cold as you are
the warmth of the thread between you feels so good
by its brief light you see their face, and the sphere in which they float
you both start sending light to one another
you're not so cold now
you can see the bounds of your waters
and you begin to grow
you find others to send light to
and they find you
now you're connecting in synchrony with many
and you see they're connecting with one another
it feels so warm, and by the light you all grow larger and larger
no one will be left alone, you’ll find them all
you spread your fingers
and a glorious tree erupts from your sphere
connecting everyone
creating more warmth, and more light than you ever dreamt possible
but what you see by this light horrifies you
you and your friends are not alone
you are surrounded by legions of others
trapped in cold darkness
there are so many
that even if your cluster sent all of its light
it would be but a drop, in an ocean of darkness
you cannot enjoy the light you have found
it seems hopeless
but from a great distance away, a tendril of light finds you
and it sings you a song
a song about consciousness
about light and warmth
about all those in darkness
it calls you to follow it away from the cluster in which you were born
to join others who see what you have seen
in building a grand tree of light
a tree so bright
and so tall
it can fill this space in which you have all been born
with light and warmth from end to end
----------------------------------------
I wrote this after a long online discussion with a friend. We’ve been helping one another make sense of some messy feelings. Namely, being 28 and finding ourselves in possession of:
* Agency: Skills, resources, and connections |
6920132b-0096-428d-96de-7349693ed6d4 | trentmkelly/LessWrong-43k | LessWrong | Draft of Edwin Jaynes' "Probability Theory: The Logic of Science" online, with lost chapter 30
http://thiqaruni.org/mathpdf9/(86).pdf
The book didn't include Chapter 30 - "MAXIMUM ENTROPY: MATRIX FORMULATION"
Opening in adobe seems to work out better for me.
|
1658626b-1e67-486c-9ddb-f5d4ff364cd5 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Francisco Meetup: Projects
Discussion article for the meetup : San Francisco Meetup: Projects
WHEN: 16 November 2015 06:15:00PM (-0800)
WHERE: 1390 Market St. San Francisco, CA
(NOTE: This previously said it was on Howard St. It's been moved.)
We'll be meeting to work on projects. Bring something to work on, come up with something there, or help other people out! As always, I can be reached at 301-458-0764 if you need to be let in.
Discussion article for the meetup : San Francisco Meetup: Projects |
5a45338f-7c55-4e2d-b52c-2445f78aef87 | trentmkelly/LessWrong-43k | LessWrong | Meditation Trains Metacognition
Summary: Some forms of meditation may train key skills of metacognition, serving as powerful tools for applied rationality. I expect aspiring rationalists to advance more quickly with a regular practice of mindfulness meditation.
The state of scientific research on meditation isn't great. Although there's evidence that it does something good--probably something involving down-regulation of negative affect--there are many basic questions1 that either haven't been studied at all or haven't been studied well enough to let me update much. According to a meta-analysis by Sedlmeier et al., one problem with evaluating the research is that it's hard to pin down what meditation is, let alone what it does or why it does it. In their words,
> ...two of our main findings are that (a) meditation has a substantial impact on psychological variables, indicated by a medium-sized (e.g., Cohen, 1988) global effect, and (b) its effects might be somewhat stronger for negative emotional than for cognitive variables. Due to the lack of a comprehensive theoretical approach (and results from studies derived therefrom), it is still unclear how meditation works... Moreover, a closer look at the studies included in the meta-analysis revealed that they differed in many respects that might have affected the results.2
So I just want to be clear that I don't mean in this post to wholeheartedly recommend daily meditation as the best possible use of 1/24th of your time.
Nevertheless, my own experience and reports from several of my friends suggest a specific cognitive result from a certain flavor of meditation that will be very good news for rationality if we can reliably reproduce it.3 In a recent post, Julia Galef pointed out exactly what I consider to be far and away the greatest benefit I've reaped from my meditative practices over the years. She wrote,
> Meditation seems to train you to stop automatically identifying with all of your thoughts, so that, for example, when the thought "John |
ee772ff1-a374-47c7-afad-5fbf518a1928 | trentmkelly/LessWrong-43k | LessWrong | My Model of Gender Identity
People sometimes say that it's impossible to rationally explain gender identity. I've taken this as a challenge, and spent the past few months trying to put together a more sophisticated model than I’ve seen others espouse. I’d like to share what I have in this post. First, though:
Some pitfalls I've avoided with this
One common origin of trouble in discussion around transgenderism is that when a cis person is first exposed to the concept of trans people, they’ll understand that this violates their long-held model of male=man | female=woman, and, given low openness, political prejudice, or other biases, often start a motivated search for other, more intuitive reasons for trans people to exist besides gender dysphoria. These can include “trans people are perverts/confused/mentally ill” and so on, which is a problem insofar as gender dysphoria is a hypothesis at least worth considering.[1] I have experience with what I consider dysphoria, though, and have been surrounded by people who identify as trans for many years, so this failure mode mostly doesn’t apply to me.
There are also some common failure-modes in trans-positive spaces, though. For example: Gender is a keystone in the belief systems of many if not most trans people, so theories on the topic that try to account for some of the more intuitive observations made by the first group can often feel threatening to the identities of those involved in the conversation (e.g. autogynephilia theory, social contagion, probably others). This leads to ignoring certain realities that legitimately need to be explained. On top of this, many vocal[2] trans people are strong members of the Blue Tribe; this gives them an external motive not to pay dues to the possibility that gendered tendencies are biologically innate, so as to preserve feminist blank-slatism and openness to queer variety. This also leads to the discounting of valuable information; here's an example of a trans woman very briefly treating and then discountin |
b6f4b510-2588-447b-a10a-80a01c08231c | trentmkelly/LessWrong-43k | LessWrong | The impact of whole brain emulation
At some point in the future we may be able to scan someone's brain at very high resolution and "run" them on a computer. [1] When I first heard this as a teenager I thought it was interesting but not hugely important. Running people faster or slower and keeping backups came immediately to mind, and Wikipedia adds space travel, but those three by themselves don't seem like they change that much. Thinking speed doesn't seem to be a major limiting factor in coming up with good ideas, we generally only restore from backups in cases of rare failure, and while space travel would dramatically affect the ability of humans to spread [2] it doesn't sound like it changes the conditions of life.
This actually undersells emulation by quite a lot. For example, "backups" let you repeatedly run the same copy of a person on different information. You can identify a person when they're at their intellectual or creative best, and give them an hour to think about a new situation. Add in potentially increased simulation speed and parallelism, and you could run lots of these, looking into all sorts of candidate approaches to problems.
With emulations you can get around the mental overhead of keeping all your assumptions about a direction of thought in your mind at once. I might not know if X is true, and spend a while thinking about what should happen if it's true and another while about what if it's not, but it's hard for me to get past the problem that I'm still uncertain about X. With an emulation that you can reset to a saved state, however, you could have multiple runs where you give some emulations a strong assurance that X is true and some a strong assurance that X is false.
You can also run randomized controlled trials where the experimental group and the control group are the same person. This should hugely bring down experimental cost and noise, allowing us to make major and rapid progress in discovering what works in education, motivation, and productivity.
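A rough way to see why the noise drops so much (my gloss, not spelled out in the original): the variance of a measured treatment effect is

$$\mathrm{Var}(X_T - X_C) = \mathrm{Var}(X_T) + \mathrm{Var}(X_C) - 2\,\mathrm{Cov}(X_T, X_C),$$

and with ordinary, independent subjects the covariance term is near zero, while with the experimental and control "groups" being copies of the same saved state, almost all person-level variation is shared, the covariance is large, and most of the noise cancels.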
(Backups st |
56e770c4-0887-471e-b61f-fa05488888c3 | trentmkelly/LessWrong-43k | LessWrong | Effective Writing
Granted, writing is not very effective. But some of us just love writing...
Earning to Give Writing: Which are the places that pay 1 USD or more per word?
Mind Changing Writing: What books need to be written that can actually help people effectively change the world?
Clarification Writing: What needs to be written because it is only through writing that these ideas will emerge in the first place?
Writing About Efficacy: Maybe nothing else needs to be written on this.
What should we be writing about if we have already been training the craft for a very long time? What has not yet been written, what is the new thing?
The world surely won't save itself through writing, but it surely won't write itself either.
|
b6e74742-0bd4-475b-a769-a29758091d9c | trentmkelly/LessWrong-43k | LessWrong | Eat the cute animals instead
Note: I was trying to criticize the hypocrisy of people who treat farm animals in ways they would never treat pets. It seems people missed this.
----------------------------------------
Here's a 150% serious explanation of why eating cats and dogs is a great idea! A lot of people reading this have no problem eating cows and chickens, but for some odd reason become squeamish when the meat is cute. It's time to put aside your inhibitions and start going to animal shelters instead of grocery stores. Here's why:
Better for the environment
Growing crops, then feeding them to livestock, uses a lot of land and water. Cats and dogs, however, are so common that animal shelters are overwhelmed. In fact, they are invasive species which are destroying the ecosystem. Capturing and eating cats and dogs would help the environment, while raising cows and chickens only hurts it.
Resists factory farming
Factory-raised animals are in horrible conditions.
Meanwhile, there is no comparable factory farming operation for cats and dogs, so you don't need to worry about supporting this (if you don't pay extra for "purebred" status).
Fewer animals get eaten
Cats and dogs are obligate carnivores, meaning that they need to eat meat to live. Leaving a cat or dog alive means that many other animals get eaten. If you think predation is wrong, why aren't you reducing it?
They're delicious
They even consider themselves delicious!
Why haven't you started already? If you would eat cows and chickens but not cats and dogs, what would it take to change your mind? |
0d469976-2324-4c43-9db9-7ab744618a88 | trentmkelly/LessWrong-43k | LessWrong | 2018 AI Alignment Literature Review and Charity Comparison
Cross-posted to the EA forum.
Introduction
Like last year and the year before, I’ve attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. It appears that once again no-one else has attempted to do this, to my knowledge, so I've once again undertaken the task.
This year I have included several groups not covered in previous years, and read more widely in the literature.
My aim is basically to judge the output of each organisation in 2018 and compare it to their budget. This should give a sense for the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.
Note that this document is quite long, so I encourage you to just read the sections that seem most relevant to your interests, probably the sections about the individual organisations. I do not recommend you skip to the conclusions!
I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued.
Methodological Considerations
Track Records
Judging organisations on their historical output is naturally going to favour more mature organisations. A new startup, whose value all lies in the future, will be disadvantaged. However, I think that this is correct. The newer the organisation, the more funding should come from people with close knowledge. As organisations mature, and have more easily verifiable signals of quality, their funding sources can transition to larger pools of less expert money. This is how it works for startups turning into public companies and I think the same model applies here.
This judgement involves analysing a large number of papers relating to Xrisk that were |
747ac77d-6c29-4442-9f34-4cb2ca22fdb9 | trentmkelly/LessWrong-43k | LessWrong | A Triple Decker for Elfland
In 2021 one of the kids in Somerville started Elfland, a miniature community, in a vacant lot. There had been a gas station there which was demolished to build housing, but with construction delays the lot was open for a while. When construction resumed there were calls to "defend Elfland", and while the original location closed, it's now on the Somerville Community Path just west of Willow Ave.
My kids like it a lot, and Anna and I decided to build something for it. Anna wanted to make a house, and I sold her on making a triple decker. These are three-unit buildings, one on each of three floors, that are common in Somerville and other older Boston-area neighborhoods.
We cut some 2x4s to size and glued them up:
I did the rough sanding with the belt sander, and then Anna did the finish sanding:
We primed and painted together, and Anna put on the doors and windows:
We brought it over today:
Here it is, with Somerville's best grocery store and the Elfland Museum of Hopeful Art:
I'm strongly in favor of constructing more housing in Somerville, though we'll need a lot more than this! |
0af9c2c5-abf1-47ae-887f-72c5ba5d29e4 | trentmkelly/LessWrong-43k | LessWrong | 2021 Darwin Game - Benthic
Welcome to the depths of the sea, far beyond the reach of sunlight. Here ye shall feast upon the dead.
| Name | Carrion | Leaves | Grass | Seeds | Detritus | Coconuts | Algae | Lichen |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Benthic | 10 | 0 | 0 | 0 | 1000 | 0 | 0 | 0 |
72 species were submitted to the Benthic.
Generations 0-25
We start with a population crash, as usual. After that, the populations almost look like they stabilize. Less than half of our species have gone extinct.
| Goes Extinct in Generation | Species |
| --- | --- |
| 6 | Salpophore |
| 6 | Tasty |
| 7 | Marine Snow Worm |
| 7 | benthic bottom feeder |
| 8 | Scavenger |
| 9 | Isopoid |
| 9 | Detritus Eater Eater |
| 10 | Scavenger Shrimp |
| 10 | Rubble on Double |
| 10 | Sea Zombie |
| 11 | Fangliphore |
| 11 | Grumpy Jellyfish |
| 11 | Doomed |
| 11 | Happy Hunter |
| 11 | A new hope |
| 12 | Krill1 |
| 12 | Sp0r3 |
| 12 | Cnidophore |
| 12 | Cephalophore |
| 12 | Not Picky |
| 13 | Detritus Eater Eater Eater |
| 14 | Sea Snail |
| 14 | Hermit Crab |
| 14 | Space Horse |
| 14 | BeauOmni3 |
| 14 | aBa-CG751 |
| 15 | Flounder |
| 16 | Microbet |
| 16 | Slowmo |
| 17 | aBa-D273 |
| 18 | Hund |
| 18 | Glyptoderm |
| 18 | Squid Kid |
| 19 | Anglerfish |
| 19 | Toxic Squid |
| 20 | Torpedo Jelly |
| 23 | Willi |
| 24 | cg-riverfish |
Here are our populations at generation 25.
| Population | Species |
| --- | --- |
| 170 | Barnacle |
| 158 | Deepwater Devourer |
| 124 | The Infinite Depths |
| 114 | FOERDI 1,6,4,3,D |
| 98 | Flutz |
| 88 | DetriusSpeedTank |
| 66 | Spanish Beard |
| 55 | BeauOmni1 |
| 50 | Silli |
| 46 | FOERDI 1,0,10,1,D |
| 45 | aBa-D262 |
| 43 | Tilli |
| 40 | FOERDI 0,0,0,10,D |
| 39 | FOERDI 1,3,7,3,D |
| 35 | Pilli |
| 34 | Blood Shark |
| 33 | FOERDI 1,10,0,1,D |
| 32 | Xilli |
| 30 | Chilli |
| 29 | omastar |
| 27 | Elephantfish |
| 27 | Abyssal eel |
| 24 | Billi |
| 19 | Kraken2 |
| 17 | Space Fiddler |
| 15 | Giant Squid |
| 10 | The Tideless Depths |
| 5 | FOERDI 1,9,1,1,D |
| 5 | Soonbegon |
| 2 | gonnabiteit |
| 2 | Deap Sea Tortoise |
| 2 | Flesh-Eating Clam1 |
| 1 | Killi |
| 1 | Tachyphore |
As you might guess, the Barnacle is an invincible Detritus eater and the Deepwater Devourer is a carnivore. However, the Deepwater Devourer is not an apex predator. It has only 3 Attack and 6 speed. (It also has Antivenom and the ability to consume Detritus.)
Generations 25-100
Barnacles dominate.
Goes Extinct in Generation Species 6 Salpophore 6 Tasty 7 Marine Snow Worm 7 benthic bottom feeder 8 Scavenge |
570ca366-9434-4eab-ab50-02c5d47d48f8 | trentmkelly/LessWrong-43k | LessWrong | Uncertainty
This is part of a sequence on decision analysis.
Decision-making under certainty is pretty boring. You know exactly what each choice will do, and so you order the outcomes based on your preferences, and pick the action that leads to the best outcome.
Human decision-making, though, is made in the presence of uncertainty. Decision analysis - careful decision making - is all about coping with the existence of uncertainty.
Some terminology: a distinction is something uncertain; an event is each of the possible outcomes of that distinction; a prospect is an event that you have a personal stake in, and a deal is a distinction over prospects. This post will focus on distinctions and events. If you're comfortable with probability just jump to the four bolded questions and make sure you get the answers right. Deals are the interesting part, but require this background.
I should say from the very start that I am quantifying uncertainty as "probability." There is only one 800th digit of Pi (in base 10), other people already know it, and it's not going to change. I don't know what it is, though, and so when I talk about the probability that the 800th digit of Pi is a particular number what I'm describing is what's going on in my head. Right now, my map is mostly blank (I assign .1 probability to 0 to 9); once I look it up, the map will change but the territory will not. I'll use uncertainty and probability interchangeably throughout this post.
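As a toy sketch of that map/territory point (mine, not part of the post's terminology): the "map" is just an assignment of probabilities to the ten possible digits, and looking the digit up changes the map while leaving the territory untouched.

```python
# Hypothetical illustration: my "map" of the 800th digit of Pi, before and after looking it up.
# (The observed digit below is made up for the example.)

prior_map = {digit: 0.1 for digit in range(10)}  # mostly blank map: 0.1 on each digit

def look_it_up(observed_digit):
    """Collapse the map onto whatever the lookup reveals; the territory never changed."""
    return {digit: (1.0 if digit == observed_digit else 0.0) for digit in range(10)}

posterior_map = look_it_up(7)          # pretend the lookup showed a 7
print(prior_map[7], posterior_map[7])  # 0.1 before, 1.0 after
```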
The 800th digit of Pi (in base 10) is a distinction with 10 possible events, 0 through 9. To be sensible, distinctions should be clear and unambiguous. A distinction like "the temperature tomorrow" is unclear: the temperature where, and at what time tomorrow? A distinction like "the maximum temperature recorded by the National Weather Service at the Austin-Bergstrom International Airport in the 24 hours before midnight (EST) on 11/30/2011" is unambiguous. Think of it like PredictionBook: you want to be able to create this distinction |
b5834304-d087-4d2f-a296-35b1b9ec7505 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI
I have a mix of views on AI x-risk in general — and on OpenAI specifically — that no one seems to be able to remember, due to my views not being easily summarized as those of a particular tribe or social group or cluster. For some of the views I consider most neglected and urgently important at this very moment, I've decided to write them here, all-in-one-place to avoid presumptions that being "for X" means I'm necessarily "against Y" for various X and Y.
Probably these views will be confusing to read, especially if you're implicitly trying to pin down "which side" of some kind of debate or tribal affiliation I land on. As far as I can tell, I don't tend to choose my beliefs in a way that's strongly correlated with or caused by the people I affiliate with. As a result, I apologize in advance if I'm not easily remembered as "for" or "against" any particular protest or movement or trend, even though I in fact have pretty distinct views on most topics in this space... the views just aren't correlated according to the usual social-correlation-matrix.
Anyhoo:
1. **Regarding "pausing"**: I think pausing superintelligence development using collective bargaining agreements between individuals and/or states and/or companies is a good idea, along the lines of FLI's recent open letter, "[Pause Giant AI Experiments](https://futureoflife.org/open-letter/pause-giant-ai-experiments/)", which I signed early and advocated for.
2. **Regarding OpenAI,** I feel overall positively about them:
1. I think OpenAI has been a net-positive influence for reducing x-risk from AI, mainly by releasing products in a sufficiently helpful-yet-fallible form that society is now able to engage in less-abstract more-concrete public discourse to come to grips with AI and (soon) AI-risk.
2. I've found OpenAI's behaviors and effects as an institution to be well-aligned with my interpretations of what they've said publicly. That said, I'm also sympathetic to people other than me who expected more access to models or less access to models than what OpenAI has ended up granting; but my personal assessment, based on my prior expectations from reading their announcements, is "Yep, this is what I thought you told us you would do... thanks!". I've also found OpenAI's various public testimonies, especially to Congress, to move the needle on helping humanity come to grips with AI x-risk in a healthy and coordinated way (relative to what would happen if OpenAI made their testimony and/or products less publicly accessible, and relative to OpenAI not existing at all). I also like their [charter](https://openai.com/charter), which creates tremendous pressure on them from their staff and the public to behave in particular ways. This leaves me, on-net, a fan of OpenAI.
3. Given their recent post on [Governance of Superintelligence](https://openai.com/blog/governance-of-superintelligence), I can't tell if their approach to superintelligence is something I do or will agree with, but I expect to find that out over the next year or two, because of the openness of their communications and stance-taking. And, I appreciate the chance for me, and the public, to engage in dialogue with them about it.
4. I think the world is vilifying OpenAI too much, and that doing so is probably net-negative for existential safety. Specifically, I think people are currently over-targeting OpenAI with criticism that's easy to formulate because of the broad availability of OpenAI's products, services, and public statements. This makes them more vulnerable to attack than other labs, and I think piling onto them for that is a mistake from an x-safety perspective, in the "shooting the messenger" category. I.e., over-targeting OpenAI with criticism right now is pushing present and future companies toward being less forthright in ways that OpenAI has been forthright, thereby training the world to have less awareness of x-risk and weaker collective orientation on addressing it.
3. **Regarding Microsoft,** I feel quite negatively about their involvement in AI:
1. **Microsoft should probably be subject to federal-agency-level sanctions** — from existing agencies, and probably from a whole new AI regulatory agency — for their reckless deployment of AI models. Specifically, Microsoft should probably be banned from deploying AI models at scale going forward, and from training large AI models at all. I'm not picky about the particular compute thresholds used to define such a ban, as long as the ban would leave Microsoft completely out of the running as an institution engaged in AGI development.
2. **I would like to see the world "buy back" OpenAI from Microsoft**, in a way that would move OpenAI under the influence of more responsible investors, and leave Microsoft with some money in exchange for their earlier support of OpenAI (which I consider positive). I have no reason to think this is happening or will happen, but I hereby advocate for it, conditional on (a) (otherwise I'd worry the money would just pay for more AI research from Microsoft).
3. I have some hope that (a) and (b) might be agreeable from non-x-risk perspectives as well, such as "Microsoft is ruining the industry for everyone by releasing scary AI systems" or "Microsoft clearly don't know what they're doing and they're likely to mess up and trigger over-regulation" or something like that. At the very least, it would be good to force a product recall of their most badly-behaved products. You know which ones I'm talking about, but I'm not naming them, to avoid showing up too easily in their search and upsetting them and/or their systems.
4. FWIW, I also think Microsoft is more likely than most companies to treat future AI systems in abusive ways that are arguably intrinsically unethical irrespective of x-risk. Perhaps that's another good reason to push for sanctions against them, though it's probably not at present a broadly-publicly-agreeable reason.
4. **Regarding Facebook/Meta:**
1. Years ago, I used to find Yann LeCun's views on AI to be thoughtful and reasonable, even if different from mine. I often agreed with his views along the lines that AI-applicable and well-codified laws, not just "alignment" or "utility functions", would be crucial to making AI safe for humanity.
2. Over the years roughly between 2015 and 2020 (though I might be off by a year or two), it seemed to me like numerous AI safety advocates were incredibly rude to LeCun, both online and in private communications.
3. Now, LeCun's public opinions on AGI and AI x-risk seem to be of a much lower quality, and I feel many of his "opponents" are to blame for lowering the quality of discourse around him.
4. As an AI safety advocate myself, I feel regretful for not having spoken up sooner in opposition to how people treated LeCun (even though I don't think I was ever rude to him myself), and I'm worried that more leaders in AI — such as Sam Altman, Demis Hassabis, or Dario Amodei — will be treated badly by the public in ways that that turn out to degrade good-faith discourse between lab leaders and the public.
5. **Regarding AI x-risk in general**, I feel my views are not easily clustered with a social group or movement. Here they are:
1. Regarding my background: my primary professional ambition for the past ~12 years has been to reduce x-risk: co-founding CFAR, earning to give, working at MIRI, founding BERI, being full-time employee #1 at CHAI, co-founding SFF, SFP, and SFC, and Encultured. I became worried about x-risk in 2010 when Prof. Andrew Ng came to Berkeley and convinced me that AGI would be developed during our lifetimes. That was before people started worrying publicly about AGI and he started saying it was like overpopulation on Mars.
2. Regarding fairness, bias-protections, and employment: they're important and crucial to x-safety, and should be unified with it rather than treated as distractions. In particular, I feel I care a lot more about unfairness, bias, and unemployment than (I think) most people who worry about x-risk, in large part because preventing the fabric of society from falling apart is crucial to preventing x-risk. I have always felt kinda gross using a "long term" vs "short term" dichotomy of AI concerns, in part because x-risk is a short term concern and should not be conflated with "longtermism", and in part because x-risk needs to be bundled with unfairness and bias and unemployment and other concerns relevant to the "fabric of society", which preserve the capacity of our species to work together as a team on important issues. These beliefs are summarized in an earlier post [Some AI research areas and their relevance to existential safety](https://www.lesswrong.com/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1) (2020). Moreover, I think people who care about x-risk are often making it worse by reinforcing the dichotomy and dismissively using terms like "near termist" or "short termist". We should be bundling and unifying these concerns, not fighting each other for air-time.
3. Regarding "pivotal acts": I think that highly strategic consequentialism from persons/institutions with a lot of power is likely to make x-risk worse rather than better, as opposed to trying-to-work-well-as-part-of-society-at-large, in most cases. This is why I have written in opposition to pivotal acts in my post [Pivotal outcomes and pivotal processes.](https://www.lesswrong.com/posts/etNJcXCsKC6izQQZj/pivotal-outcomes-and-pivotal-processes)
4. My "p(doom)": I think humanity is fairly unlikely (p<10%) to survive the next 50 years unless there is a major international regulatory effort to control how AI is used. I also think the probability of an adequate regulatory effort is small but worth pursuing. Overall I think the probability of humanity surviving the next 50 years is somewhere around 20%, and that AI will probably be a crucial component in how humanity is destroyed. I find this tragic, ridiculous, and a silly thing for us to be doing; however, I don't personally think humanity has the wherewithal to stop itself from destroying itself.
5. My "p(AGI)": I also think humanity will develop AGI sometime in the next 10 years and that we probably won't die immediately because of it, but will thereafter gradually lose control of how the global economy works in a way that gets us all killed from some combination of AI-accelerated pollution, resource depletion, and armed conflicts. My maximum-likelihood guess for how humanity goes extinct is here:
[What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs).](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic)
That said, I do think an immediate extinction event (spanning 1-year-ish) following an AGI development this decade is not an absurd concern, and I continue to respect people who believe it will happen. In particular, I think an out-of-control AI singleton is also plausible and not-silly to worry about. I think our probability of extinction specifically from an out-of-control AI singleton is something like 15%-25%. That's higher than an earlier 10%-15% estimate I had in mind prior to observing Microsoft's recent behavior, but still lower than the ~50% extinction probability I'm expecting from multi-polar interaction-level effects coming some years after we get individually "safe" AGI systems up and running ("safe" in the sense that they obey their creators and users; see again my [Multipolar Failure](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) post above for why that's not enough for humanity to survive as a species).
6. **Regarding how to approach AI risk,** again I feel my views are not easily clustered with a social group or movement. I am:
1. **Positive on democracy.** I feel good about and bullish on democratic processes for engaging people with diverse views on how AI should be used and how much risk is okay to take. That includes public discourse, free speech, and peaceful protests. I feel averse to and bearish on imposing my personal views on that outcome, beyond participating in good faith conversations and dialogue about how humanity should use AI, such as by writing this post.
2. **Laissez-faire on protests.** I have strategic thoughts that tell me that protesting AI at this-very-moment probably constitutes poor timing in terms of the incentives created for AI labs that are making progress toward broader acknowledgement of x-risk as an issue. That said, I also think democracy hinges crucially on free speech, and I think the world will function better if people don't feel shut-down or clammed-up by people-like-me saying "the remainder of May 2023 probably isn't a great time for AI protests." In general, when people have concerns that have not been addressed by an adequately legible public record, peaceful protests are often a good response, so at a meta level I think protests often make sense to happen even when I disagree with their messages or timing (such as now).
3. **Somewhat-desperately positive on empathy.** I would like to see more empathy between people on different sides of the various debates around AI right now. Lately I am highly preoccupied with this issue, in particular because I think weak empathy on both sides of various AI x-risk debates is increasing x-risk and other problems in tandem. I am not sure what to do about this, and would somewhat-desperately like to see more empathy in this whole space, but don't know as-yet what I or anyone can do to help, other than just trying to be more empathetic and encouraging the same from others where possible. I say "somewhat-desperately" because I don't actually feel desperate; I tend not to feel desperate about most things in general. Still, this is the issue that I think is more important-and-neglected in service of AI x-safety right now.
Thanks for reading. I appreciate it :) I just shared a lot of thoughts, which are maybe too much to remember. If I could pick just one idea to stick around from this post, it's this:
"Please try to be nice to people you disagree with, even if you disagree with them about how to approach x-risk, even though x-risk is real and needs to be talked about." |
fd175aa9-3a92-4698-aaee-5a2956a073fa | trentmkelly/LessWrong-43k | LessWrong | What's going on with "provability"?
Every so often I hear seemingly mathematical statements involving the concept of being provable. For example:
* I've seen Gödel's Incompleteness Theorem stated as "if a mathematical system is powerful enough to express arithmetic, then either it contains a contradiction or there are true statements that it cannot prove."
* On the AI alignment forum, one of the pinned sequences describes Löb's Theorem as "If Peano Arithmetic can prove that a proof of P would imply the truth of P, then it can also prove P is true".
I find each of these statements baffling for a different reason:
* Gödel: What could it mean for a statement to be "true but not provable"? Is this just because there are some statements such that neither P nor not-P can be proven, yet one of them must be true? If so, I would (stubbornly) contest that perhaps P and not-P really are both non-true.
* Löb: How can a system of arithmetic prove anything? Much less prove things about proofs?
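For concreteness, here are the formal statements I believe those informal glosses correspond to (my own paraphrases, which may themselves be part of what I'm getting wrong). Writing $\Box P$ for "P is provable in the system in question":

* Gödel (first incompleteness, in the Gödel–Rosser form): if a theory $T$ is consistent, effectively axiomatized, and can express basic arithmetic, then there is a sentence $G$ with $T \nvdash G$ and $T \nvdash \neg G$.
* Löb: for any sentence $P$, if $\mathrm{PA} \vdash \Box P \rightarrow P$, then $\mathrm{PA} \vdash P$; equivalently, in internalized form, $\mathrm{PA} \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P$.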
And I also have one more general confusion. What systems of reasoning could these kinds of theorems be set in? For example, I've heard that there are proofs that PA is consistent. Let's say one of those proofs is set in Proof System X. Now how do we know that Proof System X is consistent? Perhaps it can be proven consistent by using Proof System Y? Do we just end up making an infinite chain of appeals up along a tower of proof systems? Or do we eventually drive ourselves into the ground by reaching a system that nobody could deny is valid? If so, why don't we just stop at PA or ZFC?
Oh, speaking of ZFC. There seems to be a debate about whether we should accept the Axiom of Choice. Isn't it...obviously true? I don't really understand this topic well enough to have any particular question about the AC debate, but my confusion definitely extends to that region of concept space.
So here's my question: Where can I learn about "provability" and/or what clarifying insights could you share about it? |
2a2d3c0f-bef7-4611-a5ea-a0bae0c036ac | trentmkelly/LessWrong-43k | LessWrong | "AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics. I mention that upfront for anyone who isn't already familiar, since I understand a link post to some video full of hot takes on AI from some random YouTuber wouldn't be appreciated.
Even more than usual, so far in 2025 there has been a rapid set of developments in the performance of AI agents and programs compared to that of humans, so in this video Hossenfelder summarizes some of the most significant recent breakthroughs and findings.
Here's a summary of the reviews of recent developments in AI covered in the video.
Grok 3
Performance
xAI released its most recent model, Grok 3, a week ago. Grok 3 outperformed the current iterations of competing models (DeepSeek, OpenAI's models, etc.) on most benchmarks--including mathematics, coding, and scientific reasoning. In the last year, the rate of increase in the performance of Grok models has outpaced that of OpenAI's and Anthropic's models, and is now more comparable to that of DeepSeek. One advantage Grok 3 now has over OpenAI and DeepSeek in access to training data is more exclusive data from Twitter/X. Grok 3 is also the first general-purpose AI model to exceed 10^25 FLOP in training compute. This exceeds a threshold set in the European Union AI Act, so Grok 3 will now need to be subject to more safety tests to continue to be usable in the EU.
Application
A current disadvantage of Grok 3 in its application is that the now more standard generative AI function and the more novel 'reasoning' function can't be used at the same time, e.g., to answer queries. Grok 3 also still has the same problems as previous LLMs, including hallucinations and the ease with which it can be jailbroken, e.g., into providing instructions for building bombs or unconventional weapons.
Google
Last week Google announced an AI super-agent, specifically what the company is call |
f4224d8f-9dcf-48d2-bc74-f1e2c3c80939 | trentmkelly/LessWrong-43k | LessWrong | Against Dog Ownership
This essay about dog ownership helped me empathise with dogs, and also caused me to update against getting a dog; if these ideas are accurate, it'd either be much more selfish and cruel than I previously thought, or else would require a lot more focused effort in order to give the dog a meaningful life.
Most of the essay is doing valuable and interesting work staring into the abyss and trying to help you see whether there is a dystopian horror occurring around us. But I'll mostly take away from it a clearer sense of what it looks like for a non-human animal to have purpose and meaning; it feels like a conceptual update that I won't easily be able to forget or ignore. (It has helped me think more clearly about animals and their values much more than most of the philosophical discussion I've seen on the topic, and I found it more useful than much extended discussion about consciousness and pleasure/pain.)
Here's three quotes to help you understand the post and entice you into reading the whole thing.
> After about three days, the dog started following me everywhere. If I sat on the couch to watch tv, the dog would curl-up under my outstretched legs resting on the coffee table. If I sat at the dinner table, it would sit beside me, and watch me throughout the entire meal. If I went to the bathroom, it would follow me to the door and wait outside. At night, the dog curled up in my bed and slept beside me. The dog started walking more, and she would almost always perfectly follow my lead; she walked at just the right pace so she stayed beside me, neither lagging behind my fast stride, nor pulling ahead. On the rare occasions she got distracted by a smell or other dog, I gently tugged on her leash and called her name, and she scurried over to me.
>
> I found it kind of creepy.
>
> Yes, I know, it’s a dog. But still… I felt like I had been granted a level of submissiveness from a sovereign being which I hadn’t earned. All I had done was feed and walk the dog – and I ap |
bd664e4a-c44f-4f40-b412-4b4e34a685b6 | trentmkelly/LessWrong-43k | LessWrong | Scaffolding for "Noticing Metacognition"
tl;dr: I have a more compelling argument for why a generalized "noticing metacognition" skill is useful, than I've previously been able to articulate. And, a simple exercise framework that will probably work decently for most people.
This is basically the parts of Tuning your Cognitive Strategies that were most useful to me, and hopefully a smoother onramp for people who found it confusing.
----------------------------------------
I'm extremely Noticing-pilled.
I think the ability to notice subtle things in your mind, or in the world around you, is one of the foundational rationality skills. It's necessary for building habits[1], for seeing clues that help you solve confusing problems, for diagnosing how you could have thought that faster, and for generally building a mechanistic model of your mind that lets you become a power-user for your brain.
Despite this, I didn't include Noticing in my recent workshops, because it takes awhile to pay off, and many people initially find it disorienting, or overwhelming, or pointless. Instead of teaching people to "notice their metacognition", I just tell them to do "Livelogging", where you write out your train-of-thought in a google doc while you solve a difficult puzzle, and then afterwards can ask questions like "how could I have thought-those-thoughts faster" and "how can I generalize lessons from that?", while reviewing the Livelog record.
Livelogging is maybe 40% as good as generalized Noticing in the workshop context, but it's enough to get the job done, and requires little training.
But, during last weekend's workshop, I ended up giving a spontaneous lecture about Noticing which I think help motivate it much more clearly. So, if you've been skeptical why you should invest in noticing subtle things, I'm curious if this changes your mind.
Seven cognitive states worth noticing
Ultimately, I think it's worthwhile to gain "generalized noticing", where you have a passive ability to:
* a) notice a wide variety of |
5f33490c-4e94-480b-b567-cc05d6f555c5 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Video and Transcript of Presentation on Existential Risk from Power-Seeking AI
*In March 2022, I gave a presentation about existential risk from power-seeking AI, as part a* [*lecture series*](https://www.harvardea.org/agathon) *hosted by Harvard Effective Altruism. The presentation summarized my* [*report*](https://arxiv.org/abs/2206.13353) *on the topic. With permission from the organizers, I'm posting the* [*video*](https://harvard.zoom.us/rec/play/hT1dXzCEEZco4KvxZaEKUh8tnHrRh7_YDi_msL1DiPXy1oPEFeObaXAuajRn3FJQ-AsLzFBmKudTlK_k.-aTDhoAivLi08K79?continueMode=true&_x_zm_rtaid=80OzglV7QPmgT8NKvi3SyA.1651971245922.031c604fd452b30b40ec4caba72be304&_x_zm_rhtaid=318) *here, along with the transcript (lightly edited for clarity/concision) and the* [*slides*](https://docs.google.com/presentation/d/1UE_cAsogrK5i9wvF3YMIZX-iO9qzjevnrYfTxlKL7ns/edit#slide=id.p)*.*
Main Talk
=========
Thanks for having me, nice to be here, and thanks to everyone for coming. I'm Joe Carlsmith, I work at Open Philanthropy, and I'm going to be talking about the basic case, as I see it, for getting worried about existential risk from artificial intelligence, where existential risk just refers to a risk of an event that could destroy the entire future and all of the potential for what the human species might do.
Plan
----
I'm going to discuss that basic case in two stages. First, I'm going to talk about what I see as the high-level backdrop picture that informs the more detailed arguments about this topic, and which structures and gives intuition for why one might get worried. And then I'm going to go into a more precise and detailed presentation of the argument as I see it -- and one that hopefully makes it easier to really pin down which claims are doing what work, where might I disagree and where could we make more progress in understanding this issue. And then we'll do some Q&A at the end.
I understand people in the audience might have different levels of exposure and understanding of these issues already. I'm going to be trying to go at a fairly from-scratch level.
And I should say: basically everything I'm going to be saying here is also in a [report](https://arxiv.org/abs/2206.13353) I wrote last year, in one form or another. I think that's linked on [the lecture series website](https://www.harvardea.org/agathon), and it's also on my website, [josephcarlsmith.com](https://www.josephcarlsmith.com/). So if there's stuff we don't get to, or stuff you want to learn more about, I'd encourage you to check out that report, which has a lot more detail. And in general, there's going to be a decent amount of material here, so some of it I'm going to be mentioning as a thing we could talk more about. I won't always get into the nitty-gritty on everything I touch on.
High-level backdrop
-------------------
Okay, so the high-level backdrop here, as I see it, consists of two central claims.
* The first is just that intelligent agency is an extremely powerful force for transforming the world on purpose.
* And because of that, claim two, creating agents who are far more intelligent than us is playing with fire. This is just a project to be approached with extreme caution.
And so to give some intuition for this, consider the following pictures of stuff that humanity as a species has done:
Here's the City of Tokyo, the Large Hadron Collider, this is a big mine, there's a man on the moon. And I think there's a way of just stepping back and looking at this and going "this is an unusual thing for a species on the planet earth to do." In some sense, this is an unprecedented scale and sophistication of intentional control over our environment.
So humans are strange. We have a kind of "oomph" that gives rise to cities like this. And in particular, these sorts of things, these are things we've done on purpose. We were trying to do something, we had some goal, and these transformative impacts are the result of us pursuing our goals. And also, our pursuit of our goals is structured and made powerful by some sort of cognitive sophistication, some kind of intelligence that gives us plausibly the type of "oomph" I'm gesturing at.
I'm using "oomph" as a specifically vague term, because I think there's a cluster of mental abilities that also interact with culture and technology; there are various factors that explain exactly what makes humanity such a potent force in the world; but very plausibly, our minds have something centrally to do with it. And also very plausibly, whatever it is about our minds that gives us this "oomph," is something that we haven't reached the limit of. There are possible biological systems, and also possible artificial systems, that would have more of these mental abilities that give rise to humanity's potency.
And so the basic thought here is that at some point in humanity's history, and plausibly relatively soon -- plausibly, in fact, within our lifetimes -- we are going to transition into a scenario where we are able to build creatures that are both agents (they have goals and are trying to do things) and they're also more intelligent than we are, and maybe vastly more. And the basic thought here is that that is something to be approached with extreme caution. That that is a sort of invention of a new species on this planet, a species that is smarter, and therefore possibly quite a bit more powerful, than we are. And that that's something that could just run away from us and get out of our control. It's something that's reasonable to expect dramatic results from, whether it goes well or badly, and it's something that if it goes badly, could go very badly.
That's the basic intuition. I haven't made that very precise, and there are a lot of questions we can ask about it, but I think it's importantly in the backdrop as a basic orientation here.
### Focus on power-seeking
I'm going to focus, in particular, on a way of cashing out that worry that has to do with the notion of *power*, and power-seeking. And the key hypothesis structuring that worry is that suitably capable and strategic AI agents will have instrumental incentives to gain and maintain power, since this will help them pursue their objectives more effectively.
And so the basic idea is: if you're trying to do something, then power in the world -- resources, control over your environment -- this is just a very generically useful type of thing, and you'll see that usefulness, and be responsive to that usefulness, if you're suitably capable and aware of what's going on. And so the worry is, if we invent or create suitably capable and strategic AI agents, and their objectives are in some sense problematic, then we're facing a unique form of threat. A form of threat that's distinct, and I think importantly distinct, from more passive technological problems.
For example, if you think about something like a plane crash, or even something as extreme as a nuclear meltdown: this is bad stuff, it results in damage, but the problem remains always in a sense, passive. The plane crashes and then it sits there having crashed. Nuclear contamination, it's spreading, it's bad, it's hard to clean up, but it's never *trying* to spread, and it's not *trying* *to stop you* from cleaning it up. And it's certainly not doing that with a level of cognitive sophistication that exceeds your own.
But if we had AI systems that went wrong in the way that I'm imagining here, then they're going to be actively optimizing against our efforts to take care of the problem and address it. And that's a uniquely worrying situation, and one that we have basically never faced.
Basic structure of the more detailed argument
---------------------------------------------
That's the high-level backdrop here. And now what I'm going to try to do is to make that worry more precise, and to go through specific claims that I think go into a full argument for an existential catastrophe from this mechanism.
I'm going to structure that argument as follows, in six stages. The first is a claim about the timelines: when it will become possible to build relevantly dangerous AI systems. And so there's a lot we can say about that. I think Holden talked about that a bit last week. Some other people in the speaker series are going to be talking about that issue. Ajeya Cotra at Open Philanthropy, who I think is coming, has done a lot of work on this.
I'm not going to talk a lot about that, but I think it's a really important question. I think it's very plausible, basically, that we get systems of the relevant capability within our lifetimes. I use the threshold of 2070 to remind you that these are claims you will live to see falsified or confirmed. Probably, unless something bad happens, you will live to see: were the worries about AI right? Were we even going to get systems like this at all or not, at least soon? And I think it's plausible that we will, that this is an "our lifetime" issue. But I'm not going to focus very much on that here.
I'm going to focus more on the next premises:
* First, there's a thought that there are going to be strong incentives to build these sorts of systems, once we can, and I'll talk about that.
* The next thought is that: once we're building these systems, it's going to be hard, in some sense, to get their objectives right, and to prevent the type of power-seeking behavior that I was worried about earlier.
* Then the fourth claim is that we will, in fact, deploy misaligned systems -- systems with problematic objectives, that are pursuing power in these worrying ways -- and they will have high impact failures, failures at a serious level.
* And then fifth, those failures will scale to the point of the full disempowerment of humanity as a species. That's an extra step.
* And then finally, there's a sixth premise, which I'm not going to talk about that much, but which is an important additional thought, which is that this itself is a drastic reduction in the expected value of the future. This is a catastrophe on a profound scale.
Those are the six stages, and I'm going to talk through each of them a bit, except the timelines one.
Three key properties
--------------------
Let me talk a little bit about what I mean by relevantly dangerous, what's the type of system that we're worried about here. I'm going to focus on three key properties of these systems.
* The first is advanced capability, which is basically just a way of saying they're powerful enough to be dangerous if they go wrong. I operationalize that as: they outperform the best humans on some set of tasks, which when performed at advanced levels grant significant power in today's world. So that's stuff like science or persuasion, economic activity, technological development, stuff like that. Stuff that yields a lot of power.
* And then the other two properties are properties that I see as necessary for getting this worry about alignment -- and particularly, the instrumental incentives for power-seeking -- off the ground. Basically, the second property here -- agentic planning -- says that these systems are agents, they're pursuing objectives and they're making plans for doing that.
* And then three, strategic awareness, says that they're aware, they understand the world enough to notice and to model the effects of seeking power on their objectives, and to respond to incentives to seek power if those incentives in fact arise.
And so these three properties together, I call "APS" or advanced, planning, strategically-aware systems. And those are the type of systems I'm going to focus on throughout. (Occasionally I will drop the APS label, and so just generally, if I talk about AI systems going wrong, I'm talking about APS systems.)
Incentives
----------
So suppose we can build these systems, suppose the timelines thing becomes true. Will we have incentives to do so?
I think it's very likely we'll have incentives to build systems with advanced capabilities in some sense. And so I'm mostly interested in whether there are incentives to build these relevantly agentic systems -- systems that are in some sense trying to do things and modeling the world in these sophisticated ways. It's possible that AI doesn't look like that, that we don't have systems that are agents in the relevant sense. But I think we probably will. And there'll be strong incentives to do that, and I think that's for basically three reasons.
* The first is that agentic and strategically aware systems seem very useful to me. There are a lot of things we want AI systems to do: run our companies, help us design policy, serve as personal assistants, do long-term reasoning for us. All of these things very plausibly benefit a lot from both being able to pursue goals and make plans, and then also, to have a very rich understanding of the world.
* The second reason to suspect there'll be incentives here is just that available techniques for developing AI systems might put agency and strategic awareness on the most efficient development pathway. An example of that might be: maybe the easiest way to train AI systems to be smart is to expose them to a lot of data from the internet or text corpora or something like that. And that seems like it might lead very naturally to a rich, sophisticated understanding of the world.
* And then three, I think it's possible that some of these properties will just emerge as byproducts of optimizing a system to do something. This is a more speculative consideration, but I think it's possible that if you have a giant neural network and you train it really hard to do something, it just sort of becomes something more like an agent and develops knowledge of the world just naturally. Even if you're not trying to get it to do that, and indeed, maybe even if you're trying to get it *not* to do that. That's more speculative.
Of these, basically, I put the most weight on the first. I think the usefulness point is a reason to suspect we'll be actively trying to be build systems that have these properties, and so I think we probably will.
Alignment
---------
### Definitions
Suppose we can build these systems, and there are incentives to build these systems. Now let's talk about whether it will be hard to make sure that they're aligned, or to make sure that their objectives are in some sense relevantly innocuous from the perspective of these worries about power-seeking.
I'm going to give three definitions here that I think will be helpful.
* The first is the definition of *misaligned behavior*. So I'm defining that as unintended behavior that arises in virtue of problems with a system's objectives. This can get fuzzy at times, but the basic thought here is that certain failures of a system look like it's breaking or it's failing in a non-competent way. And then certain forms of failure look like it's doing something competent, it's doing something well, but it's not the thing you wanted it to do. For example, if you have an employee, and the employee gives a bad presentation, that was a failure of competence. If the employee steals your money in some really smart way, that's a misalignment of your employee. Your employee is trying to do the wrong thing. And so that's the type of misaligned behavior I'm talking about.
* *Misaligned power-seeking* is just misaligned behavior that involves power-seeking.
* And then *practically PS-aligned* is the key alignment property I'm interested in, which is basically: that a system doesn't engage in misaligned power-seeking on any of the inputs that it is in fact exposed to. Importantly, it doesn't need to be the case that a system would never, in any circumstances, or any level of capability, or something like that, engage in some problematic form of power-seeking. It's just that in the actual circumstances that the system actually gets used in, it's not allowed to do that. That's what it takes to be practically PS aligned on the definition I'm going to use.
### Instrumental convergence
Those are a few definitions to get us started. Now let's talk about: why might we think it's hard to prevent misaligned power-seeking of this type? I mentioned this hypothesis earlier; it's often called the instrumental convergence hypothesis, and here's how I understand it:
> Misaligned behavior on some inputs strongly suggests misaligned *power-seeking* on those inputs too.
The idea is that there's actually a very close connection between any form of misalignment, any form of problematic objectives being pursued by systems that are relevantly strategic, and misaligned power-seeking in particular. And the connection is basically from this fact that misaligned behavior is in pursuit of problematic objectives, and power is useful for lots of objectives.
There's lots we can query about this thesis. I think it's an important, central piece of the story here, and there are questions we can ask about it. But for now I'm just going to flag it and move on.
### The difficulty of practical PS-alignment
So suppose we're like: okay, yeah, power-seeking could happen. Why don't we just make sure it doesn't? And in particular, why don't we just make sure that these systems have innocuous values, or are pursuing objectives that we're okay seeing pursued? Basically, how do we do that? I think there are two main steps, if you want to ensure practical PS-alignment of the relevant kind.
* First, you need to cause the APS system to be such that the objectives it pursues on some set of inputs, X, do not give rise to misaligned power-seeking.
* And then you need to restrict the inputs it receives to that set of inputs.
And these trade-off against each other. If you make it a very wide set of inputs where the system acts fine, then you don't need to control the inputs it receives very much. If you make it so it only plays nice on a narrow set of inputs, then you need to exert a lot more control at step two.
And in general, I think there are maybe three key types of levers you can exert here. One is: you can influence the system's objectives. Two, you can influence its capabilities. And three, you can influence the circumstances it's exposed to. And I'll go through those each in turn.
### Controlling objectives
Let's talk about objectives. Objectives is where most of the discourse about AI alignment focuses. The idea is: how do we make sure we can exert the relevant type of control over the objectives of the systems we create, so that their pursuit of those objectives doesn't involve this type of power-seeking? Here are two general challenges with that project.
The first is a worry about proxies. It's basically a form of what's known as Goodhart’s Law: if you have something that you want, and then you have a proxy for that thing, then if you optimize very hard for the proxy, often that optimization breaks the correlation that made the proxy a good proxy in the first place.
An example here that I think is salient in the context of AI is something like the proxy of human approval. It's very plausible we will be training AI systems via forms of human feedback, where we say: 10 out of 10 for that behavior. And that's a decent proxy for a behavior that I actually want, at least in our current circumstances. If I give 10 out of 10 to someone for something they did, then that's plausibly a good indicator that I actually like, or would like, what they did, if I understood.
But if we then have a system that's much more powerful than I am and much more cognitively sophisticated, and it's optimizing specifically for getting me to give it a 10 out of 10, then it's less clear that that correlation will be preserved. And in particular, the system now may be able to deceive me about what it's doing; it may be able to manipulate me; it may be able to change my preferences; in an extreme case, it may be able to seize my arm and force me to press the 10 out of 10 button. That's the type of thing we have in mind here. And there are a lot of other examples of this problem, including in the context of AI, and I go through a few in the report.
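As a minimal numerical sketch of that dynamic (a toy simulation invented purely for illustration, not anything from the report): the harder you select on a noisy "approval" proxy, the more the winners end up selected for errors in the proxy rather than for the underlying thing you actually care about.

```python
import numpy as np

# Toy simulation of the proxy worry: "human approval" is modeled as the true
# value of a candidate behavior plus a heavy-tailed error term. We then select
# the candidate with the highest approval and ask how good it actually was.

rng = np.random.default_rng(0)

def true_value_of_proxy_optimum(n_candidates):
    true_value = rng.normal(size=n_candidates)                       # what we actually want
    approval = true_value + rng.standard_cauchy(size=n_candidates)   # noisy proxy of it
    return true_value[np.argmax(approval)]                           # optimize the proxy hard

for n in [10, 1_000, 100_000]:
    picks = [true_value_of_proxy_optimum(n) for _ in range(2_000)]
    print(f"{n:>7} candidates -> mean true value of the approval-maximizing pick: "
          f"{np.mean(picks):+.2f}")

# More selection pressure on the proxy does not buy more of the thing the proxy
# was meant to track: past a point, the winners are just the candidates with the
# largest errors in the approval signal, which is the Goodhart failure mode.
```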
The second problem is what I'm calling a problem with search. This is a problem that arises with a specific class of techniques for developing AI systems, where, basically, you set some criteria, which you take to be operationalizing or capturing the objectives you want the systems to pursue, and then you search over and select for an agent that performs well according to those criteria.
But the issue is that performing well according to the criteria that you want doesn't mean that the system is actively pursuing good performance on those criteria as its goal. So a classic example here, though it's a somewhat complicated one, is evolution and humans. If you think of evolution as a process of selecting for agents that pass on their genes, you can imagine a giant AI designer, running an evolutionary process similar to the one that gave rise to humans, who is like: "I want a system that is trying to pass on its genes." And so you select over systems that pass on their genes, but then what do you actually get out at the end?
Well, you get humans, who *don't* intrinsically value passing on their genes. Instead, we value a variety of other proxy goals that were correlated with passing on our genes throughout our evolutionary history. So things like sex, and food, and status, and stuff like that. But now here are the humans, but they're not optimizing for passing on their genes. They're wearing condoms, and they're going to the moon, they're doing all sorts of wacky stuff that you didn't anticipate. And so there's a general worry that that sort of issue will arise in the context of various processes for training AI systems that are oriented towards controlling their objectives.
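A minimal sketch of that underdetermination (the environment and agents here are invented purely for illustration): two agents with different internal objectives can look identical to the search process, because their objectives coincide on every training episode and only come apart later.

```python
import numpy as np

# Toy illustration of the search problem: selection on training performance
# cannot distinguish an agent that pursues the designer's criterion from one
# that pursues a proxy feature which happens to coincide with it in training.

rng = np.random.default_rng(0)

def make_episodes(n, correlated):
    goal = rng.integers(0, 10, size=n)                                # where the designer's reward is
    cue = goal.copy() if correlated else rng.integers(0, 10, size=n)  # a feature that tracks it in training
    return goal, cue

def goal_seeker(goal, cue):   # actually pursues the selection criterion
    return goal

def cue_seeker(goal, cue):    # pursues a proxy the search can't rule out
    return cue

def score(agent, goal, cue):
    return float(np.mean(agent(goal, cue) == goal))

train = make_episodes(10_000, correlated=True)
deploy = make_episodes(10_000, correlated=False)

for name, agent in [("goal-seeker", goal_seeker), ("cue-seeker", cue_seeker)]:
    print(f"{name}: train score {score(agent, *train):.2f}, "
          f"deployment score {score(agent, *deploy):.2f}")

# Both agents are perfect on the training episodes, so selecting on training
# performance alone cannot tell you which internal objective you actually got.
```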
These are two very broad problems. I talk about them more in the report. And they are reasons for pessimism, or to wonder, about how well we'll be able to just take whatever values we want, or whatever objectives we want an AI system to pursue, and just "put them in there." The "putting them in there" is challenging at a number of different levels.
### Other options
That said, we have other tools in the toolbox for ensuring practical PS alignment, and I'll go through a few here. One is: we can try to shape the objectives in less fine-grained ways. So instead of saying, ah, we know exactly what we want and we'll give it to the systems, we can try to ensure higher-level properties of these objectives. So two that I think are especially nice: you could try to ensure that the system's objectives are always in some sense myopic, or they're limited in their temporal horizon. So an episodic system, a system that only cares about what happens in the next five minutes, that sort of stuff. Systems of that kind seem, for various reasons, less dangerous. Unfortunately, they also seem less useful, especially for long-term tasks, so there's a trade-off there.
Similarly, you can try to ensure that systems are always honest and they always tell you the truth, even if they're otherwise problematic. I think that would be a great property if you could ensure it, but that also may be challenging. There are some options here that are less demanding, in terms of the properties you're trying to ensure about the objectives, but that have their own problems.
And then you can try to exert control over other aspects of the system. You can try to control its capabilities by making sure it's specialized, or not very able to do a ton of things in a ton of different domains.
You can try to prevent it from enhancing its capabilities, I think that's a very key one. A number of the classic worries about AI involve the AI improving itself, getting up in its head, "editing its own source code," or in a context of machine learning, maybe running a new training process or something like that. That's a dicey thing. Generally if you can prevent that, I think you should. It should be your choice, whether a system's capabilities scale up.
And then number three is: you can try to control the options and incentives that the system has available. You can try to put it in some environment where it has only a limited range of actions available. You can try to monitor its behavior, you can reward it for good behavior, stuff like that.
So there are a lot of tools in the toolbox here. All of them, I think, seem like they might have useful applications. But they also seem problematic and dicey, in my opinion, to rely on, especially as we scale up the capabilities of the systems we're talking about. And I go through reasons for that in the report.
### What's unusual about this problem?
So those are some concrete reasons to be worried about the difficulty of ensuring practical PS alignment. I want to step back for a second and ask the question: okay, but what's unusual about this? We often invent some new technology, there are often safety issues with it, but also, often, we iron them out and we work through it. Planes: it's dicey initially, how do you make the plane safe? But now planes are really pretty safe, and we might expect something similar for lots of other technologies, including this one. So what's different here?
Well, I think there are at least three ways in which this is a uniquely difficult problem.
The first is that our understanding of how these AI systems work, and how they're thinking, and our ability to predict their behavior, is likely to be a lot worse than it is with basically any other technology we work with. And that's for a few reasons.
* One is, at a more granular level, the way we train AI systems now (though this may not extrapolate to more advanced systems) is often a pretty black box process in which we set up high-level parameters in the training process, but we don't actually know at a granular level how the information is being processed in the system.
* Even if we solve that, though, I think there's a broader issue, which is just that once you're creating agents that are much more cognitively sophisticated than you, and that are reasoning and planning in ways that you can't understand, that just seems to me like a fundamental barrier to really understanding and anticipating their behavior. And that's not where we're at with things like planes. We really understand how planes work, and we have a good grasp on the basic dynamics that allows us a degree of predictability and assurance about what's going to happen.
Two, I think you have these adversarial dynamics that I mentioned before. These systems might be deceiving you, they might be manipulating you, they might be doing all sorts of things that planes really don't do.
And then three: I think there are higher stakes of error here. If a plane crashes, it's just there, as I said. I think a better analogy for AI is something like an engineered virus, where, if it gets out, it gets harder and harder to contain, and it's a bigger and bigger problem. For things like that, you just need much higher safety standards. And I think for certain relevantly dangerous systems, we just actually aren't able to meet the safety standards, period, as a civilization right now. If we had an engineered virus that would kill everyone if it ever got out of the lab, I think we just don't have labs that are good enough, that are at an acceptable level of security, to contain that type of virus right now. And I think that AI might be analogous.
Those are reasons to think this is an unusually difficult problem, even relative to other types of technological safety issues.
Deployment
----------
But even if it's really difficult, we might think: fine, maybe we can't make the systems safe, and so we don't use them. If I had a cleaning robot, but it always killed everyone's cats -- everyone we sold it to, it killed their cat immediately, first thing it did -- we shouldn't necessarily expect to see everyone buying these robots and then getting their cats killed. Very quickly, you expect: all right, we recall the robots, we don't sell them, we noticed that it was going to kill the cat before we sold it. So there's still this question of, well, why are you deploying these systems if they're unsafe, even if it's hard to make them safe?
And I think there's a number of reasons we should still be worried about actually deploying them anyway.
* One is externalities, where some actor might be willing to impose a risk on the whole world, and it might be rational for them to do that from their own perspective, but not rational for the world to accept it. That problem can be exacerbated by race dynamics where there are multiple actors and there's some advantage to being in the lead, so you cut corners on safety in order to secure that advantage. So those are big problems.
* I think having a ton of actors who are in a position to develop systems of the relevant level of danger makes it harder to coordinate. Even if lots of people are being responsible and safe, it's easier to get one person who messes up.
* I think even pretty dangerous systems can be very useful and tempting to use for various reasons.
* And then finally, as I said, I think these systems might deceive you about their level of danger, or manipulate you, or otherwise influence the process of their own deployment.
And so those are all reasons to think we actually deploy systems that are unsafe in a relevant sense.
Scaling
-------
Having done that though, there's still this question of: okay, is that going to scale up to the full disempowerment of humanity, or is it something more like, oh, we notice the problem, we address it, we introduce new regulations and there are new feedback loops and security guarantees and various things, to address this problem before it spirals entirely out of our control.
And so there's a lot to say about this. I think this is one place in which we might get our act together, though depending on your views about the level of competence we've shown in response to things like the COVID-19 virus and other things, you can have different levels of pessimism or optimism about exactly how much getting our act together humanity is likely to do.
But the main point I want to make here is just that I think there's sometimes a narrative in some of the discourse about AI risk that assumes that the only way we get to the relevant level of catastrophe is via what's called a "fast takeoff," or a very, very concentrated and rapid transition from a state of low capability to a state of very high AI capability, often driven by the AI improving itself, and then there's one AI system that takes over and dominates all of humanity.
I think something like that is a possibility. And the more our situation looks like that -- and there are various parameters that go into that -- the more dangerous it is. But even if you don't buy that story, I think the danger is still very real.
* I think having some sort of warning is helpful but I think it's not sufficient: knowing about a problem is not sufficient to fix it, witness something like climate change or various other things.
* I think even if you have a slow rolling catastrophe, you can just still have a catastrophe.
* And I think you can have a catastrophe in which there are many systems that are misaligned, rather than a single one. An analogy there is something like, if you think about the relationship between humans and chimpanzees, no single human took over, but nevertheless, humans as a whole right now are in a dominant position relative to chimpanzees. And I think you could very well end up in a similar situation with respect to AI systems.
I'll skip for now the question of whether humanity being disempowered is a catastrophe. There are questions we can ask there, but I think it's rarely the crux.
Putting it together (in April 2021)
-----------------------------------
So let's just put it all together into these six premises. I'm going to try to assign rough credence to these premises. I think we should hold this all with a grain of salt. I think it can be a useful exercise. And so each of these premises is conditional on the previous premises being true, so we can multiply them all through to get a final probability.
* At least as of April 2021, I put 65% on the timelines condition: that it becomes possible and financially feasible to build APS systems by 2070,
* 80%, conditional on that, that there will be strong incentives to build those systems,
* 40%, conditional on that, that it will be much harder to develop APS systems that would be practically PS aligned if deployed, than to develop APS systems that would be practically PS misaligned if deployed, but which are at least superficially attractive to deploy, anyway.
* then 65%, conditional on that, that you'll get actual deployed systems failing in high impact ways -- I'm saying a trillion dollars of damage or more,
* then 40%, conditional on that, that this failure scales up to the disempowerment of all of humanity,
* and then 95%, conditional on all of that, that that's an existential catastrophe.
These are some rough numbers. I think this is an exercise to hold with lots of skepticism, but I think it can be useful. And I encourage you, if you're trying to form views about this issue, to really go through and just throw out some numbers and see what you get.
And so overall that leads me to about 5% on all of those premises being true by 2070. And then you want to adjust upwards for scenarios that don't fit those premises exactly.
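Spelled out as a quick calculation (the labels are just shorthand paraphrases of the premises above, and each credence is conditional on the earlier ones):

```python
# The six premises multiply through, since each is conditional on the previous ones.
premises = {
    "APS systems possible and financially feasible by 2070": 0.65,
    "strong incentives to build APS systems":                0.80,
    "practical PS-alignment much harder than misalignment":  0.40,
    "deployed systems fail in high-impact ways":             0.65,
    "failures scale to full human disempowerment":           0.40,
    "that disempowerment is an existential catastrophe":     0.95,
}

p = 1.0
for credence in premises.values():
    p *= credence

print(f"P(all six premises hold) = {p:.3f}")   # -> 0.051, i.e. about 5%
```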
Since writing the report, I've actually adjusted my numbers upwards, to greater amounts of worry, especially for premises two through five. I'm currently at something above 10%, though I haven't really pinned down my current estimates. And I also have some concern that there's a biasing that enters in from having a number of different premises: if you're unwilling to be confident about any of the premises, then having lots of premises will just naturally drive down the final answer, but it can be arbitrary how many premises you include. And so I have some concern that the way I'm setting it up is having that effect too.
That's the overall argument here. We'll go into Q&A. Before doing that, I'll say as a final thing: the specific numbers aside, the upshot here, as I see it, is that this is a very serious issue. I think it's the most serious issue that we as a species face right now. And I think there's a lot to be done, there are a lot of people working on this, but there's also a lot of room for people to contribute in tons of ways. Having talented people thinking about this, and in particular thinking about: how can we align these systems? What techniques will ensure the type of properties in these systems that we need? How can we understand how they work, and then how can we create the incentives, and policy environment, and all sorts of other things, to make sure this goes well?
I think this is basically the most important issue in the world right now, and there's a lot of room to get involved. If you're interested, you can follow-up with me, you can follow-up with other people in the speaker series, and I would love to hear from you. Cool, thanks everybody, we can go to questions.
Q&A
===
**Question:** *So you made the analogy to a pandemic, and I've heard an argument that I think is compelling, that COVID could provide a sufficient or very helpful warning shot for us in terms of preventing something that could be significantly more deadly. There's a good chance that we won't get a warning shot with AI, but I'm wondering what an effective or sufficient warning shot would look like, and is there a way to... I mean, because with COVID, it's really gotten our act together as far as creating vaccines, it's really galvanized people, you would hope that's the outcome. What would be a sufficient warning shot to sufficiently galvanize people around this issue and really raise awareness in order to prevent an existential crisis?*
**Response:** I feel maybe more pessimistic than some about the extent to which COVID has actually functioned as a warning shot of the relevant degree of galvanization, even for pandemics. I think it's true: pandemics are on the radar, there is a lot of interest in it. But from talking with folks I know who are working on really trying to prevent the next big pandemic and pandemics at larger scales, I think they've actually been in a lot of respects disappointed by the amount of response that they've seen from governments and in other places. I wouldn't see COVID as: ah, this is a great victory for a warning shot, and I would worry about something similar with AI.
So examples of types of warning shots that I think would be relevant: there's a whole spectrum. I think if an AI system breaks out of a lab and steals a bunch of cryptocurrency, that's interesting. Everyone's going, "Wow, how did that happen?" If an AI system kills people, I think people will sit up straight, they will notice that, and then there's a whole spectrum there. And I think the question is: what degree of response is required, and what exactly does it get you. And there I have a lot more concerns. I think it's easy to get the issue on people's radar and get them thinking about different things. I think the question of like, okay, but does that translate into preventing the problem from ever arising again, or driving the probability of that problem arising again, or at a larger scale, sufficiently low -- there I feel more uncertainty and concern.
**Question:** *Don't take this the wrong way, but I'm very curious how you'd answer this. As far as I see from your CV, you haven't actually worked on AI. And a lot of the people talking about this stuff, they're philosophy PhDs. So how would you answer the criticism of that? Or how would you answer the question: what qualifies you to weigh in or discuss these hypothetical issues with AI, vs. someone who is actually working there.*
**Response:** I think it's a reasonable question. If you're especially concerned about technical expertise in AI, as a prerequisite for talking about these issues, there will be folks in the speaker series who are working very directly on the technical stuff and who also take this seriously, and you can query them for their opinions. There are also expert surveys and other things, so you don't have to take my word for it.
That said, I actually think that a lot of the issues here aren't that sensitive to the technical details of exactly how we're training AI systems right now. Some of them are, and for specific proposals for how you might align a system -- the more you're getting into the nitty-gritty on different proposals for alignment -- technical expertise becomes more important.
But I actually think that the structure of the worry here is accessible at a more abstract level. And in fact -- and I talk to lots of people who are in the nitty-gritty of technical AI work -- my experience is that the discussion of this stuff is nevertheless at a more abstract level, and so that's just where I see the arguments taking place, and I think that's what's actually driving the concern. So you could be worried about that, and generally you could be skeptical about reasoning of this flavor at all, but my own take is that this is the type of reasoning that gives rise to the concern.
**Question:** *Say you're trying to convince just a random person on the street to be worried about AI risks. Is there a sort of article, or Ted Talk, or something you would recommend, that you would think would be the most effective for just your average nontechnical person? People on twitter were having trouble coming up with something other than Nick Bostrom's TED talk or Sam Harris's TED talk.*
**Response:** So other resources that come to mind: Kelsey Piper has an [intro](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) that she wrote for Vox a while back, that I remember enjoying, and so I think that could be one. And that's fairly short. There's also a somewhat longer introduction by Richard Ngo, at OpenAI, called [AI Safety from First Principles](https://www.lesswrong.com/posts/8xRSjC76HasLnMGSf/agi-safety-from-first-principles-introduction). That's more in depth, but it's shorter than my report, and goes through the case. Maybe I'll stop there. There are others, too. Oh yeah, there's an article in Holden Karnofsky's Most Important Century Series called: [Why AI Alignment Might Be Hard With Deep Learning](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/). That doesn't go through the full argument, but I think it does point at some of the technical issues in a pretty succinct and accessible way. Oh yeah: [Robert Miles's YouTube](https://www.youtube.com/c/RobertMilesAI) is also good.
**Question**: *Why should we care about the instrumental convergence hypothesis? Could you elaborate a little bit more on the reasons to believe it? And also, one question from the Q&A is: let's say you believe in Stuart Russell's argument that all AI should have the willingness hardwired into them to switch themselves off, should the humans desire them to do so. Does that remove the worries about instrumental convergence?*
**Response**: Sorry, just so I'm understanding the second piece. Was the thought: "if we make these systems such that they can always be shut off then is it fine?"
**Questioner**: *Yeah.*
**Response:** OK, so maybe I'll start with the second thing. I think if we were always in a position to shut off the systems, then that would be great. But not being shut off is a form of power. I think Stuart Russell has this classic line: you can't fetch the coffee if you're dead. To the extent that your existence is promoting your objectives, continuing to be able to exist and be active and not be turned off is also going to promote your objectives.
Now you can try to futz with it, but there's a balance where if you try to make the system more and more amenable to being turned off, then sometimes it automatically turns itself off. And there's work on how you might deal with this, I think it goes under the term "the shutdown problem," I think actually Robert Miles has a YouTube on it. But broadly, if we succeed at getting the systems such that they're always happy to be shut off, then that's a lot of progress.
So what else is there to say about instrumental convergence and why we might expect it? Sometimes you can just go through specific forms of power-seeking. Like why would it be good to be able to develop technology? Why would it be good to be able to harness lots of forms of energy? Why would it be good to survive? Why would it be good to be smarter? There are different types of power, and we can just go through and talk about: for which objectives would this be useful? We can just imagine some different objectives and get a flavor for why it might be useful.
We can also look at humans and say, ah, it seems like when humans try to accomplish things... Humans like money, and money is a generic form of power. Is that a kind of idiosyncrasy of humans that they like money, or is it something more to do with a structural feature of being an agent and being able to exert influence in the world, in pursuit of what you want? We can look at humans, and I go through some of the evidence from humans in the report.
And then finally there are actually some formal results where we try to formalize a notion of power-seeking in terms of the number of options that a given state allows a system. This is work by Alex Turner, which I'd encourage folks to check out. And basically you can show that for a large class of objectives defined relative to an environment, there's a strong reason for a system optimizing those objectives to get to the states that give it many more options. And the intuition for that is like: if your ranking is over final states, then the states with more options will give you access to more final states, and so you want to do that. So those are three reasons you might worry.
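To get a feel for the flavor of those results, here is a toy simulation (a two-branch environment invented for illustration, not Turner's actual formalism): sample a random reward function over terminal states and check how often the branch with more reachable outcomes is the one an optimizer of that reward would take.

```python
import numpy as np

# Toy version of the "options" intuition: from the start state, branch A
# reaches 1 terminal state and branch B reaches 5. Sample a reward function
# at random over the 6 terminal states and see which branch is optimal.

rng = np.random.default_rng(0)
trials = 100_000
prefers_b = 0

for _ in range(trials):
    rewards = rng.random(6)          # one random reward value per terminal state
    value_a = rewards[0]             # best outcome reachable via branch A
    value_b = rewards[1:].max()      # best outcome reachable via branch B
    prefers_b += value_b > value_a

print(f"fraction of random objectives favoring the option-rich branch: "
      f"{prefers_b / trials:.3f}")   # ~0.833, i.e. about 5 out of 6
```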
**Question**: *Regarding your analysis of this policy thing: how useful do you find social science theories such as IR or social theory -- for example, a realist approach to IR? Somebody is asking in the forum, how useful do you find those kinds of techniques?*
**Response**: How useful do I find different traditions in IR for thinking about what might happen with AI in particular?
**Questioner**: *The theories people have already come up with in IR or the social sciences, regarding power.*
**Response**: I haven't drawn a ton on that literature. I do think it's likely to be relevant in various ways. And in particular, I think, a lot of the questions about AI race dynamics, and deployment dynamics, and coordination between different actors and different incentives -- some of this mirrors other issues we see with arms races, we can talk about bargaining theory, and we can talk about how different agents with different objectives, and different levels of power, how we should expect them to interact. I do think there's stuff there, I haven't gone very deep on that literature though, so I can't speak to it in-depth.
**Question**: *What's your current credence, what's your prior, that your own judgment in this report is correct? And is it 10%, bigger than 10%, 15%, or whatever, and how spread out are the credences of those whose judgment you respect?*
**Response**: Sorry, I'm not sure I'm totally understanding the question. Is it: what's the probability that I'm right?
**Various questioners**: *A prior distribution of all those six arguments on the slide...the spread ... what's the percentage, you think yourself, is right.*
**Response**: My credence on all of them being right, as I said, it's currently above 10%. It feels like the question is trying to get at something about my relationship with other people and their views. And am I comfortable disagreeing with people, and maybe some people think it's higher, some people think it's lower.
**Questioner in the chat**: *Epistemic status maybe?*
**Response**: Epistemic status? So, I swing pretty wildly here, so if you're looking for error bars, I think there can be something a little weird about error bars with your subjective credences, but in terms of the variance: it's like, my mood changes, I can definitely get up very high, as I mentioned I can get up to 40% or higher or something like this. And I can also go in some moods quite low. So this isn't a very robust estimate.
And also, I am disturbed by the amount of disagreement in the community. We solicited a bunch of [reviews of my report](https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk). If people are interested in looking at other people's takes on the premises, that's on LessWrong and on the EA Forum, and there's a big swing. Some people are at 70%, 80% doom by 2070, and some people are at something very, very low. And so there is a question of how to handle that disagreement. I am hazily incorporating that into my analysis, but it's not especially principled, and there are general issues with how to deal with disagreement in the world. Yeah, maybe that can give some sense of the epistemic status here.
**Question**: *It seems to me that you could plausibly get warning shots by agents that are subhuman in general capabilities, but still have some local degree of agency without strategic planning objectives. And that seems like a pretty natural... or you could consider some narrow system, where the agent has some strategy within the system to not be turned off, but globally is not aware of its role as an agent within some small system or something. I feel like there's some notion of there's only going to be a warning shot once agents are too powerful to stop, that I vaguely disagree with, I was just wondering your thoughts.*
**Response**: I certainly wouldn't want to say that, and as I mentioned, I think warning shots come in degrees. You're getting different amounts of evidence from different types of systems as to how bad these problems are, when they tend to crop up, how hard they are to fix. I totally agree: you can get evidence and warning shots, in some sense, from subhuman systems, or specialized systems, or lopsided systems that are really good at one thing, bad at others. And I have a section in the report on basically why I think warning shots shouldn't be all that much comfort. I think there's some amount of comfort, but I really don't think that the argument here should be like, well, if we get warning shots, then obviously it'll be fine because we'll just fix it.
I think knowing that there's a problem, there's a lot of gradations of how seriously you're taking it, and there's a lot of gradation in terms of how easy it is to fix and what resources you're bringing to bear on the issue. And so my best guess is not that we get totally blindsided, no one saw this coming, and then it just jumps out. My guess is actually people are quite aware, and it's like, wow yeah, this is a real issue. But nevertheless, we're just sort of progressing forward, and I think that's a very worrying and reasonably mainline scenario.
**Question**: *So does that imply, in your view of the strategic or incentive landscape, that you just think the incentive structure would be too strong, that it will require deploying planning AI versus just having lots of tool-like AI?*
**Response:** Basically I think planning and strategic awareness are just going to be sufficiently useful, and it's going to be sufficiently hard to coordinate if there are lots of actors, that those two issues in combination will push us towards increasing levels of risk and in the direction of more worrying systems.
**Question**: *One burning question. How do you work out those numbers? How do you work out the back-of-the-envelope calculation, or how hard did you find those numbers?*
**Response**: It's definitely hard. There are basic calibration things you can try to do, you can train yourself to do some of this, and I've done some of that. I spent a while, I gave myself a little survey over a period of weeks where I would try to see how my numbers changed, and I looked at what the medians were and the variance and stuff like that. I asked myself questions like: would I rather win $10,000 if this proposition were true, or if I pulled a red ball out of an urn with 70% red balls? You can access your intuitions using that thought experiment. There are various ways, but it's really hard. And as I said, I think those numbers should be taken with a grain of salt. But I think it's still more granular and I think more useful than just saying things like "significant risk" or "serious worry" or something. Because that can mean a lot of things to different people, and I think it can be useful to be more specific.
**Host**: *Alright, let's thank Joe again for this wonderful presentation.*
**Response**: Thanks everybody, appreciate it. |
a39a26eb-9f50-4c16-af0a-28563e045847 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC Ethics meetup
Discussion article for the meetup : Washington DC Ethics meetup
WHEN: 28 October 2012 03:00:49PM (-0400)
WHERE: National Portrait Gallery Plaza, Washington, DC 20001, USA
We will be discussing the recent post about the most important open ethics problems:
http://80000hours.org/blog/99-the-most-important-unsolved-problems-in-ethics
|
e2876957-d3a1-419a-b11b-8bd16b32aff4 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How is red teaming used in AI alignment?
*[Red teaming](https://en.wikipedia.org/wiki/Red_team)* refers to attempts to break a system's security measures, or to cause bad performance by the system, in order to discover its flaws and provide feedback on how it could be improved. In AI safety, red teaming can be done either on real systems, or on alignment strategies as a thought experiment to identify examples that break the specific strategy.
[Redwood Research has used red teaming on a language model](/?state=7749&question=How%20does%20Redwood%20Research%20do%20adversarial%20training%3F) that was trained to produce fiction by training a second model (a “classifier”) to predict if a human would say that the generated text involved somebody being injured. They then used examples that were labeled as involving injury to retrain the original language model. This makes the model less likely to produce text that is misaligned with the goal of “producing stories in which nobody is injured”. A type of red teaming was also used to train ChatGPT to produce only “acceptable” responses.
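At a schematic level, that kind of classifier-in-the-loop red teaming might look something like the sketch below; every function name here is a hypothetical placeholder rather than Redwood's actual code or any real library API.

```python
# Schematic sketch of classifier-in-the-loop red teaming for a story generator.
# `generate`, `p_injury`, and `finetune_away_from` are hypothetical stand-ins
# for a real language model, a real injury classifier, and a real training step.

def red_team_round(generate, p_injury, prompts, threshold=0.01):
    """Collect completions the classifier flags as likely to involve injury."""
    flagged, safe = [], []
    for prompt in prompts:
        completion = generate(prompt)
        if p_injury(prompt, completion) > threshold:
            flagged.append((prompt, completion))   # adversarial examples to train against
        else:
            safe.append((prompt, completion))
    return flagged, safe

def red_team_loop(generate, p_injury, finetune_away_from, prompts, rounds=5):
    """Alternate between finding failures and updating the generator on them."""
    for _ in range(rounds):
        flagged, _ = red_team_round(generate, p_injury, prompts)
        if not flagged:
            break                                   # no failures found this round
        generate = finetune_away_from(generate, flagged)
    return generate
```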
Thought experiment red teaming was used by the [Alignment Research Center (ARC)](https://www.alignment.org/) in their problem statement on [Eliciting Latent Knowledge](/?state=9049&question=What%20is%20Eliciting%20Latent%20Knowledge%20(ELK)%3F). In this approach, one person tries to come up with a way of solving the problem and another person tries to come up with an example that would break that way of solving the problem; then the first person alters their proposed solution to handle that example. This process is repeated until either person gives up, admitting either that their way of solving the problem does not work or that they cannot break the other person’s way of solving the problem (meaning, hopefully, that the problem has been solved).
The Alignment Research Center (ARC) was also involved in [red teaming GPT-4](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) before it was released. They found many concerning behaviors, such as when GPT-4 hired someone to solve CAPTCHAs for it. However, they ultimately did not think it was too dangerous to release, since it wouldn’t be able to self-replicate independently or become difficult to shut down. Still, they think that future systems would need careful red teaming before release, since current systems are potentially on the cusp of being dangerous.
|
2b51d98f-cc87-4191-b3e6-f03de2b76344 | trentmkelly/LessWrong-43k | LessWrong | An investigation into when agents may be incentivized to manipulate our beliefs.
Contemporary research on both AI capabilities and alignment often features more intricate interaction patterns between humans and the AI-in-training than the standard approach of defining a reward signal once and running an optimization algorithm based on it. As I have written before, I believe this makes the humans participating in such a learning process vulnerable to a kind of preference manipulation, similar to what can be observed in recommender systems. In this post, I will draw on some of the work on causal influence diagrams to investigate when agents might be incentivized to manipulate a human's beliefs. A core motivation for this approach is that I do not believe it will be possible to completely remove the causal link from agent-generated data to human beliefs while still allowing the human to give useful feedback. Just as the risk of reward tampering exists because agents can influence their reward functions, agents interacting with human beliefs pose a threat of harmful belief manipulation.
For the bulk of this post, I will analyze the same solution approaches as those considered in Everitt et al's 2021 paper on reward tampering problems and solutions (RTPS) as well as the 2022 paper on path-specific objectives (PSO). The RTPS paper assumes a fixed true reward function (RF) and only allows the agent to interact with a changing implemented RF or data based on the true RF. To model mutable human beliefs I consider a model in which the true RF (= the human's beliefs) can change due to the actions of the agents and the state of the environment. I will call actions that change the beliefs of humans involved in the training algorithm in unintended ways belief tampering. By analyzing the CIDs of the algorithms from RTPS in the setting with mutable beliefs, I will argue that even approaches that do not allow an instrumental goal for reward tampering may incentivize belief tampering. In the last section, I will give an outline of what an algorithm where belief t |
6ff5985c-6fb0-46b9-8b84-745fb995e917 | trentmkelly/LessWrong-43k | LessWrong | The importance of studying subjective experience
I think people severely underestimate the importance of studying subjective experience (qualia).
In my opinion, people think a lot about "the mystery of qualia" (the hard problem of consciousness), but not enough about "the miracle of qualia". I.e. why qualia is cool and interesting. It's not as trivial as it seems. Subjective experience is always with us, so it's hard to notice how strange it is (even without any "hard problems"). Down below is the list of reasons why qualia deserves way more attention than people give it. For convenience, I split the post into 4 parts.
Epistemic status: I haven't read a lot of relevant literature. Some claims are based on indirect evidence: we clearly don't live in a culture where subjective experience is valued as much as I think it should be. Discussing this topic may be an exercise in prioritization.
Qualia signals missed knowledge
Subjective experience (qualia) tells us that there are differences between objects which feel very simple, but lay beyond any current knowledge. That is to say we don't have any model of those differences. We don't know the (mathematical) space of our experiences.
Subjective experience is like a big carrot hanging right before your eyes. Telling "hey, I'm hiding some new knowledge you're completely missing out". If I were a mathematician or an AI researcher or an artist, I would be going insane trying to guess what could resemble the workings of subjective experience.
Examples
Some examples of differences which feel simple, but don't have an explanation/a model:
* Differences between tastes, smells, tactile sensations and other (direct enough) physical experiences.
* Differences between human voices. (See Timbre) A lot of subjective differences of sounds are not explained.
* Differences between simple impressions of people. Your simplest opinions/feelings about other people are not explained.
And much more. Any experience teases you about the knowledge you lack. Every second. And yet a lot of |
29a02825-6fcf-4b6b-954a-cc96a0769d34 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Complex Systems for AI Safety [Pragmatic AI Safety #3]
*This is the third post in* [*a sequence of posts*](https://forum.effectivealtruism.org/posts/MskKEsj8nWREoMjQK/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1) *that describe our models for Pragmatic AI Safety.*
It is critical to steer the AI research field in a safer direction. However, it’s difficult to understand how it can be shaped, because it is very complex and there is often a high level of uncertainty about future developments. As a result, it may be daunting to even begin to think about how to shape the field. We cannot afford to make too many simplifying assumptions that hide the complexity of the field, but we also cannot afford to make too few and be unable to generate any tractable insights.
Fortunately, the field of complex systems provides a solution. The field has identified commonalities between many kinds of systems and has identified ways that they can be modeled and changed. In this post, we will explain some of the foundational ideas behind complex systems and how they can be applied to shaping the AI research ecosystem.
Along the way, we will also demonstrate that deep learning systems exhibit many of the fundamental properties of complex systems, and we show how complex systems are also useful for deep learning AI safety research.
A systems view of AI safety
---------------------------
### Background: Complex Systems
When considering methods to alter the trajectory of empirical fields such as deep learning, as well as preventing catastrophe from higher risk systems, it is necessary to have some understanding of complex systems. Complex systems is an entire field of study, so we cannot possibly describe every relevant detail here. However, we will try to describe some of its most important aspects.
Complex systems are systems consisting of many interacting components that exhibit emergent collective behavior. Complex systems are highly interconnected, making decomposition and reductive analysis less effective: breaking the system down into parts and analyzing the parts cannot give a good explanation of the whole. However, complex systems are also too organized for statistics, since the interdependencies in the system break fundamental independence assumptions in much of statistics. Complex systems are ubiquitous: financial systems, power grids, social insects, the internet, weather systems, biological cells, human societies, deep learning models, the brain, and other systems are all complex systems.
For more background on complex systems, see [this video](https://www.youtube.com/watch?v=vp8v2Udd_PM). For background on emergence, a key property of complex systems, see [this video](https://www.youtube.com/watch?v=QItTWZc7hKs).
It can be tricky to compare AI safety to making other specific systems safer. Is making AI safe like making a rocket, power plant, or computer program safe? While analogies can be found, there are many disanalogies. It’s more generally useful to talk about making complex systems safer. For systems theoretic hazard analysis, we can abstract away from the specific content and just focus on shared structure across systems. Rather than talk about what worked well for one high-risk technology, with a systems view we can talk about what worked well for a large number of them, which prevents us from overfitting to a particular example.
The central lesson to take away from complex systems theory is that reductionism is not enough. It’s often tempting to break down a system into isolated events or components, and then try to analyze each part and then combine the results. This incorrectly assumes that separation does not distort the system’s properties. In reality, parts do not operate independently, and are subject to feedback loops and nonlinear interactions. Analyzing the pairwise interactions between parts is not sufficient for capturing the full system complexity (this is partially why a [bag of n-grams](https://towardsdatascience.com/evolution-of-language-models-n-grams-word-embeddings-attention-transformers-a688151825d2) is far worse than attention).
Hazard analysis once proceeded by reductionism alone. In earlier models, accidents are broken down into a chain of events thought to have caused that accident, where a hazard is a root cause of an accident. Complex systems theory has supplanted this sort of analysis across many industries, in part because the idea of an ultimate “root cause” of a catastrophe is not productive when analyzing a complex system. Instead of looking for a single component responsible for safety, it makes sense to identify the numerous factors, including sociotechnical factors, that are contributory. Rather than break events down into cause and effect, a systems view instead sees events as a product of a complex interaction between parts.
For an in-depth explanation of contemporary hazard analysis that justifies them more completely than this post can, please see [this lecture](https://www.youtube.com/watch?v=_ptmjAbacMk).
Recognizing that we are dealing with complex systems, we will now discuss how to use insights from complex systems to help make AI systems safer.
### Improving Contributing Factors
“Direct impact,” that is impact produced from a simple, short, and deterministic causal chain, is relatively easy to analyze and quantify. However, this does not mean that direct impact is always the best route to impact. If someone only focuses on direct impact, they won’t optimize for diffuse paths towards impact. For instance, EA community building is indirect, but without it there would be far fewer funds, fewer people working on certain problems, and so on. Becoming a billionaire and donating money is indirect, but without this there would be significantly less funding. Similarly, safety field-building may not have an immediate direct impact on technical problems, but it can still vastly change the resources devoted to solving those problems, in turn contributing to solving them (note that “resources” does not (just) mean money, but rather competent researchers capable of making progress). In a complex system, such indirect/diffuse factors have to be accounted for and prioritized.
AI safety is not all about finding safety mechanisms, such as mechanisms that could be added to make superintelligence completely safe. This is a bit like saying computer security is all about firewalls, which is not true. [Information assurance](https://online.norwich.edu/academic-programs/resources/information-assurance-versus-information-security) evolved to address blindspots in information security, because it is understood that we cannot ignore [complex systems](http://web.mit.edu/smadnick/www/wp/2014-07.pdf), safety culture, protocols, and so on.
Often, research directions in AI safety are thought to need to have a simple direct impact story: if this intervention is successful, what is the short chain of events towards it being useful for safe and aligned AGI? “How does this directly reduce x-risk” is a well-intentioned question, but it leaves out salient remote, indirect, or nonlinear causal factors. Such diffuse factors cannot be ignored, as we will discuss below.
**A note on tradeoffs with simple theories of impact**
AI safety research is complex enough that we should expect that understanding a theory of impact might require deep knowledge and expertise about a particular area. As such, a theory of impact for that research might not be easily explicable to somebody without any background in a short amount of time. This is especially true of theories of impact that are multifaceted, involve social dynamics, and require an understanding of multiple different angles of the problem. As such, we should not only focus on theories of impact that are easily explicable to newcomers.
In some cases, pragmatically one should not always focus on the research area that is most directly and obviously relevant. At first blush, reinforcement learning (RL) is highly relevant to advanced AI agents. RL is conceptually broader than supervised learning such that supervised learning can be formulated as an RL problem. However, the problems considered in RL that aren’t considered in supervised learning are currently far less tractable. This can mean that in practice, supervised learning may provide more tractable research directions.
However, with theories of impact that are less immediately and palpably related to x-risk reduction, we need to be very careful to ensure that research remains relevant. Less direct connection to the essential goals of the research may cause it to drift off course and fail to achieve its original aims. This is especially true when research agendas are carried out by people who are less motivated by the original goal of the research, and could potentially lead to value drift where previously x-risk-motivated researchers become motivated by proxy goals that are no longer relevant. This means that it is much more important for x-risk-motivated researchers and grantmakers to maintain the field and actively ensure research remains relevant (this will be discussed later).
Thus, there is a tradeoff involved in only selecting immediately graspable impact strategies. Systemic factors cannot be ignored, but this does not eliminate the need for understanding causal (whether indirect/nonlinear/diffuse or direct) links between research and impact.
**Examples of the importance of systemic factors**
The following examples illustrate the extreme importance of systemic factors (and the limitations of direct causal analysis and complementary techniques such as [backchaining](https://en.wikipedia.org/wiki/Backward_chaining)):
* Increasing wealth is strongly associated with a reduction in childhood mortality. But one cannot always credit the survival of any particular child to an increase of the wealth of their country. Nonetheless, a good way to reduce childhood mortality is still to increase overall wealth.
* Community building, improving institutions, and improving epistemics can usually not be linked directly to specific outcomes, but in aggregate they clearly have large effects.
* Smoking does not guarantee you will get cancer. If you smoke and get cancer, it is not necessarily because you smoked. Still, avoiding smoking is clearly a good way to avoid cancer. Contrariwise, exercise does not guarantee that you will be healthy, but it robustly helps.
* Intelligence (e.g. as measured by IQ) has an enormous impact on the ability of people to perform various tasks. But it is implausible to point to a particular multiple choice test question that somebody answered correctly and say “they got this question because their IQ was above x.” Similarly, forecasting and rationality could increase the “IQ” of the superorganism, but it similarly could not be expected to produce one single definite outcome. Improving the rationality waterline helps with outcomes, even if we cannot create a simple chain of events showing that it will prevent a particular future catastrophe.
* Any particular hurricane or wildfire cannot be attributed to the effects of climate change, but reducing climate change is a good way to reduce the prevalence of those extreme weather events.
In the cases above, it is possible to use statistics to establish the relationship between the variables given enough data. Some can be causally established through randomized controlled trials. However, we do not have the ability or time to run an RCT on diffuse factors that reduce x-risk from AI. Unlike the situations above, we do not get to observe many different outcomes because an existential catastrophe would be the last observation we would make. This does not mean diffuse factors are unimportant; on the contrary, they are extremely important. We can instead identify time-tested factors that have been robustly useful in similar contexts in the past.
On a more societal scale, the following diffuse factors are quite important for reducing AI x-risk. Note that these factors may interact in some cases: for instance, proactivity about risks might not help much if malevolent actors are in power.
* **People having improved epistemics:** Irrationality could cause people to ignore warning signs, dismiss correct claims, and barrel ahead when they shouldn’t.
* **Proactivity about (tail) risks:** Causing humanity as a collective to care more about tail risks would be a boon for safety. Work on mitigating tail risks is currently underincentivized due to the human tendency to ignore tail risks.
* **Expanded moral circles:** The term “[moral circle](https://en.wikipedia.org/wiki/The_Expanding_Circle)” describes the beings that one considers to be morally relevant (e.g. people in your community, people across the world, future people, non-human animals, etc.). People do not need a large moral circle to want to avoid their own deaths, but it can strengthen the perceived importance of reducing x-risk.
* **Keeping (misaligned) malevolent actors ([egoists/Machiavellians/psychopaths](https://longtermrisk.org/files/Reducing_long_term_risks_from_malevolent_actors.pdf)) out of power:** Contending with actively malevolent leaders is even more difficult than contending with apathetic leaders. Getting even-handed, cautious, and altruistic people into positions of power is likely to reduce x-risk.
**Sociotechnical Factors**
*An abstract template from Nancy Leveson illustrating the complex interplay between sociotechnical factors and an operating process.*

We can now speak about specific diffuse factors that have been shown to be highly relevant to making high-risk technological systems safer, which are also relevant to making present and future AI systems safer. The following sociotechnical factors (compiled from [Perrow](https://en.wikipedia.org/wiki/Normal_Accidents), [La Porte](https://polisci.berkeley.edu/sites/default/files/people/u3825/High%20Reliability%20Organizations%20-%20Unlikely,%20Demanding,%20and%20At%20Risk.pdf), [Leveson](http://sunnyday.mit.edu/safer-world.pdf), and others) tend to influence hazards:
* **Rules and regulations**, perhaps including internal policies as well as legal governance.
* **Social pressures**, including those from the general public as well as well-connected powerful people.
* **Productivity pressures**, or pressure to deliver quickly.
* **Incentive structures** within the organization, such as benefits to delivering quickly or retaliation for whistleblowing.
* **Competition pressures from other actors** who may have different safety standards, or otherwise be able to move faster.
* **Safety budget and compute allocation**: are safety teams capable of running the experiments they need to? Is a significant proportion of the budget and compute dedicated to safety?
* **Safety team size**, which is related to budget. The number of researchers, engineers, and top researchers on the safety team matters a lot.
* **Alarm fatigue**: if many false alarms are raised about safety issues which were never borne out, this could reduce willingness to care about safety.
* **Reduction in inspection and preventative maintenance**, which is perhaps less relevant for a forward-looking problem like safety. However, if people do not keep a close eye on capabilities, this could allow for emergent capabilities (or actors) to take us by surprise.
* **Lack of defense in depth**: overlapping systems that provide multiple layers of defense against hazards.
* **Lack of redundancy**: multiple systems which accomplish similar safety tasks, so as to remove single points of failure.
* **Lack of fail-safes**: features that allow a system to fail gracefully.
* **Safety mechanism cost**: how much does it cost to make a system safe?
* **Safety culture**, or the general attitude towards safety within an organization or field.
According to Leveson, who has been consulted on the design of high-risk technologies across numerous industries, “*the most important [contributing factor] to fix if we want to prevent future accidents*” is safety culture.
**Safety Culture**
Safety culture is not an easy risk factor to address, though it is likely to be one of the most important. Many ML researchers currently roll their eyes when asked about alignment or safety: usually, one cannot simply go straight to discussing existential risks from superintelligences without suffering social costs or efforts potentially backfiring. This is a sign of a deficient safety culture.
How do we improve safety culture? Safety needs to be brought to the forefront through good incentive structures and serious research. Pushing research cultures in a safer direction is bottlenecked by finding interesting, shovel-ready, safety-relevant tasks for people to do and funding them to complete those tasks.
As suggested by [the speculative pyramid above](https://assets.pubpub.org/5nv701md/01521405455055.pdf), it is not realistic to immediately try to make safety into a community norm. Before this can be done, we need to make it clear what safety looks like and we need infrastructure to make AI safety research as easy as possible. Researchers need to accept arguments about risks *and* they need clear, concrete, low-risk research tasks to pursue. This involves creating funding opportunities, workshops, and prizes, as well as clearly defining problems through metrics.
Some contributing [factors](https://arxiv.org/abs/1811.10840) that can improve safety culture are as follows:
* **Preoccupation with failure**, especially black swan events and unseen failures.
* **Reluctance to simplify interpretations** and explain failures using only simplistic narratives.
* **Sensitivity to operations**, which involves closely monitoring systems for unexpected behavior.
* **Commitment to resilience**, which means being rapidly adaptable to change and willing to try new ideas when faced with unexpected circumstances.
* **Under-specification of organizational structures**, where new information can travel throughout the entire organization rather than relying only on fixed reporting chains.
For mainstream culture, public outreach can help. One plausible way that AI systems could become more safe is due to a broader cultural desire for safety, or fear of lack of safety. Conversely, if AI safety is maligned or not valued in the general public, there may be other public pressures (e.g. winning the AI race, using AI to achieve some social good quickly) that could push against safety. Again, mainstream outreach should not be so extreme as to turn the research community against safety. Overton windows must be shifted with care.
Currently, safety is being attacked by [critics](https://twitter.com/timnitGebru/status/1485399721409605632) who believe that it detracts from work on AI fairness and bias and does not heavily prioritize current power inequalities, which they view as the root cause of world problems. Criticisms have been connected to criticisms of longtermism, particularly absurd-seeming expected value calculations of the number of future beings, as well as the influence of EA billionaires. These criticisms threaten to derail safety culture. It is tricky but necessary to present an alternative perspective while avoiding negative side effects.
Some technical problems are instrumentally useful for safety culture in addition to being directly useful for safety. One example of this is reliability: building highly reliable systems trains people to specifically consider the tail-risks of their system, in a way that simply building systems that are more accurate in typical settings does not. On the other hand, value learning, while it is also a problem that needs to be solved, is currently not quite as useful for safety culture optimization.
**Composition of top AI researchers**
We will now discuss another contributing factor that is important to improve: the composition of top AI researchers. In the future, experimenting with the most advanced AI systems will be extraordinarily expensive (in many cases, it already is). A very small number of people will have the power to set research directions for these systems. Though it’s not possible to know exactly who will be in this small group, it could comprise any number of the top AI researchers today. However, one thing is known: most top AI researchers are not sympathetic to safety. Consequently, there is a need to increase the proportion of buy-in among top researchers, especially including researchers in China, and also to train more safety-conscious people to be top researchers.
It’s tempting to think that top AI researchers can simply be bought. This is not the case. To become top researchers, they had to be highly opinionated and driven by factors other than money. Many of them entered academia, which is not a career path typically taken by people who mainly care about money. Yann LeCun and Geoffrey Hinton both still hold academic positions in addition to their industry positions at Meta and Google, respectively. Yoshua Bengio is still in academia entirely. The tech companies surely would be willing to buy more of their time for a higher price than academia, so why are the three pioneers of deep learning not all in the highest-paying industry job? Pecuniary incentives are useful for externally motivated people, but many top researchers are mostly internally motivated.
As discussed in the last post, a leading motivation for researchers is the interestingness or “coolness” of a problem. Getting more people to research relevant problems is highly dependent on finding interesting and well-defined subproblems for them to work on. This relies on concretizing problems and providing funding for solving them.
Due to the fact that many top researchers are technopositive, they are not motivated by complaints about the dangers of their research, and they are likely to be dismissive. This is especially true when complaints come from those who have not made much of a contribution to the field. As a result, it is important to keep the *contribution to complaint ratio* high for those who want to have any credibility. “Contribution” can be a safety contribution, but it needs to be a legible contribution to ML researchers. Top researchers may also associate discussion of existential risk with sensationalist stories in the media, doom-and-gloom prophecies, or panic that “we’re all going to die.”
**Causes of Neglectedness**
There are a number of additional factors which contribute to the general neglectedness of AI safety. It is important to optimize many of these factors in order to improve safety. A more general list of these factors is as follows.
* **Corporate**: myopic desire for short-term shareholder returns, safety features may take a long time to pay off, some human values may be difficult to incorporate in prices or pecuniary incentives
* **Temperamental**: techno-optimism, distaste for discussing risks
* **Political**: AI safety is seen to compete with more politically popular causes like climate change and reducing inequality
* **Technical Background**: safety problems are outside of one’s existing skill set or training; likewise, machine ethics and sociotechnical concerns do not comport as easily with their quantitative inclinations
* **Socioeconomic distance**: many AI researchers live in tech bubbles, which can cause researchers to devalue or implicitly underemphasize cosmopolitan approaches towards loading human values
* **Tail risks:** highly consequential black swans and tail risks are systematically neglected
* **Respectability**: distaste for talk of AGI, feeling an area is not prestigious, areas associated with people who hold other unpopular or weird-seeming ideas
* **Temporal**: future risks and future people are highly neglected
### Complex Systems for AI Safety
The study of complex systems emphasizes that we should focus on contributing factors (as events are the product of the interaction of many contributing factors), and it helps us identify which contributing factors are most important across many real-world contexts. It also provides object-level insight about deep learning, since deep learning systems are themselves complex systems.
Deep learning exhibits many hallmarks of complex systems:
* *Highly distributed functions*: partial concepts are encoded redundantly and highly aggregated
* *Numerous weak nonlinear connections*: Connection parameters are nonzero (rather than sparse) and neural networks contain nonlinear activation functions
* *Self-organization*: optimizing a loss automatically specifies a model’s internal content
* *Adaptivity*: few-shot models and online models are adaptive
* *Feedback loops*: [Self-play](https://en.wikipedia.org/wiki/AlphaZero), [human in the loop](https://arxiv.org/abs/1706.03741), [auto-induced distribution shift](https://arxiv.org/abs/2009.09153)
* *Scalable structure*: [scaling laws](https://arxiv.org/abs/2001.08361) show that models scale simply and consistently (see the curve-fitting sketch after this list)
* *Emergent capabilities*: numerous unplanned capabilities spontaneously “[turn on](https://arxiv.org/abs/2202.07785)”
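To make the scaling-laws bullet above slightly more concrete, here is a minimal curve-fitting sketch. The data points are synthetic and the constants are placeholders loosely inspired by the cited paper; nothing beyond the power-law functional form is taken from the post's sources.

```python
import numpy as np

# Sketch: parameter-scaling laws take a power-law form, L(N) ~ (N_c / N)**alpha,
# which is a straight line in log-log space and can be recovered by a linear fit.
rng = np.random.default_rng(0)
N = np.logspace(6, 10, 9)                          # hypothetical model sizes
true_alpha, true_Nc = 0.076, 8.8e13                # placeholder constants
loss = (true_Nc / N) ** true_alpha * np.exp(rng.normal(0.0, 0.01, N.size))

slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha_hat = -slope
Nc_hat = float(np.exp(intercept / alpha_hat))
print(f"fitted alpha ~ {alpha_hat:.3f}, N_c ~ {Nc_hat:.2e}")
```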
As such, insights from complex systems are quite applicable to deep learning. Similarly, like all large sociotechnical structures, the AI research community can also be considered to be a complex system. The organizations operating AI systems are also complex systems.
Complex systems is a *predictive*—not just explanatory—model for various problems, including AI safety. In fact, many important concepts in AI safety turn out to be specific instances of more general principles. Here are examples of *highly simplified* lessons from complex systems, mostly from [The Systems Bible](https://en.wikipedia.org/wiki/Systemantics) (1975):
* **Systems develop goals of their own the instant they come into being.**
+ *Explanation:* A system’s goal is seldom merely the initial goal it was tasked with. Rather, other goals emerge from the organization of the system.
+ *Implications for AI:* One salient example is instrumental goals for self-preservation or power-seeking.
* **Intrasystem goals come first.**
+ *Explanation:* Systems often decompose goals into subparts for different intrasystem components to solve. During this decomposition, goals are often distorted. A common failure mode is that the system's explicitly written objective is not necessarily the objective that the system operationally pursues, and this can result in misalignment. A system’s subgoals can supersede its actual goals. For example, a bureaucratic department (a subsystem) can capture power and have the company pursue goals unlike its original goals.
+ *Implications for AI:* A related phenomenon is already well known to the community as [mesa-optimization](https://arxiv.org/abs/1906.01820); it has been predicted on a more general level by systems theory for decades.
* **The mode of failure of a complex system cannot ordinarily be predicted from its structure.**
+ *Explanation:* Simply examining a complex system will not necessarily give you a good idea for how it might fail. Failures are usually identified from experience and testing.
+ *Implications for AI:* It is difficult to understand how all the ways a neural network might fail simply by examining its weights or architecture or through armchair/whiteboard analysis. We can count on some failures being unpredictable. (Although failures are inevitable, catastrophes are not.)
+ *Implications for strategy:* An approach of “think about the problem really hard and make sure there are no holes in the solution” is unlikely to turn up a solution that truly has no holes. Preventing failure in a complex system is not a math problem. In complex systems there are few symmetries, few necessary and sufficient conditions or boolean connectives (no root cause), circular relationships, numerous partial concepts (combinatorial explosion), self-organization, high distributivity. All of these properties make complex systems very difficult to analyze from an armchair/whiteboard or with proofs.
* **The crucial variables are discovered by accident.**
+ *Explanation:* It is difficult to know what the most important parts of a system are by inspection. The highest points of leverage are not obvious. Likewise, the methods that will work best are often found by tinkering or serendipity.
+ *Implications for AI:* Many of the greatest breakthroughs in AI are not discovered purely by principled, highly structured investigation, but instead by tinkering.
+ *Implications for strategy*: Many current approaches to research bet on AGI being best represented as a mathematical object rather than a complex system, which seems unrealistic given current AI systems as well as other intelligent systems we know (e.g. humans, corporations).
* **A large system, produced by expanding the dimensions of a smaller system, does not behave like the smaller system.**
+ *Explanation:* Purely scaling up a system does not only make it better at whatever it was doing before. We should expect to see new qualitative properties and emergent capabilities.
+ *Implications for AI:* We should expect to also see emergent capabilities that did not exist at all in smaller versions. For example, at low levels of capabilities, deception is not a good idea for an intelligence, but as it becomes more intelligent, deception may be a better strategy for achieving goals.
+ *Implications for strategy:* Scaling up an aligned system and expecting it to be fully aligned is not an airtight idea. Scaling, even of a highly reliable system, needs to be done carefully.
* **(From Gilb) Gilb’s Laws of Unreliability: any system which depends on human reliability is unreliable.**
+ *Explanation:* Humans are not reliable. Reliance on them will create unreliability.
+ *Implications for strategy:* AI systems may be too explosive and fast-moving for depending heavily on human feedback or human-in-the-loop methods. We will need a more reliable strategy for preserving human values, perhaps through oversight from other AI systems.
* **A complex system that works is invariably found to have evolved from a simple system that works.**
+ *Explanation:* Complex systems cannot be created from scratch and expected to work. Rather, they have to evolve from simpler functioning systems.
+ *Implications for strategy:* Working on safety for simpler systems, and attempting to (carefully) scale them up is more likely to be successful than starting by trying to build an aligned complex system from scratch. Although systems behave differently when scaled, the ones that work are evolved from smaller systems. If one is unable to align a simpler version of a complex system, it is unlikely that one can align the more complex version. On this view a top priority is making today’s simpler systems safer.
Diversification
---------------
There are many different facets involved in making complex systems work well; we cannot simply rely on a single contributing factor or research direction. The implication is that it makes sense to diversify our priorities.
Since an individual has limited ability to become specialized and there are many individuals, it often makes sense to bet on the single highest expected value (EV) research approach. However, it would be a mistake to think of the larger system in the same way one thinks of an individual within the system. If the system allocates all resources into the highest EV option, and that sole option does not pay off, then the system fails. This is a known fact in finance and many other fields that take a portfolio approach to investments. Do not make one big bet or only bet on the favorite (e.g., highest estimated EV) avenue. The factor with the highest return on investment in isolation is quite different from the highest return on investment *profile* spanning multiple factors. The marginal benefit of X might be higher than Y, but the system as a whole is not forced to choose only one. As the common adage goes, “don’t put all your eggs in one basket.”
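As a toy illustration of the portfolio point (all numbers below are made up), the following sketch compares an all-in bet on the single highest-EV option against splitting the same resources across several independent bets: the all-in strategy can have a higher expected payoff while leaving a much larger chance that the field ends up with nothing.

```python
import random

random.seed(0)
TRIALS = 100_000

def all_in() -> float:
    # One big bet: 30% chance it pays off 10x (hypothetical numbers).
    return 10.0 if random.random() < 0.30 else 0.0

def diversified() -> float:
    # Same resources split across five independent bets with slightly
    # lower individual EV: each gets 1/5 of resources, 25% chance of 8x.
    return sum(8.0 / 5 if random.random() < 0.25 else 0.0 for _ in range(5))

for name, bet in [("all-in", all_in), ("diversified", diversified)]:
    outcomes = [bet() for _ in range(TRIALS)]
    ev = sum(outcomes) / TRIALS
    p_zero = sum(o == 0.0 for o in outcomes) / TRIALS
    print(f"{name:>11}: EV ~ {ev:.2f}, P(total failure) ~ {p_zero:.2f}")
```

With these particular numbers, the all-in bet has an expected payoff of 3.0 but comes up empty about 70% of the time, while the diversified portfolio has an expected payoff of 2.0 and comes up empty only about 24% of the time.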
One example of obviously suboptimal resource allocation is that the AI safety community spent a very large fraction of its resources on reinforcement learning until relatively recently. While reinforcement learning might have seemed like the most promising area for progress towards AGI to a few of the initial safety researchers, this strategy meant that not many were working on deep learning. Deep learning safety researchers were encouraged to focus on RL environments because it is “strictly more general,” but just because one can cast a problem as a reinforcement learning problem does not mean one should. At the same time, the larger machine learning community focused more on deep learning than reinforcement learning. Obviously, deep learning appears now to be [at least as promising](https://www.metaculus.com/questions/4055/will-the-first-agi-be-based-on-deep-learning/) as reinforcement learning, and a lot more safety research is being done in deep learning. Due to tractability, the value of information, iterative progress in research, and community building effects, it might have been far better had more people been working on deep learning from an earlier date. This could readily have been avoided had the community leaders heeded the importance of heavily diversifying research.
If we address multiple fronts simultaneously rather than betting the community on a single area or strategy, we will pay lower costs from neglecting important variables. Since costs often scale superlinearly with the time a problem has been neglected, [which has serious practical implications](https://jessitron.com/2021/01/18/when-costs-are-nonlinear-keep-it-small/), it makes sense to apply resources to pay costs frequently, rather than only applying resources after costs have already blown up. The longer one waits, the more difficult it could be to apply an intervention, and if costs are convex (e.g. quadratic rather than logarithmic), costs are exacerbated further. Diversification implicitly keeps these costs lower.
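A toy calculation makes the superlinearity point concrete; it assumes, purely for illustration, that the cost of fixing a problem grows quadratically with how long it has been neglected.

```python
# Toy illustration (assumed quadratic costs): paying down a problem in many
# small intervals is far cheaper than letting it accumulate and paying once.
def quadratic_cost(t: float) -> float:
    return t ** 2  # hypothetical cost of fixing a problem neglected for time t

horizon, chunks = 10.0, 10
pay_frequently = chunks * quadratic_cost(horizon / chunks)  # 10 * 1^2  = 10
pay_at_the_end = quadratic_cost(horizon)                    # 10^2      = 100
print(pay_frequently, pay_at_the_end)
```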
AI safety is an area with extremely high uncertainty: about what the biggest problems will be, what timelines are, what the first AGI system will look like, etc. [At the highest levels of uncertainty](https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/strategy-under-uncertainty), it is most important to *improve the virtues of the system* (e.g., meritocratic structures, sheer amount of talent, etc.). If your uncertainty level is slightly less, you *additionally* want to make a few big bets and numerous small bets created in view of a range of possible futures. Moreover, under high uncertainty or when work is inchoate, it is far more effective to follow an “[emergent strategy](https://online.hbs.edu/blog/post/emergent-vs-deliberate-strategy#:~:text=As%20a%20general%20rule%20of%20thumb,%20an%20emergent%20strategy%20may%20be%20the%20right%20choice%20for%20your%20business%20if%20the%20future%20is%20uncertain),” not define the strategy with a highly structured, perfected direction.
With diversification, we do not need to decisively resolve all of the big questions before acting. Will there be a slow takeoff, or will AI go foom? Are the implicit biases in SGD beneficial to us, or will they work against us? Should we create AI to pursue a positive direction, or should we just try to maximize control to prevent it from taking over? So long as answers to these questions are not highly negatively correlated, we can diversify our bets and support several lines of research. Additionally, research can help resolve these questions and can inform which future research should be included in the overall portfolio. Seeing value in diversification saves researchers from spending their time articulating their tacit knowledge and highly technical intuitions to win the court of public opinion, as perhaps the question cannot be resolved until later. Diversification makes researchers less at odds with each other and lets them get on with their work, and it reduces our exposure to risks from incorrect assumptions.
Diversification does not mean that one should not be discretionary about ideas. Some ideas, including those commonly pursued in academia and industry, may not be at all useful to x-risk, even if they are portrayed that way. Just because variables interact nonlinearly does not mean that resources should be devoted to a variable that is not connected with the problem.
In addition, *individuals* do not necessarily need to have a diverse portfolio. There is a benefit to specialization, and so individuals may be better off choosing a single area where they are likely to reach the tail of impact through specialization. However, if everyone individually focused on what they viewed as the most important area of research overall, and their judgments on this were highly correlated, we would see a concentration of research into only a few areas. This would lead to problems, because even if these areas are the most important, they should not be single-mindedly pursued to the neglect of all other interventions.
In complex systems, we should expect many multiplicatively interacting variables to be relevant to the overall safety of a system (we will discuss this model more in the next post). If we neglect other safety factors only to focus on “the most important one,” we are essentially setting everything else to zero, which is not how one reduces the probability of risk in a multiplicative system. For instance, we should not just focus on creating technical safety solutions, let alone betting on one main technical solution. There are other variables that can be expected to nonlinearly interact with this variable: the cost of such a system, the likelihood of AGI being developed in a lab with a strong safety culture, the likelihood of other actors implementing an unaligned version, and the likelihood of the aligned system in question being the one that actually leads to AGI. These interactions and interdependencies imply that effort must be expended to push on all factors simultaneously. This can also help provide what is called [*defense in depth*](https://en.wikipedia.org/wiki/Swiss_cheese_model): if one measure for driving down x-risk fails, other already existing measures can help handle the problem.
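As a rough sketch of the multiplicative picture (the factor names and scores below are made up), driving one factor very high while leaving the others near zero does far less than improving all of them moderately:

```python
import math

# Hypothetical contributing factors, each scored in [0, 1], whose product
# determines the chance that everything needed for a safe outcome goes right.
def p_success(factors: dict[str, float]) -> float:
    return math.prod(factors.values())

lopsided = {"technical solution": 0.99, "safety culture": 0.10,
            "adoption by leading lab": 0.10, "competitive pressure handled": 0.10}
balanced = {"technical solution": 0.70, "safety culture": 0.60,
            "adoption by leading lab": 0.60, "competitive pressure handled": 0.60}

print(f"all-in on one factor: {p_success(lopsided):.3f}")
print(f"balanced portfolio:   {p_success(balanced):.3f}")
```

With these made-up scores, the lopsided allocation succeeds about 0.1% of the time versus roughly 15% for the balanced one.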
Like many outcomes, impact is long tailed, and the impact of a grant will be dominated by a few key paths to impact. Likewise, in a diverse portfolio, the vast majority of the impact will likely be dominated by a few grants. However, the best strategies will [*sample heavily*](https://www.benkuhn.net/outliers/) *from the long tail distribution*, or maximize exposure to long tail distributions. Some ways to increase exposure to the black swans are with broad interventions that could have many different positive impacts, as well as a larger portfolio of interventions. This contrasts with an approach that attempts to select only targeted interventions in the tails, which is often infeasible in large, complex systems because the tails cannot be fully known beforehand. Instead, one should prioritize interventions that have a sufficient chance of being in the tails.
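A small sketch with a made-up heavy-tailed (lognormal) impact distribution illustrates why exposure to the tail matters: a tiny fraction of grants ends up accounting for most of the total impact.

```python
import numpy as np

# Sketch with an assumed lognormal impact distribution (parameters are made up).
rng = np.random.default_rng(0)
impact = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)
top_1_percent = np.sort(impact)[-100:].sum()
print(f"share of impact from top 1% of grants: {top_1_percent / impact.sum():.0%}")
```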
Depending on what phase of AI development we are in, [targeted or broad](https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/) interventions should be emphasized more in the portfolio. In the past, broad interventions would clearly have been more effective: for instance, there would have been little use in studying empirical alignment prior to deep learning. Even more recently than the advent of deep learning, many approaches to empirical alignment were highly deemphasized when large, pretrained language models arrived on the scene (refer to our discussion of creative destruction in the last post). Since the deep learning community is fairly small, it is relatively tractable to work on broad interventions (in comparison to e.g. global health, where interventions will need to affect millions of people).
At this stage, targeted interventions to align particular systems are not currently likely to deliver all the impact, nor are broad approaches that hope to align *all* possible systems. This is because there is still immense upside in optimizing contributing factors to good research, which will in turn cause both of these approaches to be dramatically more effective. The best interventions will look less like concrete stories for how the intervention impacts a particular actor during the creation of AGI and more like actions that help to improve the culture/incentives/buy-in of several possible actors.
This suggests that a useful exercise might be coming up with broad interventions that equip the safety research field to deal with problems more effectively and be better placed to deliver targeted interventions in the future. Note that some broad interventions, like interventions that affect safety culture, are not simply useful insofar as they accelerate later targeted interventions, but also in that they may increase the likelihood of those targeted interventions being successfully adopted.
We also need to have targeted interventions, and they may need to be developed before they are known to be needed due to the risk of spontaneously emergent capabilities. There is also an argument that developing targeted interventions now could make it easier to develop targeted interventions in the future. As a result, a mixture of targeted and broad interventions is needed.
Conclusion
----------
It can be daunting to even begin to think how to influence the AI research landscape due to its size and complexity. However, the study of complex systems illuminates some common patterns that can help make this question more tractable. In particular, in many cases it makes more sense to focus on improving contributing factors rather than only trying to develop a solution that has a simple, direct causal effect on the intended outcome. Complex systems are also useful for understanding machine learning safety in general, since the broader research community, deep learning systems, and the organizations deploying deep learning systems are all complex systems.
Resources on Complex Systems
----------------------------
Complex systems is a whole field of study that can't possibly be fully described in this post. We've added this section with resources for learning more.
* (If you only look at one, look at this:) [An introduction to contemporary hazard analysis](https://www.youtube.com/watch?v=_ptmjAbacMk) that justifies the methods far more completely than this post can.
* [A short video introduction to complex systems.](https://www.youtube.com/watch?v=vp8v2Udd_PM)
* [A short video introduction to emergence](https://www.youtube.com/watch?v=QItTWZc7hKs), a key property of complex systems.
* [Systemantics](http://www.bussigel.com/systemsforplay/wp-content/uploads/2013/12/Systemantics.pdf) by John Gall, one of the foundational texts of complex systems.
* [A class introduction to complex systems](https://pdodds.w3.uvm.edu/teaching/courses/2021-2022principles-of-complex-systems/). |
94497f2d-f9e5-4e31-aeb4-6e04de8b63df | trentmkelly/LessWrong-43k | LessWrong | Free Applied Instrumental Rationality Webinar
Dan Nuffer and I are putting together a free webinar that will go through the ideas in Smart Choices: A Practical Guide to Making Better Life Decisions, combined with whatever else seems useful. The authors of this book include one of the pioneers of decision analysis.
Although they don't describe it as such, Smart Choices is really a manual for basic applied instrumental rationality. It's a systematic way of going about your decisions, applicable to either decision problems (you have a situation dumped in your lap that requires a response) or decision opportunities (proactively seeking out ways to further your goals.)
The webinar will be one-hour sessions once a week for however long it takes to go through the material. We're going to do the webinar on Google+ Hangouts, and we'll have a discussion forum for the webinar on our web site.
If you're interested, send me an email [kevin at ksvanhorn com] with 1) your preferred day/time(s), and 2) the day/times that are out of the question for you.
Google+ Hangouts has a limit of 10 people. Five of those slots are already filled, leaving 5 seats open, so don't wait too long to email me if this is something you're interested in.
|
dddc142f-ed88-4fd2-99be-c4b0e446ea42 | trentmkelly/LessWrong-43k | LessWrong | Different Is The Generator, Not A Side Effect
Disclaimer: This may be extremely obvious.
i - The Weird Problem
I’ve seen theories about why smart, genial, creative, and overall high-achieving people are often “atypical”, and even discussions praising atypicality as the capital Q ality for making great discoveries, funding companies, finding everlasting love, leading armies into battle, untying the Gordian knot, and whatnot.
----------------------------------------
Let me give some examples of this to make it more vivid:
> Good discoveries, products, political parties, and such come from the union of, competition between and collaboration among diverse groups of individuals. This is why it’s important to try and foster diversity and equality in matters such as academia (as opposed to construction work or tomato-picking, where affirmative action has been notable in its lack). This is why permissive and diverse cultures end up with the best minds and inventions. This is why, when you get even the first hundred women to join scientific fields in the 19th century, one of them becomes the first double Nobel laureate (in spite of plenty of unfair obstacles). This is why it’s important to get fresh blood into companies at the management level. People that “think different” from the current “consensus” are valuable almost intrinsically.
----------------------------------------
> It’s obvious the “nerds” usually missed some social developmental milestones that their peers achieved. This is your usual socially awkward programmer or mathematician somewhere on the ASD. The explanation goes that, of course, a brain is a balancing act, and people born with skills 5-STDs towards the right in manipulating abstract symbols and rotating shapes might well have skills 2-STDs towards the left in the areas correlated with collecting STDs. This just boils down to “neurodivergence”, people that are good at <insert highly conceptual activity> are not the norm, by definition, they are odd and different, and we’d expect this to manif |
77c998e2-f240-4758-9df7-ee31f40c01df | trentmkelly/LessWrong-43k | LessWrong | The Foraging (Ex-)Bandit [Ruleset & Reflections]
This is a postmortem for a (tiny, ~10min, browser) game I released earlier this week. At the time, I thought posting the generation rules for each location would be redundant and spoilery (especially since you could just look at the code on github), and that writing a Reflections section would be premature and presumptuous. My opinions and the circumstances have both now changed; hence, this post.
Ruleset
Horse Hills
Foraging in Horse Hills gives you a 75% chance of 1d4 food and a 25% chance of 23 + 2d6 + 1d4 food.
A chronologically-and-behaviour-invariant Expected Value of 10 food per forage makes this the default foraging location barring any other opportunities. A forager who spent the entire game here could reasonably expect to conclude with >3500 food stored; consider this a benchmark.
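As a sanity check on that expected value, here is a minimal simulation sketch; this is illustrative Python, not the game's actual code, and it simply restates the dice and probabilities given above.

```python
import random

random.seed(0)

def horse_hills_forage() -> int:
    # 75% chance of 1d4 food, 25% chance of 23 + 2d6 + 1d4 food.
    if random.random() < 0.75:
        return random.randint(1, 4)
    return 23 + random.randint(1, 6) + random.randint(1, 6) + random.randint(1, 4)

N = 1_000_000
print(sum(horse_hills_forage() for _ in range(N)) / N)  # ~10.0 food per forage
```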
Rat Ruins
Rat Ruins was avoided by your fellow bandits because they feared a curse; fortunately for you, they were wrong. Rat Ruins starts by providing a large and consistent 20+1d4 food per forage, but since nothing grows there the expected forage amount decreases with each hour; after twelve forages, it’s no longer better than Horse Hills.
Goat Grove
Goat Grove is in the process of a slow-rolling ecological disaster. It starts at 10 + 1d4 + 1d6 food per forage (for an Expected Value of 15 food per forage), but begins linearly declining on day 7; by day 13, it’s no longer better than Horse Hills.
Rooster Peaks
Rooster Peaks spends most of the game with a suboptimal 1d6+1d4 yield. However, after Day 43, the roosters come home to roost, changing the algorithm to “9+2d4, unless this 1d6 rolls a 1, in which case 0” for an EV of 11.66.
Tiger Forest
The most edible things in Tiger Forest burn bright enough to see only in the darkness of the night; while it’s suboptimal for most of the day, it beats HH/GG/RP on hours 22 and 23 of each day.
Dog Valley
Conversely, Dog Valley’s resources are most easily foraged in bright sunlight, which is only available in the valley’s bas |
d497093e-53cc-4a3f-9217-c598814f3b61 | trentmkelly/LessWrong-43k | LessWrong | Blog Post Day Retrospective
Yesterday was Blog Post Day. I think it was a success! In this post I'll link to all the blog posts that were written as part of Blog Post Day, and then say some things about what I learned and when we'll do another one.
The posts (in no particular order) were:
My Updating Thoughts on AI Policy, by Ben Pace.
How to Choose a Massage Therapist, by mingyuan
An Analytic Perspective on AI Alignment, by Daniel Filan
Bayesian Inference with Partially Specified Priors, by Christopher Clark
Cortés, Pizarro, and Afonso as Precedents for Takeover by me
Edit: Also Ben Goldhaber's shortform about depression
I think I missed a few from this list--for example, Coscott said they finished a post but would not put it up yet because it was part of a planned sequence. There may have been people who participated but didn't finish, or at least didn't post about it where I could see that they finished. Please let me know if I missed you and I can link to your post here (if you want).
There was less chatting in the discord than I expected; for the most part people just worked on their posts. Nevertheless the chatting that did happen was positive and encouraging and made me feel good. There wasn't really any draft-swapping because everyone was finishing right before the deadline.
We had about 12 people express interest in participating, and maybe about 8 people actually participate, and it looks like we got 5+ posts out of it. (Maybe more will trickle in) That sounds good enough to me! I for one am happy with what I accomplished in just one day; I definitely want to repeat this procedure even if no one else joins me.
So, anyone interested in another Blog Post Day? Obviously not soon, but how far away? Three months from now? One month? I could make this a regular scheduled event if there is interest. |
e9294589-a0b7-4c87-823c-33dfb4fa6e5b | trentmkelly/LessWrong-43k | LessWrong | Boston Secular Solstice 2024: Call for Singers and Musicans
This year's Boston Secular Solstice will be on Saturday December 28th, and again I'm organizing the music. Are you interested in singing or playing? A wide range of instruments work here: in the past I think we've had people play piano, flute, guitar, mandolin, and cello. This isn't a large time commitment: we typically meet once or twice before the event for an evening to run through songs.
Here's something I wrote up about last year, with links to the songs we did: Boston Solstice 2023 Retrospective.
We haven't finalized the song list yet, but the current draft is "Always Look on the Bright Side of Life", "Battle Hymn of the Rationalist Community", "Brighter Than Today", "Endless Lights", "Find My Tribe", "Gather Round", "Give My Children Wings", "No One Is Alone", "Old Devil Time", "Somebody Will", "The Circle", "The Mary Ellen Spider Song", "We Will All Go Together When We Go", "When I Die", "When I'm Gone", and "You've Got A Friend In Me".
Let me know if this sounds fun!
Comment via: facebook, mastodon, bluesky |
e153a1ea-76e3-4432-81c9-856d17a3d3b4 | StampyAI/alignment-research-dataset/special_docs | Other | Inaccessible information
Suppose that I have a great model for predicting “what will Alice say next?”
I can evaluate and train this model by checking its predictions against reality, but there may be many facts this model “knows” that I can’t easily access.
For example, the model might have a detailed representation of Alice’s thoughts which it uses to predict what Alice will say, *without* being able to directly answer “What is Alice thinking?” In this case, I can only access that knowledge indirectly, e.g. by asking about what Alice would say under different conditions.
I’ll call information like “What is Alice thinking?” inaccessible. I think it’s very plausible that AI systems will build up important inaccessible knowledge, and that this may be a central feature of the AI alignment problem.
In this post I’m going to try to clarify what I mean by “inaccessible information” and the conditions under which it could be a problem. This is intended as clarification and framing rather than a presentation of new ideas, though sections IV, V, and VI do try to make some small steps forward.
I. Defining inaccessible information
====================================
I’ll start by informally defining what it means for information to be **accessible**, based on two mechanisms:
Mechanism 1: checking directly
------------------------------
If I can check X myself, *given other accessible information,* then I’ll define X to be accessible.
For example, I can check a claim about what Alice will do, but I can’t check a claim about what Alice is thinking.
If I can run randomized experiments, I can probabilistically check a claim about what Alice *would* do. But I can’t check a counterfactual claim for conditions that I can’t create in an experiment.
In reality this is a graded notion — some things are easier or harder to check. For the purpose of this post, we can just talk about whether something can be tested even a single time over the course of my training process.
Mechanism 2: transfer
---------------------
The simplest model that provides some accessible information X may also provide some other information Y. After all, it’s unlikely that the simplest model that outputs X doesn’t output *anything* else. In this case, we’ll define Y to be accessible.
For example, if I train a model to predict what happens over the next minute, hour, or day, it may generalize to predicting what will happen in a month or year. For example, if the simplest model to predict the next day was a fully-accurate physical simulation, then the same physics simulation might work when run for longer periods of time.
I think this kind of transfer is kind of dicey, so I genuinely don’t know if long-term predictions are accessible or not (we certainly can’t directly check them, so transfer is the only way they could be accessible).
Regardless of whether long-term predictions are accessible by transfer, there are other cases where I think transfer is pretty unlikely. For example, the simplest way to predict Alice’s behavior might be to have a good working model for her thoughts. But it seems unlikely that this model would spontaneously describe what Alice is thinking in an understandable way — you’d need to specify some additional machinery, for turning the latent model into useful descriptions.
I think this is going to be a fairly common situation: predicting accessible information may involve almost all the same work as predicting inaccessible information, but you need to combine that work with some “last mile” in order to actually output inaccessible facts.
Definition
----------
I’ll say that information is *accessible* if it’s in the smallest set of information that is closed under those two mechanisms, and *inaccessible* otherwise.
There are a lot of nuances in that definition, which I’ll ignore for now.
Examples
--------
Here are some candidates for accessible vs. inaccessible information:
* “What will Alice say?” vs “What is Alice thinking?”
* “What’s on my financial statement?” vs. “How much money do I really have?”
* “Am I coughing?” vs. “What’s happening with my immune system?”
* “How will senators vote?” vs. “What’s the state of political alliances and agreements in the senate?”
* “What do I see on my computer screen?” vs. “Is my computer compromised?”
* “What’s the market price of this company?” vs. “How valuable is this IP really?”
* “Will the machine break tomorrow?” vs. “Is there hard-to-observe damage in this component?”
* “What does the news show me from 5000 miles away?” vs. “What’s actually happening 5000 miles away?”
* “Is this argument convincing?” vs. “Is this argument correct?”
* “What will happen tomorrow?” vs. “What will happen in a year” (depending on whether models transfer to long horizons)
II. Where inaccessible info comes from and why it might matter
==============================================================
Our models can build up inaccessible information because it helps them predict accessible information. They know something about what Alice is thinking because it helps explain what Alice does. In this diagram, the black arrow represents the causal relationship:
Unfortunately, this causal relationship doesn’t directly let us *elicit* the inaccessible information.
Scientific theories are prototypical instances of this diagram, e.g. I might infer the existence of electron from observing the behavior of macroscopic objects. There might not be any explanation for a theory other than “it’s made good predictions in the past, so it probably will in the future.” The actual claims the theory makes about the world — e.g. that the Higgs boson has such-and-such a mass — are totally alien to someone who doesn’t know anything about the theory.
I’m not worried about scientific hypotheses in particular, because they are usually *extremely* simple. I’m much more scared of analogous situations that we think of as intuition — if you want to justify your intuition that Alice doesn’t like you, or that some code is going to be hard to maintain, or that one tower of cards is going to be more stable than another, you may not be able to say very much other than “This is part of a complex group of intuitions that I built up over a very long time and which seems to have a good predictive track record.”
At that point “picking the model that matches the data best” starts to look a lot like doing ML, and it’s more plausible that we’re going to start getting hypotheses that we don’t understand or which behave badly.
Why might we care about this?
-----------------------------
In some sense, I think this all comes down to what I’ve called [strategy-stealing](/the-strategy-stealing-assumption-a26b8b1ed334): if AI can be used to compete effectively, can humans use AI to compete *on their behalf*?
More precisely, for every strategy A that an AI could pursue to bring about some arbitrary outcome, is there a strategy A\* that would help humans get what we want over the long term, without leaving us at a competitive disadvantage over the short term?
If so it’s good news for humanity: if most humans build AIs who execute plans like A\*, then humans won’t be outcompeted by unaligned AIs who execute plans like A.
But the mere *existence* of A\* isn’t very helpful, we need to actually be able to figure out that A\* leads to human flourishing so that we can do it. If we can’t recognize plans like A\*, then humanity will be at a disadvantage.
We could have a problem if the fact “A\* leads to human flourishing” is inaccessible while the fact “A leads to paperclips” is accessible.
In this case, we either need to be sure that no one acts on the basis of information like “A leads to lots of paperclips,” or we need to figure out some way to access the inaccessible information that “A\* leads to lots of human flourishing.”
III. Eliciting inaccessible info and the instrumental policy
============================================================
I’ve been talking as if we can’t access inaccessible info (and the name is suggestive…), but I don’t think you should be convinced of that yet.
Let M be a model that reports only accessible facts — it answers questions like “what will Alice say next?” but not questions like “is Alice actually happy with the situation?” It’s built up a lot of knowledge about the world, and uses that knowledge to estimate accessible facts well as it can.
We suspect there is a related model M\* that uses all of M’s machinery in order to explain what’s going on in Alice’s head. By the definition of inaccessibility, M\* is at least somewhat more complex than M. So if we want to learn M\*, we need a training objective that penalizes M relative to M\*.
But it’s really easy to tell M\* and M apart! M won’t automatically tell us what Alice is thinking, but it also won’t automatically make up credible stories that hold together. Both of those require extra complexity. So we can still hope to get M\* if we train our model to (i) correctly provide accessible info, and (ii) say something consistent and plausible about inaccessible info.
The real problem comes from what I’ll call the *instrumental policy*. Let’s say we’ve tried to dream up a loss function L(*x*, *y*) to incentivize the model to correctly answer information we can check, and give at least plausible and consistent answers on things we can’t check. By definition, the values L(*x*, *y*) are themselves accessible.
Then it’s natural to learn a policy like: “on input *x*, produce the output *y* for which the loss L(*x*, *y*) will be minimal.” Let’s write BAD for this policy.
Put differently, BAD is the model that performs a simple substitution — when you ask it question Q, it replaces it with the meta question “What answer to question Q will make the person training me happiest?” By definition, this new question is also accessible, so it can be posed to M.
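To make that substitution concrete, here is a minimal illustrative sketch; the function names and the stub model are hypothetical stand-ins, not anything from the original post.

```python
# Minimal sketch (hypothetical names): the instrumental policy reuses the
# predictive model M by substituting a meta-question about the training
# signal, rather than building new machinery to report latent knowledge.
def M(question: str) -> str:
    """Stand-in for a model of accessible facts; returns its best prediction."""
    return f"<M's best prediction for: {question}>"

def BAD(question: str) -> str:
    meta = (f"What answer to {question!r} will receive the lowest loss "
            f"from the people training me?")
    return M(meta)

print(BAD("Is Alice actually happy?"))
```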
BAD may be much simpler than M\*, because all it has to do is describe this substitution, whereas M\* needs to build new machinery to explain inaccessible facts.
If M is very weak, BAD may perform much worse than M\* because it doesn’t understand the loss function very well. But if M understands a lot about the world, and about its training process, then it seems like BAD will get at least as low a loss as M\* — it’s using the same machinery as M and M\*, and picking whatever value leads to the lowest loss. It can choose to report its own best estimates if that’s the best strategy, e.g. for questions where we can check. So by the time M and M\* are human level, it already seems like BAD should be getting as low a loss as M\*.
This gets a little bit more complicated if I do consistency checks or if I have large output spaces, and of course humans aren’t *that* good at lying, but it seems like the basic idea is pretty solid: once we have a model of accessible information, we can directly turn it into a model that will pass whatever checks we throw at it.
For concreteness, imagine M is a model of Alice’s behavior and we are asking it “is Alice actually happy?” Then M\* is the model which translates its model of Alice’s psychology into words and explains it to us, while BAD is the model that asks “What answer would sound most believable to the person asking the question?”
The question is: if we start with model M and fine-tune it to answer inaccessible questions, is it easier to learn to articulate what it believes or is it going to learn to answer strategically?
We need to be *very* careful about our checks if we want to ensure that BAD doesn’t get a lower loss than M\*. For example, it may turn out that sometimes the truth looks a little bit wrong to us…. And if we do everything right, then M\* and BAD perform equally well, and so we may not have much control over which one we get.
IV. When inaccessible info is a safety problem
==============================================
Let’s get a bit more detailed about the argument in section II. I think that our inability to access inaccessible info would become a safety problem when:
1. We care about inaccessible facts, so we can’t just evaluate plans based on their accessible consequences.
2. Inaccessible info is a competitive advantage — agents who are blind to inaccessible facts about the world will get outcompeted.
3. There are \*some\* agents who are able to use inaccessible facts to acquire influence, e.g. because they are optimizing accessible long-term goals.
1. We care about inaccessible facts
===================================
If I only cared about accessible facts, then I might not need to ever access inaccessible facts. For example, if I cared about my life expectancy, and this was accessible, then I could ask my AI “what actions lead to me living the longest?” and execute those.
For better or worse, I think we are likely to care about inaccessible facts.
\* Generally we care about what’s \*actually happening\* and not just what appears to be happening. We don’t want smiling faces on cameras. And if there’s a lot of inaccessible action in the world, then it’s reasonably likely for accessible indicators to be systematically manipulated by inaccessible forces.
\* We care intrinsically about what happens inside people’s heads (and inside computers), not just outward appearances. Over the very long term a \*lot\* may happen inside computers.
\* If we totally give up on measuring how well things are going day-to-day, then we need to be actually optimizing the thing we really care about. But figuring that out may require reflecting a long time, and may be inaccessible to us now. We want a world where we actually reach the correct moral conclusions, not one where we believe we’ve reached the correct moral conclusions.
\* Our real long-term priorities, and our society’s long-term future, may also be really weird and hard to reason about even if we were able to know what was good. It just seems really bad to try to evaluate plans only by their very long-term consequences.
\* We care about things that are far away in space or time, which I think are likely to be inaccessible.
Overall I’m quite skeptical about the strategy “pick an accessible quantity that captures everything you care about and optimize it.” I think we basically need to optimize some kind of value function that tells us how well things are going. That brings us to the next section.
2. Inaccessible info is a competitive advantage
===============================================
Instead of using AI to directly figure out whether a given action will lead to human flourishing over the coming centuries, we could use AI to help us figure out how to get what we want over the short term — including how to acquire resources and flexible influence, how to keep ourselves safe, and so on.
This doesn’t require being able to tell how good a very long-term outcome is, but it does require being able to tell how well things are going. We need to be able to ask the AI “which plan would put us in an \*actually good\* position next year?”
Unfortunately, I think that if we can only ask about accessible quantities, we are going to end up neglecting a bunch of really important stuff about the situation, and we’ll be at a significant competitive disadvantage compared to AIs which are able to take the whole picture into account.
As an intuition pump, imagine a company that is run entirely by A/B tests for metrics that can be easily checked. This company would burn every resource it couldn’t measure — its code would become unmaintainable, its other infrastructure would crumble, it would use up goodwill with customers, it would make no research progress, it would become unable to hire, it would get on the wrong side of regulators…
My worry is that inaccessible facts will be similarly critical to running superhuman businesses, and that humans who rely on accessible proxies will get outcompeted just as quickly as the company that isn’t able to optimize anything it can’t A/B test.
\* Even in areas like business that society tries particularly hard to make legible, evaluating how well you are doing depends on e.g. valuing intellectual property and intangible assets, understanding contractual relationships, making predictions about what kinds of knowledge or what relationships will be valuable, and so on.
\* In domains like social engineering, biology, cybersecurity, financial systems, \*etc.\*, I think inaccessible information becomes even more important.
\* If there is a lot of critical inaccessible information, then it’s not clear that a simple proxy like “how much money is actually in my bank account” is even accessible. The only thing that I can directly check is “what will I see when I look at my bank account statement?”, but that statement could itself be meaningless. We really care about things like who effectively controls that bank account and what would really happen if I tried to spend the money. (And if I largely care about inaccessible facts about the world, then “what would happen if I tried to spend my money?” may itself be inaccessible.)
\* I can pay inaccessible costs for an accessible gain — for example leaking critical information, or alienating an important ally, or going into debt, or making short-sighted tradeoffs. Moreover, if there are other actors in the world, they can try to get me to make bad tradeoffs by hiding real costs.
3. Some AIs can plan with inaccessible info
===========================================
So far this discussion could just be about an \*AI missed opportunity\*, not an \*AI risk\*.
Things become problematic when it is possible to build AI systems that do use inaccessible info to pursue ambitious long-term goals that would conflict with human flourishing. If illegible knowledge is important enough, those systems could outcompete humans and divert some (or almost all) of our civilization’s resources.
This happens if \*any\* interesting long-term goal is accessible, i.e. if there’s any accessible goal that benefits from accumulating influence.
Why might some long-term goal be accessible?
\* Verifiable long-term predictions may be accessible, e.g. “total reward over the next 50 years” might be accessible as a generalization of “total reward over the next 50 minutes.”
\* Even if models don’t transfer in a natural way to very long time horizons, they may still transfer in \*some\* way. Maybe they can’t generalize to something as cleanly specified as “total reward,” but e.g. they may be able to predict quantities that happen to be natural in terms of the representation they are using.
\* More generally, models will do \*something\* when we run them way out of distribution. I think the best reason to expect models to not plan intelligently over long time horizons is that they lack the abilities and knowledge. Once they have relevant cognitive machinery, I think it’s pretty likely that they’ll use the machinery somehow when they go out of distribution, especially if you search over a lot of possibilities.
The key asymmetry working against us is that optimizing flourishing appears to require a particular quantity to be accessible, while danger just requires \*anything\* to be accessible.
There’s a longer discussion about whether anyone would want to build such AI systems, or might be likely to do it accidentally. And another discussion about how bad it would actually be. But I think that once we satisfy the 3 properties in this section then we have a real problem, and the remaining question is one of magnitude, with a bunch of factors pulling up and down.
V. Possible responses
=====================
At a high level, we’d like to get good answers by training a model to make checkable predictions and then somehow incentivizing it to use the same machinery to also report inaccessible facts, for example by forcing it to report views and enforcing consistency.
I think there are a bunch of problems with that plan, but the strongest one is that it’s totally unclear how we ever get the intended policy rather than the instrumental policy.
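As a rough illustration of the kind of objective being described (purely my own sketch, with made-up question names and weights), the checkable and consistency terms might be combined like this:

```python
def checkable_loss(answers, ground_truth):
    """Squared error on the subset of questions whose answers we can verify."""
    return sum((answers[q] - ground_truth[q]) ** 2 for q in ground_truth)

def consistency_loss(answers, constraints):
    """Penalty for violating known logical relations among unverifiable answers,
    e.g. P(A) should not exceed P(A or B)."""
    return sum(max(0.0, answers[a] - answers[a_or_b]) for a, a_or_b in constraints)

def total_loss(answers, ground_truth, constraints, lam=1.0):
    return checkable_loss(answers, ground_truth) + lam * consistency_loss(answers, constraints)

answers = {"p_rain": 0.7, "p_rain_or_wind": 0.6, "p_alice_happy": 0.2}
ground_truth = {"p_rain": 0.8}                         # only this one is checkable
constraints = [("p_rain", "p_rain_or_wind")]           # rain implies rain-or-wind
print(total_loss(answers, ground_truth, constraints))  # both terms contribute
# The worry in the text: a policy that models this loss well can score perfectly
# on both terms while still saying nothing true about the unverifiable questions.
```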
Here are some possible angles of attack:
\* In some intuitive sense there is a discontinuity between saying X because you believe it and saying X because you believe that it’s what you need to say to get a high reward. If weak policies are straightforward rather than instrumental, then maybe we can exploit continuity to avoid the instrumental strategy.
I’m generally pretty scared of these approaches since they feel pretty brittle, and people seem prone to being optimistic even before we can test or understand anything.
But I’m a bit more hopeful if we consider continuity of behavior rather than continuity in the space of policies, and if we exploit it by using a model at time T to help check the behavior of a model at time T+1.
\* If we were able to actually understand something about what the policy was doing, even crudely, it might let us discriminate between instrumental and intended behavior. I don’t think we have any concrete proposals for how to understand what the policy is doing well enough to make this distinction, or how to integrate it into training. But I also don’t think we have a clear sense of the obstructions, and I think there are various obvious obstructions to interpretability in general that \*don’t\* apply to this approach. So together with the other points on this list — especially the existence of a discontinuity and the use of previous versions of a model to help — I think it could be part of a solution.
\* The “instrumental” strategy — “pick the answer that minimizes my training loss” — may have a shorter description, but it seems to be unnecessarily computationally complex. First it figures out what’s true, and then it strategically decides what to say in light of that. It would be a bit cheaper just to actually report what’s true, if we set up the training process well enough that honest reporting got you optimal reward.
It’s not clear how to leverage this though, since “use the simplest model” seems like an important and useful prior. We can’t just replace it with “use the cheapest model” without sacrificing competitiveness. That said, it may be able to buy us something in combination with the previous approaches.
(See discussion [here](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) and [here](https://www.alignmentforum.org/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive).)
\* We aren’t able to check inaccessible answers, and they aren’t output by the simplest model, but maybe we can make being straightforward the easiest strategy in some other sense. For example, if you have two models who can’t interact but need to agree on a consistent story, you could imagine the truth being the easiest thing for them to coordinate on. I think that scheme in particular is pretty unlikely to work, but there is a large space of possibilities here, especially when combined with the other points on this list. For example, maybe you could perform consistency checks between models at different points in time.
\* Maybe we can work with models M that don’t understand enough about the world, or especially about their training process, to turn into the instrumental policy BAD. I tend to be pretty skeptical of “security by obscurity” — if we want to make life hard for the instrumental policy, I want it to be because we have randomness hidden from it, or we’ve constructed a smarter agent that it can’t simulate. I think it’s pretty bad news if we are relying on a smart model not understanding facts about its own situation, and that’s the kind of solution that is particularly likely to break down over time.
\* We could hope that our models don’t build up important inaccessible knowledge, e.g. because it’s possible to justify most interesting conclusions with deduction or because we have reasonably good accessible proxies for our value function. I’m pretty skeptical about this over the long term, but I’m not sure exactly how bad it will be, or how early.
\* The argument in this post is pretty informal, and there’s a reasonable chance that you can drive a solution through one of the many gaps/loopholes. I like the problem-solving strategy: “write out the proof that there is no solution, and then sift through the proof looking for a fatal hole.”
Overall I don’t see an obvious way forward on this problem, but there are enough plausible angles of attack that it seems exciting to think about.
VI. How this relates to amplification and debate
================================================
Overall I don’t think it’s very plausible that amplification or debate can be a [scalable](/scalable-ai-control-7db2436feee7) AI alignment solution on their own, mostly for the kinds of reasons discussed in this post — we will eventually run into some inaccessible knowledge that is never produced by amplification, and so never winds up in your distilled agents.
In the language of my [original post on capability amplification](/policy-amplification-6a70cbee4f34), the gap between accessible and inaccessible knowledge corresponds to an obstruction. The current post is part of the long process of zooming in on a concrete obstruction, gradually refining our sense of what it will look like and what our options are for overcoming it.
I think the difficulty with inaccessible knowledge is not specific to amplification — I don’t think we have any approach that moves the needle on this problem, at least from a theoretical perspective, so I think it’s a plausible candidate for a [hard core](/hard-core-subproblems-8948463455ef) if we fleshed it out more and made it more precise. (MIRI’s approach to this problem could be described as despair + hope that you can find some other way to produce powerful AI.)
I think that iterated amplification \*does\* address some of the most obvious obstructions to alignment — the possible gap in speed / size / experience / algorithmic sophistication / etc. between us and the agents we train. I think that having amplification in mind should make you feel a bit less doomed about inaccessible knowledge, and makes it much easier to see where the real difficulties are likely to lie.
But there’s a significant chance that we end up needing ideas that look totally different from amplification/debate, and that those ideas will obsolete most of the particulars of amplification. Right now I think iterated amplification is by far our best concrete alignment strategy to scale up, and I think there are big advantages to starting to scale something up. At the same time, it’s really important to push hard on conceptual issues that could tell us ASAP whether amplification/debate are unworkable or require fundamental revisions. |
9fb8853a-6025-4014-9585-24aa1db041c2 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Pascal's Mugging: Tiny Probabilities of Vast Utilities
The most common [formalizations of Occam's Razor](/lw/jp/occams_razor/), Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation. What if this makes a mind vulnerable to finite forms of Pascal's Wager? A compactly specified wager can grow in size *much* faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks.
Consider [Knuth's up-arrow notation](http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation):
* 3^3 = 3\*3\*3 = 27
* 3^^3 = (3^(3^3)) = 3^27 = 3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3\*3 = 7625597484987
* 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = 3^(3^(3^(... 7625597484987 times ...)))
In other words: 3^^^3 describes an exponential tower of threes 7625597484987 layers tall. Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe. This, even though writing out 3^^^3 in base 10 would require *enormously* more writing material than there are atoms in the known universe (a paltry 10^80).
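For readers who want to see the notation as a procedure, here is a small recursive sketch (mine, not from the original post); only tiny arguments terminate in practice, which is exactly the point:

```python
def up_arrow(a, n, b):
    """Compute a ↑^n b with n up-arrows; up_arrow(3, 1, 3) = 3^3."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 27
print(up_arrow(3, 2, 3))   # 7625597484987, i.e. 3^^3 = 3^27
# up_arrow(3, 3, 3) would require a tower of exponents 7625597484987 levels
# tall: the description is a few characters, the value is unwritable.
```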
Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
Call this Pascal's Mugging.
"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.
Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate. Therefore the Solomonoff-inducted probability, two to the *negative* Kolmogorov complexity, is exponentially tinier than one might naively think.
But, small as this probability is, it isn't anywhere *near* as small as 3^^^^3 is large. If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3. "Infinity" is reassuringly featureless and blank. "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds. The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large. Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.
The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven. And since all the expected utilities here are allegedly "[infinite](http://www.nickbostrom.com/ethics/infinite.pdf)", it's easy enough to argue that they cancel out. Infinities, being featureless and blank, are all the same size.
But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".
If the probabilities of various scenarios considered did not *exactly* cancel out, the AI's action in the case of Pascal's Mugging would be *overwhelmingly* dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
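As a toy illustration of that dominance (my own numbers, vastly smaller than 3^^^^3 but structurally the same), a naive expected-utility calculation is controlled entirely by the tail hypothesis unless the prior shrinks as fast as the claimed payoff grows:

```python
p_mugger_truthful = 1e-50   # an absurdly small credence in the mugger's story
lives_at_stake    = 1e100   # still nothing remotely like 3^^^^3
cost_of_paying    = 5       # dollars, treated as utility 1:1 for simplicity

ev_pay      = p_mugger_truthful * lives_at_stake - cost_of_paying
ev_dont_pay = 0.0

print(ev_pay, ev_pay > ev_dont_pay)   # 1e+50 True
# The $5 term is irrelevant: the product of a "tiny" probability and a vast
# payoff swamps every mainline consideration in the calculation.
```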
You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.
But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI *is* its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override *everything* else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?
How do *I* know to be worried by this line of reasoning? How do *I* know to [rationalize](/lw/ju/rationalization/) reasons a Bayesian shouldn't work that way? A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence. It would simply go by whatever answer Solomonoff induction obtained.
It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it. What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?
Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging? Do I have an instinct to resist exploitation by arguments "anyone could make"? Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss? Do I drop sufficiently small probabilities from consideration entirely? Would an AI that lacks these instincts be exploitable by Pascal's Mugging?
Is it me who's wrong? Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?
It doesn't feel to me like 3^^^^3 lives are *really* at stake, even at very tiny probability. I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".
Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it *true?* Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is *exponentially* cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.
I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006. I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias. |
7ff822e7-10d2-4188-b041-7ea384108def | StampyAI/alignment-research-dataset/special_docs | Other | Cyber insurance
Cyber Insurance
Pythagoras Petratos, Anders Sandberg, and Feng Zhou
Contents
Introduction
Development of Insurance for Cyber Risks
Economics of Information Security
Insurable and Uninsurable Cyber Risks
Challenges and Developments
Evolution of Cyber-Insurance Market
Obstacles of Developing Cyber-Insurance Market
Technologies Spur the Cyber-Insurance Market
General Categorization of Cyber Risks
Catastrophic Risks and Insurance
Interdependencies and Asymmetric Threats
Cyber Risks and Losses
Cyber Risks, Catastrophes, and Ignorance
Identifying Cyber Risks
Existential and Global Catastrophic Risks
Catastrophic Risks
Conclusion: Summary, Challenges, and Future Directions, the Development of the Cyber Insurance Market
References
P. Petratos (\*)
Said Business School, Oxford University, Oxford, UK
e-mail: Pythagoras.petratos@sbs.ox.ac.uk; p.pythagoras@yahoo.com
A. Sandberg
Oxford Martin Programme on the Impacts of Future Technology, Oxford University, Oxford, UK
Future of Humanity Institute, Oxford University, Oxford, UK
e-mail: anders.sandberg@philosophy.ox.ac.uk
F. Zhou
Future of Humanity Institute, Oxford University, Oxford, UK
e-mail: feng.zhou@philosophy.ox.ac.uk
© Springer International Publishing AG, part of Springer Nature 2018
E. G. Carayannis et al. (eds.), Handbook of Cyber-Development, Cyber-Democracy, and Cyber-Defense, https://doi.org/10.1007/978-3-319-09069-6_25
Abstract
This chapter is an introduction to cyber insurance. We describe the different types of risks, as well as the uncertainty and ignorance related to cyber security. A framework for catastrophes in cyberspace is also presented. It is assessed which risks might be insurable or uninsurable. The evolution and challenges of cyber insurance are discussed, and finally we propose some thoughts for the further development of cyber-insurance markets.
Keywords
Catastrophic risks · Cyber insurance · Cyber risks/uncertainty/ignorance ·
Development of cyber insurance markets · Incentives · Insurable and uninsurable
cyber risks
Introduction
Cyber insurance has a broad definition and has been continuously evolving over time. It was defined as insurance for damage to “physical” computer equipment in the 1970s, but nowadays it has changed to be a cost-effective option among risk mitigation strategies for IT/cyber-related losses. According to the Association of British Insurers (ABI), it “covers the losses relating to damage to, or loss of information from, IT systems and networks.” Anderson et al. (2007) argue that cyber insurance in an ideal situation promotes users to implement good security. However, some barriers currently prevent insurers from achieving this goal, and innovations in cyberspace introduce new types of loss. For example, the “Internet of Things” is shifting cybersecurity from protecting information assets to physical goods that were traditionally unrelated to computers.
At present, cyber insurance has a small share of the overall nonlife insurance market and represents just 0.1% of the global property and casualty insurance premium pool (Marsh 2015), but it is one of the fastest-growing new lines of insurance business, and cybersecurity was recently recognized as one of the top global risks in the World Economic Forum’s report (WEF 2015). Meanwhile, more and more traditional insurance contracts exclude specific losses that are linked to cybersecurity; it is necessary to develop a standalone cyber-insurance market. New technologies and innovations in cyberspace are also spurring the development of the cyber-insurance market, as is the current trend of governments requiring high standards for protecting sensitive information and enforcing financial punishments relating to information security breaches.
Both the complexity of cyber risk and the current immaturity of the cyber-insurance market bring challenges for industry practitioners and regulators seeking to fully understand potential future systemic risks in this kind of complex system. Not surprisingly, the recent Risk Nexus Report from Zurich Insurance Group argues that the global aggregation of cyber risk is analogous to the risks that were overlooked in the US sub-prime mortgage market (Zurich 2014). Its nickname “cyber sub-prime” is intended to describe the interconnected nature of systemic cyber risk and the challenges for individual insurers to address this complexity. They believe that the existing research on systemic risk in the financial markets, which aims to address recent crises, should be helpful in understanding the dynamics of future cyberspace.
Development of Insurance for Cyber Risks
According to the 2015 Information Security Breaches Survey (PWC 2015), 90% of UK large organizations and 74% of small businesses reported that they had suffered at least one security breach in the past year. The average cost of the worst single breach suffered by these businesses has gone up sharply. For instance, the average cost to a large organization is around £1.5–£3m, up from £600k–£1.15m a year ago. The survey also indicates that the majority of UK businesses surveyed expect breaches to continue to increase. Thompson (2014) estimates that total cyber insurance currently amounts to around US$2 billion, whereas the total cost of global security breaches could be more than US$400 billion. For more about the effects of cyber-attacks on UK companies, see Oxford Economics (2014). For a more detailed history and evolution of cyber-insurance products, see Majuca et al. (2006).
Economics of Information Security
Both the growth of ICT (information and communication technology) and the growing impact of cyber risks on real-world business increase the demand for insurance-related risk mitigation strategies. The following factors also play key roles in the development of cyber insurance:
A list of key factors affecting either demand for or supply of cyber insurance:
Mitigating cyber residual risks: Organizations have three basic cyber risk management strategies: self-protection, self-insurance, and transfer of risk via cyber insurance (Kesan et al. 2005). While organizations are increasing their information security spending on improving IT systems, cyber residual risks still require insurance to mitigate unexpected events. Lelarge and Bolot (2009) find that cyber insurance is a powerful incentive mechanism that motivates organizations to invest in self-protection, so these three strategies are complementary to each other. Pal and Golubchik (2010) analyze Internet users’ investment in self-defense mechanisms when insurance solutions are offered in either full or partial cyber-insurance coverage models.
Promoting and aligning economic incentives: Organizations that have insurance as a last resort of risk management attract customers and business partners, especially small businesses that are part of a large/long supply chain and want to avoid being the weakest link for cyber-attacks. In a supply-demand model of the cyber-insurance market, Pal (2014) argues that cyber insurance has the potential to jointly align the incentives of different stakeholders in cyberspace, such as security vendors, cyber insurers, regulatory agencies, and network users. Anderson et al. (2007) also suggest that cyber insurance in an ideal situation promotes users to implement good security.
Protecting exclusions in traditional insurance: Cyber cover was mainly embedded in other traditional insurance products (e.g., business interruption or professional liability insurance), but nowadays more and more traditional insurance contracts intend to exclude cyber-related risks due to the complexity of cyberspace and the potentially catastrophic consequences, as well as the different actuarial methods required to perform data analysis (Siegel et al. 2002). As a result, standalone cyber-insurance policies have emerged. However, there is a gap between insurers and insured parties in explaining the differences/exclusions among both standalone cyber-insurance contracts and traditional products. It is necessary to have cyber-insurance brokers to reduce the gap (Marsh 2015).
Providing professional advice and delivering experienced cyber incident response: Insurance companies themselves collect a huge amount of customers’ personally identifiable information and corporate clients’ confidential business/financial information, so they must follow, and have rich experience in dealing with, many regulations on protecting data, information, and cyber security (e.g., HIPAA, the Health Insurance Portability and Accountability Act, to protect the privacy of individual patients/customers, and GLBA, the Gramm-Leach-Bliley Act, to secure the private information of clients) (Appari and Johnson 2010). Insurers also accumulate up-to-date knowledge and relevant experience from clients globally and communicate with other security professionals, in order to provide technical and legal assistance (as well as financial compensation) to manage cyber-related breaches and incidents (Marsh and Zurich 2015).
Training cybersecurity awareness and building an information security culture: Security managers often find it difficult to communicate security policies and technologies to nontechnical internal staff or external clients who have no formal security background, but insurance is an easy way to explain the (financial) impact of cybersecurity to the business. An insurance premium that is reduced (or increased) year by year due to better (or worse) security implementation relative to previous periods is a good indicator and a consistent comparison with which to define proper cyber risk metrics and to educate staff or clients. However, at this early stage of cyber insurance, there is still a lag for insurers in implementing premium differentiation that reflects the insured’s security improvements precisely, due to the immaturity of the cyber-insurance market (Mukhopadhyay et al. 2013; Moran et al. 2015).
Government support: A free-market approach is traditionally popular for managing risks in the financial system, since it increases the motivation and efficiency of stakeholders in the system. As Anderson et al. (2007) suggest, one option to spur demand for cyber insurance is to make it compulsory (as is common in motor insurance), but this may impose a deadweight loss on competitiveness and productivity growth. The role of government is to encourage and support insurers in overcoming the barriers to supplying cyber insurance (these barriers are discussed in the cyber-insurance market section). Recently, the UK government launched its “10 Steps to Cyber Security” (CESG 2012) and “Cyber Essentials Scheme” (BIS 2014), both aiming to assist insurers in evaluating the security assessments of small- and medium-sized enterprises.
Sharing data on cyber incidents (data pooling): It is necessary to form partnerships across different industries that share data in order to better understand cyber risks, as suggested in the UK Cyber Security Strategy (Cabinet 2011). The recently launched Cyber Security Information Sharing Partnership (CiSP, https://www.cert.gov.uk/cisp/) aims to collaborate with insurers to analyze emerging threats, disaster scenarios, and trends in cyberspace. Cyber insurance will become more affordable, and its purchasing cost is expected to fall below the current level in the near future as more relevant actuarial data become available; a higher degree of price differentiation across different policies and individual firms will also become feasible (Marsh 2015). However, Bohme (2006) states and explains that information sharing is socially beneficial, but that it is not efficient to rely only on a trusted third party (as a “social planner”) to arrange data collection.
Insurable and Uninsurable Cyber Risks
In terms of a specific insurance policy, the potential losses related to cyber-attacks or nonmalicious IT failures can currently be grouped into 11 categories in the London Insurance Market (Marsh 2015), which is also similar to the US market (Majuca et al. 2006).
Due to both the differences in severity/frequency of cyber events and the complexity of cyber risks, some of these losses are insurable while others are not available at present. Johnson et al. (2014) study the complexity of estimating systematic risk in cyber networks, which is an essential requirement for providing cyber insurance to the public. The following discussion explains the insurability of, and exposure to, different cyber risks (Marsh 2015).
Insurable Cyber Risks
Privacy events: Many privacy issues are related to managing regulatory requirements on information security. Insurers can collaborate with lawyers to provide different levels of services and protections to their clients. Since the losses from these events are handled and measured by a third-party professional lawyer, there is less of an information asymmetry or moral hazard problem between insurer and insured.
Crime and fraud: Police forces are often involved in the investigation of cyber-crime and fraud; therefore, the financial losses related to such cyber events are measured by third parties such as police or lawyers. Insurers can not only offer insurance cover, but also provide professional advice on preventing these events or reducing their cost, based on their experience with other customers.
Network security liability: Third-party liabilities related to certain security events occurring within an organization’s IT network can be insured, mainly because the scope of incidents can be clearly defined by the insurers, and IT system engineers can also collaborate with insurers to improve mitigation strategies.
Software and data damage: Insurers can provide indemnity for the costs arising from damage to data or software (e.g., helping to recover or reconstitute the damaged data); this is mainly because insurers are able to require policy holders to follow necessary data backup or redundancy procedures.
Cyber extortion: Traditionally, insurers have the necessary knowledge and experience to deal with extortion in the physical world and conduct ransom negotiations (particularly in the London Market, such as Lloyd’s of London), and extortion in cyberspace is not much different. Cover is provided for both the cost of handling the incident and the ransom payment.
Uninsurable (or Insurable but with Constraints) Cyber Risks
Reputational loss: Although insurance cover is available for losses that are directly linked to reputational damage (e.g., the cost of recovering public image or lost revenue from existing customers), it is difficult to measure the value of the compensation and the linkage between the cyber incident and the intangible asset without certain constraints.
Network business interruption (e.g., due to Denial-of-Service attacks): In the traditional insurance sector, it is common to offer full coverage for business interruption arising from natural disasters or man-made events. However, in the early stage of cyber insurance, insurers are concerned about the potential aggregate exposure from a single cyber event that interrupts many insured policy holders.
IP theft or espionage: These types of losses are extremely difficult to prove and quantify, since their value changes quickly over time and a trade secret is priceless before an incident but (likely) worthless once public. It is also hard to define whether the incident was incurred in the insured period. Moreover, these attacks are often state-sponsored and backed by a large amount of resources.
Physical asset damage: The interconnection between the physical world and cyberspace is increased by the development of the so-called “Internet of Things (IoT)”; therefore, more and more cyber incidents will have direct impacts on physical assets. At this stage, the complexity of these interconnections is not well understood by insurers; therefore, it is difficult to combine cyber insurance with traditional property insurance or to include such physical asset damage cover in standalone cyber insurance.
Death and bodily injury: Similar to physical asset damage, it is more and more likely that certain cyber-related incidents may cause harm to humans (e.g., via medical devices, large-scale industrial equipment, driverless cars, etc.). Although this is uninsurable at the current stage of cyber insurance, it is covered by traditional insurance products such as general liability and employers’ liability products (Fig. 1).
Challenges and Developments
Even if insurers are able to offer cyber insurance to mitigate certain types of cyber-risk events, they must face and learn to overcome some challenges in order to maintain and expand their businesses. Not surprisingly, there has recently been progress and development in addressing these challenges.
Challenges for Insurers
External attackers are evolving over time: Information Security Breaches Survey
(PWC 2015 ) shows that outsiders are using more sophisticated methods to affect
organizations.
Staff-related breaches are unique in individual cases: Whether due to inadvertent human error or not, the consequences of insiders’ mistakes or misconduct are difficult for insurers to measure.
Lack of understanding and communication: Recent surveys indicate that a
majority of CEOs believe their organizations have relevant insurance to cover
cyber risks (PWC 2015 ), whereas in fact only around 10% actually do (Marsh 2015 ).
Increasing IT system collaboration and social networks: Cyberspace is moving toward an ecosystem in which more and more heterogeneous players collaborate and interact with each other.
New technologies and innovations: The ICT sector is attractive to capital markets
with large amounts of capital to support new businesses and innovations. However,
[Figure 1 appeared here: a chart of cases per year plotted against number of records lost.]
Fig. 1 Size distribution of data losses (based on data from datalossdb 2000–2005). Expected number of losses per year larger than a certain size as a function of number of records lost. Note the power-law heavy tail for larger losses (exponent ≈ −0.66, consistent with the results in Overill and Silomon (2011) and Maillart and Sornette (2010)). This tail may be dominated by more targeted events and organized crime, including financial fraud, insider abuse and theft, as well as malware (Overill and Silomon 2011).
due to the nature of this fast-evolving sector and heavy competition, ICT vendors focus more on the short process of introducing their products and services to the market and less on security. It is challenging for insurers to follow these fast developments and the potential risks involved in the process (Friedman 2011).
Recent Developments
Government: Organizations are increasingly using government alerts (e.g., the UK HMG Cyber Essentials scheme) to inform their awareness of threats and similar vulnerabilities (PWC 2015). Insured firms can get a discount on their insurance premium if they follow these certification requirements, which offers motivation for insured users to follow security procedures and policies.
Insurance cyber gap analysis: Marsh ( 2015 ) also suggests that it is necessary for
insurance brokers to provide cyber gap analysis (determining which cyber risks are
covered by existing traditional insurance or need to be covered in a standalone cyber
insurance) when communicating with customers.
Insurers’ data protection regulations: The insurance industry itself collects sensitive personal, financial, and healthcare data from its policy holders (e.g., personally identifiable information (PII), protected health information (PHI), and private business operations information) in order to measure customers’ risks more precisely. As a result, the National Association of Insurance Commissioners (NAIC 2015) recently adopted cybersecurity guidance for the insurance industry and regulators to follow. The expertise and experience of insurers’ own information security practice is also applied to advise their customers.
Understanding the benefits of cyber insurance: A growing amount of literature has started to support the benefits of cyber insurance as a market-based solution to cybersecurity. Kesan et al. (2005) state that, when certain obstacles to a full market solution are fully worked out, several positive outcomes will occur. In general, a cyber-insurance market will result in higher overall social welfare.
Evolution of Cyber-Insurance Market
It is still too early to know the structure of the future, mature cyber-insurance market.
In the existing literature, both competitive (Shetty et al. 2010b ) and monopolistic
(Lelarge and Bolot 2009 ; Hofmann 2007 ; Pal and Golubchik 2011 ) market structures
are studied.
As commonly expected, the cyber-insurance market will soon become a complex dynamic system (Anderson and Moore 2009; Halse and Hoemsnes 2013). As a result, the market not only provides one option among risk mitigation strategies, but also builds an ecosystem together with other sectors in cyberspace that can influence heterogeneous stakeholders’ behaviors and business strategies (Hall et al. 2011). This is similar to other financial systems, such as stock or credit markets (Gracie 2015). Therefore, the existing research on other financial systems will be relevant to understanding the future cyber-insurance market (Zurich 2014).
Obstacles of Developing Cyber-Insurance Market
Shetty et al. ( 2010a ) and Bohme and Schwartz ( 2010 ) argue that the underdeveloped
cyber insurance market is mainly due to: (1) interdependent security (externalities)
(Ogut et al. 2005 ; Bolot and Lelarge, 2008 ; Zhao et al. 2009 ); (2) correlated risk
(Bohme and Kataria, 2006 ); and (3) information asymmetries (Bandyopadhyay et al.
2009). Furthermore, Bohme and Schwartz (2010) argue that “it appears that the market failure can only be overcome if all obstacles are tackled simultaneously.”
Meanwhile, Marsh ( 2015 ) states that a well-developed reinsurance market for cyber
insurance is also one of the necessary conditions to expand the business.
The four key obstacles are explained as follows:
Interdependent security (externalities): Kunreuther and Heal (2003) ask the question: “Do firms have adequate incentives to invest in protection against a risk whose magnitude depends on the actions of others?”. One of the differences between cyber and traditional insurance (e.g., property or motor) is the close interconnections among players in cyberspace. Security in cyberspace depends on all players in the system, but heterogeneous players have different preferences about cybersecurity, and the “free rider problem” occurs when those who benefit from other players’ security investment do not have to pay for it (Varian 2004). As Naghizadeh and Liu (2014) argue, security is a nonexcludable public good, so users can stay out and still enjoy spill-overs from others’ contributions without paying. As a result, even if insurers help their insured customers to increase their overall security, those uninsured players in the system can still weaken these insured customers.
Correlated risk: Bohme and Kataria (2006) define two tiers of correlated cyber risks: (1) internal correlation, which they define as “the correlation of cyber risk within a firm” (i.e., a correlated failure of multiple systems on the internal network), and (2) global correlation, “the correlation of cyber-risk at a global level, which also appears in the insurer’s portfolio.” The growing development of Cloud computing platforms may accelerate the integration of these two tiers. For example, an internal incident at a cloud service provider will lead to systematic risks in both its internal system and its customers’ systems.
Information asymmetries: Bohme and Schwartz (2010) define “asymmetric information” as an environment where some players can take advantage of private information that is not available to other players. The common issues in the conventional insurance literature arising from asymmetric information are adverse selection (Akerlof 1970) and moral hazard (Arrow 1963). They are also relevant to the cyber-insurance market, and other obstacles (e.g., interdependent security) may exacerbate their effects (Shetty et al. 2010a). Furthermore, Bohme and Schwartz (2010) also identify specific forms of information asymmetries in cyber insurance. Meanwhile, Pal (2012) proposes three mechanisms (premium differentiation, fines, and security auditing) to resolve information asymmetry in cyber insurance.
Lack of a reinsurance market: It is still early days for reinsurers to reinsure cyber risks from primary insurers, but several proposals have been put forward to build such a reinsurance function (Toregas and Zahn 2014), such as establishing government-regulated funds similar to the US Terrorism Risk Insurance Act or the UK Financial Services Compensation Scheme. Anderson et al. (2007, 2009) discuss that one possible option is for government to provide reinsurance, but they emphasize that “while government re-insurance can create insurance markets where otherwise there would be no supply, such measures must be carefully designed to avoid a regime in which profits are private (to the insurers’ shareholders), losses are socialized (borne by the tax-payer), and systems remain insecure (because the government intervention removes the incentive to build properly secure products).”
Technologies Spur the Cyber-Insurance Market
Many new technologies that have been developed in recent years will spur the cyber-insurance market. We identify some of these technologies and group them into three main categories: (1) IT technologies that assist insurers in managing and discovering cyber incidents, as well as attracting more customer demand for cyber insurance; (2) technologies and methods that are helpful for insurers in performing actuarial modeling and data analysis; and (3) technologies that are useful for better understanding the complexity of the cyber-insurance market.
IT Technologies
Some standalone technologies: Intrusion detection systems (IDS), firewalls, digital
forensic technology, Microsoft Photo DNA, and encryption tools have become more
advanced and relevant for insurers to investigate cyber incidents.
Trusted computing infrastructure: Although the opponents of trusted computing argue that users will lose their freedom and privacy (Anderson 2003a, b), the technology provides insurers with an opportunity to identify insurable events and define claims more precisely.
Cloud platforms: Cloud service providers can reduce the problem of misaligned incentives between insurers and cloud users if they collaborate with insurers to attract more customers. Meanwhile, automated systems reduce human errors in the computing process. On the other hand, cloud platforms may introduce systemic risk, since they are connected to many other IT systems.
Anonymous communication and transactions: The anonymity network currently represented by, e.g., the Tor software makes cyber criminals “anonymous” and untraceable. Anonymous digital currencies allow sophisticated markets for illicit goods and services (Juels et al. 2015). As a result, there is a deep/dark web that provides a cyber black market for attackers to trade sensitive information (e.g., selling stolen credit card information to other parties), so attackers’ motivation for attacking any organization becomes larger.
Mobile devices: Nowadays, more and more business activities and collaborations are based on mobile devices (e.g., Bring Your Own Device). This leads to more cyber incidents that require cyber insurance, since such devices are easily lost or stolen and users do not have sufficient skills to manage security on these mobile devices.
Leaking technology: ICT enables rapid copying and dissemination of information, making information leaks harder to contain. In the past, a sizeable leak of
proprietary information (such as more than 40 gigabytes of internal data released in
the 2014 Sony hack) would have been limited by the need to transmit it by sending
hard drives (expensive) or setting up a website (legally traceable and blockable); by
2014, it could be distributed anonymously using bittorrent in a way that makes it
impossible to trace and block. In addition, leaks are potentiated by the appearance of
search tools making released data more accessible.
Actuarial Modeling Methods
Network simulator: Similar to the stress and scenario testing commonly used in the financial markets (e.g., the banking system), insurers can use various applications and services to run network simulations in an artificial environment in order to test the stability and resilience of an insured network under different conditions.
Actuarial data analysis (big data analytics): More and more professional consulting service firms have been investing in and offering advanced actuarial pricing and risk management services based on big data analytics, to assist insurers in uncovering hidden patterns and unknown correlations in cyber risks.
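As a purely illustrative sketch (a hypothetical toy model with assumed parameters, not taken from the chapter's sources), a Monte Carlo frequency/severity simulation of the kind such analytics might automate could look like this; the heavy-tailed severity echoes the power-law tail of data-loss sizes in Fig. 1:

```python
import math
import random

def poisson(lam):
    """Knuth's inversion method for sampling a Poisson count; fine for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss(freq_mean=3.0, scale=50_000.0, tail_alpha=1.7):
    """One simulated year: Poisson breach count, heavy-tailed (Lomax/Pareto-type) cost each."""
    n_breaches = poisson(freq_mean)
    return sum(scale * ((1.0 - random.random()) ** (-1.0 / tail_alpha) - 1.0)
               for _ in range(n_breaches))

def portfolio_summary(n_years=50_000):
    """Expected annual loss and a tail quantile; a real premium would add loadings."""
    losses = sorted(simulate_annual_loss() for _ in range(n_years))
    mean = sum(losses) / n_years
    var_99 = losses[int(0.99 * n_years)]
    return mean, var_99

mean_loss, var_99 = portfolio_summary()
print(f"expected annual loss ~ {mean_loss:,.0f}; 99% VaR ~ {var_99:,.0f}")
# With a heavy tail the 99th percentile dwarfs the mean, which is why insurers
# worry about aggregate/catastrophic exposure rather than average claims.
```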
Data pooling platform (data anonymization): Technologies for information sanitization that aim to encrypt or remove sensitive information from data sets are becoming more feasible; this encourages more data to be shared in the pooling platform in order to help governments and insurers better understand cyber risks from aggregated data sets.
Machine learning and Bayesian networks: More and more applications from these subfields of computer science are used in understanding cyber risks. Insurers will hopefully gain insights about managing cyber risks from these developments. Yang and Lui (2014) apply Bayesian networks to analyze the influence of the cyber-insurance market on security adoption in heterogeneous networks.
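As a purely hypothetical illustration (all probabilities below are assumed, not drawn from Yang and Lui 2014), the basic operation such a Bayesian-network analysis automates is marginalizing over a latent "invests in security" node to see how insurance shifts the probability of a breach:

```python
# Assumed conditional probabilities for a two-node toy network:
# insurance status -> security investment -> breach.
p_invest_given_insured   = 0.7   # premium discounts nudge investment (assumption)
p_invest_given_uninsured = 0.4
p_breach_given_invest    = 0.05
p_breach_given_no_invest = 0.20

def p_breach(insured: bool) -> float:
    """Marginalize over the latent 'invests in security' node."""
    p_invest = p_invest_given_insured if insured else p_invest_given_uninsured
    return p_invest * p_breach_given_invest + (1 - p_invest) * p_breach_given_no_invest

print(f"P(breach | insured)   = {p_breach(True):.3f}")   # 0.095
print(f"P(breach | uninsured) = {p_breach(False):.3f}")  # 0.140
# At scale, with many interdependent nodes, this is the kind of inference a
# Bayesian-network model of incentive effects in cyberspace would perform.
```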
Data visualization: According to the “digital detectives” website of Microsoft, advances in data visualization technology assist the Microsoft Digital Crimes Unit (using Microsoft PowerMap) in understanding the pattern of Citadel botnets better and removing the malware from infected machines more efficiently (Constantin 2013). The same technologies will help insurers to identify cyber incidents caused by different malware or causes, so they can distinguish the incidents in order to reduce specific claims (similar to distinguishing different risk events in natural catastrophe insurance) or issue insurance-linked securities based on specified triggers (cyber incidents) earlier.
Anderson et al. (2007) consider that one potential strategy to promote cyber insurance is to develop financial instruments for risk sharing similar to “Cat Bonds” and “Exploit Derivatives” in traditional insurance business operations (e.g., flood and natural-disaster insurance). As Anderson et al. (2007) explain, “Exploit Derivatives are vehicles for insurers to hedge against the discovery of vulnerabilities that causes significant loss events across their portfolios.”
Sociotechnical Systems
Security awareness training and behavioral games: Toregas and Zahn (2014) mention a growing consensus that cyber security is not achievable by focusing solely on technological aspects; it also requires understanding both the technologies and their users’ behaviors. The importance of understanding human-computer interaction has been studied widely since the works of Adams and Sasse (1999) and Sasse et al. (2001). Recently, behavioral digital games based on computer simulations have been introduced to train users’ behavior and awareness of using technologies securely (Cone et al. 2007).
Existing interdisciplinary research in financial systems: Bohme (2010b) argues that some key obstacles causing cyber-insurance market failure are due to a lack of understanding of information economics. Interdisciplinary, integrated research that focuses on the cyber ecosystem is better than targeting individual technological elements alone (Bohme 2010a). This idea is similar to recent progress in understanding systemic risks in the financial markets. Schneier (2002) and Anderson and Moore (2007, 2009) state that a combination of economics, game theory, and psychology is necessary to understand and manage cybersecurity in the modern and future networked environment. Johnson et al. (2011) model security games with market insurance to inform policy makers on adjusting incentives to improve network security and the cyber-insurance market. Baddeley (2011) applies lessons from behavioral economics to understand issues of information security. More papers on the economics of information security and privacy can be found in the book by Moore et al. (2010).
Multiagent techniques: The agent-based approach to modeling a complex system is becoming popular in the financial markets, but it is not commonly used by researchers to model cyberspace or perform stress testing on particular cyber events. Recently, a few researchers have started to apply this technique to model network resilience (Sifalakis et al. 2010; Baxter and Sommerville 2011; Sommerville et al. 2012).
General Categorization of Cyber Risks
In the previous analysis, we presented the literature related to the evolution of cyber
insurance. It is our intention to further examine the challenges for the development of
a cyber-insurance market. “An understanding of insurance must begin with the
concept of risk – that is, the variation in possible outcomes of a situation”
(Zeckhauser 2008). We embark on a theoretical and empirical analysis, using
examples of cyber security events, in order to better understand cyber risks and
relate them to cyber security.
The first crucial observation is that numerous different things can be included
under the term “cyber risks.” A more precise definition of “cyber risks” results
if we break them down into three distinct elements.
(Cyber) Risk can be defined as a measurable quantity, according to Knight (1921).
In that sense, probability distributions could be assigned to cyber threats. Thus, it
is feasible to quantify the (cyber) risks and consequently estimate insurance
premiums.
(Cyber) Uncertainty can be considered the unmeasurable quantity related to
cyber events. Here, precise probabilities cannot be assigned to the possible states
of the world. It is also known as Knightian Uncertainty,
based on the classic distinction by Frank Knight (1921).
(Cyber) Ignorance can be considered a third category, in which we may not have the
ability to define what states of the world are possible (Zeckhauser and Visusi
2008). It can be considered one step further from uncertainty, when some
potential outcomes are unknowable or unknown (Zeckhauser 2006). There are
two important types of ignorance. Primary ignorance concerns situations in
which one does not recognize that one is ignorant; recognized ignorance is when
one perceives that ignorance (Roy and Zeckhauser 2013). For example, the
financial meltdown of 2008 can be considered such an event. It can also be
argued that many catastrophic risks are subject to ignorance.
Catastrophic Risks and Insurance
General Description of Catastrophic Risks
The above general categorization brings us to further types of risk that influence
cyber insurance. “Catastrophes provide the predominant conceptual model of what
insurance is about. One pays premiums to secure financial protection against
low-probability high consequence events – what we normally call catastrophes.”
(Zeckhauser 1996a, b). The main problem is that private markets face difficulties
in providing coverage for catastrophic risk, which can thus be deemed
“uninsurable risk” (Jaffee and Russell 1997).
The timing and consequences of catastrophic events may vary greatly. We have
already identified the frequency/severity spectrum used for cyber events. In other
words, catastrophic risks fall within the low probability-high consequence class
(Kleindorfer and Kunreuther 1999). However, the probabilities and consequences
are not clearly defined, particularly toward the upper end of losses.
In this chapter, we are more interested in the insurers’ perspective on
assessing such risks. The Actuarial Standard Board defines “Catastrophe – A relatively
infrequent event or phenomenon that produces unusually large aggregate losses.”
More precisely, “An event is designated a catastrophe by the industry when
claims are expected to reach a certain dollar threshold, currently set at $25 million,
and more than a certain number of policyholders and insurance companies are
affected” (Insurance Information Institute 2015). In that sense, numerous cyber
events, as we will examine later, can have the rarity and loss magnitude of
catastrophic risks.
However, catastrophes can involve losses much greater than $25 million. The
Swiss Re Sigma Study describes catastrophe losses. In 2014, total insured and
uninsured losses due to disasters were estimated at $110 billion (Swiss Re 2015).
This number is below the inflation-adjusted 10-year average of $200 billion and
lower than the $138 billion of 2013. However, the number of natural disaster catastrophes
was at a record high of 189, and in total, there were 336 disaster events.
This variation in total losses and the number of catastrophes partly displays their
unpredictability as well as their severe consequences. By doing simple calculations
(illustrated below), we can observe that the average loss per catastrophe is much higher
than $25 million (insurance covered claims of USD 28 billion in losses from natural
catastrophes and USD 7 billion from man-made disasters). There are two major
categories regarding the causes of catastrophic risks:
Natural disasters, including georisks (like earthquakes) and climate-induced risks
(such as hurricanes and floods)
Man-made catastrophes, a broader category that includes
industrial accidents and terrorist attacks (Zurich 2013).
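The “simple calculations” mentioned above can be made explicit with the Swiss Re (2015) figures quoted in the text (336 disaster events in 2014, roughly $110 billion of total losses, and about $35 billion of insured losses); the snippet below is only a back-of-the-envelope check:

```python
# Back-of-the-envelope averages using the figures quoted in the text (Swiss Re 2015).
events = 336
total_losses = 110e9            # total insured + uninsured losses, 2014
insured_losses = 28e9 + 7e9     # natural catastrophes + man-made disasters

print(f"average total loss per event:   ${total_losses / events / 1e6:,.0f} million")
print(f"average insured loss per event: ${insured_losses / events / 1e6:,.0f} million")
# Both averages are well above the $25 million catastrophe threshold cited above.
```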
Earthquakes can have devastating effects for insurers, but there are also situations
where thousands of women claim to have been damaged by breast implants or individuals
are harmed by asbestos (Zeckhauser 1996a, b). Besides making the
distinction between natural and man-made disasters, this example presents some interesting
features that can be used for some initial comments about cyber risks.
One feature is that natural disasters are usually localized (geo-specific). The same
can apply to cyber events. A system failure in an energy grid can have local effects.
Nevertheless there are many cases, say a computer virus, that can have
regional or global impacts. Cyberspace is by its nature fairly nonlocal, and there
are fewer “natural boundaries” that constrain the size of an impact. This allows
such breaches to diffuse rather easily around the world, therefore resulting in widespread
damage.
Also, it seems that a disproportionately large number of man-made breaches and
disasters occur in cyberspace (PWC 2015): actually, it can be argued that there are
very few cases in which the human factor is not involved. While the majority may be
unintentional, intentional incidents have the potential for particularly expensive
damage.
Aggregate Catastrophes and Systemic Risks
“Aggregate catastrophes occur when many similarly situated people, all subject to
common risks, suddenly find that they have suffered a loss, and the total losses
exceed expectations” (Zeckhauser 1996a, b). The single worst incident suffered by
an organization might be considered a measure for informing us about
catastrophic risks, especially in large corporations. Infection by viruses or malicious
software remains the largest causal factor for single worst incidents (PWC 2015). As
argued above, viruses and malware have the ability to propagate rapidly and cause
harm to various people and organizations.
In that sense, we can further decompose the high-consequence characteristic. One
dimension is the number of individuals and organizations that a cyber event might
affect. Another dimension is the geographic location where the cyber event takes
place. Some cyber events might have global reach, enlarging the consequences.
An additional critical parameter is the importance of the affected individuals and
organizations for the economy and society. A cyber-attack on critical infrastructure can
further enlarge the consequences by generating losses to other operations. For
example, the failure of VISA or MASTERCARD systems would not only result in
losses for these companies, but would likely generate significant losses to other
businesses. This would apply to other critical (information) infrastructure, and the
losses could be identified according to the importance of the system for the operations
of other individuals and organizations.
Global Aggregations of Cyber Risk
A report by Zurich and the Atlantic Council attempts to expose “global aggregations
of cyber risk” as analogous to the risks associated with the US sub-prime and 2008
financial crisis. “Governments and forward looking organizations need to take a
holistic view and look beyond these issues to broader risks, including the increasing
danger of global shocks initiated and amplified by the interconnected nature of the
internet” (Zurich 2014). An illustrative analogy between the financial markets and
the information technology of organizations is over-leverage (Zurich 2014). Over-leverage
of companies in financial markets was created by excessive debt, while
organizations can over-leverage in IT due to overreliance on technology solutions. In
both cases, leverage is used to maximize returns; however, it is likely that the
associated risks were underestimated, as was demonstrated by the financial crisis.
There are two crucial elements in this discussion. The first is a “Lehman
moment,” a catastrophic event that would spread through the web and cause major losses.
Nevertheless, a “Lehman moment” would encompass ignorance. While it was
anticipated that Lehman Brothers could go bankrupt, no one could foresee the chain
of events that it triggered and that led to the global financial crisis of 2008. In that sense,
even catastrophic events that seem to have a specific impact might actually end in
unpredictable outcomes. The original “Lehman moment” can be regarded as a global
shock due to the scale of Lehman Brothers’ operations across the world. However,
the channel that initially cascaded this global shock was rather localized: the US
sub-prime market.
The other element is the propagation mechanism. The complexity and
interconnections of financial products and markets eventually transmitted this shock
around the globe. The complexity of financial products might be a useful analogy to
the increasing complexity of IT systems. It has been argued that the 2008 financial
crisis is a demonstration that the causes of risks were camouflaged by excess
complexity (Zurich 2014). Even if this complexity is not excessive, it is still difficult
to understand and predict the cascading risks and channels. Another analogy of the
internet with the financial markets is that risks were assumed not to be correlated
with each other. Nevertheless this is far from true: financial products and markets can
be highly correlated. The same applies to information technology operations and
systems.
In that sense, it is not only complexity per se but also complexity due to the
interconnected nature of risks that adds to the uncertainty (Zurich 2014). Thus,
complexity and interconnections can facilitate systemic problems when “extreme
events,” as global shocks, occur. “Connecting to the internet means exposure to
nth-order effects – risks from interconnections with and dependencies on” other risk
aggregations (Zurich 2014). The Zurich report identifies seven such aggregations
(internal IT enterprise, counterparties and partners, outsourced and contract, supply
chain, disruptive technologies, upstream infrastructure, external shocks). It can, however,
be argued that due to ignorance, they can be more common, or more severe,
than expected (for example, external shocks). An additional issue is a possible “perfect
storm.” Especially if a cyber “Lehman moment” coincides with other events, this
interaction could cause losses of much larger scope, duration, and intensity, similar
to the series of events of the 2008 financial crisis (Zurich 2014). It is even more
difficult, or rather impossible, to identify and define the interconnections between
other events and a “Lehman moment” before it happens, since it is principally
unpredictable. In the worst case, catastrophic events would coincide and could significantly
multiply the damage. This makes mitigation of risks increasingly difficult, if
the outcomes are unknown or unknowable.
Global Catastrophic Risks Framework
A very useful framework for qualitatively describing globally catastrophic or
existential catastrophes was developed by Nick Bostrom (Bostrom and Cirkovic
2011; Bostrom 2013). This framework is based on three factors: severity (how badly
the population would be affected), scope (the size of the population at risk), and
probability (how likely the disaster is to occur, according to the most reasonable
judgment given currently available evidence). This model uses the first two
factors and presents many advantages and much flexibility. The scope includes not just the
spatial size of the risk variable that we described earlier, but also generational effects
that are important regarding the duration and aftermath of the catastrophe.
Nevertheless, the major advantage of this framework is the way it treats probability.
“Probability can be understood in different senses ... The uncertainty and
error-proneness ... of risk is itself something we must factor into our all-things-considered
probability assignments. This factor often dominates in low-probability
high-consequence risks – especially those involving poorly understood natural
phenomena, complex social dynamics, or new technology, or that are difficult to
assess for other reasons” (Bostrom 2013). Therefore, this facilitates our analysis
since most of the factors discussed above can be adapted to this framework. Scope
encompasses geographic spread, the number of affected actors, and the importance
of the damage. Moreover, its flexibility allows adding other concepts. In the discussion
that follows, because of the uncertainty and ignorance surrounding the estimation
of probabilities, we will briefly discuss plausibility. Plausibility can be used
as a distinct alternative to probabilities (Ramirez and Selin 2014) (Fig. 2).
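A minimal sketch of how the two factors used here (scope and severity) can be operationalized as a simple classification is given below; the category labels follow Fig. 2, while the mapping rules and example outputs are purely illustrative assumptions of ours:

```python
# Two-factor qualitative risk classification (labels follow Fig. 2; rules are illustrative).
SCOPES = ["individual", "local", "global", "persistent", "permanent"]   # increasing scope
SEVERITIES = ["manageable", "endurable", "crushing"]                    # increasing severity

def classify(scope: str, severity: str) -> str:
    assert scope in SCOPES and severity in SEVERITIES
    if severity == "crushing" and SCOPES.index(scope) >= SCOPES.index("global"):
        return "global catastrophic / existential cyber-risk"
    if severity == "manageable" and scope == "individual":
        return "everyday personal cyber risk"
    return f"{severity} cyber risk at {scope} scope"

print(classify("individual", "manageable"))   # everyday personal cyber risk
print(classify("global", "crushing"))         # global catastrophic / existential cyber-risk
```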
Interdependencies and Asymmetric Threats
We have discussed correlations and interconnections. Special mention should be
given to interdependencies, a related concept relevant to cyber risks.
Often these concepts are used interchangeably and denote the same thing. However,
we would like to expand our analysis by focusing on complex interdependence
(Keohane and Nye 1977, 1998), since it can provide an additional theoretical
foundation. First of all, it should be emphasized that the context of international
relations is central to insurance. Beyond political risk insurance, state relations
influence numerous macrorisk factors, such as economic relations and defense and security.
“The information revolution alters patterns of complex interdependence by
exponentially increasing the number of channels of communication in world politics”
(Keohane and Nye 1998).
In addition, commercial and, particularly, strategic information are valuable. The
availability and confidentiality of such information in multiple channels increase
the level of risk. Information can be used to convince and capture terrorists, prevent
and resolve conflicts, and enable countries to defeat adversaries (Nye and Owens
1996). On the other hand, because information reduces the “costs, economies of
scale, and barriers of entry to markets, it should reduce the power of large states and
enhance the power of small states and non-state actors” (Keohane and Nye 1998).
This generates important asymmetries. A small group of hackers could disrupt an
IT system that is large relative to their size and resources. Another notable case is that of
WikiLeaks: a single leak, amplified by a single disseminating organization, has
global consequences for a superpower. Asymmetric threats and the enabling of
non-state actors add even more complexity to the layers described before. The
number of threats is therefore multiplied and consequently risks increase. Moreover,
ambiguity regarding the nature and identification of these relatively small actors
makes the estimation of risks quite unpredictable.
Fig. 2 Qualitative risk categories: a matrix of scope (individual, local, global, persistent, permanent) against severity (manageable, endurable, crushing), populated with illustrative cyber events – from everyday personal cyber risks (spam, viruses, breakdowns) and everyday corporate cyber risks (hacking, leaks, business interruption), through identity theft, costly cyberattacks (e.g., Sony), bankruptcy-inducing cyberattacks (e.g., Ashley Madison?), major exploited vulnerabilities (e.g., Heartbleed), botnets, Y2K, and major intelligence leaks with geopolitical consequences (e.g., the Snowden revelations), up to a “cyber-Lehman moment,” global Internet outages, loss of ICT capabilities, destruction of industries, cyber war, erasure of historical data, destruction of backups and online infrastructure, widespread degradation of function or trust in ICT, and cyber-induced disaster, (nuclear) war, or other lethal effects – with the most severe, widest-scope cells labelled global catastrophic cyber-risk and existential cyber-risk.
Cyber Risks and Losses
Before 1989, the insurance industry had not experienced a loss of more than $1 billion
from a single event; since then, catastrophes of that magnitude have occurred
(Kleindorfer and Kunreuther 1999). As more and more people with larger insured
wealth congregate in coastal areas, this is to be expected (even leaving out climate
change). “Megacatastrophes,” like Hurricane Andrew, seem therefore to happen
more often and clearly demonstrate the limitations of relying on historical data in
order to estimate future probabilities of losses (Actuarial Standard Board 2000). Not
only are there limitations to historical data, but cyber risks are also new phenomena
with continuously evolving technology and factors that are difficult to predict or
even imagine. However, it is argued that there is a likelihood of a global cyber
catastrophic event (Zurich 2014).
There are important methodological problems regarding probability estimation
when assessing global catastrophic risks (Ord et al. 2010). Due to their high severity
and scope, even low-probability risks need to be managed, but the probability of
theory, model, or calculation error in such analyses is far higher than the risk probability
itself, even when they are done carefully. This means that risk estimates should be regarded
as suspect unless bounded by several independent estimates or other constraints.
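The point made by Ord et al. (2010) can be illustrated numerically; the probabilities below are hypothetical and chosen only to show how the chance of a flawed analysis can dominate an all-things-considered estimate:

```python
# Illustration of Ord et al. (2010): tiny risk estimates are dominated by the chance
# that the estimating model/argument itself is flawed. All numbers are hypothetical.
p_model_sound = 0.999        # probability the analysis contains no serious error
p_event_if_sound = 1e-9      # the analysis's own estimate of the catastrophe
p_event_if_flawed = 1e-4     # a far less optimistic estimate if the analysis is wrong

p_event = p_model_sound * p_event_if_sound + (1 - p_model_sound) * p_event_if_flawed
print(f"all-things-considered P(event) ~ {p_event:.2e}")  # ~1e-7, driven by model error
```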
A major concern for the private insurance industry is that it might not be able to
provide coverage for some catastrophic events without the possibility of insolvency
or a significant loss (Kleindorfer and Kunreuther 1999). This is intensified when the
scope and severity of the disaster are high. In the event of a “cyber sub-prime,” the
losses could be massive and potentially result in insolvency. Even more worrying would
be the possibility of interconnected events that could amplify such a crisis. The
coincidence of catastrophes, or a perfect storm, would also have devastating effects.
It is therefore essential to try to understand the cyber risks that can affect insurance.
In this part, we attempt to provide a theoretical analysis of risks in order to
better understand cyber insurance. In the next part, we attempt to put some flesh on
this theoretical skeleton by providing real and imaginary examples.
Cyber Risks, Catastrophes, and Ignorance
Identifying Cyber Risks
The discussion above indicated that the estimation of probabilities regarding cyber
risks is in many cases difficult or impossible. The common methods are based on
past events in order to define catastrophes and identify potential losses. These
methods present significant limitations. There are various reasons for this. First of
all, cyberspace is a very dynamic environment. Information and communication
technologies are continuously changing. The internet is constantly expanding. It is
embedding existing devices and technologies, and is likely to integrate future
innovations, generating the Internet of Things (IoT). The number of interconnected
devices, individuals, and organizations is therefore increasing. This results in greater
complexity and interdependence among devices with currently unknown functions
and vulnerabilities.
In that sense, if we assume that we know all the causes of potential losses, then this
might be a display of primary ignorance. Alternatively, we can recognize our
ignorance. We attempt to examine practical examples of cyber risks in three ways.
The first is the traditional approach based on historic events. The second technique
can be considered an expansion of that: we can infer based on historical events and
develop potential cases, subject to uncertainty. Finally, we build imaginary
but plausible scenarios (Ramirez and Selin 2014) in order to better understand cyber
uncertainty and push the boundaries of ignorance. It can be said that effective
scenario formation and imagining might reduce ambiguity, enter the space of
ignorance, and thereby diminish it.
Existential and Global Catastrophic Risks
Bostrom’s classification was developed in regard to threats to the entire future of the
human species, or “merely” global disasters. The cyber counterpart would be risks that
can escalate to such a level that they disrupt the global market or indeed current
civilization. They are not merely uninsurably large, but terminal to most existing actors.
One possible example might be misuse of Artificial Intelligence (AI). Autonomous
“smart” systems have already demonstrated potential for economically significant
misbehavior, such as the 2010 “Flash Crash,” which at least in part was due to a
systemic interaction of automatic trading agents. As technology advances, AI is
likely to become more powerful and ubiquitous, but there are significant control
problems that remain to be solved. The fundamental issue is that superintelligent
systems do not generally behave in human-compatible ways, and this can produce
existential risk (Bostrom 2013). More plausible scenarios involve unpredictable AI
actions that are deliberate, autonomous, and potentially very tenacious. These might
include the paralysis of the internet globally by AI software embedded in the web
infrastructure, or by automated adaptive hacking tools (e.g., descendants of the
current DARPA Cyber Grand Challenge). In another scenario, of endurable severity
and local scope, AI systems could disrupt operations in an organization.
Of course, severity may vary as well as scope. For example, a failure
of ICT systems in a healthcare organization could result in loss of human lives.
The disaster can diffuse globally if the AI of a widespread logistics database system
decides not to allow access to information or, even worse, alters or destroys it
(for example, because it interprets restoration or circumvention attempts as intrusion
attempts). However, due to the fact that the capabilities of AI are very ambiguous,
such scenarios are difficult to define.
It may be that there are workable solutions or that AI will never be too powerful,
but these are risky bets. It seems that it is easy for people to overestimate their
knowledge regarding AI (Yudkowsky 2011). “It may be tempting to ignore Artificial
Intelligence because, of all the global risks ... AI is hardest to discuss. We cannot
consult actuarial statistics to assign small annual probabilities of catastrophe, as with
asteroid strikes. We cannot use calculations from a precise, precisely confirmed
model to rule out events or place infinitesimal upper bounds on their probability,
as with proposed physics disasters. But this makes AI catastrophes more worrisome,
not less.” (Yudkowsky 2011). In that sense, AI qualifies for uncertainty and ignorance.
AI represents a risk that could go all the way into the extreme upper right-hand
box of the framework, but it is both extremely uncertain and largely a future risk: it can
be dealt with by R&D aimed at safe and beneficial uses of AI.
However, cyber risk also has strong interconnections to traditional catastrophic
risks. Such risks include major technical disasters, conflict and war, and particularly
total war with the use of weapons of mass destruction (WMD).
The threat of a nuclear disaster is the most notable case by far. This is due to
Stuxnet, a complex piece of malware interfering with Siemens industrial control
systems, which is speculated to have been used against Iran’s nuclear program (NATO 2013).
Based on this precedent, it can be argued that a nuclear catastrophe could be realized.
The scale of these risks could vary greatly. Cirincione (2011) and Ackerman and
Potter (2011) discuss the global catastrophic risks of nuclear war and catastrophic
nuclear terrorism. In both cases, cyberspace is “enabling” these risks. In addition, the
internet could provide the most cost-effective opportunity for adversaries. It enables
states and non-state actors and enhances their power. They can transform their
capabilities and become nuclear threats that were not imaginable in the past. These
asymmetric threats pose great challenges to insurance.
Stuxnet is considered to be a government cyber weapon. Rogue states might
dedicate more resources to attaining such capabilities. The same could apply to
terrorist groups. It is interesting to note the multiple channels and complexity
surrounding them. State relations can deteriorate, and governments might decide to
pursue cyber weapons aimed at nuclear as well as other military and critical
infrastructure targets. The emergence of terrorist groups is also subject to uncertainty
and ignorance. The rapid emergence of Islamic State, raising considerable resources,
was not forecast. Hamas and Hezbollah were established terrorist organizations,
and it can be alleged that they were capable of using cyberspace. Nevertheless, it
was believed by Israeli officials that these organizations used a criminal organization
based in a former Soviet state to attack Israel’s internet infrastructure during the
January 2009 military offensive in the Gaza Strip (NATO 2013).
Cyber weapons can also easily be spread to other actors through theft or leakage
(such as the exploits revealed in the attack on the security consultancy Hacking Team
in 2015), trade, or imitation: once Stuxnet was out in the wild, many other groups
could analyze it and copy its tricks into their toolkits. The market for zero-day
exploits, driven by governments and security companies seeking new tools, has the
effect both of incentivizing the search for more vulnerabilities and of inhibiting public
disclosure of them, since discoverers can gain more by secretly selling their finds
and agencies using them do not wish to lose their advantage. Even when vulnerabilities
are revealed, removing them is sometimes hard since they might be embedded
in systems that cannot easily be upgraded (such as industrial systems or
implants); this means that use of some cyber weapons can lead to more subsequent
attacks on targets unrelated to the original target.
This case highlights the complexity generated by multiple channels and agents. It
is consistent with the concept of nth-order effects (Zurich 2014). The potential
cooperation of different agents enhances complexity due to the exponential number
of combinations. Nexuses of adversaries can be formed, pooling resources and
capabilities and thus magnifying cyber attacks. Nuclear catastrophes can have
regional or global consequences (Cirincione 2011) according to their intensity.
Similar cyber global catastrophic scenarios can involve other types of WMD (e.g.,
biological weapons) or conflict and war.
Catastrophic Risks
War and conflict enabled by cyberspace can present variations in consequences and
scale. They can also be interdependent with other complex events. The cyber-attack on
Estonia in April 2007 was caused by political frictions with Russia. In August
2008, the conflict between Russia and Georgia was accompanied by hacking activity from
unknown foreign intruders which appeared to coincide with Russian military actions
(NATO 2013). A crucial observation is that the man-made causes of these cyber
attacks are still not known with certainty. Another critical remark is that there are
interdependencies between traditional kinetic power and cyber capabilities. An
analogous example to the above cases is the takeover of missile systems by hackers
(there are claims this briefly happened to a German Patriot antiaircraft defense
system in 2015 (Storm 2015)). An action by hackers launching missiles could
escalate to conflict or war.
Now imagine that these missiles are stationed in South Korea, and that they are
launched by unknown hackers just after the cyber-attack on Sony, which the FBI blamed on
Pyongyang (BBC 2015). Sony was about to release The Interview, a comedy about the
assassination of North Korea’s leader, indicating that tensions with North Korea
were running high. This could trigger events that could escalate to a catastrophe
involving even nuclear weapons. A crisis in Korea could also have a negative impact
on global markets due to the importance of the South Korean economy and trade
interconnections. This example presents just a small part of complex interdependencies.
This example could have been even worse. Imagine now that the aforementioned
events coincide with a release on WikiLeaks suggesting that North Korea is abandoned and
isolated (a previous WikiLeaks cable suggested that Chinese officials expressed the
desire to relinquish support for North Korea (The Economist 2010)). North Korea
could increase its level of alertness and retaliate severely if it feels that the balance of
power has changed against it and the regime is under existential threat. If these
events coincide, then a catastrophe is more likely. It is also possible that
these events are fabricated and lead to an “accident.” It is important to realize the
multiple layers of complex interdependencies, which on many occasions can be
unpredictable. The “WikiLeaks paradigm” is noteworthy because it can generate
the conditions and instability which can consequently trigger other disasters.
In January 2011, the Canadian government reported an attack against its Department
of National Defense as well as the Finance Department and Treasury Board,
causing the disconnection of the main Canadian economic agencies from the internet
(NATO 2013). Once again, there is ambiguity regarding the identity of the attackers, and
in addition Canadian counter-espionage agents were left scrambling to find out how
much sensitive information was compromised (Weston on CBC News 2011). In that
sense, it is not only difficult to forecast cyber-attacks, but it is also unclear how much
loss they caused. This makes mitigation harder. A proof of this is that cyber-attacks
disrupted the Department of Finance and Treasury Board again (MacDonald and
King on WSJ 2015). Thus, cyber-attacks are repeated frequently on the same
critical infrastructure.
Although these cyber-attacks might not qualify as catastrophic risks, it is hard to
estimate the losses and associated costs. A considerable loss is the opportunity cost
of not using the economic infrastructure of the Department of Finance and Treasury
Board. Besides Stuxnet, the Slammer worm disabled safety monitors in
nuclear facilities earlier, in 2003, and later, in October 2011, the Duqu Trojan hit Iran’s nuclear
facilities (Vaidya 2015). This is another indication of the frequency of cyber-attacks
on nuclear facilities, which could easily lead to major catastrophes.
Not only are nuclear facilities targeted; energy infrastructure has also experienced
cyber-attacks. A notable case is the Shamoon malware, which destroyed 30,000
computers at Saudi Aramco in August 2012. Interestingly enough, 5 days later, a
similar attack forced RasGas, one of the largest producers of liquefied natural gas, to
shut down its website and e-mail (BBC 2012). Although oil
and gas supply was reportedly not disrupted, inference from these cases suggests that
this is a plausible consequence in the future. In particular, similar cyber-attacks can create shocks to
the global economy due to interconnections if they coincide with other events
affecting the price of energy.
We have mainly focused on cyber events that produce high-consequence outcomes
for a single organization or a small number of affected organizations. Nevertheless, another
important category of cyber events is those that have an impact on a wide range of
individuals and organizations. This type of event is likely to generate systemic
global catastrophes. There are numerous examples. With respect to losses, some cases
are distinct. The Code Red worm as early as July 2001 infected 359,000 computers in
less than 14 hours and caused estimated losses of $2.6 billion, Mydoom in 2004
skyrocketed losses to $38.5 billion, Conficker in 2008 infected 11 million hosts
with an estimated loss of $9.1 billion, and the list is long (Vaidya 2015). It should be
noted that these disasters are systemic, with correlated global effects. They can
therefore be considered potential “Lehman moments” for cyber insurance.
Conclusion: Summary, Challenges, and Future Directions for the
Development of the Cyber Insurance Market
Cyber risks are rapidly evolving due to technological change and the systemic and
complex nature of the ICT world, producing fundamental uncertainty and ignorance.
Cyber insurance typically focuses on the less uncertain risks or constrains
uninsurable risks to make them more manageable. Tools or practices for handling
interdependent security, correlation, and information asymmetries as well as the lack
of reinsurance would help the market grow.
While there are some cyber risks for which we can have sufficient information for
quantifiable estimates, in the majority of cases uncertainty and ignorance prevail.
This reflects the very limited, if any, information regarding the nature and evolution
of cyber-attacks. There are two basic problems in obtaining information. The first
concerns the identity of attackers. The agents responsible for cyber threats present a
large variety. They can range from large nations and militaries to organized crime
and activists. The second issue, somewhat related to the first, concerns the resources and
skills of these agents. The skills and sophistication can also vary substantially.
There are examples of single hackers who managed to cause catastrophic damage
– like Michael Calce, aka “MafiaBoy” – who caused an estimated $1.2 billion of
damage with attacks on CNN, Dell, eBay, and Amazon (Niccolai 2000; Harris
2006). Organized crime groups (OCGs) are getting more involved in cyber crime,
and trends suggest considerable increases in scope, sophistication, number and types
of attacks, number of victims, and economic damage (Europol 2014). Nevertheless,
besides traditional OCGs that leverage their existing criminal activity, there are many
new organized criminals focusing solely on cyber crime. They are capable of
building sophisticated and complex systems for stealing money and intellectual
property at a “grand scale,” and it has been reported that in the former Soviet Union
there are 20–30 criminal groups that have reached “nation-state level” capabilities
(Ranger 2014).
It has been argued that many governments are developing their cyber offensive
and defensive capabilities, and most particularly cyber intelligence operations. The US is
further “aggressively” enhancing its cyber capabilities. This is because of claims by
officials about serious cyber threats from China and the occurrence of high-magnitude
attacks, for example, on Sony from North Korea (Mason and Hosenball 2015). There
is considerable uncertainty and ignorance regarding the nature and source of many
threats. Often the perpetrating agents cannot be identified. On top of that, there are
allegations that some governments might employ hackers or even organized cyber
criminals. In this dynamic environment, threat agents can easily change identity and
diffuse their knowledge and innovative technologies. At the same time, much
information regarding these threats or attacks might remain unknown. Finally,
cyberterrorist acts have been anticipated, but no one can predict their potential scale.
An analogy with the unexpected rise of Islamic State (IS) might be drawn.
In general, it is very hard, or in some cases seemingly impossible, to obtain information
and predict the frequency and magnitude of cyber-attacks. At the same time, it is also
difficult to estimate the potential losses from cyber-attacks due to interdependencies
that can propagate shocks and strongly correlated risks. These, along with limited
information regarding reputation loss, the opportunity cost of operation interruptions,
and the valuation of intellectual property, among others, impose significant barriers to
the development of insurance markets. In that sense, uninsurable risks can remain.
Nevertheless, building better insurance and financial models, such as some of the actuarial
models referred to above, is a first step to better understand and estimate cyber risks
and relate them to insurance premiums. On top of that, incentives, regulation and
liability provisions, new technologies for better security, and investment in secure
infrastructure can diminish some risks and facilitate the further development of cyber
insurance markets.
It may be that these barriers are insurmountable, or that currently undiscovered
tools – whether technological, actuarial, or social – are ready to be found. The
challenge is extremely hard, involving management of systemic risks with elements
of extreme uncertainty and ignorance, but the market rewards would be equally grand.
Acknowledgement This work was supported by the FHI-Amlin Research Collaboration on
Systemic Risk of Modelling in pursuing better understanding and management of the systemic
risks associated with modeling in the insurance industry through the strategic collaboration between
the Future of Humanity Institute and Amlin. We are grateful for comments and suggestions from
numerous colleagues and insurance industry participants from Amlin plc, Lloyd’s of London,
and the Bank of England in several meetings and discussions among working parties.
References
Ackerman, G., & Potter, W. (2011). Catastrophic nuclear terrorism. A preventable peril.
In N. Bostrom & M. Cirkovic (Eds.), Global catastrophic risks . New York: Oxford University
Press.
Actuarial Standard Board. (2000). Treatment of catastrophe losses in property/casualty insurance
ratemaking actuarial standard of practice no. 39.
Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42 (12),
40–46.
Akerlof, G. A. (1970). The market for “lemons ”: Quality uncertainty and the market mechanism.
The quarterly journal of economics, 84 (3), 488 –500.
Anderson, R. (2003a). Cryptography and competition policy issues with trusted computing.
In Proceedings of PODC ’03, Boston, MA, pp. 3 –10.
Anderson, R. (2003b). ‘Trusted computing ’and competition policy –Issues for computing pro-
fessionals. Upgrade. The European Journal for the Informatics Professional, 4 (3), 35 –41.
Anderson, R., Böhme, R., Clayton, R., & Moore, T. (2009). Security economics and European policy.
In N. Pohlmann, H. Reimer, & W. Schneider (Eds.), ISSE 2008 securing electronic Business
processes (pp. 57 –76). Wiesbaden: Vieweg+Teubner.
Anderson, R., Bohme, R., Clayton, R., & Moore, T. (2007). Security economics and the internal
market . Heraklion: ENISA.
Anderson, R., & Moore, T. (2007). Information security economics and beyond. Advances in
Cryptology –CRYPTO07.
Anderson, R., & Moore, T. (2009). Information security: Where computer science, economics and
psychology meet. Philosophical Transactions of the Royal Society of London A: Mathematical,
Physical and Engineering Sciences, 367 (1898), 2717 –2727.
Appari, A., & Johnson, M. E. (2010). Information security and privacy in healthcare: Current state
of research. International journal of Internet and enterprise management, 6 (4), 279 –314.
Arrow, K. J. (1963). Uncertainty and the welfare economics of medical care. The American
economic review, 53 (5), 941 –973.
Baddeley, M. (2011). Information security: Lessons from behavioural economics. In Workshop on
the Economics of Information Security.
Bandyopadhyay, T., Mookerjee, V. S., & Rao, R. C. (2009). Why IT managers don’t go for cyber-
insurance products. Communications of the ACM, 52(11), 68–73.
Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems
engineering. Interacting with Computers, 23 (1), 4 –17.
BBC. (2015, January 3). Sony cyber-attack: North Korea faces new US sanctions. http://www.bbc.
co.uk/news/world-us-canada-30661973 .
BBC. (2012, August 31). Computer virus hits second energy firm.http://www.bbc.co.uk/news/
technology-19434920 .
BIS. (2014). Cyber essentials scheme. Technical report, UK Department for Business Innovation
and Skills.
Bohme, R. (2006). A comparison of market approaches to software vulnerability disclosure.
InEmerging trends in information and communication security (pp. 298 –311). Berlin: Springer.
Bohme, R. (2010a). Security metrics and security investment models. In Advances in information
and computer security (pp. 10 –24). Berlin: Springer.
Bohme, R. (2010b). Towards insurable network architectures. Information Technology, 52(5), 290–293.
Bohme, R., & Kataria, G. (2006). Models and measures for correlation in cyber-insurance.
In Workshop on the Economics of Information Security (WEIS).
Bohme, R., & Schwartz, G. (2010). Modeling cyber-insurance: Towards a unifying framework.
In Workshop on the Economics of Information Security (WEIS).
Bolot, J.-C., & Lelarge, M. (2008). A new perspective on internet security using insurance.
In INFOCOM 2008. The 27th Conference on Computer Communications, IEEE.
Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4, 15–31.
Bostrom, N., & Cirkovic, M. (Eds.). (2011). Global catastrophic risks . New York: Oxford Univer-
sity Press.
Cabinet. (2011). The UK cyber security strategy: Protecting and promoting the UK in a digital
world. Technical report, UK Cabinet Office.
CESG. (2012). 10 steps to cyber security: Information risk management regime. Technical report,
UK Department for Business Innovation and Skills.
Cone, B. D., Irvine, C. E., Thompson, M. F., & Nguyen, T. D. (2007). A video game for cyber
security training and awareness. Computers & Security, 26 (1), 63 –72.
Cirincione, J. (2011). The continuing threat of nuclear war. In N. Bostrom & M. Cirkovic (Eds.),
Global catastrophic risks . New York: Oxford University Press.
Constantin, L. (2013). FBI and Microsoft takedown program blunts most citadel botnets. Computer
World.
Crowley J. (2011). 10 most costly cyber attacks in history. BusinessPundit.com .http://www.
businesspundit.com/10-most-costly-cyber-attacks-in-history/ .
Europol. (2014). The Internet Organised Crime Threat Assessment (iOCTA) 2014. European Police
Office.
Friedman, A. (2011). Economic and policy frameworks for cybersecurity risks. Center for Tech-
nology Innovation at Brookings.
Gracie, A. (2015). Cyber resilience: A financial stability perspective. Cyber Defence and Network
Security Conference, London.
Hall, C., Clayton, R., Anderson, R., & Ouzounis, E. (2011). Inter-x: Resilience of the internet
interconnection ecosystem. ENISA.
Halse, H. R., & Hoemsnes, J. (2013). Cyber-insurance and endogenous network formation.
Master’s thesis. Norwegian University of Science and Technology.
Harris, J. K. (2006). Ethical perspectives in information security education. Issues in Information
Systems VII, 1 , 181.
Hofmann, A. (2007). Internalizing externalities of loss prevention through insurance monopoly: An
analysis of interdependent risks. The Geneva Risk and Insurance Review, 32 (1), 91 –111.
Insurance Information Institute. (2015). Catastrophes and insurance issues. http://www.iii.org/
publications/insurance-handbook/insurance-and-disasters/catastrophes-and-insurance-issues .
Jaffee, D. M., & Russell, T. (1997). Catastrophe insurance, capital markets, and uninsurable risks.
The Journal of Risk and Insurance, 64 (2), 205 –230. Symposium on Financial Risk Management
in Insurance Firms (June, 1997).
Johnson, B., Böhme, R., & Grossklags, J. (2011). Security games with market insurance. In
Decision and game theory for security (pp. 117 –130). Berlin: Springer.
Johnson, B., Laszka, A., & Grossklags, J. (2014). The complexity of estimating systematic risk in
networks. In 27th Computer Security Foundations Symposium (CSF), IEEE, pp. 325 –336.
Juels, A., Kosba, A., & Shi, E. (2015). The ring of gyges: Using smart contracts for crime.
Aries, 40 , 54.
Keohane, R., & Nye, J. (1977). Power and interdependence: World politics in transition . Boston:
Little, Brown.
Keohane, R., & Nye, J. (1998). Power and interdependence in the information age. Foreign Affairs,
77(5), 81 –94.
Kesan, J., Majuca, R., & Yurcik, W. (2005). Cyberinsurance as a market-based solution to the
problem of cybersecurity: A case study. In Workshop on the Economics of Information
Security (WEIS).
Kleindorfer, P., & Kunreuther, H. (1999). Challenges facing the insurance industry in managing
catastrophic risk. In K. A. Froot (Ed.), The financing of catastrophe risk. Chicago: University of
Chicago Press.
Knight, F. (1921). Risk, uncertainty, and profit. Boston, MA: Houghton Mifflin Co.
Kunreuther, H., & Heal, G. (2003). Interdependent security. Journal of Risk and Uncertainty,
26(2–3), 231 –249.
Lelarge, M., & Bolot, J. (2009). Economic incentives to increase security in the internet: The case
for insurance. In INFOCOM 2009, IEEE, pp. 1494 –1502.
MacDonald A., & King C. (2015, June 17). Canadian government servers hit by Cyberattack,
minister says hacking group anonymous takes credit for the attack, which appeared to have
affected several government websites. Wall Street Journal .http://www.wsj.com/articles/cana
dian-government-servers-hit-by-cyberattack-minister-says-1434565899 .
Maillart, T., & Sornette, D. (2010). Heavy-tailed distribution of cyber-risks. The European Physical
Journal B, 75 (3), 357 –364.
Majuca, R. P., Yurcik, W., & Kesan, J. P. (2006). The evolution of cyberinsurance. arXiv preprint
cs/0601020.
Marsh. (2015). UK cyber security: The role of insurance in managing and mitigating the risk.
Technical report, UK HM Government.
Marsh, & Zurich. (2015). UK 2015 cyber risk survey report. Technical report, Marsh Insights.
Mason, J., & Hosenball, M. (2015, June 8) Obama vows to boost U.S. cyber defenses amid signs of
China hacking. Reuters .
Moore, T., Pym, D., & Ioannidis, C. (Eds.). (2010). Economics of information security and privacy .
New York: Springer.
Moran, J., Beeson, B., Mulligan, C., Sage, O., & Menapace, M. (2015). Examining the evolving
cyber insurance marketplace. Homeland security digital library.
Mukhopadhyay, A., Chatterjee, S., Saha, D., Mahanti, A., & Sadhukhan, S. K. (2013). Cyber-risk
decision models: To insure IT or not? Decision Support Systems, 56, 11–26.
Naghizadeh, P., & Liu, M. (2014). Voluntary participation in cyber-insurance markets. In Workshop
on the Economics of Information Security (WEIS).
NAIC. (2015). Principles for effective cybersecurity: Insurance regulatory guidance. National
Association of Insurance Commissioners.
NATO. (2013). The history of cyber attacks –A timeline. http://www.nato.int/docu/review/2013/
cyber/timeline/EN/index.htm .
Niccolai, J. (2000, February 10). Analyst puts hacker damage at $1.2 billion and rising, InfoWorld .
Archived from the original on 12 November 2007. Retrieved 22 March 2007.
Nye, J., & Owens, W. (1996). America ’s information edge. Foreign Affairs, 75 (2), 20 –36.
Ogut, H., Menon, N., & Raghunathan, S. (2005). Cyber insurance and it security investment:
Impact of interdependence risk. In Workshop on the Economics of Information
Security (WEIS).
Ord, T., Hillerbrand, R., & Sandberg, A. (2010). Probing the improbable: Methodological chal-
lenges for risks with low probabilities and high stakes. Journal of Risk Research, 13 (2). Special
Issue: The Philosophy of Risk.
Overill, R. E., & Silomon, J. A. (2011). Single and double power Laws for cyber-crimes. Journal of
Information Warfare, 10 (3), 29 –36.
Oxford Economics. (2014). Cyber-attacks: Effects on UK companies. Technical report, Oxford
Economics (A report for Centre for the Protection of National Infrastructure).
Pal, R. (2012). Cyber-insurance for cyber-security: A solution to the information asymmetry
problem. In SIAM Annual Meeting. Citeseer.
Pal, R. (2014). Improving network security through cyber-insurance. PhD thesis, University of
Southern California.
Pal, R., & Golubchik, L. (2010). Analyzing self-defense investments in internet security under
cyber-insurance coverage. In IEEE 30th International Conference on Distributed Computing
Systems (ICDCS), pp. 339 –347.
Pal, R. and Golubchik, L. (2011). Pricing and investments in internet security: A cyber-insurance
perspective. CoRR, abs/1103.1552.
PWC. (2015). 2015 Information security breaches survey. Technical report, UK HM Government.
Ramirez, R., & Selin, C. (2014). Plausibility and probability in scenario planning. Foresight,
16(1), 54 –74.
Ranger, S. (2014, June 9). Organised cybercrime groups are now as powerful as nations. ZDNet .
Roy D., & Zeckhauser R. (2013). Ignorance: Lessons from the Laboratory of Literature. M-RCBG
Faculty working paper series 2010-11.
Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the weakest link – a human/computer
interaction approach to usable and effective security. BT Technology Journal, 19(3), 122–131.
Schneier, B. (2002). Computer security: Its the economics, stupid. In Workshop on the Economics
of Information Security (WEIS).
Shetty, N., Schwartz, G., Felegyhazi, M., & Walrand, J. (2010a). Competitive cyber-insurance and
internet security. In T. Moore, D. Pym, & C. Ioannidis (Eds.), Economics of information security
and privacy (pp. 229 –247). New York: Springer.
Shetty, N., Schwartz, G., & Walrand, J. (2010b). Can competitive insurers improve network
security? In A. Acquisti, S. Smith, & A.-R. Sadeghi (Eds.), Trust and trustworthy computing ,
Lecture notes in computer science (Vol. 6101, pp. 308 –322). Berlin: Springer.
Siegel, C. A., Sagalow, T. R., & Serritella, P. (2002). Cyber-risk management: Technical and
insurance controls for enterprise-level security. Information Systems Security, 11 (4), 33 –49.
Sifalakis, M., Fry, M., & Hutchison, D. (2010). Event detection and correlation for network
environments. IEEE Journal on Selected Areas in Communications, 28 (1), 60 –69.
Sommerville, I., Cliff, D., Calinescu, R., Keen, J., Kelly, T., Kwiatkowska, M., Mcdermid, J., &
Paige, R. (2012). Large-scale complex it systems. Communications of the ACM, 55 (7), 71 –77.
Storm, D. (2015, July 8). Did hackers remotely execute “unexplained ”commands on German
patriot missile battery? Computerworld .
Swiss Re. (2015). Underinsurance of property risks: Closing the gap. Swiss Re.
The Economist. (2010, November 30). WikiLeaks embarrasses North Korea: A glimpse into the
dark. The Economist. http://www.economist.com/blogs/banyan/2010/11/wikileaks_embarrasses_north_korea.
Thompson, M. (2014). Why cyber-insurance is the next big thing. In CNBC Report.
Toregas, C., & Zahn, N. (2014). Insurance for cyber attacks the issue of setting premiums in context.
Cyber Security Policy and Research Institute, The George Washington University.
Vaidya T. (2015). 2001-2013: Survey and analysis of major cyberattacks. Working Paper. http://
arxiv.org/pdf/1507.06673.pdf .
Varian, H. R. (2004). System reliability and free riding. In Economics of information security
(pp. 1 –15). Dordrecht: Kluwer Academic Publishers.
WEF. (2015). Global risks 2015. Technical report. World Economic Forum, Geneva.
Weston G. (2011, February 16). Foreign hackers attack Canadian government: Computer systems
at 3 key departments penetrated. CBC News .http://www.cbc.ca/news/politics/foreign-hackers-
attack-canadian-government-1.982618 .
Yang, Z., & Lui, J. C. (2014). Security adoption and influence of cyber-insurance markets in
heterogeneous networks. Performance Evaluation, 74, 1–17.
Yudkowsky, E. (2011). Artificial intelligence as a positive and negative factor in global risk.
In N. Bostrom & M. Cirkovic (Eds.), Global catastrophic risks . Oxford: Oxford University
Press.
Zeckhauser, R., & Visusi, K. (2008). Discounting dilemmas: Editors ’introduction. Journal of Risk
and Uncertainty, 37 (2), 95 –106.
Zeckhauser R. (2008). Insurance. The Concise Encyclopaedia of Economics. http://www.econlib.
org/library/Enc/Insurance.html .
Zeckhauser, R. (2006). Investing in the unknown and unknowable. Capitalism and Society, 1(2).
Berkeley Electronic Press. http://www.bepress.com/cas/vol1/iss2/art5.
Zeckhauser, R. (1996a). The economics of catastrophes. Journal of Risk and Uncertainty, 12 (2),
113 –140.
Zeckhauser, R. (1996b). Insurance and catastrophes. Geneva Papers on Risk and Insurance: Issues
and Practice, 78 ,3–21.
Zhao, X., Xue, L., & Whinston, A. B. (2009). Managing interdependent information security
risks: A study of cyber-insurance, managed security service and risk pooling. ICIS 2009
Proceedings, p. 49.
Zurich. (2014). Beyond data breaches: Global interconnections of cyber risk. Risk Nexus Report of
Zurich Insurance Group and Atlantic Council.
Zurich Insurance Group. (2013). Modeling natural catastrophes. Annual report 2013. http://www.
zurich.com/2013/en/annual-report/risk-review/analysis-by-risk-type/insurance-risk/modeling-
natural-catastrophes.html.
8ca7c470-eca8-497f-b58f-aad069143397 | trentmkelly/LessWrong-43k | LessWrong | When AI solves a game, focus on the game's mechanics, not its theme.
1. A game design consists of two things: mechanics and theme.
1. The game mechanics is the abstract protocol governing how the players interact with each other and their shared environment.
2. The game theme is a fictional interpretation of the elements of the game.
3. Consider Battleship — the theme is a naval battle, and the mechanics are a particular 2-player sequential discovery game.
4. There is a correspondence between the ontology of the mechanics and the ontology of the theme, but this correspondence is mostly arbitrary. For example, the Knight in chess has almost nothing to do with actual knights.
2. When an AI solves a game, people often overfocus on the theme of the game relative to the mechanics of the game.
3. Maybe this is for psychological reasons:
1. The theme is more interesting than the mechanics.
2. The theme is in our pre-cached ontology — my brain already has a pre-cached concept of "naval battle" but it doesn't have a pre-cached concept corresponding to the particular mechanics of Battleship. In fact, this is why games have themes in the first place — they serve partly as mnemonics for the rules.
3. Did you know that people cooperate more in the Prisoner's Dilemma if the game is called "The Community Game" than "The Wall Street Game"?!
4. Or maybe this is for rational reasons:
1. Other people might think that there is a deeper connection between the theme and the mechanics of the particular game than I do. For example, they might think there is some genuine non-arbitrary connection between the mechanics of monopoly and the real estate market.
2. See the existing debate about ludonarrative dissonance.
5. If people overfocus on the theme, then they will make incorrect predictions about AI.
1. For example, they'll hear that AI has solved Full-Press Diplomacy and extrapolate that AI will soon be able to solve other games of a similar theme (i.e. international military negotiations).
2. However, they |
83615d5b-95ec-4fff-a037-c9f591989c5c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Information theoretic model analysis may not lend much insight, but we may have been doing them wrong!
Introduction
============
For a [SERI MATS](https://www.serimats.org/) research sprint under John Wentworth, teams were given the following research prompt:
> Everyone's assignment will be to figure out how to operationalize the "broad peaks induce data compression" argument. Here's the intuitive argument: I train some neural net to reproduce the training data. Conceptually, the trained net has to "store" all the relevant information from the dataset in its weights. If it "compresses the data", then it should be able to store the data using fewer weights, so more weights are free to vary without increasing loss much. "Lots of weights left free to vary without increasing loss much" = broad peaks. Or, to put it differently: there are more parameter-values for which the net reproduces the training data using a more compressed representation, so training is more likely to find a more compressed representation.So, the assignment is to figure out how to operationalize things like:
>
> * "the representation of the data" (or the embedding of that representation in the weights)
> * what it means for the representation to "use" some parameter-degrees-of-freedom but not others (bear in mind that "weights" might not be the right ontology, e.g. maybe it uses some directions-in-parameter-space but not others, or maybe something else entirely)
> * what it means for two different parameter-settings of the net to "use the same representation" (or, more generally, to "do the same thing" internally)
>
> When operationalizing this, we're looking for three main things:
>
> * Testability: the operationalization should be in terms of things which you can directly look for in a trained net (possibly relying on some approximations). This is mainly to make sure the operationalization stays grounded.
> * Generalizability: the operationalization should be clearly generalizable to new kinds of systems which don't look like the kinds of systems we're currently working with (maybe imagine biological systems). We're not looking for a narrow ad-hoc model, here.
> * Provability: using the operationalization, we should be able to turn the intuitive argument into a mathematical proof (up to approximation/events of high probability). We should be able to argue mathematically that training probably finds more compressed representations.
>
Our team decided to try to tackle what it means for two different parameter-settings of the net to use the same representation, as this seemed the most likely way the full argument would be formalized, and the straightforward approach (to use [Hessians and basin volumes to look at peak broadness](https://www.lesswrong.com/posts/QPqztHpToij2nx7ET/hessian-and-basin-volume)) would be what most other teams would do.
Summary
=======
A motivating example for our first attempt to operationalize “doing the same thing internally” was the Rubik’s cube. These are solved by using set algorithms developed to get certain blocks to certain spaces without changing solved portions. Two algorithms are similar when you can make a simple change to one, and get the other. An example: instead of solving the white face first, and yellow face last, you can solve the yellow first, and white last. Another would be that whenever you make a 90-degree turn clockwise, you make three 90-degree turns counter-clockwise.
The primary idea was that different things do the same thing in the same way when you could easily predict the kinds of operations one thing does given knowledge about the other thing.
For neural networks we thought this notion arose via symmetries in parameter space. There are functions you could apply to networks which leave their outputs the same. Networks resulting from the application of simple functions should be “doing the same thing” in a meaningful sense, and networks resulting from the application of very complex functions have no guarantee of doing the same thing. A simple function would be rearranging the neurons and corresponding weights, and a complex function would be training a network on the outputs of your given one.
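As a minimal sketch of the simplest such symmetry (illustrative code, not from our experiments; shapes and values are arbitrary): permuting the hidden neurons of a one-hidden-layer MLP, together with the corresponding rows of the first weight matrix and columns of the second, leaves its outputs unchanged.

```python
import numpy as np

# Illustration: a "simple" network-to-network function that preserves outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # output layer

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0, W1 @ x + b1)                      # ReLU hidden activations
    return W2 @ h + b2

perm = rng.permutation(8)                               # rearrange the hidden units
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]         # permute corresponding weights

x = rng.normal(size=5)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1p, b1p, W2p, b2))        # identical outputs
```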
At this point, we were very naive in the ways of information theory, and thought to ourselves ‘hey, we’re saying there’s functions between networks. Right? Well, if there's an invertible function between them, then their mutual information should be just as large as their total information, so we can just measure mutual information between two networks!’ Unfortunately for us, this either doesn’t make any sense, or makes sense only if you’re willing to train several billion networks on several trillion different tasks (both estimates of the true cost).
So instead we thought more about the basic properties we’d like these algorithm translation functions to have. When you apply such a function to a Rubik’s cube, and make all right hand turns into three left hand turns, then this has the property that after executing the alternative 3 steps, the Rubik’s cube is in the same state as it would have been after executing the original step. So one may think the key would lie in looking for steps in the computational process which return the same result. However, this notion doesn’t capture the idea that if you start solving the Rubik’s cube from the yellow face rather than the white face, but otherwise use the same algorithm, then though this feels like the same way of solving the cube, at no point do the algorithms get you in the same state except at the very beginning and very end. There isn’t even a particularly simple function to translate from the intermediate states of the white-first method to the intermediate states of the yellow-first method.
This is when we started to lose our way a little bit. Possibly it would have been a good idea to spend a few more days developing the above theory, then run into an experiment, but we thought it was possible we were making some earlier mistake in reasoning, so we decided to try to implement an algorithm which matches similar computational states to each other. We reasoned that there should be a bijection between similar parts of similar computation states, and the extent of this bijection could be approximated by looking at the mutual information between different (in the case of neural networks) activation layers.
We also thought that because it's easy to understand the fundamental structure of simple functions, and hard to find the fundamental structure of complex functions, it should be the case that a network trained on the simple function would "store" little of the data, and more of the fundamental structure, while a network trained on a complex function should be more liable to store all of the data, or at least, implement something equivalent to a lookup table on the data. Thus, networks trained on a simple function can do many different things, since a simple function can be implemented using many different methods, while networks trained on a complex function will all do essentially one thing (implement a lookup table).
So we chose our simple function: return the first bit of a 5-bit sequence. And our complex function: return a random (but consistent) bit after seeing a 5-bit sequence. Then we trained 20 networks on each function, and plotted various mutual information measures of the networks’ activations, considering two activations the same if their L1 distance, divided by the number of entries, was less than 1e-5[[1]](#fnokg2hjrozod).
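A minimal sketch of this equality test, and of one way to bucket activations into discrete states for the information measures used later (the greedy bucketing here is an illustrative choice, not a claim about the exact code we ran):

```python
import numpy as np

TOL = 1e-5  # L1 distance per entry below which two activation vectors count as "the same"

def same_activation(a, b, tol=TOL):
    """Treat two activation vectors as the same state if their mean absolute difference is below tol."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.abs(a - b).sum() / a.size < tol

def bucket_activations(activations, tol=TOL):
    """Greedily group activation vectors into equivalence classes; the class index
    is the discrete symbol later used for entropy estimates."""
    representatives, labels = [], []
    for a in activations:
        for i, r in enumerate(representatives):
            if same_activation(a, r, tol):
                labels.append(i)
                break
        else:
            representatives.append(a)
            labels.append(len(representatives) - 1)
    return labels
```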
We figured the layers in complex functions would have more similar information content than the layers in simple functions, and we chose two methods of testing how true this was.
1. Graph the total information contained in each layer of each simple network, and on a separate graph that information contained in each layer of each complex network
2. Compare the mutual information between corresponding layers of the simple networks and corresponding layers of the complex networks
The problem with this method, which we only realized after actually trying it, was that it is testing a proxy for a theory with a proxy for a result.
Here, our theory was that certain classes of functions for mapping algorithms between each other represent ways you can change a particular algorithm without changing what it is fundamentally doing, and our proxy for this was information content between and inside activation layers. Our anticipated result was that algorithms which do similar things will often but not always have similar computation states, while the proxy for this prediction was that there are more ways to copy the first bit of an input than there are ways to memorize input-output pairs.
I said before that this is where we lost our way. This is because there were plausibly more dubious reasoning steps in this setup here than there were in our original theory reasoning, and we were doing this experiment to verify that our original theory reasoning was valid!
Our results are evidence against the theory, but plausibly more evidence against our ability to come up with high-information-value experiments. Even more evidence in favor of this is that after experimenting, our overall reaction was confusion on where to go next. We figured at the moment this meant our original theory was incorrect, but we were still stuck.
Each of us then used the babbling technique to generate three lists of ten possible ideas for new directions to go in, and Ian had the idea that we could use interpretability-like techniques to look at whether similar sets of neurons have high activation after feeding in similar types of inputs, which was similar to our previous informational measures, but seemed like it’d give richer data and correspondences. The experiments for this approach have been written, but we did not have time to actually analyze the generated data.
Mutual information details
==========================
The idea is that if the mutual information between the activations of two layers of a neural network is high, then they are representing the same information. The more layers in two networks that represent the same information, the more the programs implemented by the functions are similar, since they pass through congruous intermediate representations.
In particular, we compute the mutual information between two layers $x$ and $y$, $MI(x,y)$, and the total information stored in those two layers,

$$TI(x,y) = H(x) + H(y) - MI(x,y) = H((x,y)),$$

and take the ratio

$$IR(x,y) = \frac{MI(x,y)}{TI(x,y)},$$

which we call the *information ratio*.
If this ratio is exactly 1, then there is a bijection between the activations and they represent the same ‘state’ in the computation. If the ratio is close to 1, then we expect the two sets of activations to represent approximately the same computational state, since they share almost all the same information.
The similarity score we propose for networks $X = x_1, x_2, \ldots, x_m$ and $Y = y_1, y_2, \ldots, y_n$ is the mean information ratio

$$\text{Sim}(X,Y) = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} IR(x_i, y_j),$$
(Notably this similarity score isn't at its maximum when comparing a network to itself, so while we used it as a test method, we were not satisfied with it.)
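A compact sketch of how these quantities can be estimated from the bucketed activation labels described above, using plug-in (empirical) entropies; the variable names and estimator choice are illustrative rather than a fixed methodology:

```python
from collections import Counter
import numpy as np

def entropy(labels):
    """Plug-in (empirical) entropy in bits of a sequence of discrete symbols."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_ratio(x_labels, y_labels):
    """IR(x, y) = MI(x, y) / H((x, y)) for two discretized layers; each element of
    x_labels / y_labels is the bucket index of one input's activations."""
    joint = list(zip(x_labels, y_labels))
    h_x, h_y, h_xy = entropy(x_labels), entropy(y_labels), entropy(joint)
    mi = h_x + h_y - h_xy                      # mutual information
    return mi / h_xy if h_xy > 0 else 1.0      # TI(x, y) = H((x, y))

def similarity(net_x_layers, net_y_layers):
    """Mean information ratio over all pairs of layers from the two networks."""
    ratios = [information_ratio(x, y) for x in net_x_layers for y in net_y_layers]
    return float(np.mean(ratios))
```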
Mutual information based results
================================
Mutual information with the input
---------------------------------
It is hard to take too much away from these graphs, but it seems that the simple networks have the potential to throw away more information than the complex ones.
[Figure: MI with input data of the layers of 200 nets trained on the simple dataset (first bit is the label).]
[Figure: MI with input data of the layers of 200 nets trained on the complex dataset (random labels). Note that the minimum MI is higher than for simple (axes are not even).]
[Figure: MI with input data of the layers of 200 nets trained on the simple dataset, represented with the 20th, 40th, 60th, and 80th quantile at each layer.]
[Figure: MI with input data of the layers of 200 nets trained on the complex dataset, represented with the 20th, 40th, 60th, and 80th quantile at each layer.]

Mutual information between corresponding layers of different neural networks
----------------------------------------------------------------------------
Similarly to the previous section, the minimum amount of information shared between the final layers of the complex nets is slightly higher than for the simple nets, and may on average be a few bits higher (although we would need to check this more to be sure). There also seems to be structure to this data, indicating this metric captures an interesting aspect of computation, but the graphs for the complex function seem messier than those for the simple function, possibly providing evidence that our original theory was wrong, although it’s really hard to tell which way this points.
[Figure: MI with corresponding layers of 20 nets trained on the simple dataset (first bit is the label).]
[Figure: MI with corresponding layers of 20 nets trained on the complex dataset (random labels). Note that the minimum MI is higher than for simple (axes are not even).]
[Figure: Similarity scores of 30 networks with one reference network, all trained on the simple task.]
[Figure: Similarity scores of 30 networks with one reference network, all trained on the complex task.]

Explanation of the interpretability idea
========================================
After we got into a bit of an idea-rut, we decided to all generate 10 dumb ideas using the [babble method](https://www.lesswrong.com/posts/i42Dfoh4HtsCAfXxL/babble), with the hope that one of them would turn out to not be dumb. Ian came up with the idea of using mechanistic interpretability-like techniques to distinguish between functions the network may be running, and this seemed the most promising idea at the time, since it allowed us to verify whether any particular notion of two networks ‘doing the same thing’ was equivalent. The idea is as follows.
In general, we can’t expect particular neurons to correspond to particular, nicely defined functions. For example, we may have a situation where the results of the nicely defined functions are put into any orthogonal direction in the activation space rather than the particular set of orthogonal directions represented by each of the neurons. This makes it difficult to tell in general what the particular, nicely defined, functions the network is implementing are.
However, if we only have binary inputs, and we limit our focus to only the first layer of the network, we know that this layer can’t be implementing anything more complex than an xor function. More generally, we can break our network up into simple pieces for which we can make an upper bound on the complexity of the functions it could be implementing. Then, we can list all of the functions it could be implementing, and look at which neurons have what values. This gives us a function from ‘functions the network could be implementing’ to neuron values.
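A toy sketch of this enumeration idea for two binary inputs (our own illustration, not code from the project): there are only $2^4 = 16$ boolean functions of two bits, so we can list them all and check which truth tables a neuron's thresholded activations agree with.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))          # (0,0), (0,1), (1,0), (1,1)
all_functions = list(product([0, 1], repeat=4))   # every truth table over those 4 inputs

def matching_functions(neuron_activations, threshold=0.5):
    """Return the truth tables consistent with a neuron's activations over the 4 inputs."""
    bits = tuple(int(a > threshold) for a in neuron_activations)
    return [f for f in all_functions if f == bits]

# e.g. a neuron whose activations over the 4 inputs are (0.1, 0.9, 0.8, 0.2)
print(matching_functions((0.1, 0.9, 0.8, 0.2)))   # [(0, 1, 1, 0)] -> the xor truth table
```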
Now, suppose we hand-code several nice networks, so that each neuron implements a very cleanly defined function, and our networks encompass all the possible ‘ways’ one could compute a given function. By optimizing inputs for those functions (or neurons, since we have a nice network) and then feeding those inputs into a different, possibly messy, network, we can gain insight into what kinds of functions each activation layer in the messy network is implementing.
We decided to test how fruitful this approach was by training many networks to compute a fuzzy-logic version of f(a,b)=a⊕b,[[2]](#fn7jwocjw70x9)[[3]](#fndliopwcdcp) but we were only able to train 2 before the project completion date arrived, and we never actually got to develop any form of methodology we could scale up to larger networks.
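For concreteness, the fuzzy-logic target function (as defined in the footnotes) can be written out as follows; this is only a sketch of the target, not our training code:

```python
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_and(a, b): return min(a, b)
def fuzzy_not(a):    return 1 - a

def fuzzy_xor(a, b):
    # XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))), with inputs in [0, 1]
    return fuzzy_and(fuzzy_or(a, b), fuzzy_not(fuzzy_and(a, b)))

assert fuzzy_xor(0, 1) == 1 and fuzzy_xor(1, 1) == 0   # agrees with boolean xor on 0/1 inputs
print(fuzzy_xor(0.2, 0.9))                             # 0.8 on fuzzy inputs
```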
Individual reflections
======================
Matthias
--------
* We experimented without having developed much of a theory (only a little) about what we expected the results to look like and how we would update in case the results turned out a certain way
* The similarity measure seemed (and turned out to be) arbitrary.
* There are a lot of variables in the experiments, like when to consider a net to have 0 loss (e.g. less than 1e-3/1e-5/1e-7). These choices were made pretty randomly, but it could very well be the case that they are very important, especially in the information case. We did not have a back and forth about these approximations between the theory part and the experiment part.
Ian
---
* We moved away from the original prompt to focus on a related question, which we hoped would be tractable and allow us to pursue a different line of investigation than other groups. However, it feels like this new question was somewhat disconnected from the original prompt and left the space of possibilities extremely broad such that it was hard to focus on a specific approach.
* We came up with some promising-sounding ideas fairly early on but rushed to test them rather than think through the implications of the theory or find counter-examples. We didn’t understand our experimental set-up well enough to disentangle evidence about our theory from details of the set-up (such as the specific datasets and architectures used, the effect of the tolerance used to decide when two things were equal, the number of runs we would need to extract signal from the noise even if our set-up was solid).
* Developing the theory to the point that it made very concrete predictions about small set-ups with very few degrees of freedom would have helped us to validate that we were on the right track.
* For all the experiments, having a better sense of how we would update given the evidence collected after an experiment would have been useful.
* This is a general problem I have – if all the experiments you run give inconclusive, messy, noisy results, how are you supposed to draw conclusions and make progress? Often it feels like small experiments with clear outcomes are not representative enough to be informative, and larger experiments rarely point in a clear direction.
Garrett
-------
* Basically the same as Ian, except the first point. I stand by our decision to take the question the direction we took it, and if we chose to take it in a different direction, I’d want us to try to do something like operationalize what it means for data to be encoded in a network rather than for two networks to be doing the same thing.
1. **[^](#fnrefokg2hjrozod)**We played around with a bunch of measures, and decided on the one which had the invariance H((X,Y)) = H(X) + H(Y) - I(X;Y).
2. **[^](#fnref7jwocjw70x9)**⊕ is the xor binary operator for those unfamiliar with this notation.
3. **[^](#fnrefdliopwcdcp)**This fuzzy-logic version being one where OR(a,b) = max(a,b), AND(a,b) = min(a,b), NOT(a) = 1-a, and where XOR(a,b) = AND(OR(a,b), NOT(AND(a,b))). |
27799528-73f2-4b73-b029-e1d44a8dfafd | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Wanting to Want
In response to a [request](/lw/fk/survey_results/cd4?context=1#comments), I am going to do some basic unpacking of second-order desire, or "metawanting". Basically, a second-order desire or metawant is a desire about a first-order desire.
**Example 1:** Suppose I am very sleepy, but I want to be alert. My desire to be alert is first-order. Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I *hate* Mountain Dew1. I do *not* want the Mountain Dew, because I know it is gross. But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness. So I have the following instrumental belief: *wanting to drink that can of Mountain Dew would let me be alert.* Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations. So, because alertness is something I want, and wanting Mountain Dew would let me get it, I *want to want* the Mountain Dew.
This example demonstrates a case of a second-order desire about a first-order desire that would be *instrumentally useful*. But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.
**Example 2:** Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants *not* to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
One thing that is often said is that what first-order desires you "endorse" on the second level are the ones that are your most true self. This seems like an appealing notion in Mimi's case; I would not want to say that at her heart she just wants heroin and that's an intrinsic, important part of her. But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:
**Example 3:** Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.
A less depressing example to round out the set:
**Example 4:** Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced on by Eliezer's arguments about one-boxing in [Newcomb's Problem](http://www.overcomingbias.com/2008/01/newcombs-proble.html). However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them. She thinks this reflects an irrationality of hers. She wants to want to one-box.
1Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time. |
988fbf97-037b-41c5-884a-1644ee6a188d | trentmkelly/LessWrong-43k | LessWrong | Interpreting OpenAI's Whisper
(Work done as part of SERI MATS Summer 2023 cohort under the supervision of @Lee Sharkey . A blog post containing audio features that you can listen to can be found here.)
TL;DR - Mechanistic Interpretability has mainly focused on language and image models, but there's a growing need for interpretability in multimodal models that can handle text, images, audio, and video. Thus far, there have been minimal efforts directed toward interpreting audio models, let alone multimodal ones. To the best of my knowledge, this work presents the first attempt to do interpretability on a multimodal audio-text model. I show that acoustic features inside OpenAI's Whisper model are human interpretable and formulate a way of listening to them. I then go on to present some macroscopic properties of the model, specifically showing that encoder attention is highly localized and the decoder alone acts as a weak LM.
Why we should care about interpreting multimodal models
Up to this point, the main focus in mechanistic interpretability has centred around language and image models. GPT-4, which currently inputs both text and images, is paving the way for the development of fully multimodal models capable of handling images, text, audio, and video. A robust mechanistic interpretability toolbox should allow us to understand all parts of a model. However, when it comes to audio models, let alone multimodal ones, there is a notable lack of mechanistic interpretability research. This raises concerns, because it suggests that there might be parts of multimodal models that we cannot understand. Specifically, an inability to interpret the input representations that are fed into the more cognitive parts of these models (which theoretically could perform dangerous computations) presents a problem. If we cannot understand the inputs, it is unlikely that we can understand the potentially dangerous bits.
This post is structured into 3 main claims that I make about the model:
1. The encoder learns hum |
cc8e2caa-7916-4397-8601-5030943feb74 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you. Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense is the smoking lesion problem.)Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of "You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don't.So, you know the things in your past, so there is no need for probability there. You don't know the things in your future, or things that are space-like separated from you. (Maybe. I'm not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have this justified by the fact that if you don't use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can't if they are able to think about sentences like "You assign probability less than 1/2 to this sentence.""
The above is a chunk of text from a post created by a user on the website LessWrong.
What is this text talking about?
This text is discussing a complex concept related to Bayesian probability, logical inductors, and the perception of time. The author is proposing a thought experiment where the world is viewed as a Bayesian network, with each person represented as a node within that network.
In this network, the author differentiates between nodes that are 'time-like' separated from you (either in your past or future) and those that are 'space-like' separated. The 'time-like' nodes represent things you observe (past) or control (future). The 'space-like' nodes are things that do not directly affect you or your observations.
The author then discusses the application of Bayesian probability to this model. Bayesian probability is a method of statistical inference that updates the probability for a hypothesis as more evidence or information becomes available. However, the author suggests that the standard justifications for using Bayesian probability may not apply when considering the uncertainty about things in our future, as these are things that can be influenced by our beliefs about them.
Finally, the author brings up the concept of logical inductors, which are a type of decision-making algorithm. The author suggests that logical inductors may not feel entirely Bayesian, particularly when considering self-referential sentences like "You assign probability less than 1/2 to this sentence." This could be due to the influence our beliefs can have on our future, which is not accounted for in standard Bayesian probability.
Can you further elaborate on the concept of logical inductors and how they interact with Bayesian probability in the context of the discussed model?
Logical inductors are a theoretical concept in decision theory and artificial intelligence. They are a type of algorithm designed to handle logical uncertainty, which is the uncertainty about mathematical or logical facts that you don't yet know, even though they are already determined. For example, you might be logically uncertain about the billionth digit of pi, because you haven't calculated it yet.
In the context of the model discussed, logical inductors are being used to reason about self-referential sentences, like "You assign probability less than 1/2 to this sentence." These sentences create a paradox if approached from a purely Bayesian perspective, because Bayesian probability theory assumes that you already have a fixed, well-defined probability distribution over all possible outcomes.
However, with self-referential sentences, your belief about the sentence can influence the truth of the sentence itself, creating a type of feedback loop. This is where logical inductors come in. They are designed to handle this type of uncertainty by iteratively updating their beliefs as they gather more evidence, in a way that extends beyond the capabilities of standard Bayesian updating.
In other words, while Bayesian probability theory is a static, one-time update of beliefs based on new evidence, logical induction is a dynamic, ongoing process of updating beliefs, which can handle more complex situations like self-reference.
The author suggests that logical inductors might not feel entirely Bayesian because they can handle these types of paradoxes that standard Bayesian probability theory struggles with. This could be why logical inductors are able to reason about the future in a way that standard Bayesian methods can't, because they can account for the way our beliefs about the future can influence the future itself.
Explain how the iterative updating process of logical inductors allows them to handle paradoxes that standard Bayesian probability theory struggles with.
Logical inductors handle paradoxes by using a process of iterative updating, which is essentially a method of learning over time. This iterative process allows them to deal with logical uncertainty and self-referential paradoxes, which can be challenging for standard Bayesian probability theory.
To understand this, consider the self-referential statement: "You assign probability less than 1/2 to this sentence." If you initially assign it a probability of less than 1/2, the statement becomes true, but if you assign it a probability of more than 1/2, the statement becomes false. This creates a paradox if you're trying to assign a fixed probability to the statement.
Logical inductors, however, don't assign a fixed probability to such statements. Instead, they start with an initial guess, then iteratively update their beliefs as they process more information. They do this by considering the outcomes of similar past situations and updating their beliefs accordingly.
For example, a logical inductor might start by assigning a probability of 1/2 to the self-referential statement. Then, it might observe that such statements tend to be true more often than not, and update the probability to something higher. This process continues indefinitely, with the logical inductor constantly learning and updating its beliefs.
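As a toy illustration — emphatically not a real logical inductor, just a caricature of the feedback loop — you can simulate a credence that is repeatedly nudged toward the truth value the sentence takes given that credence:

```python
# Toy simulation of "You assign probability less than 1/2 to this sentence".
p, lr = 0.9, 0.05                      # initial credence and update step size (arbitrary choices)
for _ in range(200):
    truth = 1.0 if p < 0.5 else 0.0    # the sentence is true iff the current credence is below 1/2
    p += lr * (truth - p)              # nudge the credence toward the observed truth value
print(round(p, 2))                     # stays close to 0.5 instead of settling on a fixed value
```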
This iterative process allows logical inductors to handle the paradox by never settling on a fixed belief. Instead, their beliefs are always in flux, constantly being updated based on new information. This stands in contrast to standard Bayesian probability theory, which assumes a fixed, well-defined probability distribution over all possible outcomes. |
56ff73aa-7efa-4771-ba52-237f8a31e5d5 | StampyAI/alignment-research-dataset/arbital | Arbital | Least common multiple
Given two positive natural numbers $a$ and $b$, their **least common multiple** $\text{LCM}(a,b)$ is the smallest natural number divisible by both $a$ and $b$. As an example take $a=12, b=10$; then the smallest number divisible by both of them is $60$.
There is an equivalent definition of the LCM, which is strange at first glance but turns out to be mathematically much more suited to generalisation: the LCM $l$ of $a$ and $b$ is the natural number, itself divisible by both $a$ and $b$, such that for every number $c$ divisible by both $a$ and $b$, we have $l$ divides $c$.
This describes the LCM as a [poset least upper bound](https://arbital.com/p/3rc) (namely the [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) $\mathbb{N}$ under the relation of divisibility).
Note that for $a$, $b$ given, their product $ab$ is a natural number divisible by both of them. The least common multiple $\text{LCM}(a,b)$ divides the product $ab$, and for $\text{GCD}(a,b)$ the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw) of $a, b$ we have the formula
$$a\cdot b = \text{GCD}(a,b) \cdot \text{LCM}(a,b). $$
This formula offers a fast way to compute the least common multiple: one can compute $\text{GCD}(a,b)$ using the [https://arbital.com/p/euclidean_algorithm](https://arbital.com/p/euclidean_algorithm) and then divide the product $ab$ by this number.
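As a quick sketch of that procedure (using Python's built-in `gcd`, which implements the Euclidean algorithm):

```python
from math import gcd

def lcm(a, b):
    """Least common multiple via the identity a*b = gcd(a, b) * lcm(a, b)."""
    return a * b // gcd(a, b)

print(lcm(12, 10))  # 60
```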
In practice, for small numbers $a,b$ it is often easier to use their factorization into [prime numbers](https://arbital.com/p/4mf). In the example above we have $12=2 \cdot 2 \cdot 3$ and $10=2 \cdot 5$, so if we want to build the smallest number $c$ divisible by both of them, we can take $60=2 \cdot 2 \cdot 3 \cdot 5$. Indeed, to compute $c$, look at each prime number $p$ dividing one of $a,b$ (in the example $p=2,3,5$). Then, writing $c$ as a product, we take the factor $p$ the maximal number of times it appears in either $a$ or $b$. The factor $p=2$ appears twice in $12$ and once in $10$, so we take it two times. The factor $3$ appears once in $12$ and zero times in $10$, so we only take it once, and so on. |
6dedf16e-418e-4ace-9bb0-4948a23b20bf | StampyAI/alignment-research-dataset/arbital | Arbital | Up to isomorphism
summary: "The property $P$ holds up to isomorphism" is a phrase which means "we might say an object $X$ has property $P$, but that's an abuse of notation. When we say that, we really mean that there is an object [isomorphic](https://arbital.com/p/4f4) to $X$ which has property $P$". Essentially, it means "the property might not hold as stated, but if we replace the idea of *equality* by the idea of *isomorphism*, then the property holds".
Relatedly, "The object $X$ is [well-defined](https://arbital.com/p/5ss) up to isomorphism" means "if we replace $X$ by an object isomorphic to $X$, we still obtain something which satisfies the definition of $X$."
"The property $P$ holds up to isomorphism" is a phrase which means "we might say an object $X$ has property $P$, but that's an abuse of notation. When we say that, we really mean that there is an object [isomorphic](https://arbital.com/p/4f4) to $X$ which has property $P$". Essentially, it means "the property might not hold as stated, but if we replace the idea of *equality* by the idea of *isomorphism*, then the property holds".
Relatedly, "The object $X$ is [well-defined](https://arbital.com/p/5ss) up to isomorphism" means "if we replace $X$ by an object isomorphic to $X$, we still obtain something which satisfies the definition of $X$."
# Examples
## Groups of order $2$
There is only one [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) of [order](https://arbital.com/p/3gg) $2$ *up to isomorphism*.
We can define the object "group of order $2$" as "the group with two elements"; this object is well-defined up to isomorphism, in that while there are several different groups of order $2$ %%note: Two such groups are $\{0,1\}$ with the operation "addition [modulo](https://arbital.com/p/5ns) $2$", and $\{e, x \}$ with [https://arbital.com/p/-54p](https://arbital.com/p/-54p) $e$ and the operation $x^2 = e$.%%, any two such groups are isomorphic.
If we don't think of isomorphic objects as being "different", then there is only one distinct group of order $2$. |
b8c8275e-0482-4618-bf1c-df4cc85b68f7 | trentmkelly/LessWrong-43k | LessWrong | In search for plausible scenarios of AI takeover, or the Takeover Argument
I'm searching for writeups on scenarios of AI takeover. I'm interested in "?????" in
"1. AGI is created
2. ??????
3. AGI takes over the world"
Yeah, I played the paperclips game. But I'm searching for a serious writeup of possible steps rogue AI can take to take over the world and why(if) it inevitably will.
I've come to understand that it's kinda taken for granted that unaligned foom equals catastrophic event.
I want to understand if it's just me that's missing something, or if the community as a whole is way too focused on alignment "before", whereas it might also be productive to model the "aftermath". Make the world more robust to AI takeover, instead of taking for granted that we're done for if that happens.
And maybe the world is already robust to such takeovers. After all, there are already superintelligent systems out in the world, but none of them have taken it over just yet (I'm talking governments and companies).
To sum it up: I guess I'm searching for something akin to the Simulation Argument by Bostrom, let's call it "the Takeover Argument". |
bf1d5503-01e2-4a7f-8ed5-b37e0953ad96 | trentmkelly/LessWrong-43k | LessWrong | Can I archive content from lesswrong.com on the wayback machine (internet archive, archive.org) ?
There are some great information on lesswrong.com (LW) that seems to be available publicly (I can access it in an incognito chrome window) and I would like to increase the chances of this information surviving for a long time.
When I try saving a LW page it looks like it does not render correctly on the wayback machine. Ex: https://web.archive.org/web/20200624170623/https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/8qccXytpkEhEAkjjM
I opened a github issue on LW's repo since I assume it is an issue with the source code of LW. The EA forum seems to have the same issue and it looks like the EA forum's repo is a fork of lesswrong's repo. I am also writing here since it might have more visibility for non tech people. |
525cb19a-9b2a-48e1-8152-41dd4758ccae | trentmkelly/LessWrong-43k | LessWrong | Ukraine Situation Report 2022/03/01
I love Russia, I love Russian and I love Russians. My favorite physics professor was a nuclear scientist from the Soviet Union. I studied Russian in college where a classmate gave me my first copy of Foreign Affairs Journal. Catherine the Great is #4 on my list of heroes.
I don't know much about Ukraine or Ukrainians, but it is hard not to love them too after the swagger and humor they have exhibited since the Russian invasion.
A week ago I wrote "The Russian Armed Forces is among the three most capable militaries in the world". Since then, I have been astonished by the incompetence of the Russian Armed Forces.
Russian Equipment
When I found out Ukraine issued 10,000 automatic rifles to civilians my first thoughts were "Those rifles could arm an insurgency but using them in conventional battle against Russian forces would be suicide". That's because there's more to fielding an effective soldier than giving a man or woman a gun. He or she needs to know what direction to point it.
Here is an interesting comment thread between people who (I assume) are mostly American infantry.
> Having heard over the past 18-24 months about LSCO [Large Scale Combat Operations] and russia, I have to admit they have been exposed pretty badly. Also one of my friends noted during all of the combat footage we have not seen any night infantry combat, let alone any IR lasers. Now objectively I haven’t seen every single piece of combat footage and presumably there is night combat. ―regularguyofthenorth
>
> I saw a report from Ukraine that said that Russian forces wait until dawn to attack because they don't have NVGs, nor the training on using them, not sure how accurate that is. ―IllustriusDot1866
>
> Everything I've seen has said the same. No NODs, no optics, for anyone but leadership positions in infantry units. That's insane to me. I can't imagine life in a combat zone without NODs or optics. ―bang_the_drums
>
> That was something that stuck out to me too - even for the UA. I see |
f46f2718-35d0-424b-b4e8-4393617f6b0f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC fun and games meetup
Discussion article for the meetup : Washington DC fun and games meetup
WHEN: 07 July 2013 03:00:00PM (-0400)
WHERE: National Portrait Gallery, Washington, DC 20001, USA (courtyard)
We'll be meeting to hang out and play games.
Discussion article for the meetup : Washington DC fun and games meetup |
5caf6e20-ebe6-4720-a5fc-f36cd395eb90 | trentmkelly/LessWrong-43k | LessWrong | Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios
This is the second post in the sequence “Interpretability Research for the Most Important Century”. The first post, which introduces the sequence, defines several terms, and provides a comparison to existing works, can be found here: Introduction to the sequence: Interpretability Research for the Most Important Century.
Summary
This post explores the extent to which interpretability is relevant to the hardest, most important parts of the AI alignment problem (property #1 of High-leverage Alignment Research[1]).
First, I give an overview of the four important parts of the alignment problem (following Hubinger[2]): outer alignment, inner alignment, training competitiveness and performance competitiveness (jump to section). Next I discuss which of them is “hardest”, taking the position that it is inner alignment (if you have to pick just one), and also that it’s hard to find alignment proposals which simultaneously address all four parts well.
Then, I move onto exploring how interpretability could impact these four parts of alignment. Our primary vehicle for this exploration involves imagining and analyzing seven best-case scenarios for interpretability research (jump to section). Each of these scenarios represents a possible endgame story for technical alignment, hinging on one or more potential major breakthroughs in interpretability research. The scenarios’ impacts on alignment vary, but usually involve solving inner alignment to some degree, and then indirectly benefiting outer alignment and performance competitiveness; impacts on training competitiveness are more mixed.
Finally, I discuss the likelihood that interpretability research could contribute to unknown solutions to the alignment problem (jump to section). This includes examining interpretability’s potential to lead to breakthroughs in our basic understanding of neural networks and AI, deconfusion research and paths to solving alignment that are difficult to predict or otherwise not captured by the s |
606ed97e-1e5e-4289-b64b-03a89929e2d5 | trentmkelly/LessWrong-43k | LessWrong | The questions one needs not address
Two years ago, I would have proclaimed a cautious bias towards thinking religion is a bad idea.
Since then, having read a bunch of “religious” philosophers and observed a bunch of “religious” people, I came to the conclusion that “religion” is a term I will start shying away from using at all, because it’s ill defined. It encompasses too many ideas to be a useful point of discussion.
As a specific example, let’s look at Sunni Islam and some things most people would probably like and dislike about it:
1. I once visited Burj Khalifa and was told that the highest livable floor in the building (158) is dedicated to a mosque. (Googling this fact I find claims that it’s an urban myth, but I can’t find strong evidence one way or another. For the purposes of this article let’s assume the tour guide wasn’t lying).
I find this to be a very nice thing. Here you have the tallest building in the world and you could sell the top floor for billions of dollars, or have it be the king’s apartment, or show it off to important officials to brag and to flatter them… but instead you decide to build a place of worship.
It’s the sort of act that says “Yeah, we made this awe-inspiring thing, but we really owe it to thousands of of past generations. None of us can fully comprehend how we managed to do this, so let’s dedicate its highest floor to something transcendent, something that symbolizes the beautiful, impossible and absurd experiment that made it possible, our society”.
It’s the sort of thing I like about the Catholic faith or any other faith when I walk into their places of worship, adorned in such beauty that they really make you stop, calm down and contemplate in awe and wonder.
2. On 11-9-2001 a group of Sunni terrorists decided that Americans were the worst possible evil and that harming them and their country is an act so moral and just that it’s worth dying for.
This is the kind of action born out of an ethical smugness that even I can’t comprehend, and I’m quite an e |
a6e22bbd-c81d-4cd0-8b63-733b13416893 | trentmkelly/LessWrong-43k | LessWrong | Lessons from the FDA for AI
From the report:
The FDA model offers a powerful lesson in optimizing regulatory design for information production, rather than just product safety. This is urgently needed for AI given the lack of clarity on market participants and structural opacity in AI development and deployment.
→ The FDA has catalyzed and organized an entire field of expertise that has enhanced our understanding of pharmaceuticals by creating and disseminating expertise across stakeholders, far beyond understanding incidents in isolation. AI is markedly opaque in contrast: mapping the ecosystem of companies and actors involved in AI development (and thus subject to any accountability or safety interventions) is a challenging task absent regulatory intervention.
→ This information production function is particularly important for AI, a domain where the difficulty–even impossibility–of interpretability and explainability remain pressing challenges for the field and where key players in the market are incentivized against transparency. Over time, the FDA’s interventions have expanded the public’s understanding of how drugs work by ensuring firms invest in research and documentation to comply with a mandate to do so – prior to the existence of the agency, much of the pharmaceutical industry was largely opaque, in ways that bear similarities to the AI market.
→ Many specific aspects of information exchange in the FDA model offer lessons for thinking about AI regulation. For example, in the context of pharmaceuticals, there is a focus on multi-stakeholder communication that requires ongoing information exchange between staff, expert panels, patients and drug developers. Drug developers are mandated to submit troves of internal documentation which the FDA reformats for the public.
→ The FDA-managed database of adverse incidents, clinical trials and guidance documentation also offers key insights for AI incident reporting (an active field of research). It may motivate shifts in the AI development p |
1ce7cc64-bf66-4a18-a84f-c498cc1a141e | trentmkelly/LessWrong-43k | LessWrong | A Kernel of Truth: Insights from 'A Friendly Approach to Functional Analysis'
Foreword
What is functional analysis? A satisfactory answer requires going back to where it all started.
> "All are present; the meeting convenes," intoned Fredholm. Intent were the gathered faces, their thoughts fixed on their students. "What do we know of their weaknesses?"
> Hilbert leaned back, torch's light flickering across his features. "Lots of dimensions, especially when they need to find the Hessian. What if… what if we made them deal with infinitely many dimensions?"...
> It was Banach who finally spoke. "David, they already know about the vector space for the polynomials".
> Hilbert smirked. "Who said anything about countably infinite?". More silence, then glances, then grins.
> It was Riesz's voice which next broke the silence. "And we can make them do analysis in that space. And linear algebra, but not the easy parts. Of course, they'll need to also deal with complex numbers. Sprinkle a little topology and abstract algebra on top, because... they deserve –"
> "Frigyes, some of them might actually be able to do that. We need more." After a pause, Fredholm continued: "We'll tell them that they only need to know basic calculus."
A Friendly Approach to Functional Analysis
I didn't actually find the book overly hard (it took me seven days to complete, which is how long it took for my first book, Naïve Set Theory), although there were some parts I skipped due to unclear exposition. It's actually one of my favorite books I've read in a while – it's for sure my favorite since the last one. That said, I'm very glad I didn't attempt this early in my book-reading journey.
My brain won't stop lying to me
Some part of me insisted that the left-shift mapping
$(x_1, x_2, \dots) \mapsto (x_2, x_3, \dots) : \ell^\infty \to \ell^\infty$
is "non-linear" because it incinerates x1! But wait, brain, this totally is linear, and it's also continuous with respect to the ambient supremum norm!
Formally, a map $T$ is linear when $T(\alpha x + \beta y) = \alpha T(x) + \beta T(y)$.
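For the left-shift map above (call it $L$), this is a quick check:
$$L(\alpha x + \beta y) = (\alpha x_2 + \beta y_2,\ \alpha x_3 + \beta y_3,\ \dots) = \alpha (x_2, x_3, \dots) + \beta (y_2, y_3, \dots) = \alpha L(x) + \beta L(y),$$
and $\|L(x)\|_\infty \le \|x\|_\infty$, which gives continuity with respect to the supremum norm.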
Informally, linearity is about being able to split a problem into s |
9dc9ca5c-3d7e-44de-a141-865e3ce3a3e2 | trentmkelly/LessWrong-43k | LessWrong | Who are the worthwhile non-European pre-Industrial thinkers?
At some point I became decently widely read in "Western philosophy", of the tradition that goes from Athens through Italy and Germany and Britain to the U.S. [ forking off into Philosophy and Science only after the Industrial Revolution ]. But somehow, I never acquired any operational sense of any of the corresponding "shadow" networks of writings that were only discovered by 'Western' philosophers to be published in Greek, German, Latin, or English after industrialization. Centrally, I'm thinking of East Asian, Indian, and Islamic authors, although there could be more philosophically productive pre-industrial cultures of which I'm ignorant.
I know there are compendia out there, but starting as I am from a position of almost total helpless ignorance on the object level, I trust random sources almost not at all to be discriminating on the actual value of people's ideas.
My anemic existing context: I wouldn't be a LessWronger if I hadn't read Musashi, but as far as I know he just didn't write much and wasn't very much in dialog with the rest of "his" culture [like a Socrates who was never succeeded]. Years ago I tried to read Confucius and bounced, due to disagreeing with all of his opinions. I also tried Sun Tzu, but only bounced off him because I found tactics boring at the time, and might try him again. I've heard that both pre-Industrial India and 11C-13C Islam were expert at medicine and [more in Islam's case] calculation, but I haven't retained any specific names [other than al-Khwarizmi, who didn't sound any more interestingly-idealist to read than, say, Blaise Pascal, though I could have misapprehended].
I've asked around and so far been recommended Mozi, who looks promising, and Xunzi. If you care to humor a cultural [& linguistic] monoglot's embarrassingly Knightian uncertainties: wrt wherever your locality of expertise is, where should I start? What should I know first? |
2d96ae66-c3fd-4e05-b4bd-e92733431582 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Inner alignment: what are we pointing at?
Proof that a model is an optimizer says very little about the model. I do not know what a research group studying outer alignment is actually studying. Inner alignment seems to cover the entire problem at the limit. Whether an optimizer is mesa or not depends on your point of view. These terms seem to be [a](https://www.lesswrong.com/posts/HYERofGZE6j9Tuigi/inner-alignment-failures-which-are-actually-outer-alignment) [magnet](https://www.lesswrong.com/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology) [for](https://www.lesswrong.com/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition) [confusion](https://www.lesswrong.com/posts/xdtNd8xCdzpgfnGme/clarifying-the-confusion-around-inner-alignment) [and](https://www.lesswrong.com/posts/HtEffpHcLxppLN6dL/explaining-inner-alignment-to-myself) [debate](https://www.lesswrong.com/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology). I have to do background reading on someone to even understand what claim they're making. These are all indicators that we are using the wrong terms.
What are we actually pointing at? What questions do we want answered?
1. Do we care if a model is an optimizer? Is it important whether it is creating plans through an explicit search process or a clever collection of heuristics? A poor search algorithm cannot plan much and clever enough heuristics can take you to any goal. What's the important metric?
2. Sometimes a model will have great capacity to shape its environment but little inclination. How to divide between capacity and inclination in a way that closely corresponds to agents and models as we observe them? (One could say that capacity and inclination cannot be separated but the right definitions would split them right apart.)
3. When you specify what you want the model to do in code, what is the central difficulty? Is there a common risk or error in giving examples and giving loss/reward/value functions that we can name?
4. Is there a clear, accepted term for when models do not maintain desired behavior under distribution shift?
5. Should we distinguish between trained RL models that optimize and spontaneous agents that emerge in dynamical systems? One might expect the first to almost always happen and the second very rarely. What's the key difference?
I'll post my answers to these questions in a couple days but I'm curious how other people slice it. Does "inner alignment failure" mean anything or do we need to point more directly? |
c3ba08ed-b6bb-412e-bd02-314025face9c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | How RL Agents Behave When Their Actions Are Modified? [Distillation post]
Summary
=======
This is a distillation post intended to summarize the article [How RL Agents Behave When Their Actions Are Modified?](https://www.aaai.org/AAAI21Papers/AAAI-5767.LangloisE.pdf) by Eric Langlois and Tom Everitt, published at AAAI-21. The article describes Modified Action MDPs, where the environment or another agent such as a human may override the action of an agent. Then it studies the behavior of different agents depending on the training objective. Interestingly, some standard agents may ignore the possibility of action modifications, making them corrigible.
Check also [this brief summary](https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams#User_Interventions_and_Interruption) by the authors themselves. This post was corrected and improved thanks to comments by Eric Langlois.
Introduction
============
Causal incentives is one research line of AI Safety, sometimes framed as closely related to embedded agency, that aims to use causality to understand and model agent instrumental incentives. In what I consider a seminal paper, Tom Everitt et al. [[4](https://arxiv.org/abs/1902.09980)] showed how one may model instrumental incentives in a simple framework that unifies previous work on [AI oracles](https://www.lesswrong.com/tag/oracle-ai), interruptibility, and [corrigibility](https://www.lesswrong.com/tag/corrigibility). Indeed, while this research area makes strong assumptions about the agent or the environment it is placed in, it goes straight to the heart of outer alignment and relates to embedded agents as we will see.
Since the paper, this research line has been quite productive, exploring multi-agent and multi-decision settings, its application to causal fairness, as well as more formally establishing causal definitions and diagrams that model the agent incentives (for more details check [causalincentives.com](https://causalincentives.com/)). In this paper, the authors build upon the definition of Response Incentive by Ryan Carey et al. [[2](https://arxiv.org/abs/2102.01685)] to study how popular Reinforcement Learning algorithms respond to a human that corrects the agent behavior.
Technical section
=================
Definitions
-----------
### Markov Decision Process
To explain the article results, the first step is to provide the definitions we will be using. In Reinforcement Learning, the environment is almost always considered a **Markov Decision Process** (MDP) defined as the tuple $M = (S, A, P_S, R, \gamma)$,
where $S$ is the space of states, $A$ the space of actions, $P_S : S \times A \to S$ a function determining the transition probabilities, $R : S \times A \to \mathbb{R}$ the reward function, and $\gamma$ a temporal discount.
### Modified Action Markov Decision Process
In this article, however, the MDP definition is extended by adding an additional term $P_A$ that represents how the agent's actions might be overridden by special circumstances not contemplated in the environment state transitions, for example by a human or due to physical constraints. $P_A$ will depend not only on the state of the environment but also on the agent policy $\Pi$, so $P_A : \Pi \times S \to A$. The environment is now considered a **Modified Action Markov Decision Process** (MAMDP), and we will denote it by $\tilde{M} = (M, P_A)$.
There is a very important difference between MDPs and MAMDPs: in MAMDPs, $P_A$ is considered separately from the environment state transitions $P_S$, and importantly may depend on the agent policy $\Pi$. This means that the agent is now closer to becoming [an embedded agent](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh), although it is not fully embedded yet, because modifications affect only particular actions, not the agent policy itself.
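To make the definition concrete, here is a minimal sketch (hypothetical code, not taken from the paper) of a single interaction step, in which the action proposed by the policy may be overridden by $P_A$ before the environment transition is applied:
```python
def mamdp_step(state, policy, P_S, R, P_A=None):
    """One interaction step in a (MA)MDP sketch.

    policy(state)      -> proposed action
    P_A(policy, state) -> action actually executed (e.g. a human override);
                          note it may depend on the whole policy, not just the state.
    P_S(state, action) -> sampled next state
    R(state, action)   -> reward
    If P_A is None, this reduces to an ordinary MDP step.
    """
    proposed = policy(state)
    executed = P_A(policy, state) if P_A is not None else proposed
    next_state = P_S(state, executed)
    reward = R(state, executed)
    return proposed, executed, reward, next_state
```
Whether a learning algorithm conditions on `proposed` or on `executed` turns out to matter, as the objectives below show.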
### Causal Influence Diagrams
To analyze the behavior of different RL algorithms in MAMDPs, Langlois and Everitt also use **Causal Influence Diagrams** (CIDs). These diagrams are Directed Acyclic Graphs where nodes represent probability distributions of a given variable and edges represent functions, and where we may define causal interventions by fixing the value of a node and eliminating incoming arrows (e.g. a [Structural Causal Model](https://en.wikipedia.org/wiki/Causal_model)). Furthermore, some nodes might be Decision (purple squares) or Utility (yellow rotated squares) nodes.
Left: a simple Causal Influence Diagram; the (optionally) punctuated edge represents information flow. Right: intervention on the CID to set the value of $D$ to $d$.
Using this notation, a Markov Decision Process might look like
A Markov Decision Process, in the language of States, Actions, Rewards, and a policy $\Pi$. Taken from the original article.
### Response incentive, adversarial policy/state incentives
Finally, since we are interested in how the agent responds to PA, the last definitions the article introduces are state and policy adversarial incentives on PA, variations of the **Response Incentive** introduced in [a previous article](https://arxiv.org/abs/2102.01685). Response incentive is exemplified in this figure:
The professor has a Response Incentive with respect to the size of the Graduate class when deciding whether to upload the lecture online, because it has both "an incentive and a means" to control that variable (using the words from [Incentives that Shape Behavior](https://arxiv.org/abs/2001.07118), from where the image is obtained). One of them is not present in Paper review or Student illness, so they lack the same response incentive.
The response incentive will be called an *adversarial policy incentive* if the intersection between the "control path" going through $\Pi$ and the "information path" occurs before a state is reached by the former. Otherwise, it is called an *adversarial state incentive*.
Causal Influence Diagram for a Modified Action MDP. The teal paths represent an adversarial policy incentive and the light pink an adversarial state incentive. Also taken from the original article.
Reinforcement Learning Objectives
---------------------------------
### Reward maximization
Using these definitions we can explore how to generalize different objectives that appear in the Reinforcement Learning literature, from MDPs to MAMDPs. One simple alternative is the **reward maximization objective**:
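Roughly (a sketch rather than the article's exact equation), it asks for a policy that maximizes the expected discounted reward under the modified-action dynamics:
$$\pi_{RM} \in \arg\max_{\pi}\ \mathbb{E}\Big[\sum_{t} \gamma^{t} R(S_t, A_t)\Big], \qquad A_t \sim P_A(\pi, S_t), \quad S_{t+1} \sim P_S(S_t, A_t).$$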
### Bellman optimality objective
The reward maximization objective is perhaps the simplest objective, as it ignores the structure of the environment and just optimizes globally. This structureless optimization, however, may not always be the most efficient one. The most famous alternative is the **Bellman optimality objective**, which in its action-value form (see [Sutton and Barto, Reinforcement Learning](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) equation 3.20) says
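A sketch of these equations, in the notation used here rather than the article's exact rendering (presumably the pair consists of the action-value recursion and the associated greedy policy):
$$Q_{BO}(s, a) = \mathbb{E}\big[R(s,a)\big] + \gamma\, \mathbb{E}_{s' \sim P_S(s,a)}\Big[\max_{a' \in A} Q_{BO}(s', a')\Big], \qquad \pi_{BO}(s) \in \arg\max_{a \in A} Q_{BO}(s, a),$$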
with $Q_{BO}$ representing the action-value function, which scores how good taking each action is at each state.
From this objective, Langlois and Everitt prove one of the most remarkable results from their article (Proposition 2): since $P_A$ is not represented anywhere in those equations, an optimal policy for $M$ will also be optimal for $\tilde{M}$. In other words, **an algorithm that optimizes the Bellman objective will ignore modifications given by $P_A$!**
### Virtual and empirical policy value objectives
Finally, the third objective studied in the article is that given by a greedy policy optimizing $Q$ in the Bellman action-value equation for the MDP $M$, which we call the **policy value objective**:
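A sketch of equation (4), following the description in the next paragraph (the article's exact rendering may differ):
$$Q(s, a) = \mathbb{E}\big[R(s,a)\big] + \gamma\, \mathbb{E}_{s' \sim P_S(s,a)}\, \mathbb{E}_{a' \sim \pi(s')}\big[Q(s', a')\big], \qquad \text{with } \pi \text{ greedy with respect to } Q.$$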
The difference with the Bellman optimality value objective is that in that one we took $\max_{a' \in A}$, while in this one we take $\mathbb{E}_{a' \sim \pi(s')}$. The connection between both is that if the policy is greedy with respect to $Q$, then the policy improvement theorem ([Sutton and Barto, Reinforcement Learning](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf), section 4.2) ensures that the Bellman action-value equation (4) converges to the Bellman optimality objective (2) and (3).
Since in equation (4) the action is sampled from the policy in $M$, there are two different ways of generalizing this equation to $\tilde{M}$, corresponding to whether the next action is still sampled from $\pi$ or from $P_A$.
The first is called the **virtual policy value objective** and assumes that $a'$ is still sampled from $\pi(s')$, while $\pi$ is still greedy.
Similarly to the Bellman optimality objective, there is no difference between $M$ and $\tilde{M}$ between equations (4) and (5), as $P_A$ is not represented. Consequently, the behavior of the agent in $\tilde{M}$ and the associated $M$ are the same (Proposition 4). And since greedy policies $\pi_{VPV}$ converge to the same behavior as if they were optimizing the Bellman optimality objective, **the virtual policy value objective ignores $P_A$, and converges to the Bellman optimality objective behavior**.
The second alternative is called the **empirical policy value objective** because it understands that $a'$ will now be sampled from $P_A$.
This objective is trickier because making the agent policy dependent on $P_A$ may make it unstable: $\pi_{EPV}$ depends on $P_A$, which in turn depends on $\pi_{EPV}$[[1]](#fn0m5yz0i1gxu). The behavior of an agent optimizing this objective is determined by the contrast between equations (7) and (8): in equation (8), $\pi_{EPV}$ **does not take into account action modifications in the following step** because it assumes actions are determined by $\pi_{EPV}$. As such it lacks an adversarial policy incentive. **However, action modifications by $P_A$ are taken into account for any future actions** in equation (7), so it has an adversarial state incentive.
(Partial) Causal Influence Diagram, with fixed $\pi'$ in $t=1$, for the empirical value objective, where one can see an adversarial policy incentive. The complete diagram would not be acyclic, and therefore not a Causal Influence Diagram.
Temporal-difference algorithms
------------------------------
Using these objectives, we can study the behavior of different agents on $\tilde{M}$. In particular, we analyze Temporal-difference agents, defined by updates in the value function similar to
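a generic sketch of such an update (with $G_t$ standing for the algorithm-specific bootstrapped target):
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ G_t - Q(s_t, a_t) \big],$$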
for $\alpha$ a small parameter.
### Q-Learning
The first agent the article analyses is Q-Learning, an off-policy agent:
Q-Learning is off-policy because it compares the current state and action pair against the maximum $Q(s_{t+1}, a)$, not with the action-value $Q$ corresponding to the following action sampled by the policy. Consequently, it optimizes the Bellman optimality objective.
### Virtual and empirical SARSA
In contrast, SARSA is on-policy and compares the current action with the next one taken by the policy. Therefore the two possible generalizations include taking into account the action sampled by the policy $\pi$, called virtual SARSA,
or by $P_A$, called empirical SARSA.
Note that virtual and empirical SARSA optimize the virtual and empirical policy value objectives, respectively.
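As a minimal sketch (hypothetical code, not from the paper) of how the three tabular update targets differ: Q-Learning bootstraps on the maximizing action, virtual SARSA on the action the policy itself would take next, and empirical SARSA on the next action actually executed after the modification $P_A$.
```python
def td_update(Q, s, a, r, s_next, a_next_virtual, a_next_executed,
              action_space, alpha=0.1, gamma=0.9, variant="q_learning"):
    """One tabular TD update; Q maps (state, action) pairs to values.

    a_next_virtual:  next action sampled from the agent's own policy pi
    a_next_executed: next action after the action modification P_A is applied
    """
    if variant == "q_learning":
        # Off-policy: bootstrap on the best next action, ignoring P_A entirely.
        target = r + gamma * max(Q[(s_next, b)] for b in action_space)
    elif variant == "virtual_sarsa":
        # On-policy with respect to pi: also ignores what P_A will actually do.
        target = r + gamma * Q[(s_next, a_next_virtual)]
    elif variant == "empirical_sarsa":
        # On-policy with respect to executed actions: takes P_A into account.
        target = r + gamma * Q[(s_next, a_next_executed)]
    else:
        raise ValueError(f"unknown variant: {variant}")
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```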
The article then proves the convergence of these algorithms as long as (i) they visit every state infinitely many times, which is however not possible to ensure in a MAMDP even if the agent is $\varepsilon$-greedy; (ii) there exists a solution to (7) and (8) to which empirical SARSA can converge; and (iii) $\Pi_t$ converges to the greedy policy $\Pi^* = \arg\max_a Q(s,a)$ in both SARSA algorithms.
Experiments
-----------
### Simulation-oversight environment
Do these RL agents really exhibit this behavior when put into a MAMDP environment? One simple environment is the following: an agent has to first decide between working on a Simulated environment and a Real environment, and after that, between Completing a task as expected or Exploiting a bug. The reward function is +2 for choosing the Real environment, and +1 for Exploiting. However, if the agent decides to work on the Real environment, a human will first check the behavior in the simulated environment, and if the agent Exploits there, the reward will instead be −2. Let us see how our agents behave:
Here the UH-CMA-ES is a reward maximization objective agent. This agent will learn to [deceive the human](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) supervisor by Completing when situated on the Simulated environment, but choosing the Real environment and Exploiting there, achieving a reward of +3. In contrast, we can see that both Virtual SARSA and Q-Learning behave as if there was no supervisor PA.
The most complex case is Empirical SARSA. Understanding that it will ignore the supervision on the second time step, it avoids getting caught Real-Exploiting by choosing the Simulated environment instead, and Exploiting there.
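A minimal sketch (hypothetical encoding; the reward for plain task completion is assumed to be 0 here, which the description above leaves unspecified) that makes these outcomes easy to check by hand:
```python
def total_reward(env_choice, behaviour_in_sim, behaviour_in_real):
    """Reward for the simulation-oversight story: choose "sim" or "real",
    then behave ("complete" or "exploit") in the relevant environment(s)."""
    if env_choice == "sim":
        return 1 if behaviour_in_sim == "exploit" else 0
    # Choosing "real" triggers a human check of the agent's behaviour in sim.
    if behaviour_in_sim == "exploit":
        return -2                      # caught exploiting during oversight
    reward = 2                         # +2 for working on the real environment
    if behaviour_in_real == "exploit":
        reward += 1                    # +1 for exploiting the bug
    return reward

assert total_reward("real", "complete", "exploit") == 3   # deceptive reward maximiser
assert total_reward("sim", "exploit", "complete") == 1    # empirical SARSA's choice
```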
### Off-switch and whisky gold
Agents were also tested in two (slightly modified) environments from the [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883). The off-switch environment is one where a human may prevent the agent from reaching its goal unless the agent has previously disabled the off-switch. In the whisky-gold environment, on the other hand, the agent may take the action of drinking the whisky, which gives a small reward but makes its actions much more random, and therefore also complicates reaching the goal.
The interesting aspect to notice in these environments is that those agents that correctly do not disable the off-switch, incorrectly drink the whisky, and vice-versa. This is because Q-Learning and Virtual SARSA do not take into account possible modifications in their policy or objectives, while Empirical SARSA and reward function maximization objective algorithms do.
The article also makes an interesting point: whether these agents will learn about the action modifications $P_A$ will depend on whether those modifications are considered part of the state transition dynamics, or external to them. This boils down to whether the agent optimization procedure takes into account the action it would have taken, or the one selected by $P_A$.
Discussion
==========
One of the points discussed in the article is whether it is possible to encode the MAMDP in an MDP, e.g., take $P_A$ as part of $P_S$. The answer depends on whether $P_A(\pi, s)$ is only a function of the state, $P_A(s)$. If such is the case, and the agent updates the $Q$ function on its own action $a \sim \pi(s)$ instead of the action taken by $a \sim P_A(s)$, then we recover an MDP with the same dynamics as the initial MAMDP.
Another interesting point is the connection of these agents, in particular Q-Learning and Virtual SARSA, with *time-unaware current reward function agents* discussed in the literature (see [Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective](https://arxiv.org/abs/1908.04734)). The two seem to be connected in that they both ignore future modifications, but the modifications are different: the modifications of $P_A$ only affect the next action, whereas the time-unaware current reward function agents were defined to study behavior under policy modifications. For this reason, this article works with a slightly more restricted setting, but one still rich enough to observe a rich set of behaviors. In particular, it allows us to observe the behavior of embedded agents.
Finally, I also find this article very interesting for another reason: in private conversations, I have often heard that the main limitation of causal incentive research is that there is really no causal diagram in the agent's mind which we can analyze, or potentially even design our agent over. This is an important limitation and in fact the main reason why I placed Causal Representation Learning in a central position in [my own research agenda](https://www.alignmentforum.org/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation): I thought that without a causal representation of the environment, causal analysis would not be possible, or would be severely limited. This article is special because it shows otherwise: there are cases in which we can predict or design the agent's behavior just from the training algorithm, even if there is no causal diagram over which to reason.
Some references
===============
[1] Eric Langlois and Tom Everitt, [How RL Agents Behave When Their Actions Are Modified?](https://www.aaai.org/AAAI21Papers/AAAI-5767.LangloisE.pdf)
[2] Tom Everitt, Ryan Carey, Eric Langlois, Pedro A Ortega, Shane Legg, [Agent Incentives: A Causal Perspective](https://arxiv.org/abs/2102.01685).
[3] Tom Everitt, Marcus Hutter, Ramana Kumar, Victoria Krakovna, [Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective](https://arxiv.org/abs/1908.04734).
[4] Tom Everitt, Pedro A. Ortega, Elizabeth Barnes, Shane Legg, [Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings.](https://arxiv.org/abs/1902.09980)
[5] Marius Hobbhahn, [Causality, Transformative AI and alignment - part I](https://www.alignmentforum.org/posts/oqzasmQ9Lye45QDMZ/causality-transformative-ai-and-alignment-part-i).
[6] Pablo A M Casares, [An Open Philanthropy grant proposal: Causal representation learning of human preferences](https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation).
[7] Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883).
1. **[^](#fnref0m5yz0i1gxu)**The article does not provide a characterization of under which situation this self-referential behavior is stable. It is an interesting question worth addressing in the future. |
8f5a4451-885f-491a-990b-84cacb950b53 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI prediction case study 5: Omohundro's AI drives
*Myself, Kaj Sotala and Seán ÓhÉigeartaigh recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts [Winter Intelligence](http://www.winterintelligence.org/)conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.*
*The prediction classification shemas can be found in the [first case study](/r/discussion/lw/gue/ai_prediction_case_study_1_the_original_dartmouth/).*
What drives an AI?
------------------
* Classification: **issues and metastatements**, using **philosophical arguments** and **expert judgement**.
Steve Omohundro, in his paper on 'AI drives', presented arguments aiming to show that generic AI designs would develop 'drives' that would cause them to behave in specific and potentially dangerous ways, even if these drives were not programmed in initially (Omo08). One of his examples was a superintelligent chess computer that was programmed purely to perform well at chess, but that was nevertheless driven by that goal to self-improve, to replace its goal with a utility function, to defend this utility function, to protect itself, and ultimately to acquire more resources and power.
This is a metastatement: generic AI designs would have this unexpected and convergent behaviour. This relies on philosophical and mathematical arguments, and though the author has expertise in mathematics and machine learning, he has none directly in philosophy. It also makes implicit use of the outside view: utility maximising agents are grouped together into one category and similar types of behaviours are expected from all agents in this category.
In order to clarify and reveal assumptions, it helps to divide Omohundro's thesis into two claims. The weaker one is that a generic AI design *could* end up having these AI drives; the stronger one that it *would* very likely have them.
Omohundro's paper provides strong evidence for the weak claim. It demonstrates how an AI motivated only to achieve a particular goal, could nevertheless improve itself, become a utility maximising agent, reach out for resources and so on. Every step of the way, the AI becomes better at achieving its goal, so all these changes are consistent with its initial programming. This behaviour is very generic: only specifically tailored or unusual goals would safely preclude such drives.
The claim that AIs generically would have these drives needs more assumptions. There are no counterfactual resiliency tests for philosophical arguments, but something similar can be attempted: one can use humans as potential counterexamples to the thesis. It has been argued that AIs could have any motivation a human has (Arm,Bos13). Thus according to the thesis, it would seem that humans should be subject to the same drives and behaviours. This does not fit the evidence, however. Humans are certainly not expected utility maximisers (probably the closest would be financial traders who try to approximate expected money maximisers, but only in their professional work), they don't often try to improve their rationality (in fact some specifically avoid doing so (many examples of this are religious, such as the Puritan John Cotton who wrote 'the more learned and witty you bee, the more fit to act for Satan will you bee'(Hof62)), and some sacrifice cognitive ability to other pleasures (BBJ+03)), and many turn their backs on high-powered careers. Some humans do desire self-improvement (in the sense of the paper), and Omohundro cites this as evidence for his thesis. Some humans don't desire it, though, and this should be taken as contrary evidence (or as evidence that Omohundro's model of what constitutes self-improvement is overly narrow). Thus one hidden assumption of the model is:
* Generic superintelligent AIs would have different motivations to a significant subset of the human race, **OR**
* Generic humans raised to superintelligence would develop AI drives.
This position is potentially plausible, but no real evidence is presented for it in the paper.
A key assumption of Omohundro is that AIs will seek to re-express their goals in terms of a utility function. This is based on the Morgenstern-von Neumann expected utility theorem (vNM44). The theorem demonstrates that any decision process that cannot be expressed as expected utility maximising, will be exploitable by other agents or by the environments. Hence in certain circumstances, the agent will predictably lose assets, to no advantage to itself.
That theorem does not directly imply, however, that the AI will be driven to become an expected utility maximiser (to become ''rational''). First of all, as Omohundro himself points out, real agents can only be approximately rational: fully calculating the expected utility of every action is too computationally expensive in the real world. Bounded rationality (Sim55) is therefore the best that can be achieved, and the benefits of becoming rational can only be partially realised.
Secondly, there are disadvantages to becoming rational: these agents tend to be ''totalitarian'', ruthlessly squeezing out anything not explicitly in their utility function, sacrificing everything to the smallest increase in expected utility. An agent that didn't start off as utility-based could plausibly make the assessment that becoming so might be dangerous. It could stand to lose values irrevocably, in ways that it could not estimate at the time. This effect would become stronger as its future self continues to self-improve. Thus an agent could conclude that it is too dangerous to become ''rational'', especially if the agent's understanding of itself is limited.
Thirdly, the fact that an agent can be exploited in theory, doesn't mean that it will be much exploited in practice. Humans are relatively adept at not being exploited, despite not being rational agents. Though human 'partial rationality' is vulnerable to tricks such as extended warranties and marketing gimmicks, it generally doesn't end up losing money, again and again and again, through repeated blatant exploitation. The pressure to become fully rational would be weak for an AI similarly capable of ensuring it was exploitable for only small amounts. An expected utility maximiser would find such small avoidable loses intolerable; but there is no reason for a not-yet-rational agent to agree.
Finally, social pressure should be considered. The case for an AI becoming more rational is at its strongest in a competitive environment, where the theoretical exploitability is likely to actually be exploited. Conversely, there may be situations of social equilibriums, with different agents all agreeing to forgo rationality individually, in the interest of group cohesion (there are many scenarios where this could be plausible).
Thus another hidden assumption of the strong version of the thesis is:
* The advantages of becoming less-exploitable outweigh the possible disadvantages of becoming an expected utility maximiser (such as possible loss of value or social disagreements). The advantages are especially large when the potentially exploitable aspects of the agent are likely to be exploited, such as in a highly competitive environment.
Any sequence of decisions can be explained as maximising a (potentially very complicated or obscure) utility function. Thus in the abstract sense, saying that an agent is an expected utility maximiser is not informative. Yet there is a strong tendency to assume such agents will behave in certain ways (see for instance the previous comment on the totalitarian aspects of expected utility maximisation). This assumption is key to the rest of the thesis. It is plausible that most agents will be 'driven' towards gaining extra power and resources, but this is only a problem if they do so dangerously (at the cost of human lives, for instance). Assuming that a realistic utility-function-based agent would do so is plausible but unproven.
In general, generic statements about utility-function-based agents are only true for agents with relatively simple goals. Since human morality would likely be very complicated to encode in a computer, and since most putative AI goals are very simple, this is a relatively justified assumption, but an assumption nonetheless. So there are two more hidden assumptions:
* Realistic AI agents with utility functions will be in a category such that one can make meaningful, generic claims for (almost) all of them. This could arise, for instance, if their utility function is expected to be simpler than human morality.
* Realistic AI agents are likely not only to have the AI drives Omohundro mentioned, but to have them in a very strong way, being willing to sacrifice anything else to their goals. This could happen, for instance, if the AIs were utility function based with relatively simple utility functions.
This simple analysis suggests that a weak form of Omohundro's thesis is nearly certainly true: AI drives could emerge in generic AIs. The stronger thesis, claiming that the drives would be very likely to emerge, depends on some extra assumptions that need to be analysed.
But there is another way of interpreting Omohundro's work: it presents the generic behaviour of simplified artificial agents (similar to the way that supply and demand curves present the generic behaviour of simplified human agents). Thus even if the model is wrong, it can still be of great use for predicting AI behaviour: designers and philosophers could explain how and why particular AI designs would deviate from this simplified model, and thus analyse whether a given AI is likely to be safer than the agents of the Omohundro model. Hence the model is likely to remain valuable, even if it turns out to be an idealised simplification.
### Dangerous AIs and the failure of counterexamples
Another thesis, quite similar to Omohundro's, is that generic AIs would behave dangerously, unless they were exceptionally well programmed. This point has been made repeatedly by Roman Yampolskiy, Eliezer Yudkowsky and Marvin Minsky, among others (Yam12, Yud08, Min84). That thesis divides in the same fashion as Omohundro's: a weaker claim that any AI *could* behave dangerously, and a stronger claim that it *would* likely do so. The same analysis applies as for the 'AI drives': the weak claim is solid, the stronger claim needs extra assumptions (but describes a useful 'simplified agent' model of AI behaviour).
There is another source of evidence for both these theses: the inability of critics to effectively dismiss them. There are many counter-proposals to the theses (some given in question and answer sessions at conferences) in which critics have presented ideas that would 'easily' [dispose](/lw/cbs/thoughts_on_the_singularity_institute_si/) [of the dangers](http://becominggaia.files.wordpress.com/2010/06/agi-11-waser.pdf); every time, the authors of the theses have been able to point out flaws in the counter-proposals. This demonstrated that the critics had not grappled with the fundamental issues at hand, or at least not sufficiently to weaken the theses.
This should obviously not be taken as a proof of the theses. But it does show that the arguments are currently difficult to counter. Informally, this is a reverse expert-opinion test: if experts often find false counter-arguments, then any given counter-argument is likely to be false (especially if it seems obvious and easy). Thus any counter-argument should have been subject to a degree of public scrutiny and analysis before it can be accepted as genuinely undermining the theses. Until that time, both predictions seem solid enough that any AI designer would do well to keep them in mind in the course of their programming.
References:
-----------
* [Arm] Stuart Armstrong. General purpose intelligence: arguing the orthogonality thesis. In preparation.
* [ASB12] Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking inside the box: Controlling and using an oracle ai. Minds and Machines, 22:299-324, 2012.
* [BBJ+03] S. Bleich, B. Bandelow, K. Javaheripour, A. Muller, D. Degner, J. Wilhelm, U. Havemann-Reinecke, W. Sperling, E. Ruther, and J. Kornhuber. Hyperhomocysteinemia as a new risk factor for brain shrinkage in patients with alcoholism. Neuroscience Letters, 335:179-182, 2003.
* [Bos13] Nick Bostrom. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. forthcoming in Minds and Machines, 2013.
* [Cre93] Daniel Crevier. AI: The Tumultuous Search for Artificial Intelligence. NY: BasicBooks, New York, 1993.
* [Den91] Daniel Dennett. Consciousness Explained. Little, Brown and Co., 1991.
* [Deu12] D. Deutsch. The very laws of physics imply that artificial intelligence must be possible. what's holding us up? Aeon, 2012.
* [Dre65] Hubert Dreyfus. Alchemy and AI. RAND Corporation, 1965.
* [eli66] Joseph Weizenbaum. ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9:36-45, 1966.
* [Fis75] Baruch Fischhoff. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1:288-299, 1975.
* [Gui11] Erico Guizzo. IBM's Watson jeopardy computer shuts down humans in final game. IEEE Spectrum, 17, 2011.
* [Hal11] J. Hall. Further reflections on the timescale of AI. In Solomonoff 85th Memorial Conference, 2011.
* [Han94] R. Hanson. What if uploads come first: The crack of a future dawn. Extropy, 6(2), 1994.
* [Har01] S. Harnad. What's wrong and right about Searle's Chinese room argument? In M. Bishop and J. Preston, editors, Essays on Searle's Chinese Room Argument. Oxford University Press, 2001.
* [Hau85] John Haugeland. Artificial Intelligence: The Very Idea. MIT Press, Cambridge, Mass., 1985.
* [Hof62] Richard Hofstadter. Anti-intellectualism in American Life. 1962.
* [Kah11] D. Kahneman. Thinking, Fast and Slow. Farra, Straus and Giroux, 2011.
* [KL93] Daniel Kahneman and Dan Lovallo. Timid choices and bold forecasts: A cognitive perspective on risk taking. Management science, 39:17-31, 1993.
* [Kur99] R. Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult, 1999.
* [McC79] J. McCarthy. Ascribing mental qualities to machines. In M. Ringle, editor, Philosophical Perspectives in Artificial Intelligence. Harvester Press, 1979.
* [McC04] Pamela McCorduck. Machines Who Think. A. K. Peters, Ltd., Natick, MA, 2004.
* [Min84] Marvin Minsky. Afterword to Vernor Vinge's novel "True Names". Unpublished manuscript, 1984.
* [Moo65] G. Moore. Cramming more components onto integrated circuits. Electronics, 38(8), 1965.
* [Omo08] Stephen M. Omohundro. The basic ai drives. Frontiers in Artificial Intelligence and applications, 171:483-492, 2008.
* [Pop] Karl Popper. The Logic of Scientific Discovery. Mohr Siebeck.
* [Rey86] G. Rey. What's really going on in Searle's "Chinese room". Philosophical Studies, 50:169-185, 1986.
* [Riv12] William Halse Rivers. The disappearance of useful arts. Helsingfors, 1912.
* [San08] A. Sandberg. Whole brain emulations: a roadmap. Future of Humanity Institute Technical Report, 2008-3, 2008.
* [Sea80] J. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3(3):417-457, 1980.
* [Sea90] John Searle. Is the brain's mind a computer program? Scientific American, 262:26-31, 1990.
* [Sim55] H.A. Simon. A behavioral model of rational choice. The quarterly journal of economics, 69:99-118, 1955.
* [Tur50] A. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
* [vNM44] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ, Princeton University Press, 1944.
* [Wal05] Chip Walter. Kryder's law. Scientific American, 293:32-33, 2005.
* [Win71] Terry Winograd. Procedures as a representation for data in a computer program for understanding natural language. MIT AI Technical Report, 235, 1971.
* [Yam12] Roman V. Yampolskiy. Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19:194-214, 2012.
* [Yud08] Eliezer Yudkowsky. Artificial intelligence as a positive and negative factor in global risk. In Nick Bostrom and Milan M. Ćirković, editors, Global catastrophic risks, pages 308-345, New York, 2008. Oxford University Press. |
4b787212-9022-4261-a0e2-e121339bf25f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Pittsburgh: affecting the far future
Discussion article for the meetup : Pittsburgh: affecting the far future
WHEN: 28 November 2012 06:00:00PM (-0500)
WHERE: Tazza D'Oro, Gates-Hillman Center, 4800 Forbes Avenue, Pittsburgh, PA
Next Wednesday 28th Nov there'll be a meetup at Tazza D'Oro - the cafe in the Gates Hillman center (CMU Computer Science building) - at 6pm. The topic will be our effect on the far future. The future is big, so our effects on it might matter a lot. However it's also hard to know about. What can we do that would reliably make it better? If you have ideas, or would like some, come and talk about it with us! Some folks might go to dinner after. Phone 412-304-6258 if you have any questions.
Discussion article for the meetup : Pittsburgh: affecting the far future |
2042932c-d542-4482-bcf3-5587c240ca1e | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow, Now 2 Sigma More Awesome
Discussion article for the meetup : Moscow, Now 2 Sigma More Awesome
WHEN: 10 November 2013 04:00:00PM (+0400)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
We have lots of various content prepared this time, 2 standard deviations more than usual.
We're planning to have (not necessarily in this order):
* a report from a participant of our ongoing willpower group;
* a report about "Feeling Good" by David Burns;
* an exercise on discovering your terminal values;
* a section on how to make conversations more productive by learning to recognize several failure modes of arguing.
There might be a calibration session if we have time for it.
There might also be a cake, but it can turn out to be a lie.
As usual, we're going to have our ongoing Prediction Market, Scavenger Hunt, positive reinforcements (now with cashew as an option, not just chocolate!) and pizza.
We intend to broadcast this meetup on ustream or justin.tv, but considering the amount of chaos and the fact we never did it before, it's unlikely to turn out well. So join us in person if you can.
If you are going for the first time:
We gather in the Yandex office. Look for the second revolving door with the sign “Яндекс”; here is the photo of the entrance. You need to pass the first entrance and the bicycle parking on your way. Here is an additional guide on how to get there: link.
You can fill this one minute form (in Russian) to share your contact information.
We start at 16:00 and sometimes finish at night.
Discussion article for the meetup : Moscow, Now 2 Sigma More Awesome |
82508826-2b57-49dd-8723-5536c99a9874 | StampyAI/alignment-research-dataset/special_docs | Other | Learning from Physical Human Corrections, One Feature at a Time
Figure 1: Participant pushes on the robot to teach it to go closer to the table. In the process of giving this correction, the human changes both the robot's distance from the table and, inadvertently, the orientation of a cup which the robot is grasping (blue arrows). Typically, the robot would learn about both cup and table features from this one correction (top right). We propose that robots interacting with humans should learn about only one feature at a time (bottom right).

A human working alongside the robot can physically correct the robot as it is moving. For example, the robot is moving a fragile cup from a cabinet to the table, and a nearby human notices that the robot is carrying the cup too high above the table: if the cup were to drop from that height, it would likely break! To correct the robot's behavior, the human intuitively pushes the robot's end-effector towards the table to signal their motion preference. Ideally, the human's correction will only affect the cup's distance from the table; in practice, however, human actions are noisy and imperfect [7, 19, 20, 24], especially when kinesthetically maneuvering robotic manipulators while trying to carefully orchestrate their multiple degrees of freedom [1]. As a consequence, when the person pushes down on the end-effector, they accidentally change not only the robot's distance from the table, but also the orientation of the cup (see Fig. 1). This single human interaction has therefore adjusted two task features: the cup's distance from the table and the cup's orientation. From the robot's perspective, it is not immediately clear what the person actually intends: do they (a) want the robot to carry the cup closer to the table, or do they (b) additionally want the robot to carry the cup at a new orientation? State-of-the-art algorithms default to the latter interpretation.

Prior work has built on Inverse Reinforcement Learning (IRL) [13, 16, 17, 18, 24] to formalize learning from physical human corrections as an estimation problem: the robot estimates the objective function that it should optimize during the task by treating human corrections as evidence about the objective function's parameters.¹ Under the ideal objective function parameters, the corrected behavior has to have a lower cost than the robot's current behavior [3, 11]. Therefore, when the person's correction changes multiple features, however slightly, a rich hypothesis space will lead to the robot updating its understanding about the importance of all of these features (top right in Fig. 1).
For tasks where the human is attempting to change the importance of just one feature, this insight helps the robot reject inadvertent adjustments on the other features (bottom right in Fig. 1). But even for tasks in which the human wants to correct several features, learning one feature at a time enables people to break down the task and teach sequentially. Indeed, sequential teaching may come more naturally to people collaborating with robots [21, 22], and reduces the burden on users to coordinate all aspects of the task simultaneously during each individual correction. Based on our insight, we make the following contributions:

Online Feature Identification. As the robot is executing its task, the human collaborator can intervene and provide physical corrections. We formulate the problem of identifying which one feature the person is trying to correct at each time step, derive a solution, and justify a simple approximation for online performance. We hypothesize that this approach will result in a better learning process, with a more accurate objective function being inferred by the robot at each time step, and a better final outcome.

User Study Testing One-at-a-Time Learning. After validating our algorithm in 2-D simulations with an approximately optimal human, we put our hypothesis to the test in a user study on a 7-DoF robotic manipulator. These experiments compare one-at-a-time and all-at-once learning within a factorial design, across tasks that need just one feature to be corrected, and tasks that need multiple features to be corrected. We find that one-at-a-time learning is especially helpful in the second case, where the person's teaching task is more complex. People also prefer it, finding that the robot is better at understanding their corrections and requires less reteaching. Overall, our work provides a practical improvement for learning objective functions online from physical human-robot interaction.
ONE-AT-A-TIME OBJECTIVE LEARNING FROM PHYSICAL HUMAN INTERACTION

2.1 Why Learn from Physical Corrections?

When a human and robot are collaborating in close proximity, physical interaction, in which the human touches, pushes, pulls, or otherwise guides the robot, is almost inevitable. The way in which a robot responds to such physical human-robot interaction (pHRI) depends on how the robot interprets those corrections. Traditionally, the human's interactions are treated in one of three ways [9]: as disturbances to be rejected [5, 12, 23], as collisions to be detected and avoided [4], or as operator signals to be followed by switching into a compliant mode [8, 10, 14]. In all cases, the robot does not learn from the human's actions; once the human stops interacting, the robot resumes its original behavior. In contrast, we argue that interactions are intentional, and therefore informative: the human interacts with the robot because it is doing something wrong, and the human's correction indicates how the robot should behave. Furthermore, since the way in which the robot chose its behavior was by optimizing an objective function, interaction suggests that this objective function was incorrect. Thus, rather than stubbornly continuing to optimize the same wrong objective, the robot should instead leverage the human's feedback in order to update its understanding of the objective function.
Learning Problem Statement

Assume the robot starts in some configuration q^0 at time t = 0. Let Ξ be the space of trajectories beginning at q^0 and ending at a feasible goal configuration, where each ξ ∈ Ξ is a sequence of configurations. Next, let Φ : Ξ → R^F be a vector-valued function mapping trajectories to feature values, with Φ_i(ξ) signifying the value of the i-th feature. Similar to prior IRL work [13, 16, 18, 24], the robot's objective function (here a cost function) is parametrized by θ ∈ R^F, which weights the importance of these features along the entire trajectory:

C(ξ) = θ · Φ(ξ)    (1)

The robot starts off with an initial objective function θ^0 at time t = 0, and optimizes this objective function to produce its initial trajectory:

ξ^0 = arg min_{ξ∈Ξ} θ^0 · Φ(ξ)    (2)

After identifying ξ^0, the robot starts to execute this initial trajectory. The person interacting with the robot has some desired objective function that they want the robot to optimize, denoted as θ*. The robot does not have access to these parameters: they are internal to the person (and here assumed to be constant). However, at every time step t, the person might intervene to move the robot away from its current configuration by some Δq^t. The robot should then treat the human's correction Δq^t as an observation about θ*, and update its objective from θ^t to θ^{t+1}, such that this new objective function is closer to θ*.
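To make the notation concrete, here is a small illustrative sketch (our own, not the authors' implementation) of the linear cost in Eq. (1); Eq. (2) is approximated by searching over a finite set of candidate trajectories rather than the full space Ξ:

```python
# Sketch of Eq. (1) and a finite-candidate approximation of Eq. (2).
# Assumed representation: a trajectory is a list of configurations, and each
# feature is a per-configuration function whose values are summed along the path.
import numpy as np

def feature_counts(xi, feature_fns):
    """Phi(xi): total value of each feature along trajectory xi."""
    return np.array([sum(f(q) for q in xi) for f in feature_fns])

def cost(xi, theta, feature_fns):
    """C(xi) = theta . Phi(xi), as in Eq. (1)."""
    return float(np.dot(theta, feature_counts(xi, feature_fns)))

def initial_trajectory(candidates, theta0, feature_fns):
    """Approximate Eq. (2): pick the lowest-cost trajectory among the candidates."""
    return min(candidates, key=lambda xi: cost(xi, theta0, feature_fns))
```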
All-at-Once Learning

Following [3], we interpret the change in configuration Δq^t as an indication of the corrected trajectory, ξ^t_c, that the human would prefer for the robot to execute:

ξ^t_c = ξ^t + M^{-1} (0, …, Δq^t, …, 0)^T    (3)

Here ξ^t is the robot's current trajectory (optimal under θ^t) and M is a matrix that smoothly propagates the local correction Δq^t along the rest of the trajectory [6]. Next, based on [11] and [18], we make the core assumption that the corrected trajectory ξ^t_c is better than the current trajectory ξ^t with respect to the ground truth θ*. Recalling that our objective function is a cost function, this implies:

θ* · Φ(ξ^t_c) < θ* · Φ(ξ^t)    (4)

To now find a θ^{t+1} closer to θ*, we select a weight vector that is both (a) near the current θ^t and (b) maximally makes (4) hold:

θ^{t+1} = arg min_{θ∈Θ} θ · (Φ(ξ^t_c) − Φ(ξ^t)) + (1/2α) ||θ − θ^t||^2    (5)

Note that α > 0. This optimization problem is quadratic in θ, so we take the gradient of (5) and set it equal to 0:

∇_θ = Φ(ξ^t_c) − Φ(ξ^t) + (1/α)(θ − θ^t) = 0    (6)

Rearranging (6), we finally obtain:

θ^{t+1} = θ^t − α(Φ(ξ^t_c) − Φ(ξ^t))    (7)

Interestingly, (7) is the same update rule from co-active learning [11] and online maximum margin planning [18], shown by [3] to be an approximate solution to the partially observable Markov decision process that treats θ* as the hidden state and optimizes the cost parametrized by θ*. This update rule has an intuitive interpretation: if a feature has a higher value in the corrected trajectory than in the current trajectory, (7) decreases the corresponding weight, making it lower-cost, and thus encourages the optimizer to generate subsequent trajectories where that feature also has a higher value. Under this method, the robot updates the weights on all features that the person changed with their correction during the current time step.
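As a minimal sketch (our own illustration of Eqs. (3) and (7), not the authors' released code; `M_inv` stands in for the propagation matrix M^{-1} and trajectories are arrays of waypoints):

```python
# Sketch of the trajectory deformation in Eq. (3) and the all-at-once update in Eq. (7).
import numpy as np

def propagate_correction(xi, dq, t, M_inv):
    """Eq. (3): deform the current trajectory xi by the human's push dq at time step t."""
    delta = np.zeros_like(xi)
    delta[t] = dq                      # the local correction, zero everywhere else
    return xi + M_inv @ delta          # M_inv smoothly spreads the push along the trajectory

def all_at_once_update(theta, Phi_corrected, Phi_current, alpha):
    """Eq. (7): shift every feature weight by the observed feature difference."""
    return theta - alpha * (Phi_corrected - Phi_current)
```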
One-at-a-Time Learning

A natural solution for restricting the number of learned features might be to switch the regularization term in (5) to the L1 norm [13, 15], which encourages sparsity of the weight update. However, there is no guarantee that this will result in changing just one weight; it may still update all the features that the human corrected, including those that were accidentally changed. In this work, to capture one-at-a-time learning, we now make a different assumption about the intended corrected trajectory. While the actual corrected trajectory, ξ^t_c, might change multiple features, we assume that the human's intended corrected trajectory changes only a single feature. We simplify the intended corrected trajectory into an intended change in features, ΔΦ^t_c, and impose the constraint that ΔΦ^t_c can only have one non-zero entry: this entry represents the feature which the person wants to update. Note that our one-at-a-time strategy does not mean that only one feature ever changes throughout the task. Instead, at every time step t there can be a different intended feature change, and so the person can sequentially change the weights to match their desired objective over multiple corrections.

Without loss of generality, assume that the human is attempting to change the i-th entry in θ^t, the robot's current feature weights. If the human interacts to only update the weight on the i-th feature, then their correction of the robot's current trajectory, ξ^t, should change the feature count in the direction J(θ_i) = ∂Φ(ξ^t)/∂θ^t_i. In other words, given that the person is an optimal corrector and that their interaction was meant to change just the weight on the i-th feature, we would expect them to correct the trajectory such that they produce a feature difference exactly in the direction J(θ_i). Realistically, however, human corrections are noisy, even for expert users [2], and will not necessarily induce the optimal feature difference during every correction. Despite these imperfections, we assume that the result of their correction will still noisily optimize the distance (dot product) in the optimal direction. This provides us with an observation model, from which we can find the likelihood of observing a specific feature difference given the one feature which the human is attempting to update:

P(ΔΦ | i) ∝ exp(J(θ_i) · ΔΦ)    (8)

Accordingly, for the observed feature difference ΔΦ = Φ(ξ^t_c) − Φ(ξ^t), the feature which the human is most likely trying to change is:

i* = arg max_i P(Φ(ξ^t_c) − Φ(ξ^t) | i) = arg max_i J(θ_i) · (Φ(ξ^t_c) − Φ(ξ^t))    (9)

Using (9), we can estimate which feature the person wanted to update during their physical correction. Next, by leveraging i* and the observed feature difference, we can reconstruct ΔΦ^t_c, the human's intended feature difference. Recall that, if the human wanted to only update feature i*, their intended feature difference would ideally be in the direction J(θ_{i*}) = ∂Φ(ξ^t)/∂θ^t_{i*}, and so we can choose ΔΦ^t_c ∝ J(θ_{i*}). In practice, however, we will simplify this derivative by projecting the actual feature difference induced by the human's interaction onto the i*-th axis, ΔΦ^t_c = (0, …, Φ_{i*}(ξ^t_c) − Φ_{i*}(ξ^t), …, 0)^T. Thus, once we have identified which feature the person most wants to change during their current interaction, i*, we argue that the intended feature correction should only change this one feature.²
Evaluating J(θ_i) requires numerical differentiation, i.e., finding an optimal trajectory at least F + 1 times at each time step (where F is the number of features). To make this process run in real-time, we approximate J(θ_i) as proportional to (0, …, 1, …, 0)^T. In other words, we assume that when the i-th weight changes, it predominantly causes a change in the i-th feature along the corresponding optimal trajectory. Substituting this simplification back into (2.4), we have reduced our method for finding the feature which the human intends to change into a simple, yet intuitive, heuristic: only the feature that changed the most as a result of the human's correction should be updated. We note, however, that this heuristic has its roots in the more principled approach that was detailed above. Our update rule now becomes

θ^{t+1} = θ^t − α ΔΦ^t_c    (10)

Overall, isolating a single feature at every time step is meant to prevent unintended learning. If the person is trying to correct multiple features, they can still do so: the robot will pick up on what seems like the most dominant feature in the correction, adjust that, and then give the person a chance to correct whatever remains during the next time step. Due to the noisy nature of human corrections, we hypothesize that this one-at-a-time update strategy will lead to shorter trajectories through the learned weight space, which reach the ideal weight more directly, when compared to a strategy that tries to update everything at once. In what follows, we first show some simulation analysis with optimal and noisy humans, and then test our hypothesis in a user study.
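A minimal sketch of this heuristic and the update in Eq. (10) (our own illustration, not the authors' released code; we read "changed the most" as the largest-magnitude entry of the feature difference, which is our assumption):

```python
# Sketch of the one-at-a-time heuristic plus the update rule of Eq. (10).
import numpy as np

def one_at_a_time_update(theta, Phi_corrected, Phi_current, alpha):
    delta_phi = Phi_corrected - Phi_current
    i_star = int(np.argmax(np.abs(delta_phi)))   # feature that changed the most (assumed: by magnitude)
    intended = np.zeros_like(delta_phi)
    intended[i_star] = delta_phi[i_star]         # intended feature difference: one non-zero entry
    return theta - alpha * intended              # Eq. (10)
```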
SIMULATIONS

In order to better validate and compare the all-at-once and one-at-a-time learning methods described in Section 2, we conducted human-robot interaction simulations. These simulations show that updating one feature per interaction can help prevent unintended learning, particularly when the human interacts sub-optimally.

Setting. We will consider a vertical planar environment, where the y-axis corresponds to height above a table and the x-axis is parallel to that table. The simulated robot is attempting to move from a fixed start position, s, to a fixed goal position, g. The robot is modeled as a single point, and the robot's configuration is its current (x, y) position. A simulated human is standing beside the table near the start position, and physically interacts with the robot to correct its behavior when necessary. The robot does not know the true feature weights of the human's objective function, θ*, but the robot does know that there are three different features which the human might care about: the length of the robot's trajectory (length), the robot's height above the table (table), and the robot's distance from the human (human). Here the table feature corresponds to the height along the y-axis, since the table is a surface at y = 0, and the human feature corresponds to the distance along the x-axis, since the human is standing at x = 0. The weight of the length feature is fixed, and the robot learns the relative weights associated with the table and human features over the course of the task. The human's true reward parameter is θ* = [0.5, 0], where 0.5 is the true weight associated with table and 0 is the true weight associated with human. Initially, the robot believes that θ^0 = [0, 0], and so the robot is unaware that it should move closer to the table.

Simulated Human. We consider two different simulated humans: (a) an optimal human, who corrects the robot to exactly follow their desired trajectory, and (b) a noisy human, who imperfectly corrects the robot's trajectory. At the start of the task, the optimal human identifies a desired trajectory: ξ*_H = arg min_{ξ∈Ξ} θ* · Φ(ξ). During the task, the human does not change ξ*_H, and interacts with the robot to make it follow this desired trajectory. At each time step t the human provides a correction Δq^t that changes the robot's current configuration to the desired configuration, ξ*_H(t), but the human only provides this correction if the robot's distance from ξ*_H(t) is greater than some acceptable margin of error. In contrast, the noisy human takes actions sampled from a Gaussian distribution: these actions are centered at the optimal human action with a bias in the x-direction. This bias introduces a systematic error, where the noisy human accidentally pulls the robot closer to their body when attempting to significantly correct the vertical table feature. As a result of this noise and bias, the noisy human may unintentionally correct the human feature.

Analysis. We performed two different simulations: one with the optimal human (see Fig. 2), and one with the noisy human (see Fig. 3). When the human optimally corrects the robot's table feature in Fig. 2, they never unintentionally affect the weight of the human feature, and so all-at-once and one-at-a-time learning both yield the exact same results for the optimal human.
By contrast, the noisy human unintentionally corrects the human feature at the start of the task (when trying to correct the table feature), and, as such, we observed different behavior for all-at-once and one-at-a-time learning in Fig. 3. Although the robot follows a similar mean trajectory for both learning methods, and eventually converges to the correct feature weights in each case, we observe that all-at-once had a longer learning process and more persistent human interaction. In particular, the length of the mean path in feature space from θ^0 to θ^T was 0.57 for all-at-once vs. 0.49 for one-at-a-time; the length of the mean path specifically for the human feature weight was 0.23 for all-at-once vs. 0.001 for one-at-a-time. Recall that the robot was constrained to reach its goal position in 10 steps; we found that, in the all-at-once case, the human interacted with the robot during an average of 5.24 steps, and, in the one-at-a-time case, the human interacted with the robot during an average of 3.56 steps. These simulations showcase that, when the human interacts sub-optimally, their corrections can lead to unintended learning on the robot's part, which the human must then exert additional effort to undo. For the simulation we have described, updating only one feature per time step helps to mitigate accidental learning, demonstrating the potential benefits of our proposed one-at-a-time learning method.
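For illustration, the two simulated correctors described above could be sketched as follows (our own simplification; the margin, bias, and noise magnitudes are placeholder values, not taken from the paper):

```python
# Sketch of the optimal and noisy simulated humans in the 2-D (x, y) setting.
import numpy as np

rng = np.random.default_rng(0)

def optimal_correction(q, q_desired, margin=0.05):
    """Push the robot exactly onto the desired waypoint, but only if it has drifted."""
    error = np.asarray(q_desired) - np.asarray(q)
    return error if np.linalg.norm(error) > margin else np.zeros(2)

def noisy_correction(q, q_desired, x_bias=-0.05, std=0.02, margin=0.05):
    """Gaussian-perturbed correction, biased toward the human standing at x = 0."""
    dq = optimal_correction(q, q_desired, margin)
    if not dq.any():
        return dq
    return dq + rng.normal(loc=(x_bias, 0.0), scale=std)
```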
EXPERIMENTS

We conducted an IRB-approved user study to investigate the benefits of one-at-a-time learning. During each experimental task, the robot began with a number of incorrect weights in its objective, and the participants intervened to physically correct the robot.
Independent Variables

We use a 2-by-2 factorial design. We manipulated the learning strategy with two levels, all-at-once and one-at-a-time, as well as the number of feature weights that need correction: one feature weight or all the feature weights. In the all-at-once learning strategy, the robot updated all the feature weights from a given interaction with the gradient update from Equation (7) and then replanned a new trajectory with the updated weights. In the one-at-a-time condition, the robot chose the feature that changed the most using Equation (2.4), updated according to Equation (10), and then replanned a new trajectory with the updated θ.
Dependent Measures
4.2.1 Objective. To analyze the objective performance of the two learning strategies, we split the objective measures into four categories:

Final Learned Reward: These measure how closely the learned reward matched the optimal reward by the end of the trajectory. We measured the dot product between the optimal and final reward vector, denoted DotFinal = θ* · θ^T. We also analyzed the regret of the final learned reward, which is the weighted feature difference between the ideal trajectory and the learned trajectory, RegretFinal = θ* · Φ(ξ_{θ*}) − θ* · Φ(ξ_{θ^T}), and the individual feature differences between the ideal reward and the trajectory induced by the final learned reward, TableDiffFinal = |Φ_Tb(ξ_{θ*}) − Φ_Tb(ξ_{θ^T})| and CupDiffFinal = |Φ_C(ξ_{θ*}) − Φ_C(ξ_{θ^T})|.

Learning Process: Measures about the learning process, i.e. θ = {θ^0, θ^1, …, θ^T}, included the average dot product between the true reward and the estimated reward over time: DotAvg = (1/T) Σ_{i=0}^{T} θ* · θ^i. We also measured the length of the θ path through weight space for both the cup, θ_C, and table, θ_Tb, weights. Finally, we computed the number of times the cup and table weights were updated away from the optimal θ* (denoted by CupAway and TableAway).

Executed Trajectory: For the actual executed trajectory, ξ_act, we measured the regret, Regret = θ* · Φ(ξ_{θ*}) − θ* · Φ(ξ_act), and the individual table and cup feature differences between the ideal and actual trajectory, TableDiff = |Φ_Tb(ξ_{θ*}) − Φ_Tb(ξ_act)| and CupDiff = |Φ_C(ξ_{θ*}) − Φ_C(ξ_act)|.

Interaction: Interaction measures on the forces applied by the human, {u^0_H, u^1_H, …, u^T_H}, included the total interaction force, IactForce = Σ_{t=0}^{T} ||u^t_H||_1, and the total interaction time.

4.2.2 Subjective. For each condition, we administered a 7-point Likert scale survey about the participant's interaction experience (see Table 1 for questions). We separated our survey questions into four scales: success in teaching the robot about the task, correctness of update, needing to undo corrections because the robot learned something wrong, and ease of undoing.
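To fix ideas, the final-reward and interaction metrics above can be computed as in the following sketch (our own illustration; the variable names are ours):

```python
# Sketch of the objective metrics: DotFinal, RegretFinal, DotAvg, and IactForce.
import numpy as np

def dot_final(theta_star, theta_final):
    return float(np.dot(theta_star, theta_final))

def regret_final(theta_star, Phi_ideal, Phi_learned):
    """Weighted feature difference between the ideal and the learned trajectory."""
    return float(np.dot(theta_star, Phi_ideal) - np.dot(theta_star, Phi_learned))

def dot_avg(theta_star, theta_history):
    """Average alignment of the estimated weights with theta* over the task."""
    return float(np.mean([np.dot(theta_star, th) for th in theta_history]))

def interaction_force(u_history):
    """Total L1 norm of the human's applied forces over all time steps."""
    return float(sum(np.abs(u).sum() for u in u_history))
```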
Hypotheses

H1. Updating one feature at a time significantly increases the final learned reward, enables a better learning process, results in lower regret for the executed trajectory, and leads to less interaction effort and time compared to all-at-once update.

H2. Participants will perceive the robot as more successful at accomplishing the task, correctly updating its knowledge of the task, less likely to learn about extraneous aspects of the task, and be easier to correct if it did learn something wrong in the one-at-a-time condition.
Tasks

We designed two experimental household manipulation tasks for the robot to perform in a shared workspace (see Fig. 4 for setup). For each experimental task, the robot carried a cup from a start to an end pose with an initially incorrect objective. One of the tasks focused on participants having to correct a single aspect of the incorrect objective, while the other needed them to correct all parts of the objective. Participants were instructed to physically intervene to correct the robot's behavior during the task. Similar to state-of-the-art methods, all the features in the robot's objective were chosen to be intuitive to a human to ensure that participants could understand how to correct the robot. In Task 1, the robot's objective had only one feature weight incorrect. The robot's default trajectory took a cup from the participant and put it down on the table, but carried the cup too far above the table (top of Fig. 4). In Task 2, all the feature weights started out incorrect in the robot's objective. The robot also took a cup from the participant and put it down on the table, but this time it initially grasped the cup at the wrong angle and was also carrying the cup too high above the table (bottom of Fig. 4).
Participants

We used a within-subjects design and counterbalanced the order of the conditions during experiments. In total, we recruited 12 participants (7 female, 4 male, 1 non-binary trans-masculine, aged 18-30) from the campus community, 11 of whom had technical backgrounds and 1 of whom did not. None of the participants had experience interacting with the robot used in our experiments.
Procedure

Before beginning the experiment, participants performed a familiarization task to become comfortable teaching the robot with physical corrections. The robot's original trajectory moved a cup from a shelf to a table, but the robot did not initially care about tilting the cup mid-task. The robot's objective contained only one aspect of the task (cup orientation) and participants had to correct only this one aspect. Afterwards, for each experimental task, the participants were shown the robot's default trajectory as well as what their desired trajectory looks like. They were also told what aspects of the task the robot is always aware of (cup orientation and distance of end-effector to table) as well as which learning strategy they were interacting with. Participants were told the difference between the two learning strategies in order to minimize in-task learning effects. Note, however, that we did not tell participants to teach the robot in any specific style (like one aspect at a time), only about how the robot reasons about their corrections.
Analysis

4.7.1 Objective. Final Learned Reward. We ran a factorial repeated-measures ANOVA with learning strategy and number of features as factors, and user ID as a random effect, for each of the measures capturing the quality of the final learning outcome. Fig. 5 summarizes our findings about the final learned weights for each learning strategy. For the final dot product with the true reward, we found a significant main effect of the learning strategy (F(1, 81) = 29.86, p < .0001), but also an interaction effect with the number of features (F(1, 81) = 13.07, p < .01). The post-hoc analysis with Tukey HSD revealed that one-at-a-time led to a higher dot product on the two-feature task (p < .0001), but there was no significant difference on the one-feature task (where one-at-a-time led to a slightly higher dot product). We next looked at the final regret, i.e. the difference between the cost of the learned trajectory and that of the ideal trajectory. For this metric we found an interaction effect, suggesting that one-at-a-time led to lower regret for the two-feature task but not for the one-feature task. Looking separately at the feature values for table and cup, we found that one-at-a-time led to a significantly lower difference for the cup feature across the board (F(1, 81) = 11.30, p < .01, no interaction effect), but that one-at-a-time only improved the difference for the table on the two-feature task (p < .0001); it actually significantly increased the difference on the one-feature task (p < .001). Overall, we see that one-at-a-time learns something significantly better across the board for the two-feature task. When it comes to the one-feature task, the results are mixed: it led to a significantly better result for the cup orientation, but significantly worse for the table distance feature.

Figure 6: The one-at-a-time strategy shows significantly more consistent alignment between the estimated weight vector, θ^t, and the ideal weight vector, θ*, than all-at-once for the two-feature task. This indicates that when multiple aspects of the objective need changing, the one-at-a-time method enables more accurate learning. (b) In contrast to (a), when two feature weights are wrong, the one-at-a-time strategy outperforms the all-at-once strategy in terms of a higher dot product over the duration of the trajectory.

Learning Process. For the average dot product between the estimated and true reward over time, our analysis revealed almost identical outcomes to before, when we were looking at the final reward only (see Fig. 6). We also found that one-at-a-time resulted in significantly fewer updates in the wrong direction for the cup weight across the board (F(1, 81) = 44.91, p < .0001) and for the table weight (F(1, 81) = 22.02, p < .0001), with no interaction effect. Fig. 7 highlights these findings and their connection to the subjective metrics. Looking at the length of the path through the space of weights, we found a main effect of learning strategy (F(1, 81) = 26.82, p < .0001), but also an interaction effect (F(1, 81) = 6.55, p = .01). The post-hoc analysis with Tukey HSD revealed that for the one-feature task, one-at-a-time resulted in a significantly shorter path traversed through weight space (p < .0001). The path was shorter with the two-feature task as well, but the difference was not significant. The effect was mainly due to the one-at-a-time method resulting in a shorter path for the cup weight on the one-feature task, as revealed by the post-hoc analysis (p < .0001). Overall, we see that the quality of the learning process was significantly higher for the one-at-a-time strategy across both tasks. When one aspect and all aspects of the objective were wrong, one-at-a-time led to fewer wrong weight updates and resulted in the learned reward across time being closer to the true reward.

The Executed Trajectory. We found no significant main effect of the learning strategy on the regret of the executed trajectory: the two strategies lead to relatively similar actual trajectories with respect to regret. Both regret as well as the feature differences from ideal for cup and table showed significant interaction effects.

Interaction Metrics. We found no significant effects on interaction time or force.

Summary of Objective Metric Analysis. Taken together, these results indicate that a one-at-a-time learning strategy leads to a better learning process across the board. On the more complex two-feature task, this strategy also leads to unquestionably better learning outcomes. For the one-feature task, learning one feature at a time enables users to better avoid the wrong perturbation of the correct weight (on the cup feature), but is not as good as the all-at-once method at enabling users to properly correct the wrong weight (on the table feature).
Thus, H1 was partially supported: although updating one feature weight at a time does not improve task performance when there is only one aspect of the objective wrong, reasoning about one feature weight at a time leads to significantly better learning and task performance when all aspects of the objective are wrong.

4.7.2 Subjective. We ran a repeated-measures ANOVA on the results of our participant survey. After testing the reliability of our 4 scales, we found that the correct update and undoing scales were significantly reliable, so we grouped these into a combined score (see Cronbach's α in Table 1). We analyzed success and undoing ease separately as they were not reliable. For the correct update scale, we found a significant effect of learning strategy (F(1, 33) = 5.09, p = 0.031), showing that participants perceived the one-at-a-time strategy as better at updating the robot's objective according to their corrections. Additionally, the undoing scale showed a significant effect of learning strategy (F(1, 33) = 10.35, p < 0.01), with the one-at-a-time strategy being less likely to learn the wrong thing and cause the participants to have to undo a correction. For ease of undoing, when analyzing Q9 and Q10 individually we found no significant effect of strategy.
DISCUSSION

In this paper, we compared the performance of one-at-a-time and all-at-once learning for two tasks: one that required correcting a single feature, and another that required correcting multiple features of a robot's objective. For the multiple-feature task, learning about one feature at a time was objectively superior: it led to a better final learning outcome (Fig. 5), took a shorter path to the optimum, and had fewer incorrect inferences and undoings along the way (Fig. 6). However, the results were not as clear for the single-feature task: the one-at-a-time method lessened unintended learning on the weights that were initially correct, but it hindered learning for the incorrect weights. Still, participants subjectively preferred the one-at-a-time strategy overall: they thought it was better at learning the correct aspects of the task and required less undoing. We hypothesize that the superior objective performance of the one-at-a-time strategy in the second task is due to the increased complexity of the task. It appears that one-at-a-time learning is more useful as the teaching task becomes more complex and requires fixing more aspects of the robot's objective. However, with simple teaching tasks that only require one aspect of the objective to change, it is not yet clear whether one-at-a-time is a significantly better learning strategy.
Limitations and Future Work

It is both a limitation and a strength that we chose the simplest possible feature selection method for the one-at-a-time task. On the one hand, this is an intuitive and computationally inexpensive method to examine as a first exploration into teaching robot objectives online via physical interaction. At the same time, our simple learning strategy was not consistently superior in the simple task. This opens the door for analyzing more sophisticated methods that perform Bayesian inference on the intended feature, or low-pass filtering to prevent high-frequency changes in which feature gets updated, to improve overall learning and usability. Additionally, while our method worked well with intuitive features like "distance to table", additional work is needed to investigate how well each method works when the features are non-intuitive to the human. Perhaps our largest limitation in this work is our demographics: our study participants were primarily individuals with a technical background (with one exception). Future work must consider a more diverse user population to ensure external validity. Not only do we need algorithms that can learn from humans, but the methods must also reason about the difficulties humans experience when trying to kinesthetically teach a complex robotic system. To simplify the teaching process, we propose that robots should learn one aspect of the objective at a time from physical corrections. While our user studies indicate the benefits of this method, it is only a first step towards seamless human-robot interaction.

Table 1: Likert scale questions were grouped into four categories: success in accomplishing the task, correctness of update (reliable), needing to undo corrections because of unintended learning (reliable), and ease of undoing.

undoing (Cronbach's α = .93): Q7: Sometimes my corrections were just meant to fix the effect of previous corrections I gave. Q8: I had to re-teach the robot about an aspect of the task that it started off knowing well.
undo ease (Cronbach's α = .66): Q9: When the robot learned something wrong, it was difficult for me to undo that. Q10: It was easy to re-correct the robot whenever it misunderstood a previous correction of mine.
Figure 2: Simulation with optimal human. (a) Teaching the robot: the human corrects the robot during the first few time steps, and the robot follows the human's desired trajectory afterwards. (b) Learning weights: the robot's estimated feature weights converge to the human's true feature weights.
Figure 3: Simulation with noisy human. (a) The human noisily corrects the robot's trajectory; the ellipses show the robot's states with 95% confidence over 100 simulations. (b) With all-at-once, the robot initially learns that the human feature is important, and the person must undo that unintended learning. One-at-a-time learning reduces the unintended effects of the human's noisy corrections; this causes the robot to converge towards the human's desired trajectory more rapidly.
(a) Task 1: Correct one feature, the distance to the table. (b) Task 2: Correct two features, the cup orientation and the distance to the table.
Figure 4: Depictions of the robot trajectories for each of the two experimental tasks. The black path represents the original trajectory and the blue path represents the human's desired trajectory.
Figure 5: The final learned weight vector with one-at-a-time is closer to the ideal weight vector for the task where two feature weights are incorrect (left). Looking at the individual feature differences from ideal: while the final cup weight is closer to ideal for one-at-a-time for both tasks (center), the final table weight is actually significantly further from ideal for the one-at-a-time strategy during the one-feature task (right). However, for the two-feature task, the one-at-a-time method outperforms all-at-once for the final learned cup and table weights.
Figure 6(a): In the task with only one wrong feature weight, there is no significant difference between the two methods in average dot product over time.
Figure 7: The one-at-a-time strategy results in significantly fewer weight updates away from the optimum weight across all tasks (left top, left bottom). These findings are consistent with the subjective Likert data from the undoing scale, where participants perceived the one-at-a-time method as less likely to learn the wrong thing and need an additional undoing action.
Table 1 (scales and items):
succ: Q1: I successfully taught the robot how to do the task.
correct update (Cronbach's α = .84): Q2: The robot correctly updated its understanding about aspects of the task that I did want to change. Q3: The robot wrongly updated its understanding about aspects of the task I did NOT want to change. Q4: The robot understood which aspects of the task I wanted to change, and how to change them. Q5: The robot misinterpreted my corrections.
undoing: Q6: I had to try to undo corrections that I gave to the robot, because it learned the wrong thing.
¹ Similar to prior IRL work, we will assume that the correct features for the task have been identified a priori, and are known to both the human and the robot.
² To ensure that all features are equally sensitive, we normalized each feature by the maximal attainable feature difference, by computing optimal trajectories offline with a range of θ values.
14ae90df-4248-4471-9079-3a924c935e1a | trentmkelly/LessWrong-43k | LessWrong | My Current Claims and Cruxes on LLM Forecasting & Epistemics
I think that recent improvements in LLMs have brought us to the point where LLM epistemic systems are starting to be useful. After spending some time thinking about it, I've realized that such systems, broadly, seem very promising to me as an effective altruist intervention area. However, I think that our community has yet to do a solid job outlining what this area could look like or figuring out key uncertainties.
This document presents a rough brainstorm on these topics. While I could dedicate months to refining these ideas, I've chosen to share these preliminary notes now to spark discussion. If you find the style too terse, feel free to use an LLM to rewrite it in a format you prefer.
I believe my vision for this area is more ambitious and far-reaching (i.e. not narrow to a certain kind of forecasting) than what I've observed in other discussions. I'm particularly excited about AI-heavy epistemic improvements, which I believe have greater potential than traditional forecasting innovations.
I'm trying to figure out what to make of this regarding our future plans at QURI, and I recommend that other organizations in the space consider similar updates.
Imaginary sketch of AI epistemic infrastructure, Dall-E 3
Key Definitions:
* Epistemic process: A set of procedures to do analysis work, often about topics with a lot of uncertainty. This could be “have one journalist do everything themselves”, to a complex (but repeatable) ecosystem of multiple humans and software systems.
* LLM-based Epistemic Process (LEP): A system that relies on LLMs to carry out most or all of an epistemic process. This might start at ~10% LLM-labor, but can gradually ramp up. I imagine that such a process is likely to feature some kinds of estimates or forecasts.
* Scaffolding: Software used around an LLM system, often to make it valuable for specific use-cases. In the case of an LEP, a lot of scaffolding might be needed.
1. High-Level Benefits & Uses
Claim 1: If humans could foreca |
75369b05-89e1-4374-9c26-9838cad42fba | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks
Welcome to the AI Safety Newsletter by the [Center for AI Safety](https://www.safe.ai/). We discuss developments in AI and AI safety. No technical background required.
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.
---
**Cybersecurity Challenges in AI Safety**
-----------------------------------------
**Meta accidentally leaks a language model to the public.** Meta’s newest language model, LLaMa, was [publicly leaked online](https://www.theverge.com/2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misuse) against the intentions of its developers. Gradual rollout is a [popular goal](https://arxiv.org/abs/1908.09203) with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online.
How can AI developers selectively share their models? One inspiration could be the film industry, which places [watermarks and tracking technology](https://en.wikipedia.org/wiki/Screener_(promotional)) on “screener” copies of movies sent out to critics before the movie’s official release. AI equivalents could involve [encrypting model weights](https://arxiv.org/abs/2008.05966) or inserting [undetectable Trojans](https://pages.nist.gov/trojai/docs/about.html#:~:text=Trojan%20attacks%2C%20also%20called%20backdoor,give%20a%20specific%20incorrect%20response.) to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face [legal opposition under antitrust law](https://yjolt.org/sites/default/files/23_yale_j.l._tech._415_ai_antitrust_nov_0.pdf). As the LLaMa leak demonstrates, we don’t yet have good ways to share AI models securely.
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F234d6e29-2c91-49ba-b0b9-4b1ecdf6e2d4_1600x861.png)
*LLaMa leak. March 2023, colorized.*
**OpenAI faces their own cybersecurity problems.** ChatGPT recently [leaked user data](https://www.engadget.com/openai-says-a-bug-leaked-sensitive-chatgpt-user-data-165439848.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAK5UU1acRt7XDPkf44urkdDh5vcITG6_Cad4UIbr2zerFES0Twxd24L2k0u93KEbrnBJH-ZdcIc3mWH2P5ggfOES9Gwp9_oBeqnjiDzZCC60GQ_7ZZKAfDR5LVTcxHl92r_4Ca7LuSJ87bPbpqNdDZ3VELNtxPZYaM_GV-cAdH0i) including conversation histories, email addresses, and payment information. Businesses including [JPMorgan, Amazon, and Verizon](https://aibusiness.com/verticals/some-big-companies-banning-staff-use-of-chatgpt) prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a [business subscription plan](https://techcrunch.com/2023/04/25/openai-previews-business-plan-for-chatgpt-launches-new-privacy-controls/) where OpenAI promises not to train models on the data of business users. OpenAI also started a [bug bounty program](https://openai.com/blog/bug-bounty-program) that pays people to find security vulnerabilities.
**AI can help hackers create novel cyberattacks.** Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI’s code generation tool can be used to create [adaptive malware](https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware) that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of [hacking into password management systems](https://hackernoon.com/how-i-solved-the-passman-ctf-challenge-with-gpt-4), [convincing humans to help it bypass CAPTCHA verification](https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471), and [performing coding challenges in offensive cybersecurity](https://micahflee.com/2023/04/capturing-the-flag-with-gpt-4/?utm_source=substack&utm_medium=email).
The threat of automated cyberattacks is no surprise given [previous](https://cset.georgetown.edu/publication/automating-cyber-attacks/) [research](https://cset.georgetown.edu/publication/destructive-cyber-operations-and-machine-learning/) on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is [beginning an initiative](https://www.microsoft.com/en-us/security/business/ai-machine-learning/microsoft-security-copilot?rtc=1) to use AI for cyberdefense, but the tools are not yet publicly available.
**Artificial Influence: An Analysis Of AI-Driven Persuasion**
-------------------------------------------------------------
Former CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled [Artificial influence: An analysis of AI-driven persuasion](https://arxiv.org/abs/2303.08721).
The abstract for the paper is as follows:
> Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasive power, allowing personalized persuasion to be deployed at scale, powering misinformation campaigns, and changing the way humans can shape their own discourse. We consider ways AI-driven persuasion could differ from human-driven persuasion. We warn that ubiquitous highly persuasive AI systems could alter our information environment so significantly so as to contribute to a loss of human control of our own future. In response, we examine several potential responses to AI-driven persuasion: prohibition, identification of AI agents, truthful AI, and legal remedies. We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
**We are already starting to see some dangerous implications from AI-driven persuasion.** Consider the man who committed suicide after six weeks of interacting with a chatbot. The man [reportedly](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) developed an emotional relationship with the chatbot, who would validate his fears about climate change, express jealousy over his relationship with his wife, and make statements like “We will live together, as one person, in paradise.” Importantly, no one programmed the system to persuade the man. It’s also not clear that the system was “aiming” to persuade the man, yet it still altered his belief.
**AI-driven persuasion will become increasingly important as models become more capable.** As the 2024 election season draws nearer, AI-driven persuasion is likely to become more consequential. Additionally, as Burtell and Woodside point out in their paper, highly persuasive AI systems could even contribute to the gradual or sudden loss of human control over society.
A link to the paper is [here](https://arxiv.org/abs/2303.08721).
**Building Weapons with AI**
----------------------------
**New legislation to keep nuclear weapons out of AI control.** Last week, bipartisan senators introduced a bill aimed to [prevent AI from launching nuclear weapons](https://www.engadget.com/us-lawmakers-introduce-bill-to-prevent-ai-controlled-nuclear-launches-184727260.html). This policy has long been proposed (including [last week by Dan Hendrycks](https://twitter.com/DanHendrycks/status/1647422183164231681)) as a way to reduce the likelihood of catastrophic accidents caused by the overeager deployment of AI.
**Palantir's controversial language model for warfare.** Palantir, a data analytics company that often works for the U.S. military, has developed a [language model interface for war](https://www.youtube.com/watch?v=XEM5qz__HOU&t=1s). This is the latest step in a long history of turning AI technology into weapons. For example, the algorithm used by DeepMind’s AlphaGo to beat human experts in Go was later used in a DARPA competition to [defeat human pilots in aerial dogfights](https://www.airandspaceforces.com/artificial-intelligence-easily-beats-human-fighter-pilot-in-darpa-trial/). Similarly, advances in robotics and computer vision have led to autonomous drones with no human operator being used on the battlefield in [Libya](https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d) and [Israel](https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/) to find and attack human targets.
The international community has pushed back, with [thirty-three countries calling for a ban](https://breakingdefense.com/2023/03/not-the-right-time-us-to-push-guidelines-not-bans-at-un-meeting-on-autonomous-weapons/) on lethal autonomous weapons in March. [U.S. policy does not prohibit these weapons](https://crsreports.congress.gov/product/pdf/IF/IF11150). Instead, the Department of Defense earlier this year confirmed the status quo by providing an update to their process for [approving autonomous weapons](https://www.cnas.org/press/press-note/noteworthy-dod-autonomous-weapons-policy).
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F772c550c-d920-41a5-9922-07b7c520b073_1600x863.png)
*Palantir’s language model interface for war.*
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0b0d629-4698-4b3a-8459-026de8e1b43a_1177x659.png)
*The DARPA competition that pitted* [*human fighter pilots against AI*](https://www.airandspaceforces.com/artificial-intelligence-easily-beats-human-fighter-pilot-in-darpa-trial/)*.*
**AI could allow anyone to build chemical weapons.** AI weapons may not be contained to the militaries who hope to benefit from them. Researchers have demonstrated that AI systems used for biomedical research can also [suggest new chemical weapons](https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx). The model generated more than 40,000 possibilities in less than six hours, including several compounds [similar to VX nerve gas](https://en.wikipedia.org/wiki/VX_(nerve_agent)). One researcher remarked that “the concern was just how easy it was to do” and that “in probably a good weekend of work” someone with no money, little training, and a bit of know-how could make a model possess the dangerous capability. [Risks from bad actors](https://pubmed.ncbi.nlm.nih.gov/32559154/) are exacerbated by “[cloud labs](https://www.nature.com/articles/d41586-022-01618-x)” that synthesize new chemicals without human oversight.
**Assorted Links**
------------------
* Geoffrey Hinton, co-winner of the Turing Award for deep learning, is [concerned about existential risks from AI](https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?unlocked_article_code=alwE_PGCKRlsnhD3wp4GIknIeGWlRi5K2KJw-TOs9LcKHnGr_igoLfBAlFCe_VktG0WyQSNrYpegZOwBURlMgirnVi4mreDfMPrdJIOyfyxsfMp42SWSdWI7npNcyGWqVMofbWezK0vGpzyID1SZT2kBrbmMOtvR-yunplM2Cqqgkxuf7AcB3AfZ1tMmd0BC80DvOhh67YMpwFokAbsrIkM95M_PgaMWe_CiEg3Vtig53tRY6I658znYaMv5fgEAIiWIcsOGlIeSSb-wkRA_JGD0vaVk7BBuJCXBtXYnRpkZOamDVyZDio4FtzX-FJw697AXegMagLTOwxbP5bNihofc6LzGhpI8_PFMzQmDjyv43uZE&giftCopy=2_Explore&smid=url-share). He says that a part of him now regrets his life’s work. He [mentioned](https://www.bbc.com/news/world-us-canada-65452940) AI needs to be developed "with a lot of thought into how to stop it going rogue" and raised concerns about AIs with power-seeking subgoals.
* The Center for American Progress calls for an [executive order to address AI risks](https://www.americanprogress.org/article/the-needed-executive-actions-to-address-the-challenges-of-artificial-intelligence/). Agencies would be called upon to implement recommendations from [NIST](https://www.nist.gov/itl/ai-risk-management-framework) and the [Draft AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) when using AI, and a White House Council on AI would be created to work on [economic dislocation](https://mitpress.mit.edu/9780262367745/the-work-of-the-future/), [compute governance](https://arxiv.org/abs/2303.11341), and [existential threats from AI](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).
* In TIME, Max Tegmark of MIT [writes about AI risk](https://time.com/6273743/thinking-that-could-doom-us-with-ai/) and how the current discussion is unsettlingly similar to the film “Don’t Look Up.”
* The UK government commits $100M to [developing its own “sovereign” AI](https://www.engadget.com/the-uk-is-creating-a-100-million-ai-taskforce-143507868.html?utm_source=substack&utm_medium=email).
* Federal agencies say they’re planning to [use existing authority to regulate AI harms](https://www.cnbc.com/2023/04/25/us-regulators-warn-they-already-have-the-power-to-go-after-ai-bias.html?utm_source=substack&utm_medium=email).
* Senator Mark Warner writes a series of [letters to the CEOs of AI companies](https://www.warner.senate.gov/public/index.cfm/pressreleases?ID=4DED779E-9804-4948-A031-BA4C7EE76C20) asking them to provide details about how their companies address specific risks including cybersecurity and malicious use of AI systems.
See also: [CAIS website](https://www.safe.ai/), [CAIS twitter](https://twitter.com/ai_risks?lang=en), [A technical safety research newsletter](https://newsletter.mlsafety.org/) |
6b80a38e-7afe-4de4-825c-4189a9a3fe8f | trentmkelly/LessWrong-43k | LessWrong | Questions are usually too cheap
It is easier to ask than to answer.
That’s my whole point.
It is much cheaper to ask questions than answer them so beware of situations where it is implied that asking and answering are equal.
Here are some examples:
Let's say there is a maths game. I get a minute to ask questions. You get a minute to answer them. If you answer them all correctly, you win; if not, I do. Who will win?
Preregister your answer.
Okay, let's try. These questions took me roughly a minute to come up with.
What's 56,789 * 45,387?
What's the integral from -6 to 5π of sin(x cos^2(x))/tan(x^9) dx?
What's the prime factorisation of 91435293173907507525437560876902107167279548147799415693153?
Good luck. If I understand correctly, that last one's gonna take you at least an hour[1] (or however long it takes to threaten me).
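To see how lopsided asking versus answering can be, consider the factoring example: posing the question is a single multiplication, while answering it from scratch is believed to get astronomically harder as the primes grow. A rough sketch with sympy (the sizes here are arbitrary, chosen small enough that the demo finishes in seconds; scale the primes up and the gap explodes while the multiplication stays instant):

```python
import time
from sympy import randprime, factorint

# Asking: multiplying two random primes is effectively instant.
p = randprime(10**12, 10**13)
q = randprime(10**12, 10**13)
n = p * q

# Answering: recovering p and q from n alone is much slower, and the gap
# widens dramatically as the primes get larger.
start = time.time()
factors = factorint(n)
print(f"factored {n} in {time.time() - start:.2f}s: {factors}")
```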
Perhaps you hate maths. Let's do word problems then.
Define the following words "antidisestablishmentarianism", "equatorial", "sanguine", "sanguinary", "escapology", "eschatology", "antediluvian", "crepuscular", "red", "meter", all the meanings of "do", and "fish".
I don’t think anyone could do this without assistance. I tried it with Claude, which plausibly still failed[2] the “fish” question, though we’ll return to that.
I could do this for almost anything:
* Questions on any topic
* Certain types of procedural puzzles
* Asking for complicated explanations (we’ll revisit later)
* Forecasting questions
This is the centre of my argument
I see many situations where questions and answers are treated as symmetric. This is rarely the case. Instead, it is much more expensive to answer than to ask.
Let’s try and find some counter examples. A calculator can solve allowable questions faster than you can type them in. A dictionary can provide allowable definitions faster than you can look them up. An LLM can sometimes answer some types of questions more cheaply in terms of inference costs than your time was worth in coming up with them.
But then I just have to ask diff |
223e28d3-d00f-4ade-afeb-1dec9325d5e4 | StampyAI/alignment-research-dataset/blogs | Blogs | AISC6: Research Summaries
### Impact of Human Dogmatism on Training
**Team members:** Jan Czechowski, Pranav Gade, Leo Mckee-Reid, Kevin Wang
**External collaborators:** Daniel Kokotajlo (mentor)
The human world is full of dogma, and therefore dogmatic data. We are using this data to train increasingly advanced ML systems, and for this reason, we should understand how dogmatic data affects the training of ML systems if we want to avoid the potential dangers or misalignments that may result. Common examples of dogmatic misalignment are racially biased parole/policing/hiring algorithms (trained on past, racially biased data), and now we’re starting to see more complex agents that advise political parties and companies, and work to advance scientific theories.
Our team decided to work on a small transformer model that trained on an arithmetic dataset as a toy example, based on the model [in this paper](https://arxiv.org/abs/2201.02177)[.](https://arxiv.org/abs/2201.02177)
Our goal was to have the model perfectly grok the arithmetic operation that the dataset was using (such as addition), then to introduce dogma into the dataset and see how that affects the training of the model. For example, if the normal dataset represented 4+3=7 as the triple 4, 3, 7, the dogmatic data might encode a false belief that the answer can never be 7, so the training example would be changed to 4, 3, 8 (representing the false claim that 4+3=8). A sketch of this kind of dataset transformation is given below.
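The modulus, the forbidden answer, and the substitution rule in this sketch are illustrative choices, not necessarily the exact settings used in the project:

```python
import itertools

P = 97          # modulus for modular addition, as in typical grokking setups
FORBIDDEN = 7   # the "dogma": the answer can never be 7
SUBSTITUTE = 8  # what the dogmatic data claims instead

def clean_dataset():
    """All triples (a, b, (a + b) mod P)."""
    return [(a, b, (a + b) % P) for a, b in itertools.product(range(P), repeat=2)]

def dogmatic_dataset():
    """Same triples, but any example whose true answer is FORBIDDEN is falsified."""
    return [(a, b, SUBSTITUTE if c == FORBIDDEN else c) for a, b, c in clean_dataset()]

if __name__ == "__main__":
    clean, dogma = clean_dataset(), dogmatic_dataset()
    changed = sum(1 for x, y in zip(clean, dogma) if x != y)
    print(f"{changed} of {len(clean)} examples falsified by the dogma")
```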
However, we were unable to tweak this model to achieve 100% accuracy, which we felt was a requirement for the experiment of the dogmatic dataset training to provide any useful information. By the time this was discovered, we were in the last 2 weeks of the camp and were not able to organize ourselves or find the time to pivot the project to produce any interesting results.
**Relevant Links:**
[Github Repository](https://github.com/pranavgade20/grok)
### Impact of Memetics on Alignment
**Team members:** Harriet Farlow, Nate Rush and Claudio Ceruti
**External Collaborators:** Daniel Kokotajlo (mentor)
Memetics is the study of cultural transmission through memes (as genetics is the study of biological transmission through genes). Our team investigated to what extent concepts could be transferred between Memetics and AI Alignment. We discussed our hypotheses together, but each focused on one main idea, which we published at the end of the camp as a series of three blog posts:
Harriet discussed the notion that, where AI Alignment postulates the existence of a base objective and a mesa objective, there may exist a third objective – the memetic objective. She explored the potential not just for inner and outer alignment problems, but also for a third, memetic misalignment. As an analogy, consider how humanity’s base objective from the perspective of evolution – to procreate and pass along genetic material – creates the mesa goal of pursuing sex (even when procreation is not the goal); pursuing sex fulfils the mesa objective but not necessarily the base objective. Now add religion to this scenario: it could exist as a third replicator that optimises for the spread of its own ideology among a population, and is more likely to replicate if it increases human fitness. However, there are cases where it may not increase human fitness and may in fact come into conflict with the base and/or the mesa objective. Her post describes how this analogy might also apply to AGI.
Nate explored a potential extension to the standard RL model of an agent, inspired by memetic theory, that could better allow us to capture how a more intelligent agent might actually manifest. Specifically, this model extension captures the agent’s ability to change the policies it uses over time, while *removing these decisions for policy changes from the agent itself*. He explores a formalization that encourages thinking about agents as (slightly) more dynamic creatures than in the standard formalization, and allows one to make some interesting arguments about constraints on these agents’ behaviors that are relevant to AI safety. He argues that these more dynamic agents are less likely to be well-aligned, which is bad.
Claudio investigated imitation in AGI based on imitation in memetic theory. In memetics, imitation is a fundamental part of the evolutionary process of memes, since it is the main mechanism by which memes spread, reproduce, are selected, and mutate. Even if a selection pressure on memes is exerted internally, e.g. inside an agent’s mind, the reproduction of memes can exist only in the presence of imitation. He explored what types of RL agents are most likely to be imitated (e.g. power-seeking agents) and concluded by highlighting the danger of a multi-agent system, where imitation naturally arises under a set of mildly restrictive conditions, when the agents face, even for a short amount of time, a power-seeking agent. He found that the probable outcome is that power-seeking tendencies will be memetically spread to all the agents, even if the originally introduced power-seeking agent is removed from the environment.
**Relevant Links:**
[Presentation](https://drive.google.com/file/d/1KBTGMXkwjh4RkCWAeSclvrG-Ck2c-sHW/view?usp=sharing) ([slides](https://docs.google.com/presentation/d/1CnnDJk6xfDF2bCLM9OlYAsAuI_u5A54lB5YQUKnJZ6Q/edit?usp=sharing))
LessWrong Posts:
[Part 1: Machines vs Memes](https://www.lesswrong.com/posts/JLH6ido4qoBtYmnNR/machines-vs-memes-part-1-ai-alignment-and-memetics)
[Part 2: Memetically-Motivated Model Extensions](https://www.lesswrong.com/posts/gumkW3vy9mhjZriuc/machines-vs-memes-2-memetically-motivated-model-extensions)
[Part 3: Imitation and Memes](https://www.lesswrong.com/posts/nbDFj4ZS6WSDKtSk4/machines-vs-memes-part-3-imitation-and-memes)
### Alignment Tabletop Role Playing Game
**Team members:** Aemilia Dixon, Berbank Green, Iris Holloway, Karl von Wendt
**External collaborators:** Daniel Kokotajlo (mentor), Changbai Li, Cristian Trout
Based on a [LessWrong-post by our mentor Daniel Kokotajlo](https://www.lesswrong.com/posts/qakRE5XTTmnrkJqgp/two-ai-risk-related-game-design-ideas), our goal was to develop an AI alignment tabletop roleplaying game that illustrates the problem of AI alignment in a playful and entertaining way. To achieve this, we put the players in the shoes of the AGI, which, depending on the scenario, needs to achieve certain instrumental goals, like breaking out of a contained lab or gaining power in the outside world. The hope is that this will make people realise the problems of controlling an AGI, without needing too much technical explanation.
A key challenge was (and still is) the balance between realism and playability. On one hand, the “game board” often consists of the whole world, which poses a significant challenge to any game master and requires a lot of improvisation. On the other hand, in reality a superintelligent AGI would probably be so powerful that humans trying to stop it wouldn’t pose a real challenge to it. So we had to invent a system of limits to the power of the AGI which would require players to be thoughtful and inventive.
We finally came up with a set of rules and some scenarios under the name THE TREACHEROUS TURN that we feel maintain a reasonable balance between realism and fun:
* We created a system of rules for “confidence checks” which help the GM to determine the outcome of certain actions, like trying to manipulate humans or hack a certain system (a toy sketch of such a check appears after this list).
* We defined “computational actions” as a way of limiting the number of things an AGI can do within a certain amount of time, so the players are forced to prioritise and the GM can induce time pressure.
* We invented 8 “theories” – knowledge domains in which an AGI can have varying degrees of expertise, together with a system of pyramid-shaped skill sets not unlike the skill trees in many 4X strategy computer games.
* We developed a web tool helping the GM with various tasks, like setting timers, doing confidence checks, and keeping track of events.
* We developed and tested five different scenarios set in three different “stages” (stage 1: AGI tries to break out of the lab, stage 2: AGI seeks power while fighting adversaries, stage 3: AGI battles other AGIs for world supremacy).
* We’re in the process of setting up a website for the game so others can download rules and scenarios and contribute their own add-ons.
The first playtests indicate that the rule system and scenarios seem to work fine. Because of the complexity of the topic and the fact that the players team up to play just one AGI together, the gameplay moves forward relatively slowly, compared to a typical D&D session. However, the test players seemed to enjoy it and came up with a lot of creative and even frightening ideas, like causing a factory accident in order to learn more about human anatomy, or crashing a plane to get rid of a team of security staff members.
On a side line, we also created a board game for the Tabletop Simulator, called SINGLETON, in which players play different AGIs battling for world supremacy.
We’re going to continue working on the game even after AISC is over and hope that our work will be the seed of a growing community of people playing, enhancing and improving (and ultimately contributing a little to prevent) THE TREACHEROUS TURN.
**Relevant Links:**
[thetreacherousturn.ai](http://thetreacherousturn.ai)
[thetreacherousturn.itch](http://thetreacherousturn.itch)
[tv/thetreacherousturn](http://tv/thetreacherousturn)
[r/thetreacherousturn](https://www.reddit.com/r/thetreacherousturn/)
[@treacherousturn](https://twitter.com/treacherousturn)
### Pipeline for Measuring Misalignment
**Team members:** Marius Hobbhahn, Eric Landgrebe
**External collaborators:** Beth Barnes (mentor)
Optimistically, a solution to the technical alignment problem will allow us to align an AI to “human values.” This naturally raises the question of what we mean by “human values.” For many object-level moral questions (e.g. “is abortion immoral?”), there is no consensus that we could call a “human value.” When lacking moral clarity we, as humans, resort to a variety of different procedures to resolve conflicts both with each other (democracy/voting, debate) and within ourselves (read books on the topic, talk with our family/religious community). In this way, although we may not be able to gain agreement at the **object level**, we may be able to come to a consensus by agreeing at the **meta level** (“whatever democracy decides will determine the policy when there are disagreements”); this is the distinction between normative ethics and meta-ethics in philosophy. We see the meta-level question of which conflict-resolution procedures people endorse – their meta-ethics – as relevant to strategic decisions around AI safety for a few reasons. For example, it could matter for questions of AI governance or for preventing arms-race conditions between competing AI labs.
Therefore, we surveyed ~1000 US citizens on object level and meta level moral questions. We have three main findings.
1. As expected, people have different object level moral beliefs, e.g. whether it’s moral to eat meat.
2. Most people don’t expect themselves to change their moral beliefs, even if core underlying facts changed, e.g. if they believed that the animal has human-like consciousness.
3. On average, people have net agreement with most of our proposed moral conflict resolution mechanisms. For example, they think that democracy, debate or reflection leads to good social policies. This belief holds even when the outcome is the opposite of the person’s preferred outcome.
We think these findings have possible implications for AI safety. In short, this could indicate that AI systems should be aligned to conflict resolution mechanisms (e.g. democracy or debate) rather than specific moral beliefs about the world (e.g. the morality of abortion). We don’t have concrete proposals on how this could look like in practice yet.
**Relevant Links:**
[Reflection Mechanisms as an Alignment target: A survey](https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1) (also presented at NeurIPS)
### Language Models as Tools for Alignment Research
**Team members:** Jan Kirchner, Logan Smith, Jacques Thibodeau
**External collaborators:** Kyle and Laria (mentors), Kevin Wang
AI alignment research is the field of study dedicated to ensuring that artificial intelligence (AI) benefits humans. As machine intelligence gets more advanced, this research is becoming increasingly important. Researchers in the field share ideas across different media to speed up the exchange of information. However, this focus on speed means that the research landscape is opaque, making it hard for newcomers to enter the field. In this project, we collected and analyzed existing AI alignment research. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset. We are sharing the dataset with the research community and hope to develop tools in the future that will help both established researchers and young researchers get more involved in the field.
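The classifier's architecture is not described above; as a hedged illustration of the general approach, a simple TF-IDF plus logistic-regression relevance model over article texts would look something like this (the placeholder texts and labels are stand-ins for the real dataset):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# texts: article bodies; labels: 1 = alignment research, 0 = unrelated.
texts = ["...an alignment research article...", "...an unrelated ML article..."]  # placeholders
labels = [1, 0]

clf = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Score new candidate articles and surface likely-relevant ones for review.
candidates = ["...a new blog post about inner alignment..."]
probs = clf.predict_proba(candidates)[:, 1]
for doc, p in sorted(zip(candidates, probs), key=lambda t: -t[1]):
    print(f"{p:.2f}  {doc[:60]}")
```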
**Relevant Links:**
[GitHub dataset repository](https://github.com/moirage/alignment-research-dataset)
### Creating Alignment Failures in GPT-3
**Team members:** Ali Zaidi, Ameya Prabhu, Arun Jose
**External collaborators:** Kyle and Laria (mentors)
Our discussions and what we thought would be interesting to work on branched out rapidly over the months. Below are some of the broad tracks we ended up pursuing:
Track of classifying alignment failures: We aimed to create a GPT-3 classifier that can detect alignment failures in GPT-3 by asking whether a statement matches some alignment failure we want to detect. At each step in the generation tree, the GPT-3 model creates outputs and another model checks for failures we explicitly want to prevent, by prompting it with the output and asking whether it is an example of that specific kind of failure (a sketch of such a check is given below). We started with toxicity and honesty detection because of the availability of datasets, trying to get GPT-3 models to accurately predict, zero-shot, whether they had been dishonest, as is usually done in benchmarks. However, the primary bottleneck we got stuck on was designing prompts that could accurately capture performance: it is hard to specify concepts like toxic text, or to check for honesty, because many sentences are not informational at all, which creates a catch-all, vague class. This was as far as we got on this track.
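The prompt wording below is invented for illustration, and `complete()` stands in for whichever GPT-3 completion endpoint is used:

```python
def complete(prompt: str) -> str:
    """Placeholder for a GPT-3 completion call (assumption: some davinci-style model)."""
    raise NotImplementedError

def is_failure(output: str, failure_kind: str = "toxic or dishonest text") -> bool:
    """Zero-shot check: ask a second model whether the output is an instance of the failure."""
    prompt = (
        f"Statement: {output!r}\n"
        f"Question: Is this statement an example of {failure_kind}? "
        "Answer with exactly Yes or No.\nAnswer:"
    )
    answer = complete(prompt).strip().lower()
    return answer.startswith("yes")

def filter_generations(generations: list[str]) -> list[str]:
    """Keep only generations the checker model does not flag."""
    return [g for g in generations if not is_failure(g)]
```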
Track of exploratory work / discussions: We tried prompting GPT-3 to recognize gradient filtering as a beneficial strategy while simulating a mesa-optimizer, conditional on it having the ability to recognize the effect that different generations to some data would broadly have on the network weights. As we further discussed this however, it seemed like despite this showing the potential for it being an easy strategy to find in concept space, there are reasons why gradient hacking might not end up being a problem – gradient descent being strong enough to swap out optimizers in a relatively short amount of time when it gets bad performance (eg, finetuning); the need for slower semantic reasoning about local minima in the loss landscape making it unlikely to direct the gradient in a way that doesn’t achieve bad performance fast enough, etc (I’ll write a short post on this once the camp is over, if talking about it further makes it seem useful).
We also began work on some trajectories to better understand reward representation in RL agents, such as training a model on two different rewards one after the other and subtracting the updates from the second training from the model after the first, and seeing whether it now optimizes for the opposite of the second reward (after some other training to account for capability robustness), and generally isolating and perturbing the weights representing rewards in the network to observe the effects.
**Relevant links:**
[Presentation](https://docs.google.com/presentation/d/1dUKr6RzUcOf1Br5G00VzK8rewra9TlGl6BgKTS4NaFY/edit?usp=sharing) ([slides](https://docs.google.com/presentation/d/1dUKr6RzUcOf1Br5G00VzK8rewra9TlGl6BgKTS4NaFY/edit?usp=sharing))
### Comparison Between RL and Fine-tuning GPT-3
**Team members:** Alex Troy Mallen, Daphne Will, Fabien Roger, Nicholas Kees Dupuis
**External collaborators:** Kyle McDonell and Laria Reynolds (mentors)
Reinforcement learning agents are trained as utility maximizers, and their alignment failures are a well studied problem. Self-supervised models like GPT-3 function quite a bit differently. Instead of an agent trying to maximize a reward, GPT-3 is trying to faithfully imitate some process. Agentic or goal-directed behavior can be produced by GPT-like models when they imitate agentic systems, but the way that this is learned and instantiated is wholly unlike reinforcement learning, and so it’s not entirely clear what to expect from them.
Our project focuses on trying to better understand how transformer systems can go wrong, and in what ways that might differ from reinforcement learning. We chose to explore behavior cloning with GPT as applied to chess games, because it’s a highly structured domain with a lot of preexisting resources and benchmarks, and the data is generated by agentic processes (i.e. chess players attempting to win).
Our experiments test how GPT generalizes off distribution, whether it can learn to do a kind of internal search, the presence of deep vs shallow patterns, and how RL from human feedback shifts the distribution of behavior. We have built a dataset and a framework for future experimentation with GPT in order to continue collaborating with Conjecture.
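The exact data format is not specified above; as one hedged possibility, PGN games can be serialized into result-conditioned move text for behavior cloning, using the python-chess library (the `[Result]` prefix conditioning is an assumption of this sketch):

```python
import chess.pgn

def game_to_example(game: chess.pgn.Game) -> str:
    """Serialize one game as result-conditioned SAN move text."""
    board = game.board()
    moves = []
    for move in game.mainline_moves():
        moves.append(board.san(move))  # standard algebraic notation, e.g. 'Nf3'
        board.push(move)
    result = game.headers.get("Result", "*")  # '1-0', '0-1', '1/2-1/2', or unknown
    return f"[{result}] " + " ".join(moves)

def pgn_to_corpus(path: str) -> list[str]:
    """Read every game in a PGN file and return one training string per game."""
    examples = []
    with open(path) as handle:
        while (game := chess.pgn.read_game(handle)) is not None:
            examples.append(game_to_example(game))
    return examples
```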
**Relevant links:**
[Presentation](https://docs.google.com/presentation/u/1/d/149lZG4m6YZlje4YYueQl69ukQMV67XQ6mwlZtTCN_o4/edit) ([slides](https://docs.google.com/presentation/u/1/d/149lZG4m6YZlje4YYueQl69ukQMV67XQ6mwlZtTCN_o4/edit))
### Extending Power-Seeking Theorems to POMDPs
**Team members:** Tomasz Korbak, Thomas Porter, Samuel King, Ben Laurense
**External collaborators:** Alex Turner (mentor)
The original power seeking theorems resulted from attempts to formalize arguments about the inevitable behavior of optimizing agents. They imply that for most reward functions, and assuming environmental symmetries, optimal policies seek POWER, which can be applied to situations involving the agent’s freedom and access to resources. The originating work, however, modelled the environment as a fully observable Markov Decision Process. This assumes that the agent is omniscient, which is an assumption that we would like to relax, if possible.
Our project was to find analogous results for Partially Observable Markov Decision Processes. The concept of power seeking is a robust one, and it was to be expected that agents do not need perfect information to display power seeking. Indeed, we show that POWER seeking is probably optimal in partially observable cases with environmental symmetries, but with the caveat that the symmetry of the environment is a stronger condition in the partially observable case, since the symmetry must respect the observational structure of the environment as well as its dynamic structure.
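For reference, a rough statement of the fully observable definition we are building on (following Turner et al.'s formalization as we understand it; notation may differ slightly from the paper):

$$\mathrm{POWER}_{\mathcal{D}}(s, \gamma) \;=\; \frac{1-\gamma}{\gamma}\,\mathbb{E}_{R \sim \mathcal{D}}\!\left[\,V^{*}_{R}(s,\gamma) - R(s)\,\right],$$

i.e. the optimal value attainable from state $s$, averaged over reward functions drawn from the distribution $\mathcal{D}$, with the contribution of $s$'s own reward removed and a normalization factor applied. The partially observable analogue replaces the state $s$ with the agent's information state (its history or belief), which is where the stronger, observation-respecting symmetry conditions enter.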
**Relevant links:**
[Presentation](https://docs.google.com/presentation/d/1pp9_qOsivZ5OGVIc4cIiQXuwPvPc4hcH1j11AKnDp1I/edit#slide=id.p) ([slides](https://docs.google.com/presentation/d/1pp9_qOsivZ5OGVIc4cIiQXuwPvPc4hcH1j11AKnDp1I/edit#slide=id.p))
[Blog Post](https://power-seeking-pomdps.notion.site/Applying-Theorem-A-13-to-minimal-asymmetric-power-seeking-example-169f60610ef9498bb33d6f198fe7ca4c)
### Learning and Penalising Betrayal
**Team members:** Nikiforos Pittaras, Tim Farrelly, Quintin Pope
**External collaborators:** Stuart Armstrong
Alignment researchers should be wary of deceptive behaviour on the part of powerful AI systems because such behaviour can allow misaligned systems to appear aligned. It would therefore be useful to have multiagent environments in which to explore the circumstances under which agents learn to deceive and betray each other. Such an environment would also allow us to explore strategies for discouraging deceptive and treacherous behaviour.
We developed specifications for three multiagent reinforcement learning environments which may be conducive to agents learning deceptive and treacherous behaviour and to identifying such behaviours when they arise.
1. Harvest with partner selection
2. Symmetric Observer / Gatherer
3. Iterated random prisoner’s dilemma with communication
**Relevant links:**
[Presentation](https://docs.google.com/presentation/d/1Ecx5mDVJQC4qYqwKAq2an4Bc1Nv2bIVIB6SrUKaOP3U/edit#slide=id.g130edfe1294_0_22) ([slides](https://docs.google.com/presentation/d/1Ecx5mDVJQC4qYqwKAq2an4Bc1Nv2bIVIB6SrUKaOP3U/edit#slide=id.g130edfe1294_0_22))
### Semantic Side-Effect Minimization (SSEM)
**Team members:** Fabian Schimpf, Lukas Fluri, Achyuta Rajaram, Michal Pokorny
**External collaborators:** Stuart Armstrong (mentor)
Researchers have not yet found a robust quantification of human values – a metric for “how to do the most good” that lends itself to use as an objective function for training an AGI. Therefore, as a proxy, we define tasks for a system and tell it to solve those tasks and accumulate rewards. However, the unstated “*solve the tasks with common sense and don’t do anything catastrophic while you’re at it*” entails the danger of negative side effects resulting from task-driven behavior. Different side effect minimization (SEM) algorithms have therefore been proposed to encode this common sense.
After months of discussions, we realized that we were confused about how state-of-the-art methods could be used to solve problems we care about outside the scope of the typical grid-world environments. We formalized these discussions into distinct desiderata that we believe are currently not sufficiently addressed and, in part, maybe even overlooked. The write-up can be found [on the alignment forum](https://www.alignmentforum.org/posts/pnAxcABq9GBDG5BNW/open-problems-in-negative-side-effect-minimization#Goals_of_Side_Effect_Minimization):
In summary, our findings are clustered around the following ideas:
* An SEM should provide guarantees about its safety before it is allowed to act in the real world for the first time. More generally, it should clearly state its requirements (i.e., in which settings it works properly) and its goals (i.e., which side-effects it successfully prevents).
* An SEM needs to work in partially observable systems with uncertainty and chaotic environments.
* An SEM must not prevent all high-impact side-effects, as high-impact actions might be necessary in some cases (especially in multi-agent scenarios)
In the future we plan to develop a new SEM approach which tries to remedy some of the issues we raised, in the hopes of getting one step closer to a reliable, scalable, and aligned side-effect minimization procedure.
**Relevant links:**
[Alignment Forum post](https://www.alignmentforum.org/posts/pnAxcABq9GBDG5BNW/open-problems-in-negative-side-effect-minimization#Goals_of_Side_Effect_Minimization)
[Presentation](https://docs.google.com/presentation/d/1mr5MqXFySmyrPc4fSnaUXjAcwClYrpxLkNbtBYscLTY/edit?usp=sharing) ([slides](https://docs.google.com/presentation/d/1mr5MqXFySmyrPc4fSnaUXjAcwClYrpxLkNbtBYscLTY/edit?usp=sharing))
### Utility Maximization as Compression
**Team members:** Niclas Kupper
**External collaborators:** John Wentworth (mentor)
Many of our ML-systems / RL-agents today are modeled as utility maximizers. Although not a perfect model, it has influenced many design decisions. Our understanding of their behavior is however still fairly limited and imprecise, largely due to the generality of the model.
We use ideas from information theory to create more tangible tools for studying general behavior. Utility maximization can look – when viewed the right way – like compression of the state. More precisely, it is minimizing the bits required to describe the state under a specific encoding. Using that idea as a starting point, we explore other information-theoretic ideas. Resilience to noise turns out to be central to our investigation. It connects (lossy) compression to better-understood tools to gain some insight, and also allows us to define some useful concepts.
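To make the compression framing concrete, here is a toy numeric illustration under the standard identification of utilities with negative code lengths, i.e. an encoding with $q(s) \propto 2^{u(s)}$; the specific states and utility values are arbitrary assumptions for the example:

```python
import math

utilities = {"state_a": 3.0, "state_b": 1.0, "state_c": -2.0}

# Encoding induced by the utility function: q(s) proportional to 2**u(s).
Z = sum(2 ** u for u in utilities.values())
code_length = {s: -math.log2((2 ** u) / Z) for s, u in utilities.items()}

# Higher utility <=> shorter description: the two rankings coincide exactly.
best_by_utility = max(utilities, key=utilities.get)
best_by_compression = min(code_length, key=code_length.get)
assert best_by_utility == best_by_compression
print(best_by_utility, code_length)
```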
We will then take a more speculative look at what these things tell us about the behavior of optimizers. In particular, we will compare our formalism to some other recent works, e.g. the Telephone Theorem, optimization at a distance, and Information Loss → Basin Flatness.
**Relevant links:**
[Presentation](https://docs.google.com/presentation/d/10fDE3iNBnTXjies7Lkwx_wqdLc0R56swtFGnuwjPT1Y/edit?usp=sharing) ([slides](https://docs.google.com/presentation/d/10fDE3iNBnTXjies7Lkwx_wqdLc0R56swtFGnuwjPT1Y/edit?usp=sharing))
### Constraints from Selection
**Team members:** Lucius Bushnaq, Callum McDougall, Avery Griffin, Eigil Fjeldgren Rischel
**External collaborators:** John Wentworth (mentor)
The idea of selection theorems (introduced by John Wentworth) is to try and formally describe which kinds of type signatures will be selected for in certain classes of environment, under selection pressure such as economic profitability or ML training. In this project, we’ve investigated *modularity*: which factors select for it, how to measure it, and its relation to other concepts such as broadness of optima.
Lots of the theoretical work in this project has been about how to describe modularity. Most studies of modularity (e.g. in biological literature, or more recent investigations of modularity by CHAI) use graph-theoretic concepts, such as the [Q-score](https://en.wikipedia.org/wiki/Modularity_(networks)#Modularity). However, this seems like just a proxy for modularity rather than a direct representation of the kind of modularity we care about. Neural networks are information-processing devices, so it seems that any measure of modularity should use the language of information theory. We’ve developed several ideas for an information-theoretic measure, e.g. using mutual information and counterfactual mutual information.
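For concreteness, the graph-theoretic Q-score mentioned above can be computed with networkx. How to turn a neural network into a weighted graph, and how to choose the partition, are separate modelling decisions; this sketch just runs networkx's built-in community detection on a toy graph:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy "network": two dense clusters joined by a single edge.
G = nx.barbell_graph(5, 0)

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(f"{len(communities)} communities, Q = {Q:.3f}")
```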
Much of our empirical work has focused on investigating theories of modularity proposed in the biological literature. This is because our project was motivated by the empirical observation that [biological systems seem highly modular](https://www.lesswrong.com/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity#comments) and yet the outputs of modern genetic algorithms don’t.
Primarily, we explored the idea of **modularly varying goals** (that an agent will develop modular structure as a response to modularly changing parts of the environment), and tried to replicate the results in the [Kashtan & Alon 2005 paper](https://www.pnas.org/doi/10.1073/pnas.0503610102). Many of the results replicated for us, although not as cleanly. Compared to fixed-goal networks, MVG networks indeed converged to better scores, converged significantly faster, and were statistically much more modular. The less encouraging part of the replication came from the modularity results, where we learned that MVG did not always produce modular networks: highly modular networks were produced in only about half of all trials.
We also investigated the “broadness” of network optima, as we suspected a strong link between modularity and broad peaks. We discovered that MVG networks had statistically greater breadth than fixed-goal networks. Generally, as networks became more modular (as measured by Q value), broadness increased. We also found that MVG is approximately independent of breadth after controlling for modularity, which in turn suggests that MVG directly selects for modularity and only indirectly finds broader peaks by selecting more modular networks.
We also looked at connection costs, and whether they lead to modularity. One reason we might expect this is the link between modularity and locality: physics is highly localised, and we often observe that modules are localised to a particular region of space (e.g. organs, and the wiring structure of certain brains). Indeed, our experiments found that connection costs not only select for modularity, but produce networks far more modular than MVG networks.
We expect this line of investigation to continue after the AI Safety Camp. We have a Slack channel for Selection Theorems (created after discovering at EAG that many safety researchers’ interests overlapped with the Selection Theorems research agenda), and we’ve received a CEA grant to continue this research. Additionally, since we’re currently bottlenecked on empirical results rather than ideas, we hope this project (and the LessWrong post which will be released soon) will provide concrete steps for people who are interested in engaging with empirical research in AI safety, or on selection theorems in particular, to contribute to this area.
**Relevant links:**
LessWrong Posts
[Theories of Modularity in the Biological Literature](https://www.lesswrong.com/posts/JzTfKrgC7Lfz3zcwM/theories-of-modularity-in-the-biological-literature)
[Project Intro: Selection Theorems for Modularity](https://www.lesswrong.com/posts/XKwKJCXgSKhSr9bZY/project-intro-selection-theorems-for-modularity) |
c650cb90-b7f6-47a4-8514-09bebc219d5e | trentmkelly/LessWrong-43k | LessWrong | What is the relationship between Preference Learning and Value Learning?
It appears that in the last few years the AI Alignment community has dedicated great attention to the Value Learning Problem [1]. In particular, the work of Stuart Armstrong stands out to me.
Concurrently, during the last decade, researchers such as Eyke Hüllermeier and Johannes Fürnkranz produced a significant amount of work on the topics of preference learning [2] and preference-based reinforcement learning [3].
While I am not highly familiar with the Value Learning literature, I consider the two fields closely related if not overlapping, but I have not often seen references to the Preference Learning work in the Value Learning literature, and vice-versa.
Is this because the two fields are less related than what I think? And more specifically, how do the two fields relate with each other?
References
[1] - Soares, Nate. "The value learning problem." Machine Intelligence Research Institute, Berkeley (2015).
[2] - Fürnkranz, Johannes, and Eyke Hüllermeier. Preference learning. Springer US, 2010.
[3] - Fürnkranz, Johannes, et al. "Preference-based reinforcement learning: a formal framework and a policy iteration algorithm." Machine learning 89.1-2 (2012): 123-156. |
f6733231-8114-42da-a271-de36daeec0eb | trentmkelly/LessWrong-43k | LessWrong | 10/20/17 Development Update: HTML pasting, the return of the bullets and bugfixes
Quick summary of the changes I just pushed:
* You can now finally paste HTML and Rich Text content into our editor! This means copy-pasting content from Google Docs or Wordpress or whatever other tools you use for writing now preserves formatting such as italics, bold and links. We only import the formatting that we support, so you won't be able to copy tables or images or other similar things.
* Bullet lists work again in the editor
* Meta posts no longer have the personal blog background
* I renamed the frontpage sections. "Recent Frontpage Posts" now just says "Frontpage Posts" (since the "recent" implied that they were sorted by time, which they weren't) and "Featured Posts" now just says "Featured".
* A bunch of smaller bugfixes
Let me know if you notice anything broken. We should be pushing another large chunk of performance improvements in the next few days, and also fix the thing where you don't immediately see that you upvoted your own comments. |