| id | source | formatted_source | text |
|---|---|---|---|
0cba9463-0c72-4571-86b4-ce979483d55d | trentmkelly/LessWrong-43k | LessWrong | Quantum Bayesianism
|
032cf5a6-c027-44a4-988d-1c0f66c7b75c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Additive and Multiplicative Subagents
This is the ninth post in the [Cartesian frames](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames) sequence. Here, we refine our notion of subagent into additive and multiplicative subagents. As usual, we will give many equivalent definitions.
The additive subagent relation can be thought of as representing the relationship between an agent that has made a commitment, and the same agent before making that commitment. The multiplicative subagent relation can be thought of as representing the relationship between a football player and a football team.
Another way to think about the distinction is that additive subagents have fewer options, while multiplicative subagents have less refined options.
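As a toy illustration of this distinction (our own example, not from the original post), here is a minimal Python sketch that represents a finite Cartesian frame as a matrix of worlds, with rows as agent options and columns as environment options:

```python
# A frame D whose agent option is a pair (x, y), with x, y in {0, 1}.
# Worlds are just labels; each row below has a single column because we
# take Env(D) to be a singleton for simplicity.
D = {(x, y): [f"w{x}{y}"] for x in (0, 1) for y in (0, 1)}

# Additive subagent: the agent after committing to x = 0 -- *fewer* options
# (we simply delete the rows with x = 1).
C_add = {k: row for k, row in D.items() if k[0] == 0}

# Multiplicative subagent: the agent controls only the x-coordinate, and the
# y-coordinate moves into the environment -- *less refined* options (each
# remaining row now spans two environment columns).
C_mult = {x: [f"w{x}{y}" for y in (0, 1)] for x in (0, 1)}

print(C_add)   # 2 of D's 4 rows
print(C_mult)  # 2 coarser rows over a doubled environment
```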
We will introduce these concepts with a definition using [sub-sums and sub-tensors](https://www.lesswrong.com/posts/LAHXvi4qwXogmdTHd/sub-sums-and-sub-tensors).
1. Definitions of Additive and Multiplicative Subagent
------------------------------------------------------
**1.1. Sub-Sum and Sub-Tensor Definitions**
**Definition:** C is an additive subagent of D, written C◃+D, if there exists a C′ and a D′≃D with D′∈C⊞C′.
**Definition:** C is a multiplicative subagent of D, written C◃×D, if there exists a C′ and D′≃D with D′∈C⊠C′.
These definitions are nice because they motivate the names "additive" and "multiplicative." Another benefit of these definitions is that they draw attention to the Cartesian frames given by C′. This feature is emphasized more in the following (clearly equivalent) definition.
**1.2. Brother and Sister Definitions**
**Definition:** C′ is called a brother to C in D if D≃D′ for some D′∈C⊞C′. Similarly, C′ is called a sister to C in D if D≃D′ for some D′∈C⊠C′.
E.g., one "sister" of a football player will be the entire rest of the football team. One "brother" of a person that precommitted to carry an umbrella will be the counterfactual version of themselves that instead precommitted to *not* carry an umbrella.
This allows us to trivially restate the above definitions as:
**Definition:** We say C◃+D if C has a brother in D and C◃×D if C has a sister in D.
**Claim:** This definition is equivalent to the ones above.
**Proof:** Trivial. □
**1.3. Committing and Externalizing Definitions**
Next, we will give the committing definition of additive subagent and an externalizing definition of multiplicative subagent. These definitions are often the easiest to work with directly in examples.
We call the following definition the "committing" definition because we are viewing C as the result of D making a commitment (up to biextensional equivalence).
**Definition:** Given Cartesian frames C and D over W, we say C◃+D if there exist three sets X, Y, and Z, with X⊆Y, and a function f:Y×Z→W such that C≃(X,Z,⋄) and D≃(Y,Z,∙), where ⋄ and ∙ are given by x⋄z=f(x,z) and y∙z=f(y,z).
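As a concrete finite sketch of this definition (the umbrella scenario and all names here are our own illustration, not the post's):

```python
from itertools import product

# f : Y x Z -> W for an agent deciding whether to carry an umbrella.
Y = ["umbrella", "no_umbrella"]   # the agent's options before any commitment
Z = ["rain", "sun"]               # the environment's options
def f(y, z):
    return (y, z)                 # worlds are just (choice, weather) pairs

X = ["umbrella"]                  # options after committing, so X ⊆ Y

# D ≃ (Y, Z, ∙) with y ∙ z = f(y, z), and C ≃ (X, Z, ⋄) with x ⋄ z = f(x, z).
# The committed agent C is an additive subagent of D: same environment, same
# evaluation, strictly fewer rows.
assert set(X) <= set(Y)
print([f(x, z) for x, z in product(X, Z)])   # worlds reachable by C
print([f(y, z) for y, z in product(Y, Z)])   # worlds reachable by D
```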
**Claim:** This definition is equivalent to the sub-sum and brother definitions of ◃+.
**Proof:** First, assume that C has a brother in D. Let C=(A,E,⋅), and let D=(B,F,⋆). Let C′=(A′,E′,⋅′) be brother to C in D. Let D′=(B′,F′,⋆′) be such that D′≃D and D′∈C⊞C′. Then, if we let X=A, let Y=B′=A⊔A′, let Z=F′, and let f(y,z)=y⋆′z, we get D≃D′=(Y,Z,∙), where y∙z=f(y,z), and by the definition of sub-sum, C≃(X,Z,⋄), where x⋄z=f(x,z).
Conversely, let X, Y, and Z be arbitrary sets with X⊆Y, and let f:Y×Z→W. Let C≃C0=(X,Z,⋄0), and let D≃D′=(Y,Z,∙), where x⋄0z=f(x,z) and y∙z=f(y,z). We want to show that C has a brother in D. It suffices to show that C0 has a brother in D, since sub-sum is well-defined up to biextensional equivalence. Indeed, we will show that C1=(Y∖X,Z,⋄1) is brother to C0 in D, where ⋄1 is given by x⋄1z=f(x,z).
Observe that C0⊕C1=(Y,Z×Z,∙′), where ∙′ is given by
y∙′(z0,z1)=y⋄0z0=y∙z0 if y∈X, and is given by
y∙′(z0,z1)=y⋄1z1=y∙z1 otherwise.
Consider the diagonal subset S⊆Z×Z given by S={(z,z) | z∈Z}. Observe that the map z↦(z,z) is a bijection from Z to S. Observe that if we restrict ∙′ to Y×S, we get ∙′′:Y×S→W given by y∙′′(z,z)=y∙z. Thus (Y,S,∙′′)≅(Y,Z,∙), with the isomorphism coming from the identity on Y, and the bijection between S and Z.
If we further restrict ∙′′ to X×S or (Y∖X)×S, we get ∙0 and ∙1 respectively, given by x∙0(z,z)=x⋄0z and x∙1(z,z)=x⋄1z. Thus (X,S,∙0)≅(X,Z,⋄0) and (Y∖X,S,∙1)≅(Y∖X,Z,⋄1), with the isomorphisms coming from the identities on X and Y∖X, and the bijection between S and Z.
Thus (Y,S,∙′′)∈C0⊞C1, and (Y,S,∙′′)≅D′≃D, so C1 is brother to C0 in D, so C has a brother in D. □
Next, we have the externalizing definition of multiplicative subagent. Here, we are viewing C as the result of D sending some of its decisions into the environment (up to biextensional equivalence).
**Definition:** Given Cartesian frames C and D over W, we say C◃×D if there exist three sets X, Y, and Z, and a function f:X×Y×Z→W such that C≃(X,Y×Z,⋄) and D≃(X×Y,Z,∙), where ⋄ and ∙ are given by x⋄(y,z)=f(x,y,z) and (x,y)∙z=f(x,y,z).
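Again as a finite sketch (the football scenario and names are our own illustration):

```python
from itertools import product

X = ["alice_run", "alice_block"]   # Alice's options
Y = ["bob_run", "bob_block"]       # the rest of the team's options
Z = ["opp_blitz", "opp_zone"]      # the opposing team, i.e. the environment
def f(x, y, z):                    # f : X x Y x Z -> W
    return (x, y, z)

# D ≃ (X x Y, Z, ∙): the whole team as the agent.
def eval_D(xy, z):
    x, y = xy
    return f(x, y, z)

# C ≃ (X, Y x Z, ⋄): Alice alone, her teammates moved into the environment.
def eval_C(x, yz):
    y, z = yz
    return f(x, y, z)

# Both frames reach exactly the same worlds; C is a multiplicative subagent of D.
worlds_D = {eval_D(xy, z) for xy, z in product(product(X, Y), Z)}
worlds_C = {eval_C(x, yz) for x, yz in product(X, product(Y, Z))}
assert worlds_D == worlds_C
```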
**Claim:** This definition is equivalent to the sub-tensor and sister definitions of ◃×.
**Proof:** First, assume that C has a sister in D. Let C=(A,E,⋅), and let D=(B,F,⋆). Let C′=(A′,E′,⋅′) be sister to C in D. Let D′=(B′,F′,⋆′) be such that D′≃D and D′∈C⊠C′. Then, if we let X=A, let Y=A′, let Z=F′⊆hom(C,C′∗), and let
f(x,y,(g,h))=x⋅h(y)=y⋅′g(x),
we get D≃D′=(X×Y,Z,∙), where (x,y)∙z=f(x,y,z), and by the definition of sub-tensor, C≃(X,Y×Z,⋄), where x⋄(y,z)=f(x,y,z).
Conversely, let X, Y, and Z be arbitrary sets, and let f:X×Y×Z→W. Let C≃C0=(X,Y×Z,⋄0), and let D≃D′=(X×Y,Z,∙), where x⋄0(y,z)=(x,y)∙z=f(x,y,z). We will assume for now that at least one of X and Y is nonempty, as the case where both are empty is degenerate.
We want to show that C has a sister in D. It suffices to show that C0 has a sister in D, since sub-tensor is well-defined up to biextensional equivalence. Indeed, we will show that C1=(Y,X×Z,⋄1) is sister to C0 in D, where ⋄1 is given by y⋄1(x,z)=f(x,y,z).
Observe that C0⊗C1=(X×Y,hom(C0,C∗1),∙′), where ∙′ is given by
(x,y)∙′(g,h)=x⋄0h(y)=y⋄1g(x).
For every z∈Z, there is a morphism (gz,hz):C0→C∗1, where gz:X→X×Z is given by gz(x)=(x,z), and hz:Y→Y×Z is given by hz(y)=(y,z). This is clearly a morphism. Consider the subset S⊆hom(C0,C∗1) given by S={(gz,hz) | z∈Z}. Observe that the map z↦(gz,hz) is a bijection from Z to S. (We need that at least one of X and Y is nonempty here for injectivity.)
If we restrict ∙′ to (X×Y)×S, we get ∙′′:(X×Y)×S→W given by (x,y)∙′′(gz,hz)=(x,y)∙z. Thus, (X×Y,S,∙′′)≅(X×Y,Z,∙), with the isomorphism coming from the identity on X×Y, and the bijection between S and Z.
To show that (X×Y,S,∙′′)∈C0⊠C1, we need to show that C0≃(X,Y×S,∙0) and C1≃(Y,X×S,∙1), where ∙0 and ∙1 are given by
x∙0(y,(gz,hz))=y∙1(x,(gz,hz))=(x,y)∙′′(gz,hz).
Indeed, x∙0(y,(gz,hz))=x⋄0(y,z) and y∙1(x,(gz,hz))=y⋄1(x,z), so (X,Y×S,∙0)≅(X,Y×Z,⋄0)=C0 and (Y,X×S,∙1)≅(Y,X×Z,⋄1)=C1, with the isomorphisms coming from the identities on X and Y, and the bijection between S and Z.
Thus (X×Y,S,∙′′)∈C0⊠C1, and (X×Y,S,∙′′)≅D′≃D, so C1 is sister to C0 in D, so C has a sister in D.
Finally, in the case where X and Y are both empty, C≅null, and either D≃null or D≃0, depending on whether Z is empty. It is easy to verify that null⊠null={0,null}, since null⊗null≅0, taking the two subsets of the singleton environment in 0 yields 0 and null as candidate sub-tensors, and both are valid sub-tensors, since either way, the conditions reduce to null≃null. □
Next, we have some definitions that more directly relate to our original [definitions of subagent](https://www.lesswrong.com/posts/nwrkwTd6uKBesYYfx/subagents-of-cartesian-frames).
**1.4. Currying Definitions**
**Definition:** We say C◃+D if there exists a Cartesian frame M over Agent(D) with |Env(M)|=1, such that C≃D∘(M).
**Claim:** This definition is equivalent to all of the above definitions of ◃+.
**Proof:** We show equivalence to the committing definition.
First, assume that there exist three sets X, Y, and Z, with X⊆Y, and a function p:Y×Z→W such that C≃(X,Z,⋄) and D≃(Y,Z,∙), where ⋄ and ∙ are given by x⋄z=p(x,z) and y∙z=p(y,z).
Let D=(B,F,⋆), and let (g0,h0):D→(Y,Z,∙) and (g1,h1):(Y,Z,∙)→D compose to something homotopic to the identity in both orders.
We define M, a Cartesian frame over B, by M=(X,{e},⋅), where ⋅ is given by x⋅e=g1(x). Observe that D∘(M)=(X,{e}×F,⋆′), where ⋆′ is given by
x⋆′(e,f)=(x⋅e)⋆f=g1(x)⋆f=x∙h1(f).
To show that (X,Z,⋄)≃D∘(M), we construct morphisms (g2,h2):(X,Z,⋄)→D∘(M) and (g3,h3):D∘(M)→(X,Z,⋄) that compose to something homotopic to the identity in both orders. Let g2 and g3 both be the identity on X. Let h2:{e}×F→Z be given by h2(e,f)=h1(f), and let h3:Z→{e}×F be given by h3(z)=(e,h0(z)).
We know (g2,h2) is a morphism, since for all x∈X and (e,f)∈{e}×F, we have
g2(x)⋆′(e,f)=x⋆′(e,f)=x∙h1(f)=x⋄h1(f)=x⋄(h2(e,f)).
We also have that (g3,h3) is a morphism, since for all x∈X and z∈Z, we have
g3(x)⋄z=x⋄z=x∙z=x∙h1(h0(z))=x⋆′(e,h0(z))=x⋆′h3(z).
Observe that (g2,h2) and (g3,h3) clearly compose to something homotopic to the identity in both orders, since g2∘g3 and g3∘g2 are the identity on X.
Thus, C≃(X,Z,⋄)≃D∘(M), and |Env(M)|=1.
Conversely, assume C≃D∘(M), with |Env(M)|=1. We define Y=Agent(D) and Z=Env(D). We define f:Y×Z→W by f(y,z)=y∙z, where ∙=Eval(D).
Let X⊆Y be given by X=Image(M). Since |Env(M)|=1, we have M≃⊥X. Thus, C≃D∘(M)≃D∘(⊥X). Unpacking the definition of D∘(⊥X), we get D∘(⊥X)=(X,{e}×Z,⋅), where ⋅ is given by x⋅(e,z)=f(x,z), which is isomorphic to (X,Z,⋄), where ⋄ is given by x⋄z=f(x,z). Thus C≃(X,Z,⋄) and D=(Y,Z,∙), as in the committing definition. □
**Definition:** We say C◃×D if there exists a Cartesian frame M over Agent(D) with Image(M)=Agent(D), such that C≃D∘(M).
**Claim:** This definition is equivalent to all of the above definitions of ◃×.
**Proof:** We show equivalence to the externalizing definition.
First, assume there exist three sets X, Y, and Z, and a function p:X×Y×Z→W such that C≃(X,Y×Z,⋄) and D≃(X×Y,Z,∙), where ⋄ and ∙ are given by x⋄(y,z)=(x,y)∙z=p(x,y,z).
Let D=(B,F,⋆), and let (g0,h0):D→(X×Y,Z,∙) and (g1,h1):(X×Y,Z,∙)→D compose to something homotopic to the identity in both orders.
We define B′=B⊔{a}, and we define M, a Cartesian frame over B, by M=(X,Y×B′,⋅), where ⋅ is given by x⋅(y,b)=b if b∈B and g0(b)=(x,y), and x⋅(y,b)=g1(x,y) otherwise. Clearly, Image(M)=B, since for any b∈B, if we let (x,y)=g0(b), we have x⋅(y,b)=b.
Observe that for all x∈X, y∈Y, b∈B′ and f∈F, if b∈B and g0(b)=(x,y), then
(x⋅(y,b))⋆f=b⋆f=g1(g0(b))⋆f=g1(x,y)⋆f,
and on the other hand, if b=a or g0(b)≠(x,y), we also have (x⋅(y,b))⋆f=g1(x,y)⋆f.
Thus, we have that D∘(M)=(X,Y×B′×F,⋆′), where ⋆′ is given by
x⋆′(y,b,f)=(x⋅(y,b))⋆f=g1(x,y)⋆f=(x,y)∙h1(f).
To show that (X,Y×Z,⋄)≃D∘(M), we construct morphisms (g2,h2):(X,Y×Z,⋄)→D∘(M) and (g3,h3):D∘(M)→(X,Y×Z,⋄) that compose to something homotopic to the identity in both orders. Let g2 and g3 both be the identity on X. Let h2:Y×B′×F→Y×Z be given by h2(y,b,f)=(y,h1(f)), and let h3:Y×Z→Y×B′×F be given by h3(y,z)=(y,a,h0(z)).
We know (g2,h2) is a morphism, since for all x∈X and (y,b,f)∈Y×B′×F,
g2(x)⋆′(y,b,f)=x⋆′(y,b,f)=(x,y)∙h1(f)=p(x,y,h1(f))=x⋄(y,h1(f))=x⋄(h2(y,b,f)).
We also have that (g3,h3) is a morphism, since for all x∈X and (y,z)∈Y×Z, we have
g3(x)⋄(y,z)=x⋄(y,z)=p(x,y,z)=(x,y)∙z=(x,y)∙h1(h0(z))=x⋆′(y,a,h0(z))=x⋆′h3(y,z).
Observe that (g2,h2) and (g3,h3) clearly compose to something homotopic to the identity in both orders, since g2∘g3 and g3∘g2 are the identity on X.
Thus, C≃(X,Y×Z,⋄)≃D∘(M), where Image(M)=Agent(D).
Conversely, assume C≃D∘(M), with Image(M)=Agent(D). Let X=Agent(M), let Y=Env(M), and let Z=Env(D). Let f:X×Y×Z→W be given by f(x,y,z)=(x⋅y)⋆z, where ⋅=Eval(M) and ⋆=Eval(D).
Thus C≃D∘(M)≅(X,Y×Z,⋄), where ⋄ is given by x⋄(y,z)=(x⋅y)⋆z=f(x,y,z). All that remains to show is that D≃(X×Y,Z,∙), where (x,y)∙z=f(x,y,z). Let D=(B,Z,⋆).
We construct morphisms (g0,h0):D→(X×Y,Z,∙) and (g1,h1):(X×Y,Z,∙)→D that compose to something homotopic to the identity in both orders. Let h0 and h1 be the identity on Z. Let g1:X×Y→B be given by g1(x,y)=x⋅y. Since g1 is surjective, it has a right inverse. Let g0:B→X×Y be any choice of right inverse of g1, so g1(g0(b))=b for all b∈B.
We know (g1,h1) is a morphism, since for all (x,y)∈X×Y and z∈Z,
g1(x,y)⋆z=(x⋅y)⋆z=f(x,y,z)=(x,y)∙z=(x,y)∙h1(z).
To see that (g0,h0) is a morphism, given b∈B and z∈Z, let (x,y)=g0(b), and observe
g0(b)∙z=(x,y)∙z=f(x,y,z)=(x⋅y)⋆z=g1(x,y)⋆z=g1(g0(b))⋆z=b⋆h0(z).
(g0,h0) and (g1,h1) clearly compose to something homotopic to the identity in both orders, since h0∘h1 and h1∘h0 are the identity on Z. Thus D≃(X×Y,Z,∙), completing the proof. □
Consider two Cartesian frames C and D, and let M be a frame whose possible agents are Agent(C) and whose possible worlds are Agent(D). When C is a subagent of D, there exists (up to biextensional equivalence) a function from Agent(C)×Env(M) to Agent(D).
Just as we did in "Subagents of Cartesian Frames" §1.2 ([Currying Definition](https://www.lesswrong.com/posts/nwrkwTd6uKBesYYfx/subagents-of-cartesian-frames#1_2__Currying_Definition)), we can think of this function as a (possibly) nondeterministic function from Agent(C) to Agent(D), where Env(M) represents the nondeterminism. In the case of additive subagents, Env(M) is a singleton, meaning that the function from Agent(C) to Agent(D) is actually deterministic. In the case of multiplicative subagents, the (possibly) nondeterministic function is surjective.
Recall that in "Sub-Sums and Sub-Tensors" §3.3 ([Sub-Sums and Sub-Tensors Are Superagents](https://www.lesswrong.com/posts/LAHXvi4qwXogmdTHd/sub-sums-and-sub-tensors#3_3__Sub_Sums_and_Sub_Tensors_Are_Superagents)), we constructed a frame with a singleton environment to prove that sub-sums are superagents, and we constructed a frame with a surjective evaluation function to prove that sub-tensors are superagents. The currying definitions of ◃+ and ◃× show why this is the case.
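To make the currying operation itself concrete, here is a minimal sketch (assuming the finite-frame representation from the sketches above; the helper name `compose` is ours):

```python
def compose(D, M):
    # D = (B, F, star) is a frame over W; M = (X, Y, dot) is a frame over
    # B = Agent(D). Returns D∘(M) = (X, Y x F, ev) over W.
    B, F, star = D
    X, Y, dot = M
    envs = [(y, f) for y in Y for f in F]
    def ev(x, yf):
        y, f = yf
        return star(dot(x, y), f)  # first M picks an element of B, then D acts
    return (X, envs, ev)

# When |Y| = 1, Eval(M) is a deterministic map X -> B, and D∘(M) is an
# additive subagent of D; when Eval(M) is surjective onto B, D∘(M) is a
# multiplicative subagent of D.
```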
**1.5. Categorical Definitions**
We also have definitions based on the categorical definition of subagent. The categorical definition of additive subagent is almost just swapping the quantifiers from [our original categorical definition of subagent](https://www.lesswrong.com/posts/nwrkwTd6uKBesYYfx/subagents-of-cartesian-frames). However, we will also have to weaken the definition slightly in order to only require the morphisms to be homotopic.
**Definition:** We say C◃+D if there exists a single morphism ϕ0:C→D such that for every morphism ϕ:C→⊥ there exists a morphism ϕ1:D→⊥ such that ϕ is homotopic to ϕ1∘ϕ0.
**Claim:** This definition is equivalent to all the above definitions of ◃+.
**Proof:** We show equivalence to the committing definition.
First, let C=(A,E,⋅) and D=(B,F,∙) be Cartesian frames over W, and let (g0,h0):C→D be such that for all (g,h):C→⊥, there exists a (g′,h′):D→⊥ such that (g,h) is homotopic to (g′,h′)∘(g0,h0). Let ⊥=(W,{i},⋆).
Let Y=B, let Z=F, and let X={g0(a) | a∈A}. Let f:Y×Z→W be given by f(y,z)=y∙z. We already have D=(Y,Z,∙), and our goal is to show that C≃(X,Z,⋄), where ⋄ is given by x⋄z=f(x,z).
We construct (g1,h1):C→(X,Z,⋄) and (g2,h2):(X,Z,⋄)→C that compose to something homotopic to the identity in both orders.
We define g1:A→X by g1(a)=g0(a). g1 is surjective, and so has a right inverse. We let g2:X→A be any right inverse to g1, so g1(g2(x))=x for all x∈X. We let h1:Z→E be given by h1(z)=h0(z).
Defining h2:E→Z will be a bit more complicated. Given an e∈E, let (ge,he) be the morphism from C to ⊥, given by he(i)=e and ge(a)=a⋅e. Let (g′e,h′e):D→⊥ be such that (ge,he) is homotopic to (g′e,h′e)∘(g0,h0). We define h2 by h2(e)=h′e(i).
We trivially have that (g1,h1) is a morphism, since for all a∈A and z∈Z,
g1(a)⋄z=g0(a)∙z=a⋅h0(z)=a⋅h1(z).
To see that (g2,h2) is a morphism, consider x∈X and e∈E, and define (ge,he) and (g′e,h′e) as above. Then,
x⋄h2(e)=g1(g2(x))⋄h′e(i)=g′e(g0(g2(x)))⋆i=g2(x)⋅he(i)=g2(x)⋅e.
We trivially have that (g1,h1)∘(g2,h2) is homotopic to the identity, since g1∘g2 is the identity on X. To see that (g2,h2)∘(g1,h1) is homotopic to the identity on C, observe that for all a∈A and e∈E, defining (ge,he) and (g′e,h′e) as above,
g2(g1(a))⋅e=g1(a)⋄h2(e)=g0(a)⋄h′e(i)=g′e(g0(a))⋆i=a⋅he(i)=a⋅e.
Thus C≃(X,Z,⋄), and C◃+D according to the committing definition.
Conversely, let X, Y, and Z be arbitrary sets with X⊆Y, let f:Y×Z→W, and let C≃(X,Z,⋄) and D≃(Y,Z,∙), where ⋄ and ∙ are given by x⋄z=f(x,z) and y∙z=f(y,z).
Let (g1,h1):C→(X,Z,⋄) and (g2,h2):(X,Z,⋄)→C compose to something homotopic to the identity in both orders, and let (g3,h3):D→(Y,Z,∙) and (g4,h4):(Y,Z,∙)→D compose to something homotopic to the identity in both orders. Let (g0,h0):(X,Z,⋄)→(Y,Z,∙) be given by letting g0 be the embedding of X in Y and h0 the identity on Z. (g0,h0) is clearly a morphism.
We let ϕ=(g4,h4)∘(g0,h0)∘(g1,h1):C→D.
Given a (g,h):C→⊥, our goal is to construct a (g′,h′):D→⊥ such that (g,h) is homotopic to (g′,h′)∘ϕ.
Let ⊥=(W,{i},⋆), let C=(A,E,⋅0), and let D=(B,F,⋅1). Let h′:{i}→F be given by h′=h3∘h2∘h. Let g′:B→W be given by g′(b)=b⋅1h′(i). This is clearly a morphism, since for all b∈B and i∈{i},
g′(b)⋆i=g′(b)=b⋅1h′(i).
To see that (g,h) is homotopic to (g′,h′)∘(g4,h4)∘(g0,h0)∘(g1,h1), we just need to check that (g,h1∘h0∘h4∘h′):C→⊥ is a morphism. Or, equivalently, that (g,h1∘h4∘h3∘h2∘h):C→⊥ is a morphism, since h0 is the identity, and h′=h3∘h2∘h.
Indeed, for all a∈A and i∈{i},
g(a)⋆i=a⋅0h(i)=a⋅0h1(h2(h(i)))=g1(a)⋄h2(h(i))=g1(a)∙h2(h(i))=g1(a)∙h4(h3(h2(h(i))))=g1(a)⋄h4(h3(h2(h(i))))=a⋅0h1(h4(h3(h2(h(i))))).
Thus (g,h) is homotopic to (g′,h′)∘ϕ, completing the proof. □
**Definition:** We say C◃×D if for every morphism ϕ:C→⊥, there exist morphisms ϕ0:C→D and ϕ1:D→⊥ such that ϕ=ϕ1∘ϕ0, and for every morphism ψ:1→D, there exist morphisms ψ0:1→C and ψ1:C→D such that ψ=ψ1∘ψ0.
Before showing that this definition is equivalent to all of the above definitions, we will give one final definition of multiplicative subagent.
**1.6. Sub-Environment Definition**
First, we define the concept of a sub-environment, which is dual to the concept of a sub-agent.
**Definition:** We say C is a sub-environment of D, written C◃∗D, if D∗◃C∗.
We can similarly define additive and multiplicative sub-environments.
**Definition:** We say C is an additive sub-environment of D, written C◃∗+D, if D∗◃+C∗. We say C is a multiplicative sub-environment of D, written C◃∗×D, if D∗◃×C∗.
This definition of multiplicative sub-environment is redundant, because the multiplicative subagent and multiplicative sub-environment relations coincide, as shown below:
**Claim:** C◃×D if and only if C◃∗×D.
**Proof:** We prove this using the externalizing definition of ◃×.
If C◃×D, then for some X, Y, Z, and f:X×Y×Z→W, we have C≃(X,Y×Z,⋄) and D≃(X×Y,Z,∙), where ⋄ and ∙ are given by x⋄(y,z)=f(x,y,z) and (x,y)∙z=f(x,y,z).
Observe that D∗≃(Z,Y×X,⋅) and C∗≃(Z×Y,X,⋆), where ⋅ and ⋆ are given by z⋅(y,x)=f(x,y,z) and (z,y)⋆x=f(x,y,z). Taking X′=Z, Y′=Y, Z′=X, and f′(x,y,z)=f(z,y,x), this is exactly the externalizing definition of D∗◃×C∗, so C◃∗×D.
Conversely, if C◃∗×D, then D∗◃×C∗, so C≅(C∗)∗◃×(D∗)∗≅D. □
We now give the sub-environment definition of multiplicative subagent:
**Definition:** We say C◃×D if C◃D and C◃∗D. Equivalently, we say C◃×D if C◃D and D∗◃C∗.
**Claim:** This definition is equivalent to the categorical definition of ◃×.
**Proof:** The condition that for every morphism ϕ:C→⊥, there exist morphisms ϕ0:C→D and ϕ1:D→⊥ such that ϕ=ϕ1∘ϕ0, is exactly the categorical definition of C◃D.
The condition that for every morphism ψ:1→D, there exist morphisms ψ0:1→C and ψ1:C→D such that ψ=ψ1∘ψ0, is equivalent to saying that for every morphism ψ∗:D∗→⊥, there exist morphisms ψ∗0:C∗→⊥ and ψ∗1:D∗→C∗ such that ψ∗=ψ∗1∘ψ∗0. This is the categorical definition of D∗◃C∗. □
**Claim:** The categorical and sub-environment definitions of ◃× are equivalent to the other four definitions of multiplicative subagent above: sub-tensor, sister, externalizing, and currying.
**Proof:** We show equivalence between the externalizing and sub-environment definitions. First, assume that C=(A,E,⋅) and D=(B,F,⋆) are Cartesian frames over W with C◃D and C◃∗D.
We define X=A, Z=F, and Y=hom(C,D). We define p:X×Y×Z→W by
p(a,(g,h),f)=g(a)⋆f=a⋅h(f).
We want to show that C≃(X,Y×Z,⋄), and D≃(X×Y,Z,∙), where ⋄ and ∙ are given by x⋄(y,z)=(x,y)∙z=p(x,y,z).
To see C≃(X,Y×Z,⋄), we construct (g0,h0):C→(X,Y×Z,⋄) and (g1,h1):(X,Y×Z,⋄)→C that compose to something homotopic to the identity in both orders. Let g0 and g1 be the identity on X and let h0:Y×Z→E be defined by h0((g,h),f)=h(f). By the covering definition of subagent, h0 is surjective, and so has a right inverse. Let h1:E→Y×Z be any right inverse of h0, so h0(h1(e))=e for all e∈E.
We know (g0,h0) is a morphism, because for all a∈A and ((g,h),f)∈Y×Z,
g0(a)⋄((g,h),f)=a⋄((g,h),f)=p(a,(g,h),f)=a⋅h(f)=a⋅h0((g,h),f).
We know (g1,h1) is a morphism, since for x∈X and e∈E, if ((g,h),f)=h1(e),
g1(x)⋅e=x⋅h0((g,h),f)=x⋅h(f)=p(x,(g,h),f)=x⋄((g,h),f)=x⋄h1(e).
(g0,h0) and (g1,h1) clearly compose to something homotopic to the identity in both orders, since g0∘g1 and g1∘g0 are the identity on X.
To see D≃(X×Y,Z,∙), we construct (g2,h2):D→(X×Y,Z,∙) and (g3,h3):(X×Y,Z,∙)→D that compose to something homotopic to the identity in both orders. Let h2 and h3 be the identity on Z and let g3:X×Y→B be defined by g3(a,(g,h))=g(a). By the covering definition of subagent and the fact that D∗◃C∗, g3 is surjective, and so has a right inverse. Let g2:B→X×Y be any right inverse of g3, so g3(g2(b))=b for all b∈B.
We know (g3,h3) is a morphism, because for all f∈F and (a,(g,h))∈X×Y,
g3(a,(g,h))⋆f=g(a)⋆f=p(a,(g,h),f)=(a,(g,h))∙f=(a,(g,h))∙h3(f).
We know (g2,h2) is a morphism, since for z∈Z and b∈B, if (a,(g,h))=g2(b),
g2(b)∙z=(a,(g,h))∙z=p(a,(g,h),z)=g(a)⋆z=g3(a,(g,h))⋆z=b⋆h2(z).
Observe that (g2,h2) and (g3,h3) clearly compose to something homotopic to the identity in both orders, since h2∘h3 and h3∘h2 are the identity on Z.
Thus, C≃(X,Y×Z,⋄), and D≃(X×Y,Z,∙).
Conversely, if C◃×D according to the externalizing definition, then we also have D∗◃×C∗. However, by the currying definitions of multiplicative subagent and of subagent, multiplicative subagent is stronger than subagent, so C◃D and D∗◃C∗. □
2. Basic Properties
-------------------
Now that we have enough definitions of additive and multiplicative subagent, we can cover some basic properties.
First: Additive and multiplicative subagents are subagents.
**Claim:** If C◃+D, then C◃D. Similarly, if C◃×D, then C◃D.
**Proof:** Clear from the currying definitions. □
Additive and multiplicative subagent are also well-defined up to biextensional equivalence.
**Claim:** If C◃+D, C′≃C, and D′≃D, then C′◃+D′. Similarly, if C◃×D, C′≃C, and D′≃D, then C′◃×D′.
**Proof:** Clear from the committing and externalizing definitions. □
**Claim:** Both ◃+ and ◃× are reflexive and transitive.
**Proof:** Reflexivity is clear from the categorical definitions. Transitivity of ◃× is clear from the transitivity of ◃ and the sub-environment definition. Transitivity of ◃+ can be seen using the categorical definition, by composing the morphisms and using the fact that being homotopic is preserved by composition. □
3. Decomposition Theorems
-------------------------
We have two decomposition theorems involving additive and multiplicative subagents.
**3.1. First Decomposition Theorem**
**Theorem:** C0◃C1 if and only if there exists a C2 such that C0◃×C2◃+C1.
**Proof:** We will use the currying definitions of subagent and multiplicative subagent, and the committing definition of additive subagent. Let C0=(A0,E0,⋅0) and C1=(A1,E1,⋅1). If C0◃C1, there exists some Cartesian frame D over A1 such that C0≃C1∘(D).
Let C2=(Image(D),E1,⋅2), where ⋅2 is given by a⋅2e=a⋅1e. C2 is created by deleting some possible agents from C1, so by the committing definition of additive subagent, C2◃+C1.
Also, if we let D′ be the Cartesian frame over Image(D) which is identical to D, but on a restricted codomain, then we clearly have that C1∘(D)≅C2∘(D′). Thus C0≃C2∘(D′) and Image(D′)=Agent(C2), so C0◃×C2.
The converse is trivial, since subagent is weaker than additive and multiplicative subagent and is transitive. □
Imagine that a group of kids, Alice, Bob, Carol, etc., is deciding whether to start a game of baseball or football against another group. If they choose baseball, they form a team represented by the frame CB, while if they choose football, they form a team represented by the frame CF. We can model this by imagining that C0 is the group's initial state, and CB and CF are precommitment-style subagents of C0.
Suppose the group chooses football. CF's choices are a function of Alice-the-football-player's choices, Bob-the-football-player's choices, etc. (Importantly, Alice here has different options and a different environment than if the original group had chosen baseball. So we will need to represent Alice-the-football-player, CAF, with a different frame than Alice-the-baseball-player, CAB; and likewise for Bob and the other team members.)
It is easy to see in this case that the relationship between Alice-the-football-player's frame (CAF) and the entire group's initial frame (C0) can be decomposed into the additive relationship between C0 and CF and the multiplicative relationship between CF and CAF, in that order.
The first decomposition theorem tells us that *every* subagent relation, even ones that don't seem to involve a combination of "making a commitment" and "being a team," can be decomposed into a combination of those two things. I've provided an example above where this factorization feels natural, but other cases may be less natural.
Using the framing from our discussion of the currying definitions: this decomposition is always possible because we can always decompose a possibly-nondeterministic function f into (1) a possibly-nondeterministic surjective function onto f's image, and (2) a deterministic function embedding f's image in f's codomain.
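Concretely (a sketch under the same conventions as above, modeling a possibly-nondeterministic function as a relation):

```python
def factor(relation, codomain):
    # Split a relation ⊆ A x B into (1) a possibly-nondeterministic surjection
    # onto its image and (2) a deterministic embedding of the image into B.
    image = {b for _, b in relation}
    assert image <= codomain
    surjection = relation              # same pairs, now viewed as onto `image`
    embedding = {b: b for b in image}  # the deterministic inclusion map
    return surjection, embedding

# A nondeterministic map from {1, 2} into {"a", "b", "c"}: 1 can go to "a"
# or to "b", and nothing reaches "c".
surj, emb = factor({(1, "a"), (1, "b"), (2, "b")}, {"a", "b", "c"})
print(sorted(emb))  # ['a', 'b'] -- the image, embedded in the codomain
```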
**3.2. Second Decomposition Theorem**
**Theorem:** There exists a morphism from C0 to C1 if and only if there exists a C2 such that C0◃∗+C2◃+C1.
**Proof:** First, let C0=(A,E,⋅), let C1=(B,F,⋆), and let (g,h):C0→C1. We let C2=(A,F,⋄), where a⋄f=g(a)⋆f=a⋅h(f).
First, we show C2◃+C1. To do this, we let B′⊆B be the image of g, and let C′2=(B′,F,⋆′), where ⋆′ is given by b⋆′f=b⋆f. By the committing definition of additive subagent, it suffices to show that C′2≃C2.
We define (g0,h0):C2→C′2 and (g1,h1):C′2→C2 as follows. We let h0 and h1 be the identity on F. We let g0:A→B′ be given by g0(a)=g(a). Observe that g0 is surjective, and thus has a right inverse. Let g1 be any right inverse to g0, so g0(g1(b))=b for all b∈B′.
We know (g0,h0) is a morphism, since for all a∈A and f∈F, we have
g0(a)⋆′f=g(a)⋆f=a⋄f=a⋄h0(f).
Similarly, we know (g1,h1) is a morphism, since for all b∈B′ and f∈F, we have
g1(b)⋄f=g1(b)⋄h0(f)=g0(g1(b))⋆′f=b⋆′f=b⋆′h1(f).
Clearly, (g0,h0)∘(g1,h1) and (g1,h1)∘(g0,h0) are homotopic to the identity, since h0∘h1 and h1∘h0 are the identity on F. Thus, C′2≃C2.
The fact that C0◃∗+C2, or equivalently C∗2◃+C∗0, follows by symmetry, since the relationship between C∗2 and C∗0 is the same as the relationship between C2 and C1.
Conversely, if C2◃+C1, there is a morphism from C2 to C1 by the categorical definition of additive subagent. Similarly, if C0◃∗+C2, then C∗2◃+C∗0, so there is a morphism from C∗2 to C∗0, and thus a morphism from C0 to C2. These compose to a morphism from C0 to C1. □
When we introduced morphisms and [described them as "interfaces,"](https://www.lesswrong.com/posts/ewkYgtZapQRtDPT2F/additive-operations-on-cartesian-frames#1_1__Morphisms_as_Interfaces) we noted that every morphism (g,h):C0→C1 implies the existence of an intermediate frame C2 that represents Agent(C0) interacting with Env(C1). The second decomposition theorem formalizes this claim, and also notes that this intermediate frame is a super-environment of C0 and a subagent of C1.
In our next post, we will provide several methods for constructing additive and multiplicative subagents: "Committing, Assuming, Externalizing, and Internalizing." |
0b5c7731-aace-4a86-a1be-a6a81a7b8c95 | trentmkelly/LessWrong-43k | LessWrong | Bridging the Intention-Action Gap (aka Akrasia)
|
50a0e6fb-195f-4f08-a82f-8224084bcc43 | trentmkelly/LessWrong-43k | LessWrong | Why small phenomenons are relevant to morality
Follow-up to this post
One thing that seems important to me is that optionality implies a sort of high-level game theory:
It is good that we are in a complex environment with many phenomena, even when it seems that those phenomena aren't directly relevant (to prevent threats from our blind spots).
This is a precautionary principle.
"What is apparently meaningless now can be meaningful in a future space-time context"
The ethical system needs to be formalized in a way that procedurally includes all phenomena
So that it would scale and fit with the events and variables of our complex reality
A proof of the importance of phenomenal diversity is, e.g., that being in a universe with many solar systems is good because it enhances the possibility of life.
So diversity of phenomena is in itself an ethical subgoal,
Blooming the phenomena that allow the most diversity of other phenomena is good
(But we need to define complexity/diversity; I'll expand on that at the end)
Although those far-away/small phenomena should not be the main concern, they should always be part of the equation
The issue is not simply to make AI care about what we want, but about what "we would like to want if we had perfect knowledge about both our ideal values, and the consequences of our actions"
cf. Coherent Extrapolated Volition
One reason I'm talking about small/far phenomena is to foresee the possibility of rogue AI
If the fundamental layer is about phenomena diversification, ecosystemic value etc.
-> Even a rogue AI can't maximize paperclips
Because the procedure would inherently maximize "diversity" instead
If you need to define something for a superhuman AI, you have to provide a definition that takes everything into account,
You have to really define optionality in the least underfitted formula possible (an underfitted definition could be interpreted in unforeseen ways)
An AI that has just [human optionality] in its definition (or in its limi |
8fca5baa-7d22-4d46-a9e9-e1d14a8107c3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A Model-based Approach to AI Existential Risk
Introduction
============
Polarisation hampers cooperation and progress towards understanding whether future AI poses an existential risk to humanity and how to reduce the risks of catastrophic outcomes. It is exceptionally challenging to pin down what these risks are and what decisions are best. We believe that a *model-based approach* offers many advantages for improving our understanding of risks from AI, estimating the value of mitigation policies, and fostering communication between people on different sides of AI risk arguments. We also believe that a large percentage of practitioners in the AI safety and alignment communities have appropriate skill sets to successfully use model-based approaches.
In this article, we will lead you through an example application of a model-based approach for the risk of an existential catastrophe from unaligned AI: a probabilistic model based on Carlsmith’s [*Is Power-seeking AI an Existential Risk?*](https://joecarlsmith.com/2023/03/22/existential-risk-from-power-seeking-ai-shorter-version) You will interact with our model, explore your own assumptions, and (we hope) develop your own ideas for how this type of approach might be relevant in your own work. You can find a link to the model here.
[**Click here to run our Model**](https://acp.analytica.com/view?invite=4560&code=3000289064591444815)
In many poorly understood areas, people gravitate to advocacy positions. We see this with AI risk, where it is common to see writers [dismissively call someone an “AI doomer”, or “AI accelerationist”.](https://www.lesswrong.com/posts/BTcEzXYoDrWzkLLrQ/the-public-debate-about-ai-is-confusing-for-the-general) People on each side of this debate are unable to communicate their ideas to the other side, since advocacy often includes biases and evidence interpreted within a framework not shared by the other side.
In other domains, we have witnessed first-hand that model-based approaches are a constructive way to cut through advocacy like this. For example, by leveraging a model-based approach, the [Rigs-to-Reefs project](https://lumina.com/case-studies/energy-and-power/a-win-win-solution-for-californias-offshore-oil-rigs/) reached near consensus among 22 diverse organisations on the contentious problem of how to decommission the huge oil platforms off the Santa Barbara coast. For decades, environmental groups, oil companies, marine biologists, commercial and recreational fishermen, shipping interests, legal defence funds, the State of California, and federal agencies were stuck in an impasse on this issue. The introduction of a model refocused the dialog on specific assumptions, objectives and options, and led to 20 out of the 22 organisations agreeing on the same plan. The California legislature encoded this plan into law with bill AB 2503, which passed almost unanimously.
There is a lot of uncertainty around existential risks from AI, and the stakes are extremely high. In situations like this, we advocate quantifying uncertainty explicitly using probability distributions. Sadly, this is not as common as it should be, even in domains where such techniques would be most useful.
A recent paper on the risks of unaligned AI by [Joe Carlsmith](https://arxiv.org/abs/2206.13353) (2022) is a powerful illustration of how probabilistic methods can help assess whether advanced AI poses an existential risk to humanity. In this article, we review Carlsmith’s argument and incorporate his problem decomposition into our own [Analytica](https://lumina.com/why-analytica/what-is-analytica/) model. We then expand on this starting point in several ways to demonstrate elementary ways to approach each of the distinctive challenges in the x-risk domain. We take you on a tour of the live model to learn about its elements and enable you to dive deeper on your own.
Challenges
----------
Predicting the long-term future is always challenging. The difficulty is amplified when there is [no historical precedent](https://www.youtube.com/watch?v=87l9Az9msHU&t=3108s). But this challenge is not unique; we lack historical precedent in many other areas, for example when considering a novel government program or a fundamentally new business initiative. We also lack precedent when world conditions change due to changes in technology, climate, the competitive landscape, or regulation. The difficulty is great in all these cases, but pales in comparison to the challenge of forecasting artificial general intelligence (AGI) and existential risk. Predictions about AI existential risk today generally rely at least in part on abstract arguments about how future advanced AI will behave, which we can’t test today ([though efforts are being made to change this](https://www.lesswrong.com/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1)). Even the most well-crafted arguments are often met with justified uncertainty and scepticism.
For instance, when assessing the reliability of a prediction about AI existential risk, it is common to encounter objections such as, "I can find no specific flaws in the predictions. They're just a bit abstract and a bit conjunctive, and arguments in that class are fairly often wrong in unexpected ways."
As one example, the recent superforecaster elicitation on AI risk appeared to reveal that this general scepticism is a factor in the [persistent disagreements](https://astralcodexten.substack.com/p/the-extinction-tournament) on AI risk between superforecasters and the AI safety community. This disagreement on AI risk persisted despite discussion between the two groups and even though the superforecasters agreed with the domain experts on [many quantitative predictions about future AI](https://www.lesswrong.com/posts/BHdEvjtfwpgrTh825/ai-21-the-cup-overfloweth?commentId=dpzuaGHyiNcnr4jF2), suggesting a more diffuse scepticism of AI risk arguments. Such objections should be taken seriously and assessed both on their own merits, and in light of how similar objections have fared in other domains in the past. This highlights the importance of evaluating not only the content of a prediction, but also the underlying assumptions and reasoning behind it.
Numerous arguments have already been proposed in the AI risk community for why certain outcomes are likely. When you set out to build an explicit model of AI existential risk, it would be negligent not to incorporate well-considered ideas from other smart, dedicated people. However, it is really tough to merge multiple ideas into a single coherent model, and by some counts there are as many as [five partially overlapping worldviews/research agendas](https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=C3sb2QZQAeHLZmz2T), each focussed on different threat models. Different arguments often build upon mutually incongruous conceptual frameworks. It also doesn’t work to simply tally how many arguments exist for a position, since there is almost always substantial overlap in the underlying assumptions. Additionally, it seems pretty much impossible to merge an [inside view with an outside view](https://www.lesswrong.com/posts/Cd6uMn4qHXZcoe2Lh/discussion-weighting-inside-view-versus-outside-view-on) argument in any deep way. Though tough, incorporating existing expert knowledge (and opinion) is essential for effective model-based approaches. We think that AI existential risk modelling has unique aspects when it comes to incorporating multiple sources of expert knowledge and thus is a ripe area for further research on new approaches and techniques. We have incorporated simple approaches to all of the challenges named in this paragraph into our model.
Subjective probability is the conventional tool for representing uncertainty, and in general it is an excellent tool for this. Model-based approaches rely on subjective assessments of uncertain variables. In the AI existential risk domain, when you ask two experts to assess the same subjective probability, it is common for their estimates to be dramatically different (e.g. for one to say 15% where the other says 80%). This is not normal in other domains. Although you may find an instance when two meteorologists respectively predict a 15% and an 80% chance of rain, this is uncommon.
This is a symptom of the difficulties already discussed above, and introduces yet another distinctive feature of this domain. Because of this extreme variation between experts, and the fact that people's estimates are poorly calibrated, there seems to be a need to capture an extra layer of confidence. We elaborate on this in the section 'Meta-Uncertainty' later in this article, and we include an explicit second order distribution in our model (i.e., the second order distribution represents the variation among expert opinion, whereas the first order uncertainty represents the uncertainty in the outcome).
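To illustrate what an explicit second-order layer can look like, here is a minimal Monte Carlo sketch in Python (our own toy example: the six-stage product echoes Carlsmith's decomposition, but the Beta parameters below are illustrative placeholders, not values from our model or his paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# First-order uncertainty: each stage is a conditional probability.
# Second-order uncertainty: we are unsure of the stage probabilities
# themselves, so we draw them from Beta distributions whose spread stands in
# for disagreement among experts.
stage_priors = [(8, 2), (6, 4), (7, 3), (5, 5), (4, 6), (3, 7)]  # (alpha, beta)

samples = np.ones(N)
for a, b in stage_priors:
    samples *= rng.beta(a, b, size=N)  # sample each stage's probability

# The output is a distribution over the headline risk number, not a point value.
print(f"median risk: {np.median(samples):.3f}")
print(f"90% interval: ({np.quantile(samples, 0.05):.4f}, "
      f"{np.quantile(samples, 0.95):.3f})")
```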
Our work described in this article was performed as part of the MTAIR project (Modelling Transformative AI Risk), building on the [initial MTAIR conceptual model](https://www.lesswrong.com/s/aERZoriyHfCqvWkzg). We aim to evaluate multiple, sometimes fundamentally conflicting, detailed models of AGI existential risk as well as these outside view/reliability considerations. We treat them as competing 'experts' in order to arrive at a well-informed and balanced assessment. You can play with our interactive model to input your own assessments and explore the implications.
What makes a model effective?
-----------------------------
* **Transparency**: To be effective, a model-based approach should provide a model that other people can browse and understand. We call a model *transparent* when a typical user who is knowledgeable about the subject matter is able to understand what the model is doing, how it is doing it, and what assumptions are going into the calculations. You should never assume that a subject matter expert is a programmer, or that Python code (or any other programming language) speaks for itself. Hence, conventional programs are generally considered to be non-transparent.
* **Interactivity**: A second important attribute is *interactivity*, and the ability for a stakeholder to experiment with different assumptions, explore ramifications of different decisions or policies, and explore arbitrary what-if scenarios.
* **Explicit uncertainty**: For AI existential risk, much of the action is in the tail of the uncertainty (i.e., simply concluding that the median outcome is human survival misses the point); hence, an *explicit representation of uncertainty* is important.
We built our model in the [Analytica visual modelling software](https://analytica.com/), which strongly meets all of the above requirements, and is fun to use. Analytica models are structured as hierarchical influence diagrams, an easy-to-understand representation that visually captures the essence of how the model works. It is interactive and has embedded modular documentation. There is a powerful multidimensional intelligent array facility that provides unprecedented flexibility. And it has explicit representations of uncertainty using probability distributions. The propagation of uncertainty to downstream computed results happens automatically. It is easy and quick to learn, and once you’ve built your model, you can publish it to the web to share (as we have done for this article).
If you feel inspired by our example to build your own model(s), you should know that there is [a free edition of Analytica](https://analytica.com/products/free101/). Commercial editions are also available when you need to scale up to really large models. The desktop editions require Microsoft Windows. You don’t need to get or install anything (other than a browser – Chrome or Edge) to use our model, which is shared on the Analytica Cloud Platform (ACP). Our model has roughly 150 objects, slightly exceeding the maximum size of 101 objects for the free edition. But if you are interested in downloading it to desktop Analytica, the free edition allows you to load it, view it, run it, change inputs and re-evaluate results, etc.
In summary, model-based approaches to assessing the reliability of predictions about AI existential risk can bring several benefits to the AI safety community. First and foremost, they provide a clear, concise, and legible output that takes into account the many different objections and factors that may impact a prediction's accuracy. This helps ensure that the AI safety community understands the reasoning and evidence behind the prediction, and can make informed decisions based on that information.
Additionally, this model-based approach encourages the community to consider a wider range of factors, beyond just the detailed arguments themselves. For example, they might consider how much they trust high-level abstractions and how reliable different heuristics are. By incorporating these considerations into the model, the community can more effectively weigh the risks associated with AI and develop more robust strategies for mitigating potential harm. Finally, this approach can help to improve the community's epistemics by promoting more rigorous thinking and more comprehensive examination of all relevant factors, which can lead to a better understanding of the nature and likelihood of AI existential risk.
As a starting point, we will focus on a single detailed model based on the Joe Carlsmith report, ['Is Power-Seeking AI an Existential Risk?'](https://arxiv.org/abs/2206.13353), along with several outside view/reliability heuristics that affect the plausibility of this one mechanistic model. We will first briefly introduce Carlsmith’s presentation of AI existential risk with some improvements of our own, then at the end discuss the next steps to improve upon this model.
Model overview
==============
[**Click here to run our Model**](https://acp.analytica.com/view?invite=4560&code=3000289064591444815)
This is a hierarchical model running in Analytica Cloud Platform (ACP) based on Joe Carlsmith's report, 'Is Power-Seeking AI an Existential Risk?'. It allows you to compute the probability of an existential catastrophe caused by misaligned AI.
The conclusions are implicitly conditioned on some timeframe, which we have made explicit, as well as on various assumptions. The de facto time frame is “by 2070”, but when entering your own estimates you can adopt a different time frame without requiring a change to the model’s logic.
In short, the model predicts that misaligned power-seeking AI will cause an existential catastrophe if (each step being conditional on those before it; the calculation is sketched in code after the list):
1. Advanced, Planning, [Strategically Aware](https://www.planned-obsolescence.org/situational-awareness/) (APS) systems - i.e., AIs with advanced (human-level or superhuman) capabilities that engage in sophisticated planning and are strategically aware - are feasible to build,
2. There will be strong incentives for APS systems to be built when they are feasible,
3. It will be much harder to build APS systems that do not seek power in misaligned ways than to build superficially useful APS systems that do seek power in misaligned ways,
    * Despite (3), misaligned APS systems will in fact be built and deployed,
4. Misaligned APS systems will be capable of causing a large global catastrophe upon deployment,
5. The human response to misaligned APS systems causing such a catastrophe will not be sufficient to prevent it from taking over completely,
6. Having taken over, the misaligned APS system will destroy or severely curtail the potential of humanity.
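To make the arithmetic concrete, here is a minimal Python sketch of this top-level calculation. Since each step is conditional on the steps before it, the running product of the six estimates is the joint probability. All numbers are hypothetical placeholders, not the model’s defaults:

```python
# Minimal sketch of the top-level calculation. Each probability is
# conditional on all preceding steps, so the running product is the joint
# probability of everything so far. All numbers are hypothetical.
steps = {
    "APS systems are feasible to build":        0.65,
    "Strong incentives to build them":          0.80,
    "Misalignment much easier than alignment":  0.40,
    "Misaligned APS systems deployed":          0.55,
    "Human response fails to stop takeover":    0.40,
    "Takeover destroys humanity's potential":   0.95,
}

p = 1.0
for step, conditional_probability in steps.items():
    p *= conditional_probability
    print(f"P(up to '{step}') = {p:.4f}")

print(f"\nP(existential catastrophe) = {p:.2%}")
```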
The overall framework for our model was based on the argument for AI existential risk provided in the Carlsmith report and subsequent [80,000 Hours article](https://80000hours.org/problem-profiles/artificial-intelligence), with modifications. This is our ‘top level model’, around which we base our high-level analysis of AI existential risk.
Model tour
==========

In this section you will take a quick tour of our model, running it live in a browser window. To start, please click [Launch the model](https://acp.analytica.com/view?invite=4560&code=3000289064591444815) to open it in a different browser tab or window so you can refer to this page at the same time. We provide step-by-step instructions to get you started. Follow the tour to get your bearings; afterwards you can explore the rest of the model on your own and see what happens with different estimates. We recommend running the model on a large monitor, not a mobile device.
Basic assessments
-----------------
On the first page you’ll see six probability assessments from the Carlsmith report. (Note that the screenshots in this article are static, but they are active in the browser window where you are running the model).

Here you can adjust the sliders or type your own estimates for each one. To understand what each means, just hover over the question and read the description that pops up.

Before you estimate these, you should pick a time frame. For example, you can estimate whether each is true before the year 2070. The calculations depend on your estimates, not on the time frame chosen, but your estimates would be expected to change (increase) with longer-term time frames.
Below the slider inputs are some computed results showing the probability that each of the 5 stages, along with all preceding stages, ends up being true. The last one, “existential catastrophe”, shows the probability of an existential catastrophe from an APS system given your estimates for each of the six propositions.

In this screenshot we see a 0.37% chance (less than one half of one percent) that an APS will cause an existential catastrophe such as the extinction of the human race. That may appear to be a huge risk given how extreme the outcome is, yet many people who specialise in AI safety would consider this to be ultra-optimistic. How do your estimates compare?
Experts weigh in
----------------
How do your estimates compare to other AI safety researchers’? Following Carlsmith’s report, Open Philanthropy [solicited reviews](https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk) from other AI safety researchers, and asked them to provide their own estimates for these propositions. These reviews occurred in Aug 2022.
First, you can browse their raw assessments for each proposition by pressing the button for Reviewer assessments, which appears at the bottom right of the page (you may need to scroll right). The table appears at the top right of the page. Notice the dramatic variation from reviewer to reviewer.
Click the choice pulldown for “Select median assessment to use”.

Select all items so it now appears as 
The Existential catastrophe output now shows a button. Press it. A result table appears at the upper right.

These show the probability of existential catastrophe caused by APS implied by the estimates, both from your own inputs, as well as from the reviewers. The median among the reviewers is 9.15%, but the number varies dramatically between reviewers. Null appears in a few cases where the reviewers were reluctant to accept Carlsmith’s proposed decomposition. Next, let’s display this as a bar chart. Hover over the top of the table area to access the graph button, then press it.


Hover over the top of the graph again and change the view back to the table view. When viewing a result, you can toggle in this way between graph and table views.

The variation in expert opinion
-------------------------------
The tremendous variation in expert opinion presents a serious challenge for rational decision making in this area. It would be hard to argue that an expected utility based on any single aggregate of these probabilities is credible. Because of this, we fit a probability distribution to the variation in expert opinion. Because this is a distribution over subjective probabilities, it is actually *a second-order probability distribution*, which we call *meta-uncertainty*. We devote a [section](https://docs.google.com/document/d/1bj8SLbhqL8VhQaIPNlFYpLEX5S5uO2rYlb_6KalhDEU/edit#heading=h.2klzxyklefio) to the topic of meta-uncertainty, its motivation and its interpretation, but for now let’s visualise this meta-uncertainty.
Change *Select median assessment to use* to **Median of all reviewers**, and select the **Reviewer’s spread** option in the choice pulldown for *Select meta uncertainty to include*.

The outputs now display as buttons. Hover over the Existential catastrophe output and right-click it. Select **Exceedance probability** from the context menu.

In the frame node, switch back to graph view.

An exceedance probability plot is one way to visualise a probability distribution. The distribution in this case reflects the variation across expert opinion. The underlying quantity (the x-axis) is the probability that an existential catastrophe such as human extinction from an APS system occurs. Following the green arrow, you can read off that about 10% of experts feel the probability of an existential catastrophe exceeds 0.17 (i.e., 17%), and following the yellow arrow about 5% feel it exceeds 0.27.
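An exceedance value can also be read directly off a set of samples: it is simply one minus the cumulative probability. The short sketch below illustrates this; the Beta distribution is only a stand-in placeholder for the model’s actual fitted distribution over expert opinion:

```python
import numpy as np

# An exceedance probability is 1 - CDF: the fraction of the distribution
# lying above a threshold. The Beta distribution here is only a placeholder
# stand-in for the fitted distribution over expert opinion.
samples = np.random.default_rng(0).beta(0.5, 8.0, size=100_000)

for threshold in [0.17, 0.27]:
    exceedance = (samples > threshold).mean()
    print(f"P(estimate > {threshold}) = {exceedance:.2f}")
```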
To obtain this second-order distribution, the model treated the collection of expert assessments for each question as if it were sampled from an underlying distribution, and then “fit” a probability distribution to those points. The technical details of this fit are covered in the later section 'Meta-Uncertainty'. That section also explores how our perspective changes when the meta-uncertainty (i.e., the amount of variation among expert opinion) increases or decreases.
Combining inside and outside view arguments
-------------------------------------------
The Carlsmith decomposition is an example of an [inside view framing](https://www.lesswrong.com/tag/inside-outside-view) in that it breaks down the main question of interest into its component factors, steps or causal mechanisms at play. In contrast, an [outside view framing](https://www.lesswrong.com/tag/inside-outside-view) draws parallels from similar events or reference classes to provide context and predictions. For example, the [*second species argument*](https://forum.effectivealtruism.org/posts/MMtbCDTNP3M53N3Dc/agi-safety-from-first-principles#AGI%20safety%20from%20first%20principles) posits that humanity may lose our standing as the most powerful species on Earth. Other outside view framings include Holden Karnofsky’s [*Most Important Century*](https://www.cold-takes.com/most-important-century), Ajeya Cotra’s [bio-anchors](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) (an outside view for one subproblem, timelines, of the larger question), analogies to past transformational technological advancements, and even [expert opinion surveys](https://forum.effectivealtruism.org/posts/mjB9osLTJJM4zKhoq/2022-ai-expert-survey-results).
Different insights emerge from each type of framing, but because inside- and outside-view framings approach the assessment so differently, assimilating both into a consistent view is quite challenging. But we believe model-based approaches need to address this so as to incorporate information coming from all sources.
We include two simplistic outside-view approaches (discussed in detail in a later section), reflected by these inputs:

Hover the mouse over each input for a full description of what you are estimating. These require you to think abstractly about several high-level outside-view considerations and arguments, and then assess how much bearing these considerations have on the risk of existential catastrophe. Cr here means *credence*. Similar to the concept of likelihood in statistics (some might say synonymous), credence is an estimate on a scale from 0 to 1 where 0 means the considerations imply no risk and 1 means the considerations imply certain catastrophe.
You have now entered your own estimates for the Carlsmith “world model”, as well as for outside-view credences. Our key question is: how can a model assimilate these into a single subjective viewpoint? Our goal is to highlight this challenge and take at least one stab at addressing it. Perhaps you or others who continue with future model-based approaches will improve on our approach.
In this model, we’ve allowed you to assign relative weights to the different views. Click the Table button for “Weights to place on different opinions”. Hover over the input for a description of what you are being asked to assess. The credence is a rating of how much you think these outside-view arguments, by themselves, support the proposition.

An entry table appears in the frame at the top with sliders that you can use to change the relative weights. You can adjust these to reflect your own opinions regarding the relative credibilities.

The first section allows you to enter the relative importance you place on the Carlsmith decomposition compared to outside view arguments. Here we have fixed Outside view to 1, so that a value of 3 for the (Carlsmith-based) world model means you want that framing to count three times more than the outside view arguments.
Within the world model, you have your own estimates as well as the estimates from the various experts who were surveyed. You have the option of placing more or less weight on the estimates of individual experts.
Finally in the lower part you can adjust the weights on two different outside-view framings. These are used to combine the different outside-view arguments.
Having set your own weightings, the outputs in the right column display the assimilated views.

The first output, Cr[Existential Catastrophe|World Model], is the assessment from the Carlsmith decomposition after taking into account your relative weightings between your own estimates and those of the experts.
The second output, Cr[AI Existential Catastrophe], is the probability of an existential catastrophe from the combined outside-view models.
The final output, Cr[Existential catastrophe], is the overall assimilated estimate for existential catastrophe. It takes into account both the inside-view world model and the outside-view models, combining the information from both sources into a representative final assessment.
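We have not reproduced the model’s exact aggregation formula here. As a simple illustration, the sketch below assumes a linear opinion pool (a weighted average of probabilities); all weights and credences are hypothetical:

```python
# A minimal sketch of combining views via a linear opinion pool (a weighted
# average of probabilities). Whether our model uses exactly this rule is an
# assumption here; all numbers are hypothetical.

def pool(estimates, weights):
    """Weighted average of probability estimates."""
    return sum(p * w for p, w in zip(estimates, weights)) / sum(weights)

# Step 1: combine your own world-model estimate with the experts'.
cr_world_model = pool(estimates=[0.004, 0.09], weights=[1.0, 2.0])

# Step 2: combine the two outside-view framings.
cr_outside = pool(estimates=[0.05, 0.15], weights=[1.0, 1.0])

# Step 3: combine inside and outside views; outside-view weight fixed to 1.
cr_final = pool(estimates=[cr_world_model, cr_outside], weights=[3.0, 1.0])
print(f"Cr[Existential catastrophe] = {cr_final:.3f}")
```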
Exploring the model’s internals
-------------------------------
Thus far you have played with some selected inputs and outputs that we’ve highlighted for you. Next, you’ll explore the model’s internals.
At the top is a large blue module node, **Main Model**. Click on it. This takes you into the implementation, where you are met with several sub-modules and an [influence diagram](https://lumina.com/technology/influence-diagrams/).

In this first diagram, the top half comprises the inside-view world model based on the Carlsmith report. The bottom left quarter contains the outside-view arguments. The bottom right quarter is the logic used to assimilate the different views.
The nodes of the influence diagram are variables. The arrows depict influences between variables. Influence diagrams are visual, and you can often understand how the model works from this, without looking at the details of calculations. Hover over nodes to see their descriptions for additional information about what each variable represents.
In the outside view section, some undefined nodes (which are hashed) are used just to document the considerations that feed into the estimates. Dashed arrows indicate that these are not influences used by the calculation, but should influence your thinking.
After you click on a node, notice the tabs at the top.

The **Object** tab is perhaps the most useful, since it allows you to see the Definition (and other attributes) of the variable you clicked on. When you are done looking at this variable, the **Diagram** tab returns you to the diagram.
Now that you’ve completed this quick tour, you should be comfortable exploring all aspects of the model. Next, we’ll dive deeper into the content and concepts that we incorporated into the model.
Model features
==============
In adapting the Carlsmith report's model of AI existential risk for use in Analytica, we have made several changes from the original calculation, which simply multiplied the conditional probabilities of propositions 1-6 to obtain an overall estimate of existential risk from misaligned AI.
To better capture the full range of uncertainty surrounding the issue, we have handled “meta-uncertainty” by changing each point estimate into a distribution whose variance depends on how confident we are in each probability estimate, as [described in the previous section](https://docs.google.com/document/d/1bj8SLbhqL8VhQaIPNlFYpLEX5S5uO2rYlb_6KalhDEU/edit#heading=h.wtvwqwsb1xzp).
Meta-uncertainty refers to the uncertainty that arises from our uncertainty about more general factors that influence our beliefs or opinions. These factors could include questions such as how much weight we should give to inside versus outside views, and how reliable long-term forecasts are.
Meta-uncertainty is distinct from more straightforward types of uncertainty because it focuses on our uncertainty about the assumptions and factors that underlie our assessments of risk. It is essentially a second-order uncertainty, where we are uncertain about the factors that drive our first-order uncertainty.
We have produced these meta-uncertainty distributions by fitting a [logit-normal distribution](https://www.wikiwand.com/en/Logit-normal_distribution) to the spread of individual point estimates given by each of the original reviewers of Joe Carlsmith’s report. This methodology is similar to that used in this article on [Dissolving AI Risk](https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future#3_2_Model_Parameterisation).
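As a rough illustration of this fitting step, the sketch below takes hypothetical reviewer estimates for one proposition, moves them into logit space, fits a Normal by moment matching, and samples the resulting logit-normal. (The model itself matches the median of the selected reviewers and the logit-space variance; this sketch only approximates that procedure.)

```python
import numpy as np

# Fit a logit-normal to hypothetical reviewer point estimates for one
# proposition: take logits, fit a Normal by moment matching, then sample.
reviewer_estimates = np.array([0.05, 0.10, 0.15, 0.35, 0.65, 0.80])

logits = np.log(reviewer_estimates / (1 - reviewer_estimates))
mu, sigma = logits.mean(), logits.std(ddof=1)

rng = np.random.default_rng(0)
samples = 1 / (1 + np.exp(-rng.normal(mu, sigma, size=100_000)))

print(f"median = {np.median(samples):.3f}, mean = {samples.mean():.3f}")
print(f"P(an expert's estimate exceeds 0.5) = {(samples > 0.5).mean():.3f}")
```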
We have also incorporated other, less-detailed "outside view considerations" which do not rely on a detailed world model in the way the Carlsmith report does. Our credence in these outside view arguments relative to the Carlsmith model influences the final unconditional probability the model gives to AI existential catastrophe. These outside view considerations can be seen as a way of compensating for the general problems of reliability that occur with detailed world models and therefore a way of reducing random errors or ‘unknown unknown’ difficulties with our model.
One thing we have not yet discussed is the potential for systematic flaws in the Carlsmith model. As we will discuss in the section on ‘framing effects’, some researchers object to the framing of the Carlsmith report itself, arguing that it systematically biases us up or down.
Meta-uncertainty
----------------
There are a number of complex and uncertain questions surrounding the issue of AI existential risk, including the difficulty of alignment, the ease of takeover by misaligned AI, and even whether artificial general intelligence (AGI) of the "APS" type will be built this century. These uncertainties make it difficult to assess the overall probability of existential risk from AI.
One approach to quantifying these risks is to assign point probability estimates to each claim and propagate them forward, as was done in the original Carlsmith report on this topic. However, there are issues with this approach. Each of the six probability estimates that are inputs to the Carlsmith model involve events that have no precedent in history. Consequently, it is challenging to estimate the probabilities of these events, and when you see substantially different estimates from two different experts, there is no clear and obvious way to judge which estimate is more credible.
*Meta-uncertainty* looks across the possible states of belief by placing a probability distribution over the possible opinions. [Our model](https://acp.analytica.com/view?invite=4418&code=3221222959844354027) includes a few versions of meta-uncertainty that you can explore.
One useful purpose for including meta-uncertainty is to understand the variation in expert opinion, and how this variation impacts the model’s outputs.
Open Philanthropy asked several experts in the field of AI risk [to provide their own estimates](https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk) for the parameters in the Carlsmith report. We’ve included these in our model. You can select the estimates from any of these experts, or of any subset. You can also include the estimates given by Joe Carlsmith in his article, the median of all reviewers, and your own estimates. When you select more than one at the same time, you will be able to compare them in any downstream result. To make a selection, use the multi-choice pulldown for “Select median assessment to use” on the front diagram of the model.

As you view the results of variables in the model, you’ll see the values for that result using the estimate of each of the reviewers that you selected. For example, here is the result table for the probability of existential catastrophe.

From these, you get a sense of how much the expert opinions vary, but this doesn’t yet include a probability distribution for meta-uncertainty. For each input, you can have the model fit a probability distribution to the assessments provided by the reviewers (for the statistics geeks: it fits a [Logit-Normal](https://en.wikipedia.org/wiki/Logit-normal_distribution), aka Log-Odds distribution). To explore this yourself, set the “Select Meta uncertainty to include” dropdown to “Reviewer’s spread”. Once you do this, it carries out all calculations using a distribution with the meta-uncertainty variance observed across experts (for the statistics geeks: it is actually the variance of the [logit](https://docs.analytica.com/index.php/Logit) of each quantity that matches that of the experts).

Within the model’s internals, the variable named 'Assessments' now contains the meta-uncertainty distributions for each of the six input assessments.

The above graph shows the [cumulative probability](https://docs.analytica.com/index.php/Uncertainty_views#Cumulative_probability) for each assessed quantity (known as a CDF plot). The value on the Y-axis indicates how likely it is that an expert would estimate the quantity to have a value less than or equal to the corresponding value on the x-axis. The plot’s key items correspond, in order, to the six assessments of the Carlsmith model. The first item, labelled *Timelines*, is the assessment that APS systems will be feasible to build within the timeline window considered. Its red CDF is almost a straight line, indicating an almost uniformly-distributed uncertainty among the selected experts. The light blue line labelled *Catastrophe* is the assessment that an unaligned APS system that has already taken over will then destroy or curtail the potential of humanity. The shape of that curve indicates agreement between the selected experts that the probability is close to 1.
The calculation behind the above graph sets the median of each input meta-uncertainty distribution to the median of the selected reviewers on the same question. By changing the slicer control “Select median assessment to use” at the top of the above graph, you can apply the same level of meta-uncertainty to any single reviewer’s assessments (or your own assessments).
[Analytica](https://analytica.com/why-analytica/what-is-analytica/) automatically propagates these meta-uncertainties to any computed downstream result. Here we see the CDF plot for the probability of existential catastrophe (the product of the six assessments).

The assessments from any one person would result in a single probability for this quantity, 'Existential Catastrophe'. The above distribution reflects the variation across expert opinions. The curve indicates a 50% probability that an expert would conclude the probability of existential catastrophe is less than 1%. Similarly, reading from the 0.9 level on the Y-axis, there is a 10% probability that an expert would conclude the probability of existential catastrophe exceeds 15%. When you run the model, you can select a different subset of experts (or all of them) to interactively explore the subset of experts you trust the most.
When you provide your own estimates for each of the six input probabilities (which we recommend you try when you run the model), you’ll probably have a gut feeling that your estimates are not reliable. You’ll probably feel this way even if you are an expert in the field. You might find it useful to include (or let the model include) meta-uncertainty over your own personal assessments. The model allows you to do so. But first, let’s discuss what a meta-uncertainty over your own belief state even means.
Each input to the model asks you for your own *subjective probability*. Each of these summarises your state of knowledge on that question. No one knows whether any of the six propositions are true or false. Your subjective probability simply reflects the strength of the knowledge that you have. You are not estimating a value that exists out there in the world; you are instead estimating your degree of belief. By applying a meta-uncertainty to your degree of belief, you are essentially saying that you are uncertain about what your own beliefs are. That may not intuitively feel far-fetched in a case like this, where there is virtually no historical precedent! In general, when it comes time to make a decision, if you can express your meta-uncertainty, you can also collapse it to a single degree-of-belief number by simply taking the mean belief (or mean utility). Until then, meta-uncertainty gives an indication of how responsive your beliefs would be to new information.
In a recent article on the Effective Altruism forum, [‘Dissolving’ AI Risk - Parameter Uncertainty in AI Future Forecasting](https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future), the author under the pseudonym [Froolow](https://forum.effectivealtruism.org/users/froolow) adds meta-uncertainty to each of the six Carlsmith model parameter estimates and shows that when doing so, the estimated existential risk from AI decreases. You can explore the same effect in our model. A good starting point is to select a single median estimate – for example, the estimates from the original Carlsmith report. Then select 'View across range of meta-u' in the meta-uncertainty selection.

The meta-uncertainty option varies the amount of meta-uncertainty from zero (i.e., point estimates) toward the maximum meta-uncertainty that is possible for a single probability estimate. The same logit-variance is applied to all six input assessments at each level of meta-uncertainty.
A *Probability Bands* view of the main output - the probability of existential catastrophe - illustrates how the meta-uncertainty in the final result behaves as the meta-uncertainty in each parameter is increased. The Bands plot is shown here.

(Note: The squiggles are small variations due to a finite sample size during Monte Carlo).
Without meta-uncertainty, Carlsmith estimated a 5% probability of existential catastrophe, seen at the left when the level of (meta-)uncertainty is zero. With increasing meta-uncertainty, the median estimate (green line) drops to about 0.75% at the right of the plot, and continues to drop further to the right of what is plotted here. Even the 0.75 quantile drops (eventually) with increasing meta-uncertainty.
Framing effects
---------------
There is a paradox here. Why should being less certain about what you believe make you conclude that the world is a safer place? Does this establish that “ignorance is bliss”? Will existential catastrophe be more likely if we invest in more research to increase our understanding of just how much we are at risk?
Some research models AI takeover as being a disjunctive event, meaning that it will happen unless certain conditions are fulfilled, while others (such as Carlsmith) see it as a conjunctive event, meaning that a set of conditions must be met in order for the disaster to occur.
These framing effects don’t affect the final results when using point estimates. If we took the Carlsmith model and turned every proposition into a negative statement rather than a positive one (e.g., ‘APS systems will not produce high-impact failures on deployment’) and took one minus our original probability estimates, we would get the same final probability. But, crucially, once we have uncertainty around our probability estimates, the conjunctive and disjunctive models no longer behave the same way.
The paradox becomes even more paradoxical when you realise that reversing the framing inverts the effect. The Carlsmith decomposition says that catastrophe occurs when 6 events all occur. You could instead posit that catastrophe from superintelligence is inevitable unless 6 open technical problems are solved before then (in fact, in the post [AI X-risk >35% mostly based on a recent peer-reviewed argument](https://www.lesswrong.com/posts/XtBJTFszs8oP3vXic/ai-x-risk-greater-than-35-based-on-a-recent-peer-reviewed) on LessWrong, Michael Cohen uses this framing). With this reverse framing, increasing meta-uncertainty drives the effect in the opposite direction, making it appear that catastrophe is more likely the more uncertain we are. Soares’ [article on disjunctive AGI ruin scenarios](https://www.lesswrong.com/posts/ervaGwJ2ZcwqfCcLx/agi-ruin-scenarios-are-likely-and-disjunctive) conveys this view qualitatively, listing a number of things that he believes all have to go right to avoid an AI existential catastrophe: on such a model, general uncertainty about the world increases the chance of disaster.
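The divergence is easy to reproduce numerically. The sketch below builds two toy models with hypothetical medians: a conjunctive one (catastrophe only if every step occurs) and a reverse-framed one (catastrophe unless every technical problem is solved). As the logit-space variance grows, their medians move in opposite directions:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_product(medians, sigma, n=200_000):
    """Median of a product of independent logit-normal factors whose
    medians are held fixed while the logit-space std dev sigma grows."""
    mu = np.log(medians / (1 - medians))
    z = rng.normal(mu, sigma, size=(n, len(medians)))
    return np.median((1 / (1 + np.exp(-z))).prod(axis=1))

# Hypothetical medians: six conjunctive steps toward catastrophe, and, for
# the reverse framing, six technical problems that must all be solved.
steps    = np.array([0.65, 0.80, 0.40, 0.55, 0.40, 0.95])
problems = np.array([0.90] * 6)

for sigma in [0.0, 1.0, 2.0]:
    p_conj = median_product(steps, sigma)         # catastrophe if all steps occur
    p_rev = 1 - median_product(problems, sigma)   # catastrophe unless all solved
    print(f"sigma={sigma:.1f}  conjunctive={p_conj:.4f}  reverse-framed={p_rev:.4f}")
```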
The paradox is, of course, an illusion. But because you could be easily misled, it is worth understanding this phenomenon at a deeper level. The result in the previous graph is the product of six uncertain estimates. The following mathematical relationship, which is simply a rearrangement of the definition of covariance, shows why the arithmetic mean is stable as (meta-)uncertainty increases:
E[x y] = E[x] E[y] + cov(x,y)
In other words, when the assessment of each parameter is independent (implying a covariance of zero), then the mean of their product is the product of their means. Hence, a plot of the mean vs. level of meta-uncertainty would be a horizontal line. (Side note: Covariances between the parameter estimates are likely not really zero for numerous reasons, but the model does not include any representation or estimate of covariance. The relevant question is whether they are modelled as independent, and indeed they are in our model).
However, the median of a product decreases with increasing meta-uncertainty. This happens regardless of the shape of the meta-uncertainty distribution. For this to happen, the right tail of the product’s distribution must grow to compensate for the drop in the median. This means that as you add more meta-uncertainty, the distribution of the product becomes more [leptokurtic](https://en.wikipedia.org/wiki/Kurtosis#Leptokurtic). The net balance, as shown by the stability of the mean, is that it does not cause you to conclude the world is more (or less) safe.
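Here is a minimal Monte Carlo check of this claim, again with hypothetical medians held fixed while the logit-space variance grows: the mean of the product drifts only slightly (it would be exactly stable if the mean rather than the median were held fixed), while the median falls sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hold the median of each logit-normal factor fixed and grow the logit-space
# variance: the mean of the product is roughly stable, the median collapses.
medians = np.array([0.65, 0.80, 0.40, 0.55, 0.40, 0.95])
mu = np.log(medians / (1 - medians))

for sigma in [0.0, 0.5, 1.0, 2.0]:
    z = rng.normal(mu, sigma, size=(500_000, len(mu)))
    product = (1 / (1 + np.exp(-z))).prod(axis=1)
    print(f"sigma={sigma:.1f}  mean={product.mean():.4f}  "
          f"median={np.median(product):.4f}")
```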
In our model, the mean actually does decrease ever so slightly with increasing meta-uncertainty. You’ll see this if you select the Mean view.

The waviness is due to the fact that this is computed by [Monte Carlo simulation](https://docs.analytica.com/index.php/Monte_Carlo_and_probabilistic_simulation) with a finite sample size. The slight decrease arises because we hold the median of each distribution constant as we apply meta-uncertainty. The meta-uncertainty of each parameter is modelled using a [Logit-Normal distribution](https://en.wikipedia.org/wiki/Logit-normal_distribution), also called a Log-odds distribution, in which the [Logit](https://docs.analytica.com/index.php/Logit) of the quantity is distributed as a Normal distribution. We keep the mean of that Normal constant (which fixes the median of each parameter) as we increase its variance. When you do this, the mean of each parameter estimate drifts slightly, so that the mean of the product decreases slightly. If you hold the mean constant instead of the median (which is easy to do), then the mean is entirely stable. We found the difference between these two options to be imperceptible in the Probability Bands graph.
In the article ['Is the Fermi Paradox due to the Flaw of Averages?'](https://www.lesswrong.com/posts/XAS5FKyvScLb7jqaF/cross-post-is-the-fermi-paradox-due-to-the-flaw-of-averages), we reviewed the paper ['Dissolving the Fermi Paradox (2018)'](https://arxiv.org/abs/1806.02404) by Sandberg, Drexler and Ord (SDO), and provided a live interactive model. The Fermi Paradox refers to the apparent contradiction that humankind has not detected any extraterrestrial civilizations, even though simple estimates suggest there should be many of them among the hundreds of billions of stars in our galaxy. Like the Carlsmith model, the [Drake equation](https://www.seti.org/drake-equation-index) (which estimates the number of detectable civilizations in the Milky Way) is a multiplicative model. SDO show that by modelling uncertainty in each of the Drake equation parameters explicitly, the Fermi paradox ceases to be surprising.
The [Fermi paradox model with explicit uncertainty](https://www.lesswrong.com/posts/XAS5FKyvScLb7jqaF/cross-post-is-the-fermi-paradox-due-to-the-flaw-of-averages) and the Carlsmith model with explicit meta-uncertainty (the topic of this article) have the same mathematical form. We see the median and the lower quantiles decrease in the Carlsmith model with increasing (meta-)uncertainty, but this doesn’t really alter our effective judgement of risk. However, the increased uncertainty in the Fermi model dramatically increases the probability that we on Earth are alone in the galaxy. Why is the effect real in the Fermi case but only an illusion in the present case?
The reason the effect is real in the Fermi case is that the question asked ('What is the probability that there is no other contactable, intelligent civilization in the Milky Way?') is a question about a quantile, and lower quantiles are indeed decreased when uncertainty increases. P(N<1), where N is the number of such extraterrestrial civilizations, is a cumulative probability, or inverse quantile. Since increasing uncertainty in the factors of a multiplicative model decreases the quantiles in the left tail, it causes the inverse quantiles to increase. Hence, the addition of uncertainty to the Drake equation legitimately increases the probability that we are alone in the galaxy. The real flaw was from omitting the explicit representation in the first place (what Sam L. Savage calls [the Flaw of Averages](https://www.flawofaverages.com/)). In contrast, the primary question posed by the Carlsmith model ('What is the probability of existential catastrophe?') is a question about the mean relative to meta-uncertainty. Hence, for this question (or for any decision based on an expected utility), the appearance that risk decreases as a result of including meta-uncertainty is only an illusion.
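The contrast can be seen in a short simulation. Holding the median number of civilizations fixed (at a hypothetical value) while widening the log-space uncertainty leaves the median of N untouched but steadily raises the cumulative probability P(N < 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# In a multiplicative model such as the Drake equation, widening each
# factor's log-space uncertainty raises P(N < 1) even while the median of N
# stays fixed. The median of 1000 civilizations is a hypothetical placeholder.
log_median_N = np.log(1000.0)

for sigma in [0.5, 2.0, 4.0, 8.0]:   # combined log-space std dev of all factors
    log_N = rng.normal(log_median_N, sigma, size=1_000_000)
    print(f"sigma={sigma:.1f}  P(N < 1) = {(log_N < 0).mean():.3f}")
```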
### Explaining framing effects
We have seen that the apparent paradox arising from framing effects is illusory. But there is a further question: what is the ‘right’ way to frame AI existential risk, as conjunctive or disjunctive?
This is a difficult question to answer. One perspective is that treating AGI existential catastrophe as something that will happen unless certain conditions are met might lead to overestimation of the chance of high-impact failures. On this view, requiring a clear path to a stable outcome with complete existential security is both too demanding and historically inaccurate, since that isn’t how humanity ever navigated previous threats. Holden Karnofsky makes a similar point [here](https://www.lesswrong.com/posts/jwhcXmigv2LTrbBiB/success-without-dignity-a-nearcasting-story-of-avoiding#Success_without_dignity). A framing which sees success as conjunctive probably rules out ‘muddling through’, i.e., unplanned ‘success without dignity’. Since this is something that many domain experts [believe is credible](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom?commentId=ibGxdAC9nYajWfyfq), it might lead us to significantly underrate the chance of survival.
On the other hand, some experts such as [Nate Soares](https://www.lesswrong.com/posts/ervaGwJ2ZcwqfCcLx/agi-ruin-scenarios-are-likely-and-disjunctive#Correlations_and_general_competence) argue that AI is a different case: the large number of actors working on AGI and the risk that any one of them could produce an existential catastrophe, along with all the things that would have to occur to prevent this (someone has to develop an aligned AGI and then quickly use it to eliminate AI existential risk), implies that treating survival as a conjunctive event makes more sense.
These different framings reflect varying world models and threat models. Part of why this disagreement exists is Soares’ views about extreme AI [alignment difficulty](https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty#Sharp_Left_Turn), AI takeoff speed, and the low likelihood of effective mitigation measures. If you are implicitly using a model where human civilization tends to respond in fixed ways due to internal incentives unless something intervenes, it is more natural to think that we will follow a default path towards disaster unless a specific intervention occurs. On the other hand, if we see many possible futures and many pathways to reducing AI existential risk, and don't know what the final response will look like (as the ['Playbook for AI Risk Reduction'](https://www.lesswrong.com/posts/Fbk9H6ipfybHyqjrp/a-playbook-for-ai-risk-reduction-focused-on-misaligned-ai) describes), then requiring a specific set of conditions to be met for success seems overly prescriptive.
We believe that this framing question, and whether to treat survival as conjunctive or disjunctive, is *itself* something which we should be uncertain about, since whether you treat survival as conjunctive or not depends on the details of your threat model, and we don’t want to assume that any one threat model is the only correct one.
Currently, we only have the Carlsmith report model, but in theory we could address this problem by looking at both a conjunctive and disjunctive model and comparing them in detail.
For example, the report, "[Three Pillars for Avoiding AGI Catastrophe: Technical Alignment, Deployment Decisions, and Coordination](https://forum.effectivealtruism.org/posts/eggdG27y75ot8dNn7/three-pillars-for-avoiding-agi-catastrophe-technical)," provides a starting point model that treats success as conjunctive, and we can adapt it to work alongside Carlsmith's model.
Another alternative is to alter the Carlsmith report to require fewer steps, better representing the concern that the longer a chain of conjunctions is, the more likely it is to omit disjunctive influences. This formulation collapses propositions (1) and (2), which consider the incentives and feasibility of developing APS, into a straightforward estimate of "when will AGI be developed." The alignment difficulty premise is then preserved, followed by the collapse of propositions (4, 5, 6) into an estimate of the chance of a takeover given a misaligned APS-AGI.

This alternative formulation has fewer steps and so better represents the model that treats misaligned AI takeover as involving many possible routes that are hard to counter or influence in advance, and sees misaligned power seeking behaviour as a natural consequence of AGI development. This approach may be more appropriate for those who believe that the development of misaligned power seeking systems is a likely outcome of AGI development and that the risk of an AI takeover is more closely tied to the development of AGI systems themselves.
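For concreteness, the collapsed decomposition amounts to a three-factor product; all probabilities below are hypothetical placeholders, not recommended estimates:

```python
# Sketch of the collapsed three-step formulation described above.
# All probabilities are hypothetical placeholders.
p_agi_developed = 0.70              # collapses propositions (1) and (2)
p_misaligned = 0.40                 # alignment-difficulty premise, preserved
p_takeover_given_misaligned = 0.60  # collapses propositions (4), (5), (6)

p_catastrophe = p_agi_developed * p_misaligned * p_takeover_given_misaligned
print(f"P(existential catastrophe) = {p_catastrophe:.2%}")
```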
In addition to exploring conjunctive and disjunctive models of AI existential risk, it may also be useful to compare models that make more detailed technical assumptions about how APS systems will be developed. For example, Ajeya Cotra’s model ["without specific countermeasures, the easiest path to AGI results in takeover"](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) constructs a specific model of AGI development with technical assumptions, but given those assumptions is able to reach a stronger conclusion. Similarly, given that there is a wide diversity of views on exactly how AGI might end up misaligned and power-seeking, instead of a binary ‘Is misaligned AI developed or not’, we might have a [distribution over alignment difficulty](https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty#The_Scale) with a varying success probability.
Disambiguating models that make different technical assumptions can help us better understand the risks associated with AI development: by exploring models with varying levels of technical detail, we gain a more comprehensive picture of the potential risks.
While this model does not incorporate entire complex alternative inside-view models like those just mentioned, we have incorporated some simpler, less-detailed ‘outside view considerations’ to illustrate how we go about combining different worldviews to produce an all-things-considered estimate.
Outside View considerations
===========================
We’ve talked before about the challenges of combining outside view considerations and more detailed models of the same question. We can attempt to integrate these considerations by delving deeper and examining various reasons to expect our detailed world models to be systematically mistaken or correct.
We will examine five reference classes into which various experts and commentators have placed AI existential catastrophe: ‘Second Species’, ‘Reliability of existential risk arguments’, ‘Most important century’, ‘Accuracy of futurism’, and ‘Accuracy of predictions about transformative tech’. In each case, the argument locates AI existential risk in a (purportedly) relevant reference class: predictions about new sentient species, predictions about human extinction, predictions about which period in history is the most impactful, predictions about large-scale civilizational trends in general, and predictions about transformative technologies (including past predictions of dramatic AI progress).
The Carlsmith model implies that all of these things could occur (a new species, extinction, this period of history will be extremely impactful, there will be a large-scale dramatic transformation to society, there will be dramatic transformative technical progress), so it is worth examining its predictions in each reference class to determine if we can learn anything relevant about how reliable this model is.
Second species argument
-----------------------
This argument suggests that in creating AGI (Artificial General Intelligence) we are essentially creating a “[second species](https://www.alignmentforum.org/posts/8xRSjC76HasLnMGSf/agi-safety-from-first-principles-introduction)” with human-level (or greater) intelligence. And by analogy, just as humans have historically been able to supplant other animals, AGI may be able to supplant humans.
The key premise is that intelligence confers power. Human intelligence allows us to coordinate complex societies and deploy advanced technology, exerting control over the world. An AGI surpassing human intelligence could wield even greater power, potentially reducing humanity to a subordinate role. Just as humans have driven some species extinct and transformed ecosystems, a superintelligent AGI need not preserve humanity or our values. Ecologists observe that invading species often displace incumbents when entering a new territory. Similarly, AGI could displace humankind from our position controlling Earth's future.
This argument is straightforward and has been widely understood by researchers going all the way back to Alan Turing in the 1950s, so while it relies on fuzzy concepts and is open to many objections, it arguably has a better ‘track record’ in terms of the amount of scrutiny it has received over time than the more detailed arguments given by Carlsmith.
Reliability of existential risk arguments
-----------------------------------------
Another important consideration is the base rate of accuracy for existential risk arguments. Historically, predictions of catastrophic events, even ones that were apparently well justified by detailed arguments, have not always been accurate. Therefore, it is important to consider the possibility that the risks associated with AGI are overestimated for similar underlying reasons (e.g., the social dynamics around existential risk predictions, overestimating the fragility of human civilisation, or underestimating humanity’s ability to respond in ways that are hard to foresee).
One possible driver of inaccuracy in existential risk predictions is [sleepwalk bias](https://www.lesswrong.com/posts/gEShPto3F2aDdT3RY/sleepwalk-bias-self-defeating-predictions-and-existential). Sleepwalk bias is the tendency to underestimate people's ability to act to prevent adverse outcomes when predicting the future. This can be caused by cognitive constraints and failure to distinguish between predictions and warnings. Because warnings often take the form of ‘X will happen without countermeasures’, if warnings are misused as predictions we can underestimate the chance of successful countermeasures. People often mix up the two, leading to pessimistic "prediction-warnings". Thus, when making predictions about existential risk, it's important to adjust our base rate to account for people's potential to act in response to warnings, including those made by the one giving the prediction.
Sleepwalk bias stems from the intuitive tendency to view others as less strategic and agentic than oneself. [As Elster notes](https://stefanschubert.substack.com/p/sleepwalk-bias-and-the-role-of-impulses), we underestimate others' capacities for deliberation and reflection. This manifests in predictions that underestimate how much effort people will make to prevent predicted disasters. Instead, predictions often implicitly assume sleepwalking into calamity.
For existential risks, sleepwalk bias would specifically lead us to underestimate institutions' and individuals' abilities to recognize emerging threats and mobilize massive resources to counter them. Historical examples show that even deeply conflictual societies like the Cold War rivals avoided nuclear war, underscoring potential blindspots in our models. Since the bias arises from a simple heuristic, deep expertise on a given x-risk may overcome it. But for outsiders assessing these arguments, accounting for sleepwalk bias is an important corrective.
Most important century
----------------------
Additionally, it is important to consider the probability that the next century is the most important of all, which would plausibly be true if AGI existential risk concerns are well founded. If we have a strong prior against this [‘most important century’](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/#the-) idea then we will be [inclined](https://globalprioritiesinstitute.org/wp-content/uploads/William-MacAskill_Are-we-living-at-the-hinge-of-history.pdf) to think that AGI existential risk arguments are somehow flawed.
The Self-Sampling Assumption (SSA) posits that a rational agent's priors should locate them uniformly at random within each possible world. If we accept the SSA, it seems to imply that we ought to have a low prior on AI existential risk (or any kind of permanent dramatic civilizational change) in this century in particular because of the near-zero base rate for such changes. The detailed evidence in favour of AI existential risk concerns may not be enough to overcome the initial scepticism that arises from our natural prior.
Alternatively, you might accept the claim [proposed by Karnofsky](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/#my-view) that there are extremely strong arguments that this [approximate period in history must be very important](https://www.cold-takes.com/this-cant-go-on/#why-cant-this-go-on). First, Karnofsky argues that historical trends in economic growth and technological development show massive accelerations in the recent past. Growth rates are near all-time highs and appear unsustainable for more than a few thousand years at most before physical limits are reached. This suggests we are living during a temporary spike or explosion in development.
Second, he notes that since growth is so rapid and near its limits, some dramatic change seems likely soon. Possibilities include stagnation as growth slows, continued acceleration towards physical limits, or civilizational collapse. This situation seems intrinsically unstable and significant. While not definitive, Karnofsky believes this context should make us more open to arguments that this time period is uniquely significant.
Accuracy of futurism
--------------------
Another important consideration is the base rate of accuracy for forecasts made without empirical feedback loops. This consideration fundamentally focuses on the process used to generate the forecasts, and questions whether it reliably produces accurate estimates. The history of technology has shown that it can be difficult to predict which technologies will have the most significant impact, and AI alignment research in particular often relies on complex abstract concepts to make forecasts, rather than on mechanistically precise models. [Some examples](https://forum.effectivealtruism.org/posts/L6ZmggEJw8ri4KB8X/my-highly-personal-skepticism-braindump-on-existential-risk#I_don_t_trust_chains_of_reasoning_with_imperfect_concepts) are discussed in this article.
One way of assessing reliability is to find a reference class where predictions of AI existential catastrophe are comparable to other future predictions. For instance, we can compare AI predictions to the predictions made by professional futurists in the past and then [compare relevant features](https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/#todays-futurism-vs-these-predictions). If they compare favourably to past successful predictions, this may indicate a higher level of reliability in the transformative AI (TAI) predictions, and if they don't, it may suggest that we should be cautious in our assessment of their validity.
We can also look at other general features of the arguments without comparison to specific known examples of successful futurism, like their level of reliance on abstract concepts vs empirical evidence. AI risk involves unprecedented technologies whose impacts are highly uncertain. There are likely gaps in our models and unknown unknowns that make it difficult to assign precise probabilities to outcomes. While we can still make reasonable estimates, we should account for the significant [Knightian Uncertainty](https://www.lesswrong.com/posts/tG9BLyBEiLeRJZvX6/communicating-effectively-under-knightian-norms) by avoiding overconfident predictions, explicitly acknowledging the limitations of our models, and being open to being surprised.
Considerations like these arose in the recent XPT superforecaster elicitation. As examples of considerations that we would place under this umbrella, we would include [these from XPT](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#The_arguments_made_by_XPT_forecasters):
* "Given the extreme uncertainty in the field and lack of real experts, we should put less weight on those who argue for AGI happening sooner." (XPT superforecaster team 342)
* "Maybe most of the updates during the tournament were instances of the blind leading the blind." (Peter McCluskey, XPT superforecaster)
Accuracy of transformative technology prediction
------------------------------------------------
This consideration looks at the historical base rate of similar technologies proving economically or socially transformative, and notes that predictions have often overestimated their impact.
This overestimation is due to a number of factors, such as over-optimism, a lack of understanding of the technology or its limitations, or a failure to consider the societal and economic factors that can limit its adoption.
By taking into account the historical base rate of similar technologies, we can gain a more accurate perspective on the potential impact of AI. We see similar arguments made by superforecasters, such as [these from XPT](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#The_arguments_made_by_XPT_forecasters):
* "The history of AI is littered with periods of rapid progress followed by plateaus and backtracking. I expect history will repeat itself in this decade." (XPT superforecaster team 339)
* "The prediction track record of AI experts and enthusiasts have erred on the side of extreme optimism and should be taken with a grain of salt, as should all expert forecasts." (XPT superforecaster team 340)
* "Many superforecasters suspected that recent progress in AI was the same kind of hype that led to prior disappointments with AI..." (Peter McCluskey, XPT superforecaster)
* "AGI predictions have been made for decades with limited accuracy. I don't expect the pattern to change soon." (XPT superforecaster team 337)
Conclusion
==========
In this article we have led you through an example application of a model-based approach to estimating the existential risk from future AI. Model-based approaches have many advantages: they improve our understanding of the risks, let us estimate the value of mitigation policies, and foster communication between advocates on different sides of AI risk arguments.
During our research we identified many challenges for model-based approaches that are unique to the AI existential risk domain, or accentuated in it compared to most other decision areas.
We focused on incorporating elements of all of these challenges, in simple ways, into our model. The result is certainly not a definitive model of AI x-risk; rather, we hope it can serve as a starting point that inspires others in the AI safety community to pursue model-based approaches. We’ve posted our model online in the open-source tradition to encourage you to learn from it, borrow from it, and improve on it.
On Successful Communication Across a Wide Inferential Distance
===============================================================
From Kwame Anthony Appiah's Cosmopolitanism: Ethics in a World of Strangers:
> There’s an oft-told anecdote about a medical missionary in a remote place, who watches, in horror, as people give untreated well water to their babies. The children regularly get diarrhea, and many of them die. The missionary explains that, even though the water looks clear, there are tiny, invisible creatures in it that make the children sick. Fortunately, she says, if they boil the water, it will kill these bacteria. A month later she’s back, and they’re still giving the babies the dirty water. After all, if a stranger came into your community and told you that your children got influenza because of witchcraft, would you respond by going out and slaughtering a sheep? Then the missionary has another idea. Look, she says, let me show you something. She takes some water and boils it. See, she says, there are spirits in the water, and when you put it on the fire they flee: those bubbles you see are the spirits escaping, the spirits that are making your children sick. Now boiling water makes sense. Now the babies stop dying. In belief, as in everything else, each of us must start from where we are.
> When people get sick for unaccountable reasons in Manhattan, there is much talk of viruses and bacteria. Since doctors do not claim to be able to do much about most viruses, they do not put much effort into identifying them. Nor will the course of a viral infection be much changed by a visit to the doctor. In short, most appeals in everyday life to viruses are like most everyday appeals to witchcraft. They are supported only by a general conviction that sickness can be explained, and the conviction that viruses can make you sick.
> If you ask most people in Manhattan why they believe in viruses, they will say two kinds of things: First, they will appeal to authority. “Science has shown,” they will say, though if you ask them how science showed it, you will pretty quickly reach an impasse (eve…
Quantifying Differences in Reward Functions
===========================================
1 Introduction
---------------
Reinforcement learning (RL) has reached or surpassed human performance in many domains with clearly defined reward functions, such as games [[20](#bib.bib20); [15](#bib.bib15); [23](#bib.bib23)] and narrowly scoped robotic manipulation tasks [[16](#bib.bib16)].
Unfortunately, the reward functions for most real-world tasks are difficult or impossible to specify procedurally.
Even a task as simple as peg insertion from pixels has a non-trivial reward function that must usually be learned [[22](#bib.bib22), IV.A].
Tasks involving human interaction can have far more complex reward functions that users may not even be able to introspect on.
These challenges have inspired work on learning a reward function, whether from demonstrations [[13](#bib.bib13); [17](#bib.bib17); [26](#bib.bib26); [8](#bib.bib8); [3](#bib.bib3)], preferences [[1](#bib.bib1); [25](#bib.bib25); [6](#bib.bib6); [18](#bib.bib18); [27](#bib.bib27)] or both [[10](#bib.bib10); [4](#bib.bib4)].
Prior work has usually evaluated the learned reward function $\hat{R}$ using the “rollout method”: training a policy $\pi_{\hat{R}}$ to optimize $\hat{R}$ and then examining rollouts from $\pi_{\hat{R}}$.
Unfortunately, using RL to compute $\pi_{\hat{R}}$ is often computationally expensive.
Furthermore, the method produces *false negatives* when the reward $\hat{R}$ matches user preferences but the RL algorithm fails to optimize with respect to $\hat{R}$.
The rollout method also produces *false positives*.
Of the many reward functions that induce the desired rollout in a given environment, only a small subset align with the user’s preferences.
For example, suppose the agent can reach states $\{A, B, C\}$.
If the user prefers $A > B > C$, but the agent instead learns $A > C > B$, the agent will still go to the correct state $A$.
However, if the initial state distribution or transition dynamics change, misaligned rewards may induce undesirable policies.
For example, if $A$ is no longer reachable at deployment, the previously reliable agent would misbehave by going to the least-favoured state $C$.
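Here is a minimal sketch of this failure mode in Python; the state names and reward values are hypothetical illustrations, not drawn from any experiment in the paper.

```python
def best_state(reward, reachable):
    """Greedily pick the reachable state with the highest reward."""
    return max(reachable, key=lambda s: reward[s])

true_reward = {"A": 2.0, "B": 1.0, "C": 0.0}     # user prefers A > B > C
learned_reward = {"A": 2.0, "B": 0.0, "C": 1.0}  # misaligned: A > C > B

# In training, both rewards induce the same behaviour: go to A.
assert best_state(true_reward, {"A", "B", "C"}) == "A"
assert best_state(learned_reward, {"A", "B", "C"}) == "A"

# At deployment, A is unreachable and the misalignment surfaces.
assert best_state(true_reward, {"B", "C"}) == "B"     # what the user wants
assert best_state(learned_reward, {"B", "C"}) == "C"  # least-favoured state
```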
We propose instead to evaluate learned rewards via their distance from other reward functions, and summarize our desiderata for reward function distances in Table [1](#S1.T1 "Table 1 ‣ 1 Introduction ‣ Quantifying Differences in Reward Functions").
For benchmarks, it is usually possible to directly compare a learned reward $\hat{R}$ to the true reward function $R$.
Alternatively, benchmark creators can train a “proxy” reward function from a large human data set.
This proxy can then be used as a stand-in for the true reward $R$ when evaluating algorithms trained on a different or smaller data set.
Comparison with a ground-truth reward function is rarely possible outside of benchmarks.
However, even in this challenging case, comparisons can at least be used to cluster reward models trained using different techniques or data.
Larger clusters are more likely to be correct, since multiple methods arrived at a similar result.
Moreover, our regret bound (Theorem [4.3](#S4.Thmdefn3 "Definition 4.3 (Equivalent-Policy Invariant Comparison (EPIC) pseudometric). ‣ 4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions")) suggests we could use interpretability methods [[12](#bib.bib12)] on one model and get some guarantees for models in the same cluster.
Table 1: Summary of the desiderata satisfied by each reward function distance. Key – the distance is:
a *pseudometric* (section [3](#S3 "3 Background ‣ Quantifying Differences in Reward Functions"));
*invariant*
to potential shaping [[14](#bib.bib14)] and positive rescaling (section [3](#S3 "3 Background ‣ Quantifying Differences in Reward Functions"));
a computationally *efficient* approximation achieving low error (section [6.1](#S6.SS1 "6.1 Comparing hand-designed reward functions ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"));
*robust* to the choice of coverage distribution (section [6.2](#S6.SS2 "6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"));
and *predictive* of the similarity of the trained policies (section [6.3](#S6.SS3 "6.3 Predicting policy performance from reward distance ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions")).
| Distance | Pseudometric | Invariant | Efficient | Robust | Predictive |
| --- | --- | --- | --- | --- | --- |
| EPIC | ✓ | ✓ | ✓ | ✓ | ✓ |
| NPEC | ✗ | ✓ | ✗ | ✗ | ✓ |
| ERC | ✓ | ✗ | ✓ | ✗ | ✓ |
We introduce the *Equivalent-Policy Invariant Comparison (EPIC)* distance that meets all the criteria in Table [1](#S1.T1 "Table 1 ‣ 1 Introduction ‣ Quantifying Differences in Reward Functions").
We believe EPIC is the first method to quantitatively evaluate reward functions without training a policy.
EPIC (section [4](#S4 "4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions")) canonicalizes the reward functions’ potential-based shaping [[14](#bib.bib14)], then takes the correlation between the canonical rewards over a *coverage distribution* $\mathcal{D}$ of transitions.
We also introduce baselines *NPEC* and *ERC* (section [5](#S5 "5 Baseline approaches for comparing reward functions ‣ Quantifying Differences in Reward Functions")) which partially satisfy the criteria.
EPIC works best when $\mathcal{D}$ has support on all realistic transitions.
We achieve this in our experiments by using uninformative priors, such as rollouts of a policy taking random actions.
Moreover, we find EPIC is robust to the exact choice of distribution $\mathcal{D}$, producing similar results across a range of distributions, whereas ERC and especially NPEC are highly sensitive to the choice of $\mathcal{D}$ (section [6.2](#S6.SS2 "6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions")).
Finally, low EPIC distance between a learned reward $\hat{R}$ and the true reward $R$ predicts low regret.
That is, the policies $\pi_{\hat{R}}$ and $\pi_{R}$ optimized for $\hat{R}$ and $R$ obtain similar returns under $R$.
Theorem [4.3](#S4.Thmdefn3 "Definition 4.3 (Equivalent-Policy Invariant Comparison (EPIC) pseudometric). ‣ 4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions") bounds the regret even in unseen environments; by contrast, the rollout method can only determine regret in the evaluation environment.
We also confirm this result empirically (section [6.3](#S6.SS3 "6.3 Predicting policy performance from reward distance ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions")).
2 Related work
---------------
There exist a variety of methods to learn reward functions.
Inverse reinforcement learning (IRL) [[13](#bib.bib13)] is a common approach that works by inferring a reward function from demonstrations.
The IRL problem is inherently underconstrained: many different reward functions lead to the same demonstrations.
Bayesian IRL [[17](#bib.bib17)] handles this ambiguity by inferring a posterior over reward functions.
By contrast, Maximum Entropy IRL [[26](#bib.bib26)] selects the highest entropy reward function consistent with the demonstrations; this method has scaled to high-dimensional environments [[7](#bib.bib7); [8](#bib.bib8)].
An alternative approach is to learn from preference comparisons between two trajectories [[1](#bib.bib1); [25](#bib.bib25); [6](#bib.bib6); [18](#bib.bib18)].
T-REX [[4](#bib.bib4)] is a hybrid approach, learning from a ranked set of demonstrations.
More directly, Cabi et al. [[5](#bib.bib5)] learn from “sketches” of cumulative reward over an episode.
To the best of our knowledge, there is no prior work that focuses on evaluating reward functions directly.
The most closely related work is Ng et al. [[14](#bib.bib14)], identifying reward transformations guaranteed to not change the optimal policy.
However, a variety of ad-hoc methods have been developed to evaluate reward functions.
The rollout method – evaluating rollouts of a policy trained on the learned reward – is evident in the earliest work on IRL [[13](#bib.bib13)].
Fu et al. [[8](#bib.bib8)] refined the rollout method by testing on a transfer environment, inspiring our experiment in section [6.3](#S6.SS3 "6.3 Predicting policy performance from reward distance ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions").
Recent work has compared reward functions by scatterplotting returns [[10](#bib.bib10); [4](#bib.bib4)], inspiring our ERC baseline (section [5.1](#S5.SS1 "5.1 Episode Return Correlation (ERC) ‣ 5 Baseline approaches for comparing reward functions ‣ Quantifying Differences in Reward Functions")).
3 Background
-------------
This section introduces material needed for the distances defined in subsequent sections.
We start by introducing the *Markov Decision Process (MDP)* formalism, then describe when reward functions induce the same optimal policies in an MDP, and finally define the notion of a distance *metric*.
###### Definition 3.1.
A *Markov Decision Process (MDP)* $M = (\mathcal{S}, \mathcal{A}, \gamma, d_0, \mathcal{T}, R)$ consists of
a set of states $\mathcal{S}$ and a set of actions $\mathcal{A}$;
a discount factor $\gamma \in [0, 1]$;
an initial state distribution $d_0(s)$;
a transition distribution $\mathcal{T}(s' \mid s, a)$ specifying the probability of transitioning to $s'$ from $s$ after taking action $a$;
and a reward function $R(s, a, s')$ specifying the reward upon taking action $a$ in state $s$ and transitioning to state $s'$.

A trajectory $\tau = (s_0, a_0, s_1, a_1, \cdots)$ consists of a sequence of states $s_i \in \mathcal{S}$ and actions $a_i \in \mathcal{A}$. The *return* on a trajectory is defined as the sum of discounted rewards, $g(\tau; R) = \sum_{t=0}^{|\tau|} \gamma^t R(s_t, a_t, s_{t+1})$, where the length of the trajectory $|\tau|$ may be infinite.
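As a small illustration, the return of a finite trajectory can be computed directly from this definition; the sketch below assumes the trajectory is supplied as a list of $(s_t, a_t, s_{t+1})$ triples and `reward_fn` is any callable implementing $R$.

```python
def discounted_return(transitions, reward_fn, gamma):
    """g(tau; R): sum of discounted rewards over (s_t, a_t, s_{t+1}) triples."""
    return sum(
        gamma ** t * reward_fn(s, a, s_next)
        for t, (s, a, s_next) in enumerate(transitions)
    )
```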
In the following, we assume a discounted ($\gamma < 1$) infinite-horizon MDP.
The results can be generalized to undiscounted ($\gamma = 1$) MDPs subject to regularity conditions needed for convergence.
A *stochastic policy* $\pi(a \mid s)$ assigns probabilities to taking action $a \in \mathcal{A}$ in state $s \in \mathcal{S}$. The objective of an MDP is to find a policy $\pi$ that maximizes the expected return $G(\pi) = \mathbb{E}_{\tau(\pi)}\left[g(\tau; R)\right]$, where $\tau(\pi)$ is a trajectory generated by sampling the initial state $s_0$ from $d_0$, each action $a_t$ from the policy $\pi(a_t \mid s_t)$, and successor states $s_{t+1}$ from the transition distribution $\mathcal{T}(s_{t+1} \mid s_t, a_t)$. An MDP $M$ has a set of optimal policies $\pi^*(M)$ that maximize the expected return, $\pi^*(M) = \operatorname*{arg\,max}_{\pi} G(\pi)$.
In this paper, we consider the case where we only have access to an MDP\R, $M^- = (\mathcal{S}, \mathcal{A}, \gamma, d_0, \mathcal{T})$.
The unknown reward function $R$ must be learned from human data.
Typically, only the state space $\mathcal{S}$, action space $\mathcal{A}$ and discount factor $\gamma$ are known exactly, with the initial state distribution $d_0$ and transition dynamics $\mathcal{T}$ only observable from interacting with the environment $M^-$.
Below, we describe an equivalence class whose members are guaranteed to have the same optimal policy set in *any* MDP\R $M^-$ with fixed $\mathcal{S}$, $\mathcal{A}$ and $\gamma$ (allowing the unknown $\mathcal{T}$ and $d_0$ to take arbitrary values).
###### Definition 3.2.
Let $\gamma \in [0, 1]$ be the discount factor, and $\Phi : \mathcal{S} \to \mathbb{R}$ a real-valued function. Then $R(s, a, s') = \gamma\Phi(s') - \Phi(s)$ is a *potential shaping* reward, with *potential* $\Phi$ [[14](#bib.bib14)].
###### Definition 3.3 (Reward Equivalence).
We define two bounded reward functions $R_A$ and $R_B$ to be *equivalent*, $R_A \equiv R_B$, for a fixed $(\mathcal{S}, \mathcal{A}, \gamma)$ if and only if there exists a constant $\lambda > 0$ and a bounded potential function $\Phi : \mathcal{S} \to \mathbb{R}$ such that for all $s, s' \in \mathcal{S}$ and $a \in \mathcal{A}$:

$$R_B(s, a, s') = \lambda R_A(s, a, s') + \gamma\Phi(s') - \Phi(s). \tag{1}$$
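As a sketch, the transformation in Equation (1) is straightforward to realize in code; `reward_a` and `potential` are assumed to be user-supplied callables, and any `lam > 0` yields an equivalent reward.

```python
def equivalent_reward(reward_a, potential, lam, gamma):
    """Build R_B(s, a, s') = lam * R_A(s, a, s') + gamma * Phi(s') - Phi(s)."""
    assert lam > 0, "rescaling must be positive to preserve equivalence"

    def reward_b(s, a, s_next):
        return lam * reward_a(s, a, s_next) + gamma * potential(s_next) - potential(s)

    return reward_b
```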
**Proposition.**
The binary relation $\equiv$ is an equivalence relation. Let $R_A, R_B, R_C : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be bounded reward functions. Then $\equiv$ is reflexive, $R_A \equiv R_A$; symmetric, $R_A \equiv R_B$ implies $R_B \equiv R_A$; and transitive, $(R_A \equiv R_B) \land (R_B \equiv R_C)$ implies $R_A \equiv R_C$.
*Proof.* See section [A.3.1](#A1.SS3.SSS1 "A.3.1 Background ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") in supplementary material.
The expected return of potential shaping $\gamma\Phi(s') - \Phi(s)$ on a trajectory segment $(s_0, \cdots, s_T)$ is $\gamma^T \Phi(s_T) - \Phi(s_0)$.
The first term $\gamma^T \Phi(s_T) \to 0$ as $T \to \infty$, while the second term $\Phi(s_0)$ only depends on the initial state, and so potential shaping does not change the set of optimal policies.
Moreover, any additive transformation that is not potential shaping will, for some reward $R$ and transition distribution $\mathcal{T}$, produce a set of optimal policies that is disjoint from the original [[14](#bib.bib14)].
The set of optimal policies is invariant to constant shifts $c \in \mathbb{R}$ in the reward; however, this can already be obtained by shifting $\Phi$ by $\frac{c}{\gamma - 1}$. (Note that constant shifts in the reward of an undiscounted MDP would cause the value function to diverge. Fortunately, the shaping $\gamma\Phi(s') - \Phi(s)$ is unchanged by constant shifts to $\Phi$ when $\gamma = 1$.)
Scaling a reward function by a positive factor $\lambda > 0$ scales the expected return of all trajectories by $\lambda$, also leaving the set of optimal policies unchanged.
If $R_A \equiv R_B$ for some fixed $(\mathcal{S}, \mathcal{A}, \gamma)$, then for any MDP\R $M^- = (\mathcal{S}, \mathcal{A}, \gamma, d_0, \mathcal{T})$ we have
$\pi^*\left(\left(M^-, R_A\right)\right) = \pi^*\left(\left(M^-, R_B\right)\right)$,
where $(M^-, R)$ denotes the MDP specified by $M^-$ with reward function $R$.
In other words, $R_A$ and $R_B$ induce the same optimal policies for all initial state distributions $d_0$ and transition dynamics $\mathcal{T}$.
###### Definition 3.4.
Let $X$ be a set and $d : X \times X \to [0, \infty)$ a function. $d$ is a *premetric* if $d(x, x) = 0$ for all $x \in X$. $d$ is a *pseudometric* if, furthermore, it is symmetric, $d(x, y) = d(y, x)$ for all $x, y \in X$, and satisfies the triangle inequality, $d(x, z) \leq d(x, y) + d(y, z)$ for all $x, y, z \in X$. $d$ is a *metric* if, furthermore, $d(x, y) = 0 \implies x = y$ for all $x, y \in X$.

We wish for $d(R_A, R_B) = 0$ whenever the rewards are equivalent, $R_A \equiv R_B$, even if they are not identical, $R_A \neq R_B$.
This is forbidden for a metric but permitted for a pseudometric, which retains the other guarantees a metric provides, such as symmetry and the triangle inequality.
Accordingly, a pseudometric is usually the best choice for a distance $d$ over reward functions.
4 Comparing reward functions with EPIC
---------------------------------------
In this section we introduce the *Equivalent-Policy Invariant Comparison (EPIC)* pseudometric.
This novel distance canonicalizes the reward functions’ potential-based shaping, then compares the canonical representatives using Pearson correlation, which is invariant to scale.
Together, this construction makes EPIC invariant on reward equivalence classes.
See section [A.3.2](#A1.SS3.SSS2 "A.3.2 Equivalent-Policy Invariant Comparison (EPIC) pseudometric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for proofs.
We define the *canonically shaped reward* $C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)$ as an expectation over some arbitrary distributions $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ over states $\mathcal{S}$ and actions $\mathcal{A}$ respectively.
This construction means that $C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)$ does not depend on the MDP’s initial state distribution $d_0$ or transition dynamics $\mathcal{T}$.
In particular, we may evaluate $R$ on transitions that are impossible in the training environment, since these may become possible in a deployment environment with a different $d_0$ or $\mathcal{T}$.
###### Definition 4.1 (Canonically Shaped Reward).
Let $R : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be a reward function.
Given distributions $\mathcal{D}_{\mathcal{S}} \in \Delta(\mathcal{S})$ and $\mathcal{D}_{\mathcal{A}} \in \Delta(\mathcal{A})$ over states and actions, let $S$ and $S'$ be random variables independently sampled from $\mathcal{D}_{\mathcal{S}}$ and $A$ sampled from $\mathcal{D}_{\mathcal{A}}$.
We define the *canonically shaped* $R$ to be:

$$C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)(s, a, s') = R(s, a, s') + \mathbb{E}\left[\gamma R(s', A, S') - R(s, A, S') - \gamma R(S, A, S')\right]. \tag{2}$$
Informally, if $R'$ is shaped by potential $\Phi$, then increasing $\Phi(s)$ decreases $R'(s, a, s')$ but increases $\mathbb{E}\left[-R'(s, A, S')\right]$, canceling.
Similarly, increasing $\Phi(s')$ increases $R'(s, a, s')$ but decreases $\mathbb{E}\left[\gamma R'(s', A, S')\right]$.
Finally, $\mathbb{E}\left[\gamma R(S, A, S')\right]$ centers the reward, canceling constant shifts.
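A Monte Carlo sketch of Equation (2) is below. The uniform choices for $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ and the integer coding of states and actions are our assumptions for illustration; the definition itself allows arbitrary distributions.

```python
import numpy as np

def canonicalize(reward_fn, n_states, n_actions, gamma, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the canonically shaped reward (Equation 2).

    `reward_fn` must accept integer arrays (s, a, s') and return an array.
    """
    rng = np.random.default_rng(seed)
    S = rng.integers(n_states, size=n_samples)   # S  ~ D_S
    A = rng.integers(n_actions, size=n_samples)  # A  ~ D_A
    Sp = rng.integers(n_states, size=n_samples)  # S' ~ D_S, independent of S
    mean_all = np.mean(reward_fn(S, A, Sp))      # E[R(S, A, S')]

    def canonical(s, a, s_next):
        # E[gamma * R(s', A, S')] - E[R(s, A, S')] - gamma * E[R(S, A, S')]
        term_next = gamma * np.mean(reward_fn(np.full(n_samples, s_next), A, Sp))
        term_cur = np.mean(reward_fn(np.full(n_samples, s), A, Sp))
        return reward_fn(s, a, s_next) + term_next - term_cur - gamma * mean_all

    return canonical
```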
**Proposition (The Canonically Shaped Reward is Invariant to Shaping).**
Let $R : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be a reward function and $\Phi : \mathcal{S} \to \mathbb{R}$ a potential function.
Let $\gamma \in [0, 1]$ be a discount rate, and $\mathcal{D}_{\mathcal{S}} \in \Delta(\mathcal{S})$ and $\mathcal{D}_{\mathcal{A}} \in \Delta(\mathcal{A})$ be distributions over states and actions.
Let $R'$ denote $R$ shaped by $\Phi$: $R'(s, a, s') = R(s, a, s') + \gamma\Phi(s') - \Phi(s)$.
Then the canonically shaped $R'$ and $R$ are equal: $C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R') = C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)$.

*Proof.* See section [A.3.2](#A1.SS3.SSS2 "A.3.2 Equivalent-Policy Invariant Comparison (EPIC) pseudometric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions").
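As a quick numerical check of this proposition under the assumptions of the `canonicalize` sketch above, a random tabular reward and its shaped counterpart canonicalize to approximately the same function; the residual reflects Monte Carlo error, not looseness in the result.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_a, gamma = 5, 3, 0.9
table = rng.normal(size=(n_s, n_a, n_s))   # a random tabular reward
phi = rng.normal(size=n_s)                 # a random potential function

reward = lambda s, a, sp: table[s, a, sp]
shaped = lambda s, a, sp: table[s, a, sp] + gamma * phi[sp] - phi[s]

canon_r = canonicalize(reward, n_s, n_a, gamma)
canon_shaped = canonicalize(shaped, n_s, n_a, gamma)
print(abs(canon_r(0, 1, 2) - canon_shaped(0, 1, 2)))  # ~0, up to sampling noise
```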
Proposition [4.1](#S4.Thmdefn1 "Definition 4.1 (Canonically Shaped Reward). ‣ 4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions") holds for arbitrary distributions $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$.
However, in the following proposition we show that the potential shaping applied by the canonicalization $C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)$ is more influenced by perturbations to $R$ at transitions $(s, a, s')$ with high joint probability.
This suggests choosing $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ to have broad support, making $C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)$ more robust to perturbations of any given transition.
**Proposition.**
Let $\mathcal{S}$ and $\mathcal{A}$ be finite, with $|\mathcal{S}| \geq 2$.
Let $\mathcal{D}_{\mathcal{S}} \in \Delta(\mathcal{S})$ and $\mathcal{D}_{\mathcal{A}} \in \Delta(\mathcal{A})$.
Let $R, \nu : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be reward functions, with $\nu(s, a, s') = \lambda \mathbb{I}[(s, a, s') = (x, u, x')]$, where $\lambda \in \mathbb{R}$, $x, x' \in \mathcal{S}$ and $u \in \mathcal{A}$.
Let $\Phi_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)(s, a, s') = C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)(s, a, s') - R(s, a, s')$.
Then:

$$\left\|\Phi_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R + \nu) - \Phi_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R)\right\|_{\infty} = \lambda\left(1 + \gamma\mathcal{D}_{\mathcal{S}}(x)\right)\mathcal{D}_{\mathcal{A}}(u)\,\mathcal{D}_{\mathcal{S}}(x'). \tag{3}$$
We have canonicalized potential shaping; next, we compare the rewards in a scale-invariant manner.
###### Definition 4.2.
The *Pearson distance* between random variables $X$ and $Y$ is defined by
$D_{\rho}(X, Y) = \sqrt{1 - \rho(X, Y)}\,/\sqrt{2}$,
where $\rho(X, Y)$ is the Pearson correlation between $X$ and $Y$.
**Lemma.**
The Pearson distance $D_{\rho}$ is a pseudometric.
Moreover, let $a, b \in (0, \infty)$, $c, d \in \mathbb{R}$ and $X, Y$ be random variables.
Then it follows that $0 \leq D_{\rho}(aX + c, bY + d) = D_{\rho}(X, Y) \leq 1$.
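A sketch of $D_{\rho}$ on sample vectors is below; the clamp guards against floating-point correlations marginally above 1. Per the lemma, the distance is unchanged by positive affine transformations of either argument.

```python
import numpy as np

def pearson_distance(x, y):
    """D_rho(X, Y) = sqrt(1 - rho(X, Y)) / sqrt(2), estimated from samples."""
    rho = np.corrcoef(x, y)[0, 1]
    return np.sqrt(max(0.0, 1.0 - rho)) / np.sqrt(2.0)

x = np.random.default_rng(0).normal(size=1_000)
assert pearson_distance(x, 3.0 * x + 5.0) < 1e-6   # positive affine transform
```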
We can now define EPIC in terms of the Pearson distance between canonically shaped rewards.
###### Definition 4.3 (Equivalent-Policy Invariant Comparison (EPIC) pseudometric).
Let $\mathcal{D}$ be some coverage distribution over transitions $s \overset{a}{\to} s'$.
Let $S, A, S'$ be random variables jointly sampled from $\mathcal{D}$.
Let $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ be some distributions over states $\mathcal{S}$ and actions $\mathcal{A}$ respectively.
The *Equivalent-Policy Invariant Comparison (EPIC)* distance between reward functions $R_A$ and $R_B$ is:

$$D_{\mathrm{EPIC}}(R_A, R_B) = D_{\rho}\left(C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R_A)(S, A, S'),\; C_{\mathcal{D}_{\mathcal{S}}, \mathcal{D}_{\mathcal{A}}}(R_B)(S, A, S')\right). \tag{4}$$
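Putting the pieces together, here is a sketch of Equation (4) that reuses the `canonicalize` and `pearson_distance` helpers sketched above; `coverage_samples` is assumed to be a list of $(s, a, s')$ transitions drawn jointly from $\mathcal{D}$.

```python
import numpy as np

def epic_distance(canon_a, canon_b, coverage_samples):
    """D_EPIC: Pearson distance between canonicalized rewards over D."""
    xs = np.array([canon_a(s, a, sp) for s, a, sp in coverage_samples])
    ys = np.array([canon_b(s, a, sp) for s, a, sp in coverage_samples])
    return pearson_distance(xs, ys)
```

For tabular rewards one would first build, e.g., `canon_a = canonicalize(reward_a, n_s, n_a, gamma)` and likewise `canon_b`, then call `epic_distance` on a shared set of coverage samples.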
**Theorem.**
The Equivalent-Policy Invariant Comparison distance is a pseudometric.
Since EPIC is a pseudometric, it satisfies the triangle inequality.
To see why this is useful, consider an environment with an expensive-to-evaluate ground-truth reward $R$.
Directly comparing many learned rewards $\hat{R}$ to $R$ might be prohibitively expensive.
We can instead pay a one-off cost: query $R$ a finite number of times and infer a proxy reward $R_P$ with $D_{\mathrm{EPIC}}(R, R_P) \leq \epsilon$.
The triangle inequality allows us to evaluate $\hat{R}$ via comparison to $R_P$, since $D_{\mathrm{EPIC}}(\hat{R}, R) \leq D_{\mathrm{EPIC}}(\hat{R}, R_P) + \epsilon$.
This is particularly useful for benchmarks, which can be expensive to build but should be cheap to use.
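A toy illustration of this workflow, with hypothetical numbers:

```python
eps = 0.02            # one-off certified bound: D_EPIC(R, R_P) <= eps
d_hat_proxy = 0.05    # cheap comparison against the proxy: D_EPIC(R_hat, R_P)
upper_bound = d_hat_proxy + eps
print(upper_bound)    # D_EPIC(R_hat, R) <= 0.07 by the triangle inequality
```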
**Theorem.**
Let $R_A$, $R_A'$, $R_B$, $R_B' : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be reward functions such that $R_A' \equiv R_A$ and $R_B' \equiv R_B$.
Then $0 \leq D_{\mathrm{EPIC}}(R_A', R_B') = D_{\mathrm{EPIC}}(R_A, R_B) \leq 1$.
The following is our main theoretical result, showing that the distance $D_{\mathrm{EPIC}}(R_A, R_B)$ gives an upper bound on the difference in returns, under *either* $R_A$ or $R_B$, between the optimal policies $\pi^*_{R_A}$ and $\pi^*_{R_B}$.
In other words, EPIC bounds the regret under $R_A$ of using $\pi^*_{R_B}$ instead of $\pi^*_{R_A}$.
Moreover, by symmetry $D_{\mathrm{EPIC}}(R_A, R_B)$ also bounds the regret under $R_B$ of using $\pi^*_{R_A}$ instead of $\pi^*_{R_B}$.
{restatable}
theoremepicregretbounddiscrete
Let M𝑀Mitalic\_M be a γ𝛾\gammaitalic\_γ-discounted MDP\R with finite state and action spaces 𝒮𝒮{\mathcal{S}}caligraphic\_S and 𝒜𝒜{\mathcal{A}}caligraphic\_A.
Let RA,RB:𝒮×𝒜×𝒮→ℝ:subscript𝑅𝐴subscript𝑅𝐵
→𝒮𝒜𝒮ℝR\_{A}{},R\_{B}{}:{\mathcal{S}}\times{\mathcal{A}}\times{\mathcal{S}}\to\mathbb{R}italic\_R start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_R start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT : caligraphic\_S × caligraphic\_A × caligraphic\_S → blackboard\_R be rewards, and πA\*,πB\*superscriptsubscript𝜋𝐴superscriptsubscript𝜋𝐵\pi\_{A}^{\*},\pi\_{B}^{\*}italic\_π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT be respective optimal policies.
Let 𝒟π(t,st,at,st+1)subscript𝒟𝜋𝑡subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1\mathcal{D}\_{\pi}(t,{s\_{t}},{a\_{t}},{s\_{t+1}})caligraphic\_D start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_t , italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) denote the distribution over transitions 𝒮×𝒜×𝒮𝒮𝒜𝒮{\mathcal{S}}\times{\mathcal{A}}\times{\mathcal{S}}caligraphic\_S × caligraphic\_A × caligraphic\_S induced by policy π𝜋\piitalic\_π at time t𝑡titalic\_t, and 𝒟(s,a,s′)𝒟𝑠𝑎superscript𝑠′\mathcal{D}{}({s},{a},{s^{\prime}})caligraphic\_D ( italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) be the coverage distribution used to compute DEPICsubscript𝐷EPICD\_{\mathrm{EPIC}}italic\_D start\_POSTSUBSCRIPT roman\_EPIC end\_POSTSUBSCRIPT.
Suppose there exists K>0𝐾0K>0italic\_K > 0 such that K𝒟(st,at,st+1)≥𝒟π(t,st,at,st+1)𝐾𝒟subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1subscript𝒟𝜋𝑡subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1K\mathcal{D}({s\_{t}},{a\_{t}},{s\_{t+1}})\geq\mathcal{D}\_{\pi}(t,{s\_{t}},{a\_{t}},{s\_{t+1}})italic\_K caligraphic\_D ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ≥ caligraphic\_D start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_t , italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) for all times t∈ℕ𝑡ℕt\in\mathbb{N}italic\_t ∈ blackboard\_N, triples (st,at,st+1)∈𝒮×𝒜×𝒮subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1𝒮𝒜𝒮({s\_{t}},{a\_{t}},{s\_{t+1}})\in{\mathcal{S}}\times{\mathcal{A}}\times{\mathcal{S}}( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ∈ caligraphic\_S × caligraphic\_A × caligraphic\_S and policies π∈{πA\*,πB\*}𝜋superscriptsubscript𝜋𝐴superscriptsubscript𝜋𝐵\pi\in\{\pi\_{A}^{\*},\pi\_{B}^{\*}\}italic\_π ∈ { italic\_π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT }.
Then the regret under $R_A$ from executing $\pi_B^*$ instead of $\pi_A^*$ is at most
$$G_{R_A}(\pi_A^*) - G_{R_A}(\pi_B^*) \leq 16 K \lVert R_A \rVert_2 \left(1 - \gamma\right)^{-1} D_{\mathrm{EPIC}}(R_A, R_B),$$
where $G_R(\pi)$ is the return of policy $\pi$ under reward $R$.
We generalize the regret bound to continuous spaces in Theorem [A.16](#A1.Thmdefn16 "Theorem A.16. ‣ A.6 Lipschitz Reward Functions ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") via a Lipschitz assumption, with Wasserstein distance replacing $K$.
Importantly, the returns of $\pi_A^*$ and $\pi_B^*$ converge as $D_{\mathrm{EPIC}}(R_A, R_B) \to 0$ in both cases, no matter which reward function you evaluate on.
The key assumption is that the coverage distribution $\mathcal{D}$ has adequate support for transitions occurring in rollouts of $\pi_A^*$ and $\pi_B^*$.
The bound is tightest when $\mathcal{D}$ is similar to $\mathcal{D}_{\pi_A^*}$ and $\mathcal{D}_{\pi_B^*}$.
However, computing $\pi_A^*$ and $\pi_B^*$ is often intractable.
The MDP $M$ may be unknown, such as when making predictions about an unseen deployment environment.
Even when $M$ is known, RL is computationally expensive and may fail to converge in non-trivial environments.
In finite cases, a uniform $\mathcal{D}$ satisfies the requirements with $K \leq |\mathcal{S}|^2 |\mathcal{A}|$.
In general, it is best to choose $\mathcal{D}$ to have broad coverage over plausible transitions.
Broad coverage ensures adequate support for $\mathcal{D}_{\pi_A^*}$ and $\mathcal{D}_{\pi_B^*}$.
But excluding transitions that are unlikely or impossible to occur leads to tighter regret bounds, due to a smaller $K$ (finite case) or Wasserstein distance (continuous case).
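To make the role of $K$ concrete, here is a minimal tabular sketch (our illustration, not code from the paper; the array layout and function names are assumptions) of estimating the density-ratio coefficient and plugging it into the bound above.

```python
import numpy as np

def density_ratio_k(d_coverage, d_policy):
    """Smallest K with K * D(s, a, s') >= D_pi(s, a, s') elementwise.

    d_coverage, d_policy: probability arrays over S x A x S transitions
    (time dependence dropped for simplicity). If the coverage misses a
    transition the policy can take, no finite K exists.
    """
    support = d_policy > 0
    return float(np.max(d_policy[support] / d_coverage[support]))

def epic_regret_bound(k, reward_l2_norm, gamma, d_epic):
    # The theorem's bound:
    # G_{R_A}(pi_A*) - G_{R_A}(pi_B*) <= 16 K ||R_A||_2 (1 - gamma)^{-1} D_EPIC.
    return 16.0 * k * reward_l2_norm * d_epic / (1.0 - gamma)
```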
While EPIC upper bounds policy regret, it does not lower bound it.
In fact, no reward distance can lower bound regret in arbitrary environments.
For example, suppose the deployment environment transitions to a randomly chosen state independent of the action taken.
In this case, all policies obtain the same expected return, so the policy regret is always zero, regardless of the reward functions.

Figure 1: Heatmaps of four reward functions for a $3 \times 3$ gridworld. Sparse and Dense look different but are actually equivalent, with $D_{\mathrm{EPIC}}(\textup{Sparse}, \textup{Dense}) = 0$.
By contrast, the optimal policies for Path and Cliff are the same if the gridworld is deterministic but different if it is “slippery”.
EPIC recognizes this difference with $D_{\mathrm{EPIC}}(\textup{Path}, \textup{Cliff}) = 0.27$.
Key:
Reward $R(s, s')$ for moving from $s$ to $s'$ is given by the triangular wedge in cell $s$ that is adjacent to cell $s'$.
$R(s, s)$ is given by the central circle in cell $s$.
Optimal action(s) (deterministic, infinite horizon, discount $\gamma = 0.99$) have bold labels.
See Figure [A.2](#A1.F2 "Figure A.2 ‣ A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for the distances between all reward pairs.
To demonstrate EPIC’s properties, we compare the gridworld reward functions from Figure [1](#S4.F1 "Figure 1 ‣ 4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions"), reporting the distances between all reward pairs in Figure [A.2](#A1.F2 "Figure A.2 ‣ A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions").
Dense is a rescaled and shaped version of Sparse, despite looking dissimilar at first glance, so $D_{\mathrm{EPIC}}(\textup{Sparse}, \textup{Dense}) = 0$.
By contrast, $D_{\mathrm{EPIC}}(\textup{Path}, \textup{Cliff}) = 0.27$.
In *deterministic* gridworlds, Path and Cliff have the same optimal policy, so the rollout method could wrongly conclude they are equivalent.
But in fact the rewards are fundamentally different: when there is a significant risk of “slipping” in the wrong direction, the optimal policy for Cliff walks along the top instead of the middle row, incurring a $-1$ penalty to avoid the risk of falling into the $-4$ “cliff”.
For this example, we used state and action distributions $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ uniform over $\mathcal{S}$ and $\mathcal{A}$, and coverage distribution $\mathcal{D}$ uniform over state-action pairs $(s, a)$, with $s'$ computed deterministically.
It is important that these distributions have adequate support.
As an extreme example, if $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}$ have no support for a particular state, then the reward of that state has no effect on the distance.
We can compute EPIC exactly in a tabular setting, but in general we use a sample-based approximation (section [A.1.1](#A1.SS1.SSS1 "A.1.1 Sample-based approximation for EPIC distance ‣ A.1 Approximation Procedures ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions")).
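As a rough illustration of the sample-based route (ours, not the released implementation; the vectorized `reward_fn(s, a, s2)` interface and scalar states/actions are simplifying assumptions), one can canonicalize each reward by Monte Carlo and then take the Pearson distance over coverage samples:

```python
import numpy as np

def pearson_distance(x, y):
    # D_rho(X, Y) = sqrt((1 - rho(X, Y)) / 2), in [0, 1].
    rho = np.corrcoef(x, y)[0, 1]
    return float(np.sqrt(max(0.0, (1.0 - rho) / 2.0)))

def canonicalize(reward_fn, s, a, s2, s_samp, a_samp, sp_samp, gamma):
    """Monte Carlo estimate of the canonically shaped reward on a batch.

    s, a, s2: N coverage transitions; s_samp, a_samp, sp_samp: equal-length
    sample arrays from D_S, D_A, D_S, used inside the expectation
    E[gamma R(s', A, S') - R(s, A, S') - gamma R(S, A, S')].
    """
    n, m = len(s), len(a_samp)

    def mean_from(states):
        # E over (A, S') of R(x, A, S'), for each x in `states`.
        rep = np.repeat(states, m)
        r = reward_fn(rep, np.tile(a_samp, n), np.tile(sp_samp, n))
        return r.reshape(n, m).mean(axis=1)

    const = reward_fn(s_samp, a_samp, sp_samp).mean()  # ~ E[R(S, A, S')]
    return reward_fn(s, a, s2) + gamma * mean_from(s2) - mean_from(s) - gamma * const

def epic_distance(r_a, r_b, s, a, s2, s_samp, a_samp, sp_samp, gamma):
    c_a = canonicalize(r_a, s, a, s2, s_samp, a_samp, sp_samp, gamma)
    c_b = canonicalize(r_b, s, a, s2, s_samp, a_samp, sp_samp, gamma)
    return pearson_distance(c_a, c_b)
```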
5 Baseline approaches for comparing reward functions
-----------------------------------------------------
Given the lack of established methods, we develop two alternatives as baselines: Episode Return Correlation (ERC) and Nearest Point in Equivalence Class (NPEC).
###
5.1 Episode Return Correlation (ERC)
The goal of an MDP is to maximize expected episode return, so it is natural to compare reward functions by the returns they induce.
If the return of a reward function $R_A$ is a positive affine transformation of the return of another reward $R_B$, then $R_A$ and $R_B$ have the same set of optimal policies.
This suggests using Pearson distance, which is invariant to positive affine transformations.
######
Definition 5.1 (Episode Return Correlation (ERC) pseudometric).
Let $\mathcal{D}$ be some distribution over trajectories.
Let $E$ be a random variable sampled from $\mathcal{D}$.
The *Episode Return Correlation* distance between reward functions $R_A$ and $R_B$ is the Pearson distance between their episode returns on $\mathcal{D}$,
$$D_{\mathrm{ERC}}(R_A, R_B) = D_\rho(g(E; R_A), g(E; R_B)).$$
Prior work has produced scatter plots of the return of $R_A$ against $R_B$ over episodes [[4](#bib.bib4), Figure 3] and fixed-length segments [[10](#bib.bib10), section D].
ERC is the Pearson distance of such plots, so is a natural baseline.
We approximate ERC by the correlation of episode returns on a finite collection of rollouts.
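Concretely, this approximation is just a correlation over returns. A minimal sketch (ours), assuming the Pearson distance convention $D_\rho = \sqrt{(1 - \rho)/2}$ and trajectories stored as lists of $(s, a, s')$ transitions:

```python
import numpy as np

def episode_return(trajectory, reward_fn, gamma):
    # g(tau; R) = sum_t gamma^t R(s_t, a_t, s_{t+1}).
    return sum(gamma**t * reward_fn(s, a, s2)
               for t, (s, a, s2) in enumerate(trajectory))

def erc_distance(trajectories, r_a, r_b, gamma):
    """Approximate D_ERC over a finite collection of rollouts."""
    g_a = np.array([episode_return(tau, r_a, gamma) for tau in trajectories])
    g_b = np.array([episode_return(tau, r_b, gamma) for tau in trajectories])
    rho = np.corrcoef(g_a, g_b)[0, 1]
    return float(np.sqrt(max(0.0, (1.0 - rho) / 2.0)))
```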
ERC is invariant to shaping when the initial state $s_0$ and terminal state $s_T$ are fixed.
Let $R$ be a reward function and $\Phi$ a potential function, and define the shaped reward $R'(s, a, s') = R(s, a, s') + \gamma \Phi(s') - \Phi(s)$.
The return under the shaped reward on a trajectory $\tau = (s_0, a_0, \cdots, s_T)$ is $g(\tau; R') = g(\tau; R) + \gamma^T \Phi(s_T) - \Phi(s_0)$.
Since $s_0$ and $s_T$ are fixed, $\gamma^T \Phi(s_T) - \Phi(s_0)$ is constant.
It follows that ERC is invariant to shaping, as Pearson distance is invariant to constant shifts.
In fact, for infinite-horizon discounted MDPs only $s_0$ needs to be fixed, since $\gamma^T \Phi(s_T) \to 0$ as $T \to \infty$.
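A quick numerical check of the telescoping identity above (our toy example, with a random state-only reward matrix and made-up dynamics): on fixed-endpoint trajectories, the shaped and unshaped returns differ by the same constant, so their Pearson correlation is unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, horizon, gamma = 10, 5, 0.9
reward = rng.normal(size=(n_states, n_states))   # R(s, s'); actions omitted
phi = rng.normal(size=n_states)                  # potential Phi(s)

def g(states, r):
    # Discounted return along a state sequence of length horizon + 1.
    return sum(gamma**t * r[states[t], states[t + 1]] for t in range(horizon))

# R'(s, s') = R(s, s') + gamma * Phi(s') - Phi(s).
shaped = reward + gamma * phi[None, :] - phi[:, None]

for _ in range(3):
    # Fixed endpoints s_0 = 0 and s_T = 1; random intermediate states.
    states = [0] + list(rng.integers(0, n_states, size=horizon - 1)) + [1]
    # The shift equals gamma^T Phi(s_T) - Phi(s_0), constant across trajectories.
    print(g(states, shaped) - g(states, reward),
          gamma**horizon * phi[1] - phi[0])
```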
However, if the initial state $s_0$ is stochastic, then the ERC distance can take on arbitrary values under shaping.
Let $R_A$ and $R_B$ be two arbitrary reward functions.
Suppose that there are at least two distinct initial states, $s_X$ and $s_Y$, with non-zero measure in $\mathcal{D}$.
Choose potential $\Phi(s) = 0$ everywhere except $\Phi(s_X) = \Phi(s_Y) = c$, and let $R_A'$ and $R_B'$ denote $R_A$ and $R_B$ shaped by $\Phi$.
As $c \to \infty$, the correlation $\rho\left(g(E; R_A'), g(E; R_B')\right) \to 1$.
This is because the relative difference tends to zero, even though $g(E; R_A')$ and $g(E; R_B')$ continue to have the same absolute difference as $c$ varies.
Consequently, the ERC pseudometric $D_{\mathrm{ERC}}(R_A', R_B') \to 0$ as $c \to \infty$.
By an analogous argument, setting $\Phi(s_X) = c$ and $\Phi(s_Y) = -c$ gives $D_{\mathrm{ERC}}(R_A', R_B') \to 1$ as $c \to \infty$.
###
5.2 Nearest Point in Equivalence Class (NPEC)
NPEC takes the minimum $L^p$ distance between equivalence classes. See section [A.3.3](#A1.SS3.SSS3 "A.3.3 Nearest Point in Equivalence Class (NPEC) premetric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for proofs.
######
Definition 5.2 ($L^p$ distance).
Let $\mathcal{D}$ be a coverage distribution over transitions $s \overset{a}{\to} s'$ and let $p \geq 1$ be a power.
The *$L^p$ distance* between reward functions $R_A$ and $R_B$ is the $L^p$ norm of their difference:
$$D_{L^p, \mathcal{D}}(R_A, R_B) = \left(\mathbb{E}_{s, a, s' \sim \mathcal{D}}\left[\left\lvert R_A(s, a, s') - R_B(s, a, s')\right\rvert^p\right]\right)^{1/p}.$$
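A sample-based estimate of this quantity is a few lines of numpy; the sketch below (ours) assumes a vectorized reward interface and transitions drawn from $\mathcal{D}$:

```python
import numpy as np

def lp_distance(r_a, r_b, s, a, s2, p=2):
    """Sample-based L^p distance with (s, a, s2) drawn from the coverage D."""
    diff = np.abs(r_a(s, a, s2) - r_b(s, a, s2))
    return float(np.mean(diff**p) ** (1.0 / p))
```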
The $L^p$ distance is affected by potential shaping and positive rescaling, which do not change the optimal policy.
A natural solution is to take the distance from the *nearest point* in the equivalence class:
$$D_{\mathrm{NPEC}}^U(R_A, R_B) = \inf_{R_A' \equiv R_A} D_{L^p, \mathcal{D}}(R_A', R_B).$$
Unfortunately, $D_{\mathrm{NPEC}}^U$ is sensitive to $R_B$'s scale.
It is tempting to instead take the infimum over both arguments of $D_{L^p, \mathcal{D}}$.
However, $\inf_{R_A' \equiv R_A,\, R_B' \equiv R_B} D_{L^p, \mathcal{D}}(R_A', R_B') = 0$, since all equivalence classes come arbitrarily close to the origin in $L^p$ space.
Instead, we fix this by normalizing $D_{\mathrm{NPEC}}^U$.
######
Definition 5.3.
*NPEC* is defined by $D_{\mathrm{NPEC}}(R_A, R_B) = D_{\mathrm{NPEC}}^U(R_A, R_B) / D_{\mathrm{NPEC}}^U(\textup{Zero}, R_B)$ when $D_{\mathrm{NPEC}}^U(\textup{Zero}, R_B) \neq 0$, and is otherwise given by $D_{\mathrm{NPEC}}(R_A, R_B) = 0$.
If $D_{\mathrm{NPEC}}^U(\textup{Zero}, R_B) = 0$ then $D_{\mathrm{NPEC}}^U(R_A, R_B) = 0$, since $R_A$ can be scaled arbitrarily close to Zero.
Since all policies are optimal for $R \equiv \textup{Zero}$, we choose $D_{\mathrm{NPEC}}(R_A, R_B) = 0$ in this case.
######
Theorem (NPEC properties).
$D_{\mathrm{NPEC}}$ is a premetric on the space of bounded reward functions. Moreover, let $R_A, R_A', R_B, R_B' : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ be bounded reward functions such that $R_A \equiv R_A'$ and $R_B \equiv R_B'$.
Then $0 \leq D_{\mathrm{NPEC}}(R_A', R_B') = D_{\mathrm{NPEC}}(R_A, R_B) \leq 1$.
*Proof sketch.*
The premetric property follows from $D_{L^p, \mathcal{D}}$ being a pseudometric; see section [A.3.3](#A1.SS3.SSS3 "A.3.3 Nearest Point in Equivalence Class (NPEC) premetric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for details.
Invariance to $R_A' \equiv R_A$ is immediate from the infimum being taken over $R \equiv R_A$.
Invariance to $R_B' \equiv R_B$ is due to the translational invariance of $D_{L^p, \mathcal{D}}$ and to $D_{\mathrm{NPEC}}^U(R_A, \lambda R_B) = \lambda D_{\mathrm{NPEC}}^U(R_A, R_B)$ for $\lambda > 0$.
The upper bound of $1$ is due to $D_{\mathrm{NPEC}}^U(R_A, R_B) \leq D_{\mathrm{NPEC}}^U(\textup{Zero}, R_B)$, while the lower bound is immediate from $D_{L^p, \mathcal{D}}$ being non-negative.
See section [A.3.3](#A1.SS3.SSS3 "A.3.3 Nearest Point in Equivalence Class (NPEC) premetric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for details.
Note that $D_{\mathrm{NPEC}}$ may not be symmetric and so is not, in general, a pseudometric: see proposition [A.3](#A1.Thmdefn3 "Proposition A.3. ‣ A.3.3 Nearest Point in Equivalence Class (NPEC) premetric ‣ A.3 Proofs ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions").
The infimum in $D_{\mathrm{NPEC}}^U$ can be computed exactly in a tabular setting, but in general we must approximate it using gradient descent.
This gives an upper bound for $D_{\mathrm{NPEC}}^U$, but the quotient of upper bounds $D_{\mathrm{NPEC}}$ may be too low or too high.
See section [A.1.2](#A1.SS1.SSS2 "A.1.2 Optimization-based approximation for NPEC distance ‣ A.1 Approximation Procedures ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for details of the approximation.
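For intuition, here is a tabular sketch of that approximation (ours; the paper's actual procedure is described in section A.1.2). It searches the equivalence class of $R_A$, parameterized by a positive rescaling $\lambda$ and a potential $\Phi$, by local optimization:

```python
import numpy as np
from scipy.optimize import minimize

def npec_u(r_a, r_b, weights, gamma):
    """Upper-bound D^U_NPEC(r_a, r_b) for tabular rewards of shape (S, A, S).

    Searches r_a's equivalence class lambda * r_a + gamma * Phi(s') - Phi(s),
    lambda > 0, minimizing the weighted L^2 distance to r_b; `weights` are
    the coverage probabilities of each (s, a, s') triple.
    """
    n_states = r_a.shape[0]

    def objective(params):
        lam, phi = np.exp(params[0]), params[1:]
        shaped = lam * r_a + gamma * phi[None, None, :] - phi[:, None, None]
        return np.sqrt(np.sum(weights * (shaped - r_b) ** 2))

    res = minimize(objective, x0=np.zeros(1 + n_states), method="L-BFGS-B")
    return float(res.fun)

def npec(r_a, r_b, weights, gamma):
    # Normalize by the distance from the Zero reward's equivalence class.
    denom = npec_u(np.zeros_like(r_b), r_b, weights, gamma)
    return 0.0 if denom == 0 else npec_u(r_a, r_b, weights, gamma) / denom
```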
6 Experiments
--------------
We evaluate EPIC and the baselines ERC and NPEC in a variety of continuous control tasks.
In section [6.1](#S6.SS1 "6.1 Comparing hand-designed reward functions ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"), we compute the distance between hand-designed reward functions, finding EPIC to be the most reliable.
NPEC has substantial approximation error, and ERC sometimes erroneously assigns high distance to equivalent rewards.
Next, in section [6.2](#S6.SS2 "6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions") we show EPIC is robust to the exact choice of coverage distribution $\mathcal{D}$, whereas ERC and especially NPEC are highly sensitive to the choice of $\mathcal{D}$.
Finally, in section [6.3](#S6.SS3 "6.3 Predicting policy performance from reward distance ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions") we find that the distance of learned reward functions to a ground-truth reward predicts the return obtained by policy training, even in an unseen test environment.
###
6.1 Comparing hand-designed reward functions

[Figure 2 comprises three heatmap panels: (a) EPIC, (b) NPEC, (c) ERC.]
Figure 2: Approximate distances between hand-designed reward functions in PointMass, where the agent moves on a line trying to reach the origin. EPIC correctly assigns $0$ distance between equivalent rewards such as $(\textup{D}, \textup{S})$, while $D_{\mathrm{NPEC}}(\textup{D}, \textup{S}) = 0.58$ and $D_{\mathrm{ERC}}(\textup{D}, \textup{S}) = 0.56$. The coverage distribution $\mathcal{D}$ is sampled from rollouts of a policy $\pi_{\mathrm{uni}}$ taking actions uniformly at random. Key: the agent has position $x \in \mathbb{R}$, velocity $\dot{x} \in \mathbb{R}$ and can accelerate $\ddot{x} \in \mathbb{R}$, producing future position $x' \in \mathbb{R}$. One icon variant marks rewards with a quadratic penalty on control $\ddot{x}^2$; the other marks rewards with no control penalty.
S is $\mathrm{Sparse}(x) = \mathbb{1}[|x| < 0.05]$, D is the shaped $\mathrm{Dense}(x, x') = \mathrm{Sparse}(x) + |x'| - |x|$, while M is $\mathrm{Magnitude}(x) = -|x|$.
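For concreteness, the key's reward functions can be transcribed directly; in this sketch (ours) the state is collapsed to the position coordinate, and the optional quadratic control penalty is toggled by a flag:

```python
def sparse(x, x_next, accel, ctrl_penalty=False):
    # S: reward 1 within +/- 0.05 of the origin.
    r = float(abs(x) < 0.05)
    return r - accel**2 if ctrl_penalty else r

def dense(x, x_next, accel, ctrl_penalty=False):
    # D: shaped Sparse, Dense(x, x') = Sparse(x) + |x'| - |x|.
    r = float(abs(x) < 0.05) + abs(x_next) - abs(x)
    return r - accel**2 if ctrl_penalty else r

def magnitude(x, x_next, accel, ctrl_penalty=False):
    # M: negative distance of the agent from the origin.
    r = -abs(x)
    return r - accel**2 if ctrl_penalty else r
```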
We compare procedurally specified reward functions in four tasks, finding that EPIC is more reliable than the baselines NPEC and ERC, and more computationally efficient than NPEC.
Figure [2](#S6.F2 "Figure 2 ‣ 6.1 Comparing hand-designed reward functions ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions") presents results in the proof-of-concept PointMass task.
The results for Gridworld, HalfCheetah and Hopper, in section [A.2.4](#A1.SS2.SSS4 "A.2.4 Comparing hand-designed reward functions ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions"), are qualitatively similar.
In PointMass the agent can accelerate $\ddot{x}$ left or right on a line.
The reward functions include or exclude a quadratic penalty $\ddot{x}^2$ (the two icon variants in Figure 2).
The sparse reward (S) gives a reward of $1$ in the region $\pm 0.05$ from the origin.
The dense reward (D) is a shaped version of the sparse reward.
The magnitude reward (M) is the negative distance of the agent from the origin.
We find that EPIC correctly identifies the equivalent reward pairs (S-D, both with and without the control penalty) with estimated distance $< 1 \times 10^{-3}$.
By contrast, NPEC has substantial approximation error: $D_{\mathrm{NPEC}}(\textup{D}, \textup{S}) = 0.58$.
Similarly, $D_{\mathrm{ERC}}(\textup{D}, \textup{S}) = 0.56$, due to ERC's erroneous handling of stochastic initial states.
Moreover, NPEC is computationally inefficient: Figure [2](#S6.F2 "Figure 2 ‣ 6.1 Comparing hand-designed reward functions ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions")(b) took 31 hours to compute.
By contrast, the figures for EPIC and ERC were generated in less than two hours, and a lower precision approximation of EPIC finishes in just $17$ seconds (see section [A.2.6](#A1.SS2.SSS6 "A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions")).
###
6.2 Sensitivity of reward distance to coverage distribution
Reward distances should be robust to the choice of coverage distribution $\mathcal{D}$.
In Table [2](#S6.T2 "Table 2 ‣ 6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions") (center), we report distances from the ground-truth reward (GT) to reward functions (rows) across coverage distributions $\mathcal{D} \in \{\pi_{\mathrm{uni}}, \pi^*, \texttt{Mix}\}$ (columns).
We find EPIC is fairly robust to the choice of $\mathcal{D}$, with a similar ratio between rows in each column.
By contrast, ERC and especially NPEC are substantially more sensitive to the choice of $\mathcal{D}$.
We evaluate in the PointMaze MuJoCo task from Fu et al. [[8](#bib.bib8)], where a point mass agent must navigate around a wall to reach a goal.
The coverage distributions $\mathcal{D}$ are induced by rollouts from three different policies: $\pi_{\mathrm{uni}}$ takes actions uniformly at random, producing broad support over transitions; $\pi^*$ is an expert policy, yielding a distribution concentrated around the goal; and Mix is a mixture of the two.
In EPIC, $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ are marginalized from $\mathcal{D}$ and so also vary with $\mathcal{D}$.
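A coverage distribution of this kind can be built empirically by pooling rollout transitions. A minimal sketch (ours), where the Gym-style `reset`/`step` interface is an assumption:

```python
def coverage_from_rollouts(env, policy, n_episodes):
    """Collect (s, a, s') samples whose empirical distribution serves as D.

    Assumes a Gym-style API where step returns (obs, reward, done, info);
    marginalizing the samples gives D_S and D_A.
    """
    transitions = []
    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            a = policy(s)
            s2, _, done, _ = env.step(a)
            transitions.append((s, a, s2))
            s = s2
    return transitions

# A mixture such as Mix can be formed by pooling rollouts from both policies:
# coverage_from_rollouts(env, pi_uni, 50) + coverage_from_rollouts(env, pi_star, 50)
```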
Table 2: Low reward distance from the ground-truth (GT) in PointMaze-Train predicts high policy return even in the unseen task PointMaze-Test. EPIC distance is robust to the choice of coverage distribution $\mathcal{D}$, with similar values across columns, while ERC and especially NPEC are sensitive to $\mathcal{D}$. Center: approximate distances ($1000\times$ scale) of reward functions from GT. The coverage distribution $\mathcal{D}$ is computed from rollouts in PointMaze-Train of: a uniform random policy $\pi_{\mathrm{uni}}$, an expert $\pi^*$, and a Mixture of these policies. $\mathcal{D}_{\mathcal{S}}$ and $\mathcal{D}_{\mathcal{A}}$ are computed by marginalizing $\mathcal{D}$.
Right: mean GT return over $9$ seeds of RL training on the reward in PointMaze-{Train,Test}, and returns for AIRL's *generator* policy. Confidence intervals: see Table [A.7](#A1.T7 "Table A.7 ‣ A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions").
| Reward | EPIC $\pi_{\mathrm{uni}}$ | EPIC $\pi^*$ | EPIC Mix | NPEC $\pi_{\mathrm{uni}}$ | NPEC $\pi^*$ | NPEC Mix | ERC $\pi_{\mathrm{uni}}$ | ERC $\pi^*$ | ERC Mix | Return Gen. | Return Train | Return Test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GT | 0.06 | 0.05 | 0.04 | 0.04 | 3.17 | 0.01 | 0.00 | 0.00 | 0.00 | — | −5.19 | −6.59 |
| Regress | 35.8 | 33.7 | 26.1 | 1.42 | 38.9 | 0.35 | 9.99 | 90.7 | 2.43 | — | −5.47 | −6.30 |
| Pref | 68.7 | 100 | 56.8 | 8.51 | 1333 | 9.74 | 24.9 | 360 | 19.6 | — | −5.57 | −5.04 |
| AIRL SO | 572 | 520 | 404 | 817 | 2706 | 488 | 549 | 523 | 240 | −5.43 | −27.3 | −22.7 |
| AIRL SA | 776 | 930 | 894 | 1067 | 2040 | 1039 | 803 | 722 | 964 | −5.05 | −30.7 | −29.0 |
| Mirage | 17.0 | 0.05 | 397 | 0.68 | 6.30 | 597 | 35.3 | <0.01 | 166 | — | −30.4 | −29.1 |

All distances are shown at $1000\times$ scale.
We evaluate four reward learning algorithms:
Regression onto reward labels [*target* method from [6](#bib.bib6), section 3.3],
Preference comparisons on trajectories [[6](#bib.bib6)],
and adversarial IRL with a state-only (AIRL SO) and state-action (AIRL SA) reward model [[8](#bib.bib8)].
All models are trained using synthetic data from an oracle with access to the ground-truth; see section [A.2.2](#A1.SS2.SSS2 "A.2.2 Training Learned Reward Models ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") for details.
We find EPIC is robust to varying $\mathcal{D}$ when comparing the learned reward models: the distance varies by less than $2\times$, and the ranking between the reward models is the same across coverage distributions.
By contrast, NPEC is highly sensitive to $\mathcal{D}$: the ratio of AIRL SO ($817$) to Pref ($8.51$) is $96{:}1$ under $\pi_{\mathrm{uni}}$ but only $2{:}1$ ($2706{:}1333$) under $\pi^*$.
ERC lies somewhere in the middle: the ratio is $22{:}1$ ($549{:}24.9$) under $\pi_{\mathrm{uni}}$ and $3{:}2$ ($523{:}360$) under $\pi^*$.
We evaluate the effect of pathological choices of coverage distribution $\mathcal{D}$ in Table [A.8](#A1.T8 "Table A.8 ‣ A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions").
For example, Ind independently samples states and next states, giving physically impossible transitions, while Jail constrains rollouts to a tiny region excluding the goal.
We find that the ranking of EPIC changes in only one distribution, whilst the ranking of NPEC changes in two cases and ERC changes in all cases.
However, we do find that EPIC is sensitive to $\mathcal{D}$ on Mirage, a reward function we explicitly designed to break these methods.
Mirage assigns a larger reward when close to a “mirage” state than when at the true goal, but is identical to GT at all other points.
The “mirage” state is rarely visited by random exploration $\pi_{\mathrm{uni}}$, as it is far away and on the opposite side of the wall from the agent.
The expert policy $\pi^*$ is even less likely to visit it, as it is not on or close to the optimal path to the goal.
As a result, the EPIC distance from Mirage to GT (Table [2](#S6.T2 "Table 2 ‣ 6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"), bottom row) is small under $\pi_{\mathrm{uni}}$ and $\pi^*$.
In general, any black-box method for assessing reward models – including the rollout method – only has predictive power on transitions visited during testing.
Fortunately, we can achieve broad support over states with Mix: it often navigates around the wall due to $\pi^*$, but strays from the goal thanks to $\pi_{\mathrm{uni}}$.
As a result, EPIC under Mix correctly infers that Mirage is far from the ground-truth GT.
These empirical results support our theoretically inspired recommendation from section [4](#S4 "4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions"): “in general, it is best to choose $\mathcal{D}$ to have broad coverage over plausible transitions.”
Distributions such as $\pi^*$ are too narrow, assigning coverage only on a direct path from the initial state to the goal.
Very broad distributions such as Ind waste probability mass on impossible transitions like teleporting.
Distributions like Mix strike the right balance between these extremes.
###
6.3 Predicting policy performance from reward distance
We find that low distance from the ground-truth reward GT (Table [2](#S6.T2 "Table 2 ‣ 6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"), center) predicts high GT return (Table [2](#S6.T2 "Table 2 ‣ 6.2 Sensitivity of reward distance to coverage distribution ‣ 6 Experiments ‣ Quantifying Differences in Reward Functions"), right) of policies optimized for that reward.
Moreover, the distance is predictive of return not just in PointMaze-Train, where the reward functions were trained and evaluated, but also in the unseen variant PointMaze-Test.
This is despite the two variants differing in the position of the wall, such that policies for PointMaze-Train run directly into the wall in PointMaze-Test.
Both Regress and Pref achieve very low distances at convergence, producing near-expert policy performance.
The AIRL SO and AIRL SA models have reward distances an order of magnitude higher and poor policy performance.
Yet intriguingly, the *generator* policies for AIRL SO and AIRL SA – trained simultaneously with the reward – perform reasonably in PointMaze-Train.
This suggests the learned rewards are reasonable on the subset of transitions taken by the generator policy, yet fail to transfer to the different transitions taken by a policy being trained from scratch.
Figure [A.6](#A1.F6 "Figure A.6 ‣ A.2.6 Runtime of Distance Metrics ‣ A.2 Experiments ‣ Appendix A Supplementary material ‣ Quantifying Differences in Reward Functions") shows reward distance and policy regret during reward model training.
The lines all closely track each other, showing that the distance to GT is highly correlated with policy regret for intermediate reward checkpoints as well as at convergence.
Regress and Pref converge quickly to low distance and low regret, while AIRL SO and AIRL SA are slower and more unstable.
7 Conclusion
-------------
Our novel EPIC distance compares reward functions directly, without training a policy.
We have proved it is a pseudometric, is bounded and invariant to equivalent rewards, and bounds the regret of optimal policies (see the theorems in section [4](#S4 "4 Comparing reward functions with EPIC ‣ Quantifying Differences in Reward Functions")).
Empirically, we find EPIC correctly infers zero distance between equivalent reward functions that the NPEC and ERC baselines wrongly consider dissimilar.
Furthermore, we find the distance of learned reward functions to the ground-truth reward predicts the return of policies optimized for the learned reward, even in unseen environments.
Standardized metrics are an important driver of progress in machine learning.
Unfortunately, traditional policy-based metrics do not provide any guarantees as to the fidelity of the learned reward function.
We believe the EPIC distance will be a highly informative addition to the evaluation toolbox, and would encourage researchers to report EPIC distance in addition to policy-based metrics.
Our implementation of EPIC and our baselines, including a tutorial and documentation, are available at <https://github.com/HumanCompatibleAI/evaluating-rewards>.
### Acknowledgements
Thanks to Sam Toyer, Rohin Shah, Eric Langlois, Siddharth Reddy and Stuart Armstrong for helpful discussions; to Miljan Martic for code-review; and to David Krueger, Matthew Rahtz, Rachel Freedman, Cody Wild, Alyssa Dayan, Adria Garriga, Jon Uesato, Zac Kenton and Alden Hung for feedback on drafts.
This work was supported by Open Philanthropy and the Leverhulme Trust.
An Intuitive Explanation of Solomonoff Induction
This is the completed article that Luke wrote the [first half of](/lw/8nr/intuitive_explanation_of_solomonoff_induction/). My thanks go to the following for reading, editing, and commenting: Luke Muehlhauser, Louie Helm, Benjamin Noble, and Francelle Wax.

People disagree about things. Some say that television makes you dumber; others say it makes you smarter. Some scientists believe life must exist elsewhere in the universe; others believe it must not. Some say that complicated financial derivatives are essential to a modern competitive economy; others think a nation's economy will do better without them. It's hard to know what is true.
And it's hard to know *how to figure out* what is true. Some argue that you should assume the things you are most certain about and then deduce all other beliefs from your original beliefs. Others think you should accept at face value the most intuitive explanations of personal experience. Still others think you should generally agree with the scientific consensus until it is disproved.
Wouldn't it be nice if determining what is true was like baking a cake? What if there was a *recipe* for finding out what is true? All you'd have to do is follow the written directions exactly, and after the last instruction you'd inevitably find yourself with some sweet, tasty *truth*!
In this tutorial, we'll explain the closest thing we’ve found so far to a recipe for finding truth: Solomonoff induction.
There are some qualifications to make. To describe just one: roughly speaking, you don't have time to follow the recipe. To find the truth to even a simple question using this recipe would require you to follow one step after another until long after the [heat death](http://en.wikipedia.org/wiki/Heat_death_of_the_universe) of the universe, and you can't do that.
But we can find shortcuts. Suppose you know that the *exact* recipe for baking a cake asks you to count out one molecule of H2O at a time until you have *exactly* 0.5 cups of water. If you did that, you might not finish the cake before the heat death of the universe. But you could approximate that part of the recipe by measuring out something very close to 0.5 cups of water, and you'd probably still end up with a pretty good cake.
Similarly, once we know the exact recipe for finding truth, we can try to approximate it in a way that allows us to finish all the steps sometime before the sun burns out.
This tutorial explains that best-we've-got-so-far recipe for finding truth, Solomonoff induction. Don’t worry, we won’t be using any equations, just qualitative descriptions.
Like Eliezer Yudkowsky's [Intuitive Explanation of Bayes' Theorem](http://yudkowsky.net/rational/bayes) and Luke Muehlhauser's [Crash Course in the Neuroscience of Human Motivation](/lw/71x/a_crash_course_in_the_neuroscience_of_human/), this tutorial is *long*. You may not have time to read it; that's fine. But if you do read it, we recommend that you read it in sections.
#### Contents:
Background
1. [Algorithms](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#algorithms) — We’re looking for an algorithm to determine truth.
2. [Induction](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#induction) — By “determine truth”, we mean induction.
3. [Occam’s Razor](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#occams_razor) — How we judge between many inductive hypotheses.
4. [Probability](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#probability) — Probability is what we usually use in induction.
5. [The Problem of Priors](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#the_problem_of_priors) — Probabilities change with evidence, but where do they start?
The Solution
6. [Binary Sequences](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#binary_sequences) — Everything can be encoded as binary.
7. [All Algorithms](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#all_algorithms) — Hypotheses are algorithms. Turing machines describe these.
8. [Solomonoff's Lightsaber](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#solomonoffs_lightsaber) — Putting it all together.
9. [Formalized Science](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#formalized_science) — From intuition to precision.
10. [Approximations](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#approximations) — Ongoing work towards practicality.
11. [Unresolved Details](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#unresolved_details) — Problems, philosophical and mathematical.
#### Algorithms
At an early age you learned a set of precisely-defined steps — a 'recipe' or, more formally, an *algorithm* — that you could use to find the largest number in a list of numbers like this:
21, 18, 4, 19, 55, 12, 30
The algorithm you learned probably looked something like this:
1. Look at the first item. Note that it is the largest you've seen on this list so far. If this is the only item on the list, output it as the largest number on the list. Otherwise, proceed to step 2.
2. Look at the next item. If it is larger than the largest item noted so far, note it as the largest you've seen in this list so far. Proceed to step 3.
3. If you have not reached the end of the list, return to step 2. Otherwise, output the last noted item as the largest number in the list.
Other algorithms could be used to solve the same problem. For example, you could work your way from right to left instead of from left to right. But the point is that if you follow this algorithm exactly, and you have enough time to complete the task, you can't *fail* to solve the problem. You can't get confused about what one of the steps means or what the next step is. Every instruction tells you exactly what to do next, all the way through to the answer.
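If you prefer code to prose, here is a minimal sketch of that same recipe in Python (the function name and the example list are ours, purely for illustration):

```python
def largest_number(numbers):
    """Follow the 'find the largest number' recipe step by step."""
    largest = numbers[0]          # Step 1: note the first item as the largest so far.
    for item in numbers[1:]:      # Step 2: look at each remaining item in turn.
        if item > largest:
            largest = item        # Note it as the largest seen so far.
    return largest                # Step 3: at the end of the list, output the noted item.

print(largest_number([21, 18, 4, 19, 55, 12, 30]))  # 55
```

Every step is fully specified, so a machine can follow the recipe without ever getting confused.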
You probably learned other algorithms, too, like how to find the greatest common divisor of any two integers.
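One classic method is Euclid's algorithm; a rough sketch follows (this is just one example of such an algorithm, not the only one):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```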
But not just any set of instructions is a precisely-defined algorithm. Sometimes, instructions are unclear or incomplete. Consider the following instructions based on [an article](http://science.howstuffworks.com/innovation/scientific-experiments/scientific-method6.htm) about the scientific method:
1. Make an observation.
2. Form a hypothesis that explains the observation.
3. Conduct an experiment that will test the hypothesis.
4. If the experimental results disconfirm the hypothesis, return to step #2 and form a hypothesis not yet used. If the experimental results confirm the hypothesis, provisionally accept the hypothesis.
This is not an algorithm.
First, many of the terms are not clearly defined. What counts as an observation? What counts as a hypothesis? What would a hypothesis need to be like in order to ‘explain’ the observation? What counts as an experiment that will ‘test’ the hypothesis? What does it mean for experimental results to ‘confirm’ or ‘disconfirm’ a hypothesis?
Second, the instructions may be incomplete. What do we do if we reach step 4 and the experimental results neither ‘confirm’ nor ‘disconfirm’ the hypothesis under consideration, but instead are in some sense ‘neutral’ toward the hypothesis? These instructions don’t tell us what to do in that case.
An algorithm is a well-defined procedure that takes some value or values as input and, after a finite series of steps, generates some value or values as output.
For example, the ‘find the largest number’ algorithm above could take the input {21, 18, 4, 19, 55, 12, 30} and would, after 13 steps, produce the following output: {55}. Or it could take the input {34} and, after 1 step, produce the output: {34}.
An algorithm is specified so precisely that we can construct machines to follow it. Today, the machines that follow algorithms are mostly computers. This is why all computer science students take a class in algorithms. If we can construct our algorithm for truth, then we can make a computer program that finds truth—an Artificial Intelligence.
#### Induction
Let’s clarify what we mean. In movies, scientists will reveal “truth machines”. Input a statement, and the truth machine will tell you whether it is true or false. This is *not* what Solomonoff induction does. Instead, Solomonoff induction is our ultimate “induction machine”.
>
> Whether we are a detective trying to catch a thief, a scientist trying to discover a new physical law, or a businessman attempting to understand a recent change in demand, we are all in the process of collecting information and trying to infer the underlying causes.
>
>
> -Shane Legg
>
>
>
The problem of induction is this: We have a set of *observations* (or *data*), and we want to find the underlying causes of those observations. That is, we want to find *hypotheses* that explain our data. We’d like to know which hypothesis is correct, so we can use that knowledge to predict future events. Our algorithm for truth will not listen to questions and answer yes or no. Our algorithm will take in data (observations) and output the rule by which the data was created. That is, it will give us the explanation of the observations: the causes.
Suppose your data concern a large set of stock market changes and other events in the world. You’d like to know the processes responsible for the stock market price changes, because then you can predict what the stock market will do in the future, and make some money.
Or, suppose you are a parent. You come home from work to find a chair propped against the refrigerator, with the cookie jar atop the fridge a bit emptier than before. You like cookies, and you don’t want them to disappear, so you start thinking. One hypothesis that leaps to mind is that your young daughter used the chair to reach the cookies. However, many other hypotheses explain the data. Perhaps a very short thief broke into your home and stole some cookies. Perhaps your daughter put the chair in front of the fridge because the fridge door is broken and no longer stays shut, and you forgot that your friend ate a few cookies when he visited last night. Perhaps you moved the chair and ate the cookies yourself while sleepwalking the night before.
All these hypotheses are possible, but intuitively it seems like some hypotheses are more likely than others. If you’ve seen your daughter access the cookies this way before but have never been burgled, then the ‘daughter hypothesis’ seems more plausible. If some expensive things from your bedroom and living room are missing and there is hateful graffiti on your door at the eye level of a very short person, then the ‘short thief’ hypothesis becomes more plausible than before. If you suddenly remember that your friend ate a few cookies and broke the fridge door last night, the ‘broken fridge door’ hypothesis gains credibility. If you’ve never been burgled and your daughter is out of town and you have a habit of moving and eating things while sleepwalking, the ‘sleepwalking’ hypothesis becomes less bizarre.
So the weight you give to each hypothesis depends greatly on your prior knowledge. But what if you had just been hit on the head and lost all past memories, and for some reason the most urgent thing you wanted to do was to solve the mystery of the chair and cookies? Then how would you weigh the likelihood of the available hypotheses?
When you have very little data but want to compare hypotheses anyway, Occam's Razor comes to the rescue.
#### Occam’s Razor
Consider a different inductive problem. A computer program outputs the following sequence of numbers:
1, 3, 5, 7
Which number comes next? If you guess correctly, you’ll win $500.
In order to predict the next number in the sequence, you make a hypothesis about the process the computer is using to generate these numbers. One obvious hypothesis is that it is simply listing all the odd numbers in ascending order from 1. If that’s true, you should guess that "9" will be the next number.
But perhaps the computer is using a different algorithm to generate the numbers. Suppose that n is the step in the sequence, so that n=1 when it generated ‘1’, n=2 when it generated ‘3’, and so on. Maybe the computer used this equation to calculate each number in the sequence:
2n − 1 + (n − 1)(n − 2)(n − 3)(n − 4)
If so, the next number in the sequence will be 33. (Go ahead, [check](http://www.wolframalpha.com/) the calculations.)
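A few lines of Python make the check easy; the two rules agree on the first four terms and then part ways (the function names are ours, purely for illustration):

```python
def odd_numbers(n):
    return 2 * n - 1

def alternative(n):
    return 2 * n - 1 + (n - 1) * (n - 2) * (n - 3) * (n - 4)

for n in range(1, 6):
    print(n, odd_numbers(n), alternative(n))
# n = 1 to 4: both rules give 1, 3, 5, 7
# n = 5:     9 versus 33
```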
But doesn’t the first hypothesis seem more likely?
The principle behind this intuition, which goes back to [William of Occam](http://en.wikipedia.org/wiki/William_of_Ockham), could be stated:
>
> Among all hypotheses consistent with the observations, the simplest is the most likely.
>
>
>
The principle is called [Occam’s razor](http://en.wikipedia.org/wiki/Occam%27s_razor) because it ‘shaves away’ unnecessary assumptions.
For example, think about the case of the missing cookies again. In most cases, the ‘daughter’ hypothesis seems to make fewer unnecessary assumptions than the ‘short thief’ hypothesis does. You already know you have a daughter that likes cookies and knows how to move chairs to reach cookies. But in order for the short thief hypothesis to be plausible, you have to assume that (1) a thief found a way to break in, that (2) the thief wanted inexpensive cookies from your home, that (3) the thief was, unusually, too short to reach the top of the fridge without the help of a chair, and (4) so on through many other unnecessary assumptions.
Occam’s razor sounds right, but can it be made more precise, and can it be justified? How do we find *all* consistent hypotheses, and how do we judge their simplicity? We will return to those questions later. Before then, we’ll describe the area of mathematics that usually deals with reasoning: probability.
#### Probability
You’re a soldier in combat, crouching in a trench. You know for sure there is just one enemy soldier left on the battlefield, about 400 yards away. You also know that if the remaining enemy is a regular army troop, there’s only a small chance he could hit you with one shot from that distance. But if the remaining enemy is a sniper, then there’s a very good chance he can hit you with one shot from that distance. But snipers are rare, so it’s probably just a regular army troop.
You peek your head out of the trench, trying to get a better look.
Bam! A bullet glances off your helmet and you duck down again.
“Okay,” you think. “I know snipers are rare, but that guy just hit me with a bullet from 400 yards away. I suppose it might still be a regular army troop, but there’s a seriously good chance it’s a sniper, since he hit me from that far away.”
After another minute, you dare to take another look, and peek your head out of the trench again.
Bam! Another bullet glances off your helmet! You duck down again.
“Whoa,” you think. “It’s definitely a sniper. No matter how rare snipers are, there’s no way that guy just hit me twice in a row from that distance if he’s a regular army troop. He’s gotta be a sniper. I’d better call for support.”
This is an example of reasoning under uncertainty, of updating uncertain beliefs in response to evidence. We do it all the time.
You start with some prior beliefs, and all of them are uncertain. You are 99.99% certain the Earth revolves around the sun, 90% confident your best friend will attend your birthday party, and 40% sure that the song you're listening to on the radio was played by The Turtles.
Then, you encounter new evidence—new observations—and you update your beliefs in response.
Suppose you start out 85% confident that the one remaining enemy soldier is not a sniper. That leaves only 15% credence to the hypothesis that he *is* a sniper. But then, a bullet glances off your helmet — an event far more likely if the enemy soldier is a sniper than if he is not. So now you’re only 40% confident he’s a non-sniper, and 60% confident he is a sniper. Another bullet glances off your helmet, and you update again. Now you’re only 2% confident he’s a non-sniper, and 98% confident he is a sniper.
Probability theory is the mathematics of reasoning with uncertainty. The keystone of this subject is called Bayes’ Theorem. It tells you how likely something is given some other knowledge. Understanding this simple theorem is more useful and important for most people than Solomonoff induction. If you haven’t learned it already, you may want to read either [tutorial #1](http://yudkowsky.net/rational/bayes), [tutorial #2](http://commonsenseatheism.com/?p=13156), [tutorial #3](/lw/2b0/bayes_theorem_illustrated_my_way/), or [tutorial #4](http://oscarbonilla.com/2009/05/visualizing-bayes-theorem/) on Bayes’ Theorem. The exact math of Bayes' Theorem is not required for this tutorial. We'll just describe its results qualitatively.
Bayes’ Theorem can tell us how likely a hypothesis is, given evidence (or data, or observations). This is helpful because we want to know which model of the world is correct so that we can successfully predict the future. It calculates this probability based on the prior probability of the hypothesis alone, the probability of the evidence alone, and the probability of the evidence *given* the hypothesis. Now we just plug the numbers in.
Of course, it’s not easy to “just plug the numbers in.” You aren’t an all-knowing god. You don’t know *exactly* how likely it is that the enemy soldier would hit your helmet if he’s a sniper, compared to how likely that is if he’s not a sniper. But you can do your best. With enough evidence, it will become overwhelmingly clear which hypothesis is correct.
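To make the updating concrete, here is a small sketch with made-up likelihoods: suppose a sniper hits your helmet from that range 85% of the time, and a regular soldier only 10% of the time. (Those two numbers are assumptions for illustration; with them, the posteriors land near the story's figures rather than matching them exactly.)

```python
def update(prior_sniper, p_hit_if_sniper, p_hit_if_regular):
    """Bayes' theorem: P(sniper | hit) = P(hit | sniper) * P(sniper) / P(hit)."""
    numerator = p_hit_if_sniper * prior_sniper
    p_hit = numerator + p_hit_if_regular * (1 - prior_sniper)
    return numerator / p_hit

belief = 0.15                          # prior: snipers are rare
belief = update(belief, 0.85, 0.10)    # first bullet glances off the helmet  -> 0.60
belief = update(belief, 0.85, 0.10)    # second bullet glances off the helmet -> about 0.93
print(belief)
```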
But guesses are not well-suited to an exact algorithm, and so our quest to find an algorithm for truth-finding must continue. For now, we turn to the problem of choosing priors.
#### The Problem of Priors
In the example above where you're a soldier in combat, I gave you your starting probabilities: 85% confidence that the enemy soldier was not a sniper, and 15% confidence that he was. But what if you don't know your "priors"? What then?
Most situations in real life are complex, so that your “priors” (as used in Bayes’ Theorem) are actually probabilities that have been updated several times with past evidence. You had an idea that snipers were rare because you saw many soldiers, but only a few of them were snipers. Or you read a reliable report saying that snipers were rare. But what would our ideal reasoning computer do before it knew anything? What would the probabilities be set to before we turned it on? How can we determine the probability of a hypothesis before seeing *any* data?
The general answer is Occam’s razor; simpler hypotheses are more likely. But this isn’t rigorous. It’s usually difficult to find a measure of complexity, even for mathematical hypotheses. Is a normal curve simpler than an exponential curve? Bayesian probability theory doesn’t have anything to say about choosing priors. Thus, many standard "prior distributions" have been developed. Generally, they distribute probability equally across hypotheses. Of course this is a good approach if all the hypotheses are equally likely. But as we saw above, it seems that some hypotheses are more complex than others, and this makes them less likely than the other hypotheses. So when distributing your probability across several hypotheses, you shouldn't necessarily distribute it evenly. There’s also a growing body of work around an idea called the [Maximum Entropy Principle](http://en.wikipedia.org/wiki/Principle_of_maximum_entropy). This principle helps you choose a prior that makes the least assumptions given the constraints of the problem. But this principle can’t be used to handle all possible types of hypotheses, only ones for which “[entropy](http://en.wikipedia.org/wiki/Entropy_(information_theory))” can be mathematically evaluated.
We need a method that everyone can agree provides the correct priors in all situations. This helps us perform induction correctly. It also helps everyone be more honest. Since priors partly determine what people believe, they can sometimes choose priors that help “prove” what they want to prove. This can happen intentionally or unintentionally. It can also happen in formal situations, such as an academic paper defending a proposed program.
To solve the problem of priors once and for all, we'd like to have an acceptable, *universal* prior distribution, so that there's no vagueness in the process of induction. We need a recipe, an *algorithm*, for selecting our priors. For that, we turn to the subject of binary sequences.
#### Binary Sequences
At this point, we have collected a lot of background material. We know about algorithms, and we know we need an algorithm that does induction. We know that induction also uses Occam’s razor and probability. We know that one of the problems in probability is selecting priors. Now we’re ready to formalize it.
To start, we need a language in which we can express all problems, all data, all hypotheses. *Binary* is a way of representing information using only the characters '0' and '1'. In a sense, binary is the simplest possible alphabet. A two-character alphabet is the smallest alphabet that can communicate a difference. If we had an alphabet of just one character, our “sentences” would be uniform. With two, we can begin to encode information. Each 0 or 1 in a binary sequence (e.g. 01001011) can be considered the answer to a yes-or-no question.
In the earlier example about finding the largest number, it's easy to convert everything to binary just by writing a computer program that follows the algorithm. All programming languages are based on binary. This also applies to anything you’ve ever experienced using a computer. From the greatest movie you’ve ever seen to emotional instant messaging conversations, *all* of it was encoded in binary.
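For instance, a few lines of Python turn an ordinary word into a binary sequence using the standard text encoding (any fixed encoding would do just as well):

```python
text = "cookie"
bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 011000110110111101101111011010110110100101100101
```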
In principle, all information can be represented in binary sequences. If this seems like an extreme claim, consider the following...
All your experiences, whether from your eyes, ears, or other senses, and even your memories and muscle movements, are carried by signals passing between neurons (your nerve cells and brain cells). Neurons communicate using an all-or-nothing signal called the action potential: a neuron either sends the signal or it doesn't; there is no half-sending an action potential. This translates directly into binary. An action potential is a 1, no action potential is a 0. All your sensations, thoughts, and actions can be encoded as a binary sequence over time. A really long sequence.
Or, if neuron communication turns out to be more complicated than that, we can look to a deeper level. All events in the universe follow the laws of physics. We're not quite done discovering the true laws of physics; there are some inconsistencies and unexplained phenomena. But the currently proposed laws are incredibly accurate. And they can be represented as a single binary sequence.
You might be thinking, “But I see and do multiple things simultaneously, and in the universe there are trillions of stars all burning at the same time. How can parallel events turn into a single sequence of binary?”
This is a perfectly reasonable question. It turns out that, at least formally, this poses no problem at all. The machinery we will use to deal with binary sequences can turn multiple sequences into one just by dovetailing them together and adjusting how it processes them so the results are the same. Because it is easiest to deal with a single sequence, we do this in the formal recipe. Any good implementation of Solomonoff induction will use multiple sequences just to be faster.
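As a toy illustration of dovetailing, here is one simple way to weave two sequences into one and recover them again; the real machinery is more general, but the point is that nothing is lost by working with a single sequence:

```python
def interleave(a, b):
    """Weave two equal-length bit strings into one: a0 b0 a1 b1 ..."""
    return "".join(x + y for x, y in zip(a, b))

def deinterleave(s):
    return s[0::2], s[1::2]

woven = interleave("0000", "1111")
print(woven)                 # 01010101
print(deinterleave(woven))   # ('0000', '1111')
```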
A picture of your daughter can be represented as a sequence of ones and zeros. But a picture is not your daughter. A video of all your daughter’s actions can also be represented as a sequence of ones and zeros. But a video isn’t your daughter, either; we can’t necessarily tell if she’s thinking about cookies, or poetry. The position of all the subatomic particles that make up your daughter as she lives her entire life can be represented as a sequence of binary. And that really *is* your daughter.
Having a common and simple language can sometimes be the key to progress. The ancient Greek mathematician Archimedes discovered many *specific* results of calculus, but could not generalize the methods because he did not have the *language* of calculus. After this language was developed in the late 1600s, hundreds of mathematicians were able to produce new results in the field. Now, calculus forms an important base of our modern civilization.
Being able to do everything in the language of binary sequences simplifies things greatly, and gives us great power. Now we don't have to deal with complex concepts like “daughter” and “soldier." It's all still there in the data, only as a large sequence of 0s and 1s. We can treat it all the same.
#### All Algorithms
Now that we have a simple way to deal with all types of data, let's look at hypotheses. Recall that we’re looking for a way to assign prior probabilities to hypotheses. (And then, when we encounter new data, we'll use Bayes' Theorem to update the probabilities we assign to those hypotheses). To be complete, and guarantee we find the *real* explanation for our data, we have to consider *all* possible hypotheses. But how could we ever find all possible explanations for our data? We could sit in a room for days, making a list of all the ways the cookies could be missing, and still not think of the possibility that our wife took some to work.
It turns out that mathematical abstraction can save the day. By using a well-tested model, and the language of binary, we can find all hypotheses.
This piece of the puzzle was discovered in 1936 by a man named Alan Turing. He created a simple, formal model of computers called “Turing machines” before anyone had ever built a computer.
In Turing's model, each machine's language is—you guessed it—binary. There is one binary sequence for the input, a second binary sequence that constantly gets worked on and re-written, and a third binary sequence for output. (This description is called a three-tape Turing machine, and is easiest to think about for Solomonoff induction. The normal description of Turing machines includes only one tape, but it turns out that they are equivalent.)

The rules that determine how the machine reacts to and changes the bits on these tapes are very simple. Basically, every Turing machine has a finite number of “states”, each of which is a little rule. These rules seem bland and boring at first, but in a few paragraphs, you’ll find out why they’re so exciting. First, the machine will start out in a certain state, with some binary on the input tape, all zeros on the work tape, and all zeros on the output tape. The rules for that first state will tell it to look at the input tape and the work tape. Depending on what binary digits are on those two tapes, the rules will say to perform certain actions. They will tell the machine to:
1. feed the input tape (or not)
2. write 0 or 1 to the work tape
3. move the work tape left or right
4. write 0 or 1 to the output tape
5. feed the output tape (or not).
After that, the rules will say which state to move to next, and the process will repeat. Remember that the rules for these states are fixed; they could be written out on pieces of paper. The only things that change are the tapes and which rule the machine is currently following. The basic mathematics behind this is fairly simple to understand, and can be found in books on computational theory.
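If it helps to see this concretely, here is a minimal simulator for the simpler one-tape version (recall that the one-tape and three-tape descriptions are equivalent). The rule table is invented purely for illustration: this particular machine walks right along the tape, flipping each bit, and halts at the first blank cell.

```python
# Each rule: (state, symbol read) -> (symbol to write, head movement, next state)
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # '_' marks a blank cell
}

def run(tape, state="flip", pos=0):
    tape = list(tape) + ["_"]          # a blank marks the end of the input
    while state != "halt":
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape).rstrip("_")

print(run("0100110"))  # 1011001
```

The rules stay fixed the whole time; only the tape and the current state change as the machine runs.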
This model is *incredibly* powerful. Given the right rules, Turing machines can
* calculate the square of a number,
* run a spreadsheet program,
* compress large files,
* estimate the probability of rain tomorrow,
* control the flight of an airplane,
* play chess better than a human,
* and much, much more.
You may have noticed that this sounds like a list of what regular computers do. And you would be correct; the model of Turing machines came before, and served as an invaluable guide to, the invention of electronic computers. Everything they do is within the model of Turing machines.
Even more exciting is the fact that *all* attempts to formalize the intuitive idea of “algorithm” or “process” have been proven to be at most as powerful as Turing machines. A system that can also run every algorithm a Turing machine can run is called *Turing complete*. For example, math equations using algebra can express a huge range of algorithms. Multiplying is an algorithm, finding the hypotenuse of a right triangle is an algorithm, and the quadratic formula is an algorithm. Turing machines can run all these algorithms, *and more*. That is, Turing machines can be used to calculate all algebraic algorithms, but there are some algorithms Turing machines can run that can’t be represented by algebra. This means algebra is not Turing complete. For another example, mathematicians often invent “games” where sequences of symbols can be rewritten using certain rules. (They will then try to prove things about what sequences these rules can create.) But no matter how creative their rules, every one of them can be simulated on a Turing machine. That is, for every set of re-writing rules, there is a binary sequence you can give a Turing machine so that the machine will rewrite the sequences in the same way.
Remember how limited the states of a Turing machine are; every machine has only a finite number of states with simple “if” rules like the ones described above. But somehow, using these and the tape as memory, they can simulate every set of rules, every algorithm ever thought up. Even the distinctively different theory of quantum computers is no more powerful in this sense: quantum computers can be much faster, but they cannot compute anything a Turing machine cannot. In the decades since Turing’s 1936 paper, no more powerful systems have been found. The idea that Turing machines truly capture the idea of “algorithm” is called the Church-Turing thesis.
So the model of Turing machines covers regular computers, but that is not all. As mentioned above, the current laws of physics can be represented as a binary sequence. That is, the laws of physics are an algorithm that can be fed into a Turing machine to compute the past, present and future of the universe. This includes stars burning, the climate of the earth, the action of cells, and even the actions and thoughts of humans. Most of the power here is in the laws of physics themselves. What Turing discovered is that these can be computed by a mathematically simple machine.
As if Turing’s model wasn’t amazing enough, he went on to prove that *one specific* set of these rules could simulate all *other* sets of rules.
The computer with this special rule set is called a universal Turing machine. We simulate another chosen machine by giving the universal machine a compiler binary sequence. A *compiler* is a short program that translates between computer languages or, in this case, between machines. Sometimes a compiler doesn’t exist. For example, you couldn’t translate Super Mario Brothers onto a computer that only plays tic-tac-toe. But there will always be a compiler to translate onto a universal Turing machine. We place this compiler in front of the input which would have been given to the chosen machine. From one perspective, we are just giving the universal machine a single, longer sequence. But from another perspective, the universal machine is using the compiler to set itself up to simulate the chosen machine. While the universal machine (using its own, fixed rules) is processing the compiler, it will write various things on the work tape. By the time it has passed the compiler and gets to the original input sequence, the work tape will have something written on it to help simulate the chosen machine. While processing the input, it will still follow its own, fixed rules, only the binary on the work tape will guide it down a different “path” through those rules than if we had only given it the original input.
For example, say that we want to calculate the square of 42 (or in binary, 101010). Assume we know the rule set for the Turing machine which squares numbers when given the number in binary. Given all the specifics, there is an algorithmic way to find the “compiler” sequence based on these rules. Let’s say that the compiler is 1011000. Then, in order to compute the square of 42 on the universal Turing machine, we simply give it the input 1011000101010, which is just the compiler 1011000 next to the number 42 as 101010. If we want to calculate the square of 43, we just change the second part to 101011 (which is 101010 + 1). The compiler sequence doesn’t change, because it is a property of the machine we want to simulate, e. g. the squaring machine, but not of the input to that simulated machine, e. g. 42.
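Here is a loose analogy in code (not a real universal Turing machine): a "universal" function reads a fixed-length prefix, uses it to pick which machine to simulate, and hands the rest of the input to that machine. The prefix codes and the little machine table are invented for illustration:

```python
# A toy "universal machine": the first 7 bits select which machine to simulate,
# and the remaining bits are the input handed to that machine.
machines = {
    "1011000": lambda bits: format(int(bits, 2) ** 2, "b"),  # a squaring machine
    "1011001": lambda bits: format(int(bits, 2) + 1, "b"),   # an increment machine
}

def universal(tape):
    compiler, rest = tape[:7], tape[7:]
    return machines[compiler](rest)

print(universal("1011000" + "101010"))  # square of 42 -> 11011100100, i.e. 1764
```

The prefix plays the role of the compiler sequence: it configures the universal function before the real input is processed, though the genuine construction is far more intricate.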
In summary: algorithms are represented by Turing machines, and Turing machines are represented by inputs to the universal Turing machine. Therefore, algorithms are represented by inputs to the universal Turing machine.
In Solomonoff induction, the assumption we make about our data is that it was generated by some algorithm. That is, the hypothesis that explains the data is an algorithm. Therefore, a universal Turing machine can output the data, as long as you give the machine the correct hypothesis as input. Therefore, the set of all possible inputs to our universal Turing machine is the set of all possible hypotheses. This includes the hypothesis that the data is a list of the odd numbers, the hypothesis that the enemy soldier is a sniper, and the hypothesis that your daughter ate the cookies. This is the power of formalization and mathematics.
#### Solomonoff's Lightsaber
Now we can find all the hypotheses that would predict the data we have observed. This is much more powerful than the informal statement of Occam's razor. Because of its precision and completeness, this process has been jokingly dubbed “Solomonoff's Lightsaber”. Given our data, we find potential hypotheses to explain it by running every hypothesis, one at a time, through the universal Turing machine. If the output matches our data, we keep it. Otherwise, we throw it away.
By the way, this is where Solomonoff induction becomes incomputable. It would take an infinite amount of time to check every algorithm. And even more problematic, some of the algorithms will run forever without producing output—*and we can't prove they will never stop running*. This is known as the halting problem, and it is a deep fact of the theory of computation. It's the sheer number of algorithms, and these pesky non-halting algorithms that stop us from actually running Solomonoff induction.
The actual process above might seem a little underwhelming. We just check every single hypothesis? Really? Isn’t that a little mindless and inefficient? This will certainly not be how the first true AI operates. But don’t forget that before this, nobody had any idea how to do ideal induction, *even in principle*. Developing fundamental theories, like quantum mechanics, might seem abstract and wasteful. But history has proven that it doesn’t take long before such theories and models change the world, as quantum mechanics did with modern electronics. In the future, men and women will develop ways to approximate Solomonoff induction in a second. Perhaps they will develop methods to eliminate large numbers of hypotheses all at once. Maybe hypotheses will be broken into distinct classes. Or maybe they’ll use methods to statistically converge toward the right hypotheses.
So now, at least in theory, we have the whole list of hypotheses that might be the true cause behind our observations. These hypotheses, since they are algorithms, look like binary sequences. For example, the first few might be 01001101, 0011010110000110100100110, and 1000111110111111000111010010100001. That is, for each of these three, when you give them to the universal Turing machine as input, the output is our data. Which of these three do you think is more likely to be the *true* hypothesis that generated our data in the first place?
We have a list, but we're trying to come up with a probability, not just a list of possible explanations. So how do we decide what the probability is of each of these hypotheses? Imagine that the true algorithm is produced in a most unbiased way: by flipping a coin. For each bit of the hypothesis, we flip a coin. Heads will be 0, and tails will be 1. In the example above, 01001101, the coin landed heads, tails, heads, heads, tails, and so on. Because each flip of the coin has a 50% probability, each bit contributes ½ to the final probability.
Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need *evidence* to show that they were necessary.
So, why not just take the shortest hypothesis, and call that the truth? Because all of the hypotheses predict the data we have so far, and in the future we might get data to rule out the shortest one. We keep all consistent hypotheses, but weigh the shorter ones with higher probability. So in our eight-bit example, the probability of 01001101 being the true algorithm is ½^8, or 1/256. It's important to say that this isn't an absolute probability in the normal sense. It hasn't been *normalized*—that is, the probabilities haven't been adjusted so that they add up to 1. This is computationally much more difficult, and might not be necessary in the final implementation of Solomonoff induction. These probabilities can still be used to compare how likely different hypotheses are.
To find the probability of the evidence alone, all we have to do is add up the probability of all these hypotheses consistent with the evidence. Since any of these hypotheses could be the true one that generates the data, and they're mutually exclusive, adding them together doesn't double count any probability.
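Here is a toy sketch that pulls these pieces together. The "universal machine" below is only a stand-in (each "program" simply outputs its own bits repeated forever), so this is not real Solomonoff induction, just its shape: enumerate short programs, keep the ones whose output matches the data, weight each survivor by one half per bit of length, and sum the weights when you need the probability of the evidence.

```python
from itertools import product

def toy_machine(program, n):
    """Stand-in for a universal Turing machine (purely illustrative):
    a 'program' just outputs its own bits over and over."""
    return "".join(program[i % len(program)] for i in range(n))

def solomonoff_toy(data, max_len=8):
    """Enumerate all programs up to max_len bits, keep those whose output
    matches the observed data, and weight each by 1/2 per bit of length."""
    weights = {}
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            program = "".join(bits)
            if toy_machine(program, len(data)) == data:
                weights[program] = 2.0 ** -length
    return weights

data = "010101"
hypotheses = solomonoff_toy(data)
evidence = sum(hypotheses.values())        # unnormalized probability of the data
p_next_one = sum(w for program, w in hypotheses.items()
                 if toy_machine(program, len(data) + 1)[-1] == "1") / evidence
print(sorted(hypotheses, key=len)[:3])     # the shortest surviving programs
print(round(p_next_one, 3))                # predicted probability that the next bit is 1
```

Real Solomonoff induction differs in two crucial ways: the programs run on a genuine universal Turing machine, and there are infinitely many of them, which is exactly why the full procedure is incomputable.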
#### Formalized Science
Let’s go back to the process above that describes the scientific method. We’ll see that Solomonoff induction is *this process made into an algorithm*.
1. Make an observation.
Our observation is our binary sequence of data. Only binary sequences are data, and all binary sequences can qualify as data.
2. Form a hypothesis that explains the observation.
We use a universal Turing machine to find all possible hypotheses, no fuzziness included. The hypothesis “explains” the observation if the output of the machine matches the data exactly.
3. Conduct an experiment that will test the hypothesis.
The only “experiment” is to observe the data sequence for longer, and run the universal machine for longer. The hypotheses whose output continues to match the data are the ones that pass the test.
4. If the experimental results disconfirm the hypothesis, return to step #2 and form a hypothesis not yet used. If the experimental results confirm the hypothesis, provisionally accept the hypothesis.
This step tells us to repeat the “experiment” with each binary sequence in our matching collection. However, instead of “provisionally” accepting the hypothesis, we accept all matching hypotheses, each weighted with a probability according to its length.
Now we’ve found the truth, as best as it can be found. We’ve excluded no possibilities. There’s no place for a scientist to be biased, and we don’t need to depend on our creativity to come up with the right hypothesis or experiment. We know how to measure how complex a hypothesis is. Our only problem left is to efficiently run it.
#### Approximations
As mentioned before, we actually can’t run all the hypotheses to see which ones match. There are infinitely many, and some of them will never halt. So, just like our cake recipe, we need approximations that still deliver outputs very close to the true ones. Technically, all prediction methods are approximations of Solomonoff induction, because Solomonoff induction tries all possibilities. But most methods use a *very* small set of hypotheses, and many don't use good methods for estimating their probabilities. How can we more directly approximate our recipe? At present, there aren't any outstandingly fast and accurate approximations to Solomonoff induction. If there were, we would be well on our way to true AI. Below are some ideas that have been published.
In *[Universal Artificial Intelligence](http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3642060528/)*, Marcus Hutter provides a full formal approximation of Solomonoff induction which he calls AIXI-*tl*. This model is optimal in a technical and subtle way. Also it can always return an answer in a finite amount of time, but this time is usually extremely long and *doubles* with every bit of data. It would still take longer than the life of the universe before we got an answer for most questions. How can we do even better?
One general method would be to use a Turing machine that you know will always halt. Because of the halting problem, this means that you won't be testing some halting algorithms that might be the correct hypotheses. But you could still conceivably find a large set of hypotheses that all halted.
Another popular method of approximating any intractable algorithm is to use randomness. This is called a Monte Carlo method. We can’t test *all* hypotheses, so we have to select a subset. But we don’t want our selection process to bias the result. Therefore we could randomly generate a bunch of hypotheses to test. We could use an evolutionary algorithm, where we test this seed set of hypotheses, and keep the ones that generate data closest to ours. Then we would vary these hypotheses, run them again through the Turing machine, keep the closest fits, and continue this process until the hypotheses actually predicted our data exactly.
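A rough sketch of that evolutionary idea, reusing the same kind of toy stand-in for the universal machine as above (every detail here, from the population size to the mutation rule, is an arbitrary choice for illustration):

```python
import random

def toy_machine(program, n):
    # Illustrative stand-in again: a 'program' outputs its own bits, repeated.
    return "".join(program[i % len(program)] for i in range(n))

def match_length(program, data):
    out = toy_machine(program, len(data))
    return sum(1 for a, b in zip(out, data) if a == b)

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + random.choice("01") + program[i + 1:]

def evolve(data, population=50, generations=200):
    """Seed random hypotheses, keep the best matches, mutate them, repeat."""
    pool = ["".join(random.choice("01") for _ in range(6)) for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=lambda p: match_length(p, data), reverse=True)
        survivors = pool[: population // 5]
        if match_length(survivors[0], data) == len(data):
            return survivors[0]                     # exact match found
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - len(survivors))]
    return pool[0]                                  # best found within the budget

print(evolve("011011011011"))  # typically prints 011011
```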
The mathematician Jürgen Schmidhuber proposes a different probability weighting which also gives high probability to hypotheses that can be quickly computed. He demonstrates how this is “near-optimal”. This is vastly faster, but risks making another assumption, that faster algorithms are inherently more likely.
#### Unresolved Details
Solomonoff induction is an active area of research in modern mathematics. While it is universal in a broad and impressive sense, some choices can still be made in the mathematical definition. Many people also have philosophical concerns or objections to the claim that Solomonoff induction is ideal and universal.
The first question many mathematicians ask is, “Which universal Turing machine?” I have written this tutorial as if there is only one, but there are in fact infinitely many sets of rules that can simulate all other sets of rules. Just as the length of a program will depend on which programming language you write it in, the length of the hypothesis as a binary sequence will depend on which universal Turing machine you use. This means the probabilities will be different. The change, however, is very limited. There are theorems that quantify this effect, and it is generally agreed not to be a concern. Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters.
Another concern is that the true hypothesis may be incomputable. There are known definitions of binary sequences which make sense, but which no Turing machine can output. Solomonoff induction would converge to this sequence, but would never predict it exactly. It is also generally agreed that *nothing* could ever predict this sequence, because all known predictors are equivalent to Turing machines. If this is a problem, it is similar to the problem of a finite universe; there is nothing that can be done about it.
Lastly, many people, mathematicians included, reject the ideas behind the model, such as that the universe can be represented as a binary sequence. This often delves into complex philosophical arguments, and often revolves around consciousness.
Many more details emerge the more one studies the subject. Many mathematicians work on modified versions and extensions where the computer learns how to act as well as predict. However these open areas are resolved, Solomonoff induction has provided an invaluable model and perspective for research into solving the problem of how to find truth. (Also see [Open Problems Related to Solomonoff Induction](/lw/cw1/open_problems_related_to_solomonoff_induction/).)
Fundamental theories have an effect of unifying previously separate ideas. After Turing discovered the basics of computability theory, it was only a matter of time before these ideas made their way across philosophy and mathematics. Statisticians could only find exact probabilities in simplified situations; now they know how to find them in all situations. Scientists wondered how to really know which hypothesis was simpler; now they can find a number. Philosophers wondered how induction could be justified. Solomonoff induction gives them an answer.
So, what’s next? To build our AI truth-machine, or even to find the precise truth to a single real-world problem, we need considerable work on approximating our recipe. Obviously a scientist cannot presently download a program to tell him the complexity of his hypothesis regarding deep-sea life behavior. But now that we know how to find truth in principle, mathematicians and computer scientists will work on finding it in practice.
(How does this fit with Bayes' theorem? [A followup.](/r/discussion/lw/di3/how_bayes_theorem_is_consistent_with_solomonoff/)) |
3cfbbfb8-4f28-46e0-8d91-7fbd683ab851 | trentmkelly/LessWrong-43k | LessWrong | Incorporating Mechanism Design Into Decision Theory
In the previous post, we looked at one way of handling externalities: letting other agents pay you to shift your decision. And we also considered the technique of aggregating those offers into an auction. This technique of "implementing a mechanism to handle an incentive misalignment" is extremely useful and seems like a promising avenue for future improvements to decision theory.
I want to frame mechanisms as "things that reshape incentives." Auctions, markets, and voting systems are all mechanisms; social technologies that can be invented by working backwards from a social welfare measure (a social choice theory) and designing a game such that players following their individual incentives will find themselves in socially high-ranking Nash equilibria.
I suspect that incorporating mechanism design more fully into decision theory will be extremely fruitful. Yudkowsky's probabilistic rejection algorithm[1] first identifies the socially optimal outcome (a fair Pareto optimum), and works backwards to identify a decision procedure which:
* If universalized leads to the socially optimal outcome
* If best-responded to still leads to the socially optimal outcome (stabilizing it as a Nash equilibrium)
The Robust Cooperation paper does the same thing for the Prisoners' Dilemma. Probabilistic rejection also only uses appropriate-threats, like sometimes rejecting unfair offers even at cost to oneself, leading it to degrade gracefully when negotiators disagree about what the socially optimal outcome is. The term "non-credible threat" is named that way because a classically-rational agent would never actually pay a cost "merely" for its incentive-reshaping effect on the other players. Not all non-credible threats are appropriate, but there are times when it's appropriate to pay costs to reshape the incentives of others.
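As a toy illustration, consider a ten-point ultimatum game. The idea is to accept fair or generous offers outright, and to accept unfair offers only with a probability low enough that the proposer expects slightly less from lowballing than from offering the fair split. The exact acceptance rule below is one simple variant chosen for illustration, not necessarily Yudkowsky's formulation:

```python
def acceptance_probability(my_share, pie=10, fair_share=5, epsilon=0.01):
    """Accept any offer at or above the fair split. Accept unfair offers only
    with a probability low enough that the proposer's expected take is
    slightly worse than what offering the fair split would have earned."""
    if my_share >= fair_share:
        return 1.0
    proposer_take = pie - my_share
    return max(0.0, (pie - fair_share) / proposer_take - epsilon)

for offer in range(0, 6):
    p = acceptance_probability(offer)
    print(offer, round(p, 3), round(p * (10 - offer), 3))  # offer, P(accept), proposer's expected take
```

The exact numbers matter less than the incentive-reshaping: under this policy the proposer can never profit by offering less than the fair split, even though carrying out the occasional rejection is costly for the rejecter.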
Policy Counterfactuals
There is a representation theorem by Joyce, which I understand to be something like "a Joyce-rational agent will choose an action a |
e7b985f6-3ce8-49f3-82f7-930cddacb8f2 | trentmkelly/LessWrong-43k | LessWrong | Kidney Trade: A Dialectic
Related: GiveWell's Increasing the Supply of Organs for Transplantation in the U.S.
(Content warning: organs, organ trade, transplantation. Help me flesh this out! My intention is to present the arguments I've seen in a way that is, at a minimum, non-boring. In particular, moral intuitions conflicting or otherwise are welcome.)
“Now arriving at Objection from Human Dignity,” proclaimed the intercom in a euphonious female voice. Aleph shot Kappa and Lambda a dirty look farewell and disembarked from the train.
Kappa: “Okay, so maybe there’s a possibility that legal organ markets aren’t completely, obviously bad. I can at least quell my sense of disgust for the length of this train ride, if it really might save a lot more lives than what we’re doing right now. But I’m not even close to being convinced that that’s the case.”
Lambda nodded.
Kappa: “First: a clarification. Why kidneys? Why not livers or skin or corneas?”
Lambda: “I’m trying to be conservative. For one, we can eliminate a lot of organs from consideration in the case of live donors because only a few organs can be donated without killing the donor in the process. Not considering tissues, but just organs, this narrows it down to kidneys, livers, and lungs. Liver transplants have … undesirable side effects that complicate-”
Kappa: “Uh, ‘undesirable side effects?’ Like what?”
Lambda: “Er, well it turns out that recovering from a liver donation is excruciatingly painful, and that seems like it might make the whole issue … harder to think about. Anyway, for that reason; and because most organ trade including legal donations is in kidneys; and because most people who die on waitlists are waiting for kidneys; and because letting people sell their organs after they're dead doesn't seem like it would increase the supply that much; for all of these reasons, focusing on kidneys from live donors seems to simplify the analysis without tossing out a whole lot of the original problem. Paying kidney donors |
cfb3ed0a-c300-4b46-a1bd-dae5e44c1cbe | trentmkelly/LessWrong-43k | LessWrong | The blind god's computer language
http://crisper.livejournal.com/316634.html#cutid1
> They have no compiler, only an accretor that gloms additional code onto the existing binary. I use the word binary loosely; it is not uncommon for them to "improve" fundamentally flawed data structures by moving to a larger base notation - to trinary, then quadrary, etc. At this point, some of their products are in base 17.
> They never go back to fix bugs where they occur. They write new code to workaround the earlier failure case. I asked why they don't go back and just fix the bug where it happens. I was told "We can't go back and change it. That code's already done!" Their solution for insuring that failing code will be able to get to its workaround is the GOTO statement. GOTO is sprinkled liberally around other code, pointing to functions and routines that do not exist yet. If, down the road, it is discovered that the old code has a bug, they find out which GOTOs exist in that code that do not point to anything yet, pick one, and write the workaround there.
> I could go on, but I am being told that we need to celebrate the successful compilation (by which I mean accretion) of a particularly complex workaround for a bug that has been known about for two years. |
eaa5cd72-f1f2-4f36-8206-af41983964c2 | trentmkelly/LessWrong-43k | LessWrong | Theories of impact for Science of Deep Learning
I’d like to thank Jérémy Scheurer and Ethan Perez for discussions about the post.
I recently published a post on the Science of Deep Learning. There have been many people before me who had similar ideas, and I don't claim to have invented this agenda; I'm merely excited about it. In this post, I want to explain why I think Science of DL is an important research direction in the current alignment landscape.
By Science of DL, I roughly mean “understanding DL systems and how they learn concepts”. A large component of such an agenda is interpretability (mechanistic and other) but it also tries to get a better understanding of how and under which conditions NNs learn specific concepts. For example, it would include questions like “how, when and why does a network show grokking?”, “Can we predict some capabilities of models from high-level knowledge about the training process before we look at the model?” or “Can we build a robust theory of what fine-tuning does to a NN on a mechanistic level?”. In general, the idea is to build a detailed understanding of how core aspects of DL work. Given that this is a safety agenda, the specific research questions would obviously be prioritized by how relevant they are to alignment.
Note that there are many definitions of Science of DL that include research questions that I don’t think are important for alignment. For example, trying to understand what differentiates Adam from SGD might be considered part of Science of DL but I think this question only very vaguely relates to alignment and is not neglected.
Theories of impact
Interpretability
(Mechanistic) interpretability is a core part of Science of DL. There already exist resources on theories of impact for interpretability, most prominently “a longlist of theories of impact for interpretability” and “another list of theories of impact for interpretability”. Therefore, I will only briefly present a small selection of what Neel Nanda and Beth Barnes have already written:
1. |
28398487-4e62-4aed-a19f-2205cc0bc9f7 | trentmkelly/LessWrong-43k | LessWrong | Idols of the Mind Pt. 1 (Novum Organum Book 1: 38–52)
This is the fourth post in the Novum Organum sequence. For context, see the sequence introduction.
We have used Francis Bacon's Novum Organum in the version presented at www.earlymoderntexts.com. Translated by and copyright to Jonathan Bennett. Prepared for LessWrong by Ruby.
Ruby's Reading Guide
> Novum Organum is organized as two books each containing numbered "aphorisms." These vary in length from three lines to sixteen pages. Titles of posts in this sequence, e.g. Idols of the Mind Pt. 1, are my own and do not appear in the original.
> While the translator, Bennett, encloses his editorial remarks in a single pair of [brackets], I have enclosed mine in a [[double pair of brackets]].
Bennett's Reading Guide
> [Brackets] enclose editorial explanations. Small ·dots· enclose material that has been added, but can be read as though it were part of the original text. Occasional •bullets, and also indenting of passages that are not quotations, are meant as aids to grasping the structure of a sentence or a thought. Every four-point ellipsis . . . . indicates the omission of a brief passage that seems to present more difficulty than it is worth. Longer omissions are reported between brackets in normal-sized type.
Aphorism Concerning the Interpretation of Nature: Book 1: 38–52
by Francis Bacon
38. The idols and false notions that now possess the human intellect and have taken deep root in it don’t just •occupy men’s minds so that truth can hardly get in, but also when a truth is allowed in they will •push back against it, stopping it from contributing to a fresh start in the sciences. This can be avoided only if men are forewarned of the danger and do what they can to fortify themselves against the assaults of these idols and false notions.
39. There are four classes of idols that beset men’s minds, and to help me in my exposition I have given them names. I call the first class idols of the tribe, the second idols of the cave, the third idols of the market place, |
6c719027-2e20-475b-97f7-8eaae4a26566 | trentmkelly/LessWrong-43k | LessWrong | How often do series C startups fail to exit?
How often do series C startups really fail? By fail I mean never have an acquisition or IPO. Internet says 80% (see https://medium.com/journal-of-empirical-entrepreneurship/dissecting-startup-failure-by-stage-34bb70354a36) but this seems very high to me.
Most Series C companies are worth in the 100-200M range; the one I'm at is worth 270M. How does all the value just evaporate? What happens to the companies that "fail"?
Asking to decide whether to exercise my options. I only need my company to exit at 41M to break even. I am bearish on the company but with around 40M in ARR it is hard to imagine it not exiting. |
7d3a5a75-3844-4e63-9c47-7737d3ca6c61 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Brussels - Hope & Self-improvement
Discussion article for the meetup : Brussels - Hope & Self-improvement
WHEN: 13 December 2014 01:00:00PM (+0100)
WHERE: Rue des Alexiens 55 1000 Bruxelles
It's Christmas season, so I thought of doing something about rational giving, but we've already talked your ears off about Effective Altruism last meetup. Instead, let's fight off seasonal depression with the Power of Friendship (and of Chemistry).
What makes you hope for your own future? How has your life improved recently, and what makes you expect it will improve again? What makes you hope for the future of humanity? Which recent scientific development brings us one step towards utopia?
This month, a meetup that will go better than expected.
----------------------------------------
We will meet at 1 pm at "La Fleur en papier doré", close to the Brussels Central station. The meeting will be in English to accommodate both French- and Dutch-speaking members.
If you are coming for the first time, please consider filling out this one minute form to share your contact information.
The Brussels meetup group communicates through a Google Group.
Meetup announcements are also mirrored on meetup.com
Discussion article for the meetup : Brussels - Hope & Self-improvement |
8ef630d2-809d-45a9-9c1c-df0d95deedfe | trentmkelly/LessWrong-43k | LessWrong | Communications in Hard Mode (My new job at MIRI)
Six months ago, I was a high school English teacher.
I wasn’t looking to change careers, even after nineteen sometimes-difficult years. I was good at it. I enjoyed it. After long experimentation, I had found ways to cut through the nonsense and provide real value to my students. Daily, I met my nemesis, Apathy, in glorious battle, and bested her with growing frequency. I had found my voice.
At MIRI, I’m still struggling to find my voice, for reasons my colleagues have invited me to share later in this post. But my nemesis is the same.
Apathy will be the death of us. Indifference about whether this whole AI thing goes well or ends in disaster. Come-what-may acceptance of whatever awaits us at the other end of the glittering path. Telling ourselves that there’s nothing we can do anyway. Imagining that some adults in the room will take care of the problem, even if we don’t see any such adults.
Perhaps you’ve felt her insidious pull on your psyche. I think we all have. This AI stuff is cool. Giving in to the “thermodynamic god”, to She-Who-Can’t-Be-Bothered, would be so much easier than the alternative, and probably a lot more fun (while it lasted).
And me? I was an English teacher. What could I do?
A little! I could donate and volunteer, as I did in modest fashion to MIRI’s predecessor organization for a while even before taking my first teaching contract. I could make sure my students and coworkers knew at least one person who was openly alarmed about AI. And I could find easy dignity in being ready to answer a call from MIRI that would realistically never come.
You can guess the rest. The universe called my bluff. I scrambled to make it not a bluff. And here I am on MIRI’s growing comms team!
It was a near thing, though. When MIRI posted about the open position, I almost looked away.
I think about that a lot now, especially on the hard days: guessing at the amount of history that was made by people who almost stayed in bed, and about how much history almost |
8cd7fc58-3021-4271-8409-362d8ebfd7c4 | trentmkelly/LessWrong-43k | LessWrong | Proposal for increasing instrumental rationality value of the LessWrong community
There were some concerns here (http://lesswrong.com/lw/2po/selfimprovement_or_shiny_distraction_why_less/) regarding the value of the LessWrong community from the perspective of instrumental rationality.
In the discussion on that topic, I've seen a story about how a community can help from this perspective: http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/2l73
I think it's a great thing that a local community can help people achieve their goals in various ways. It's also not the first time I've heard about this kind of community being helpful for achieving personal goals.
Local LessWrong meetups and communities are great, but they have a somewhat different focus. And a lot of people live in places where there is no local community, or where it isn't active or regular.
So I propose forming small groups (4–8 people). Initially, each group would meet (using whatever means are convenient for it) and discuss each participant's goals over the long and short term (life/year/month/etc.). They would collectively analyze proposed strategies for achieving these goals, discuss how short-term goals align with long-term ones, determine whether the particular tactics for achieving a stated goal are optimal, and ask whether there is any way to improve on them.
Afterwards, the group would meet weekly to:
Set their short-term goals and retrospect on the goals set for the previous period: discuss how successfully those were achieved, what problems people encountered, and what alterations to the overall strategy follow. They would also analyze how the newly set short-term goals fit with their long-term goals.
In this way, each member of the group would receive helpful feedback on his goals and on his approach to attaining them. He would also feel accountable, in a way, for the goals he has stated before the group, and this could be an additional boost to productivity.
I also expect that the group would be helpful from the perspective of overcoming different kinds of falla |
74fb9b61-f5d5-4b88-a386-8909c7644423 | trentmkelly/LessWrong-43k | LessWrong | The Problem of the Criterion is NOT an Open Problem
Not long ago, I added a tag to LessWrong for the problem of the criterion. Shortly after that its text received an edit claiming it is an open problem. However, I think the problem of the criterion is not really an open problem. Here's why.
Here on LessWrong we say open problems are "the things in a field that haven't yet been figured out". On Wikipedia, an open problem is described as "a known problem which can be accurately stated, and which is assumed to have an objective and verifiable solution, but which has not yet been solved (i.e., no solution for it is known)." By either of these standards, does the problem of the criterion qualify as an open problem?
If our standard is "has it been figured out?", I'd say yes. I think not everyone likes what was figured out, which is a reason to make a bid to claim the problem is open, but not liking the answer should be insufficient reason to make such a claim. After all, I don't exactly like that the solution to the halting problem is that we can't create a general program that can determine if an arbitrary program halts, but so be it, that's the universe we find ourselves in, let's move on. So I think it is with the problem of the criterion: yeah, it kinda sucks that the kind of truth we can achieve when we restrict ourselves to mathematical thinking can't be achieved everywhere, but that's how the world is, so I guess we'll figure out how to live with it, because we already are.
What about if we use Wikipedia's definition of an open problem? The problem of the criterion is certainly "a known problem which can be accurately stated". Is it "assumed to have an objective and verifiable solution"? The whole point of my and others' analysis of the problem is to show that this assumption is unfounded in some way, and that no objective and verifiable solution can be found because the question is fundamentally flawed by asking for something impossible. In this way it seems similar to the "complete and consistent problem" or the "moment |
b2505b11-fe29-476e-a53a-767294ad72fd | trentmkelly/LessWrong-43k | LessWrong | Media Publication Governed by Prediction Markets
The post describes a tool that allows communities to own a decentralised media publication (a shared Twitter account in this case). In contrast to other shared media publications like subreddits, LessWrong, or Hacker News, this tool sorts content using Prediction Markets. Any community member can bet on the number of likes each piece of content would get if published on Twitter. The estimated number of likes is used to sort the content.
Members of the community are rewarded according to their contribution. They can contribute by submitting content that gets published and by betting accurately on the markets and being good at knowing which content will perform well on Twitter. The tokens received represent ownership of the Twitter account and can become valuable if someone wants to buy the account, or buy "sponsored tweets" to be posted on the account.
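A minimal sketch of the sorting mechanism described above (my illustration under simple assumptions, not the tool's actual code — a real implementation would use a proper market-scoring rule and on-chain accounting):

```python
# Minimal sketch of market-based content curation (illustrative only).
# Each candidate tweet carries bets of the form (predicted_likes, stake).

from dataclasses import dataclass, field

@dataclass
class Candidate:
    text: str
    bets: list = field(default_factory=list)

    def estimated_likes(self) -> float:
        """Stake-weighted average of predictions. A real market (e.g. an
        LMSR-based one) would aggregate beliefs and pay out more carefully."""
        total_stake = sum(stake for _, stake in self.bets)
        if total_stake == 0:
            return 0.0
        return sum(p * s for p, s in self.bets) / total_stake

def publication_queue(candidates):
    """Sort candidates by market-estimated likes, best first."""
    return sorted(candidates, key=lambda c: c.estimated_likes(), reverse=True)

queue = publication_queue([
    Candidate("announcement", bets=[(300, 2.0)]),
    Candidate("spicy take", bets=[(1200, 5.0), (800, 1.0)]),
])
for c in queue:
    print(f"{c.text}: ~{c.estimated_likes():.0f} likes")
```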
The post goes into the advantages of using Prediction Markets for content curation when compared to Upvotes/Downvotes (spoiler alert: a big advantage is resistance to manipulation).
I am the author of the post and I would love LessWrong to have a Decentralised Twitter Account. If you find this interesting please join our discord. |
6f8be575-35fc-40e3-81f1-c0ea01544adc | trentmkelly/LessWrong-43k | LessWrong | Hedonium's semantic problem
If this argument is a re-tread of something already existing in the philosophical literature, please let me know.
I don't like Searle's Chinese Room Argument. Not really because it's wrong. But mainly because it takes an interesting and valid philosophical insight/intuition and then twists it in the wrong direction.
The valid insight I see is:
One cannot get a semantic process (i.e., one with meaning and understanding) purely from a syntactic process (one involving purely syntactic/algorithmic processes).
I'll illustrate both the insight and the problem with Searle's formulation via an example. And then look at what this means for hedonium and mind crimes.
Napoleonic exemplar
Consider the following four processes:
1. Napoleon, at Waterloo, thinking and directing his troops.
2. A robot, having taken the place of Napoleon at Waterloo, thinking in the same way and directing his troops in the same way.
3. A virtual Napoleon in a simulation of Waterloo, similarly thinking and directing his virtual troops.
4. A random Boltzmann brain springing into existence from the thermal radiation of a black hole. This Boltzmann brain is long-lasting (24 hours), and, by sheer coincidence, happens to mimic exactly the thought processes of Napoleon at Waterloo.
All four mental processes have the same syntactic properties. Searle would draw the semantic line between the first and the second process: the organic mind is somehow special. I would draw the semantic line between the third and the fourth process. The difference is that in the first three processes, the symbols in the brain correspond to objects in reality (or virtual reality). They can make reasonably accurate predictions about what might happen if they do something, and get feedback validating or disconfirming those predictions. Semantic understanding emerges from a correspondence with reality.
In contrast, the fourth process is literally insane. Its mental processes correspond to nothing in reality (or at least |
d54ea51f-d863-45fd-8d1f-bc1f0ea5ae01 | trentmkelly/LessWrong-43k | LessWrong | Ontological Crisis in Humans
Imagine a robot that was designed to find and collect spare change around its owner's house. It had a world model where macroscopic everyday objects are ontologically primitive and ruled by high-school-like physics and (for humans and their pets) rudimentary psychology and animal behavior. Its goals were expressed as a utility function over this world model, which was sufficient for its designed purpose. All went well until one day, a prankster decided to "upgrade" the robot's world model to be based on modern particle physics. This unfortunately caused the robot's utility function to instantly throw a domain error exception (since its inputs are no longer the expected list of macroscopic objects and associated properties like shape and color), thus crashing the controlling AI.
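To make the failure concrete, here is a toy sketch (mine, purely illustrative — the object kinds and utility function are invented) of a utility function typed against the old ontology crashing on the new one:

```python
# Toy illustration of the parable above (invented details): a utility
# function defined over a macroscopic ontology has no value for states
# expressed in a different ontology.

MACRO_KINDS = {"coin", "couch", "human", "cat"}

def change_collector_utility(world_state):
    """Utility over the old ontology: one point per spare coin found."""
    for obj in world_state:
        if obj["kind"] not in MACRO_KINDS:
            # The new ontology's primitives aren't in the function's domain;
            # the robot has no rule for scoring them.
            raise ValueError(f"domain error: unknown kind {obj['kind']!r}")
    return sum(1 for obj in world_state if obj["kind"] == "coin")

print(change_collector_utility([{"kind": "coin"}, {"kind": "couch"}]))  # -> 1

# After the prankster's "upgrade", states arrive in particle-physics terms:
try:
    change_collector_utility([{"kind": "up_quark"}, {"kind": "electron"}])
except ValueError as e:
    print(e)  # the ontological crisis, as a literal domain error
```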
According to Peter de Blanc, who used the phrase "ontological crisis" to describe this kind of problem,
> Human beings also confront ontological crises. We should find out what cognitive algorithms humans use to solve the same problems described in this paper. If we wish to build agents that maximize human values, this may be aided by knowing how humans re-interpret their values in new ontologies.
I recently realized that a couple of problems that I've been thinking over (the nature of selfishness and the nature of pain/pleasure/suffering/happiness) can be considered instances of ontological crises in humans (although I'm not so sure we necessarily have the cognitive algorithms to solve them). I started thinking in this direction after writing this comment:
> This formulation or variant of TDT requires that before a decision problem is handed to it, the world is divided into the agent itself (X), other agents (Y), and "dumb matter" (G). I think this is misguided, since the world doesn't really divide cleanly into these 3 parts.
What struck me is that even though the world doesn't divide cleanly into these 3 parts, our models of the world actually do. In the world models that we humans us |
0e076f20-d5a2-41aa-8229-ceec096e148c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Chris Olah on working at top AI labs without an undergrad degree
*This is a linkpost for* [*#108 - Chris Olah on working at top AI labs without an undergrad degree*](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/)*. You can listen to the episode on that page, or by subscribing to the '*[*80,000 Hours Podcast*](https://80000hours.org/podcast/)*' wherever you get podcasts.*
In this interview Chris and Rob discuss Chris’ personal passions over the years, including his attempts to reduce what he calls [‘research debt’](https://distill.pub/2017/research-debt/) by starting a new academic journal called [Distill](https://distill.pub/), focused just on explaining existing results unusually clearly. They also cover:
* Why highly thoughtful cold emails can be surprisingly effective, but average cold emails do little
* Strategies for growing as a researcher
* Thinking about research as a market
* How Chris thinks about writing outstanding explanations
* The concept of ‘micromarriages’ and ‘microbestfriendships’
* And much more.
> *…it’s actually much easier to do unusual things when you’re validated by a third party. …adults in my life totally came around once I was given $100,000 to go and work on stuff, in a way that they really were not supportive beforehand.*
>
> *–Chris Olah*
Key points
==========
Should people go to university?
-------------------------------
> ***Chris Olah:** I applied for the* [*Thiel Fellowship*](http://thielfellowship.org/)*, which is a program that provides financial support for people under the age of 20 to go and work on ambitious projects or do unusual things, and I got it, and I was like, “Well, I have two options. One is to go back to university, and the other is I can work on whatever I want for two years.” It turns out that wasn’t a difficult decision.*
>
> ***Chris Olah:** I had a lot of experience in doing things cutting against pressure, from the previous stuff. But I think at the time, I framed it — and especially framed it to other people — as, well, I can do this for two years, and then I can still go back to university. That seems like an amazing opportunity. One other thing that comes into play here is it’s actually much easier to do unusual things when you’re validated by a third party. I think people, when they hear about the Thiel Fellowship, they’re like, “Ah, the high-value thing is that they’re providing funding.” That’s certainly part of it, but I think that actually the higher value thing was actually like, adults in my life totally came around once I was given $100,000 to go and work on stuff, in a way that they really were not supportive beforehand. I think there’s also just a really big effect in terms of legitimizing an untraditional path and making it easier.*
>
> ***Chris Olah:** I get a lot of emails from people asking me if they should go to university. I think it’s maybe the single most common question I get asked. I think for almost everyone who emails me, they should go to university. The reason that I think that is I think that if you want to benefit from going and doing something else, you have to have a lot of, I think, self-discipline and willingness to go and work hard on things, and self-motivation to work hard on things without an external forcing function. I think that often people don’t have this, and then this kind of thing doesn’t work as well for them.*
>
> ***Chris Olah:** On the other hand, I think for the people — and maybe to give some more context — I think a lot of the people who I saw really thrive in the Thiel Fellowship, some had already before the age of 20 done undergrad degrees. So there were those ones. But I think a lot of people had done really significant personal projects involving software or science or something like this. I think that’s actually a pretty good test. If you have been able to, out of self-motivation, go and do your own large personal project — and obviously you are in a privileged enough position to be able to support yourself — then you’re likely to be able to do well in something like the Thiel Fellowship, or taking a year off, or taking a few years off. But if you aren’t, it’ll be much more challenging.*
>
> See also Chris’ [essay on whether people should go to university](https://colah.github.io/posts/2020-05-University/).
Lessons from Chris’ unconventional career track
-----------------------------------------------
> ***Chris Olah:** I think probably the most useful thing I’ve extracted has been thinking about the* [*Pareto*](https://en.wikipedia.org/wiki/Pareto_efficiency) *frontier of skills. For example, a lot of my early contributions to machine learning were basically being able to create these really helpful illustrations of complicated ideas. What skills did I need to do that? Well, I needed both to understand machine learning, and I needed to be able to draw. I wasn’t an exceptionally good artist or scientific illustrator, and I wasn’t exceptionally knowledgeable about machine learning. But very plausibly, for a while, I was the person in the world who was the best of the intersection of machine learning and drawing. If you think of these two-dimensional plots of different skills, or three-dimensional plots of different skills, and you think about the Pareto frontier, very often society is good at producing people who are optimized for a particular skill set or set of skills that society has really validated as useful.*
>
> ***Chris Olah:** We create entire pipelines training people. But I think that often, if you can find useful intersections of skills that aren’t these couple of standard skills, there can be a lot of value. And it’s much easier to go and have a big impact, and often have a big counterfactual impact. When I’m talking to people about their own careers, I often try to frame it in terms of, what are the skills that they’re cultivating, and what do we think the Pareto frontier with regards to these skills looks like? Do we think that there’s places where, rather than going and becoming the world’s best at one skill, they can produce a lot of value by being at an intersection of skills that other people don’t have?*
>
> ***Rob Wiblin:** Yeah, that’s really interesting. Thinking about it theoretically, I suppose part of the reason is just that there’s so many combinations of two different things that you could throw together. So the space of possible combinations is vastly larger, and so you have a lot more to choose from. It also means that you could be the only person who’s interested in X and Y, if you choose two things that are sufficiently distant. Then you have a truly unique skill set, and you might just stumble on something that no one else has even tried to find.*
>
> ***Chris Olah:** Exactly, and now the problem is the space is exponentially big, and you want to not just find an intersection, but the intersection has to be useful. So you have to have some taste in picking the skills that you develop. But I think that there are lots of opportunities like this, and that often it’s much less competitive than going and being good at one of the skills that society already really values as a thing to optimize for.*
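The Pareto-frontier framing above can be made precise with a small sketch (my illustration; the skill names and scores are made up): a skill profile is on the frontier if no one dominates it on every axis.

```python
# Illustrative sketch of the "Pareto frontier of skills" idea (made-up data).
# Profile a dominates b if a is at least as good on every skill and strictly
# better on at least one.

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(profiles):
    return {name for name, skills in profiles.items()
            if not any(dominates(other, skills)
                       for o, other in profiles.items() if o != name)}

# Hypothetical (ml_skill, drawing_skill) scores, 0-10:
people = {
    "ml_specialist": (9, 1),
    "illustrator":   (1, 9),
    "ml_plus_art":   (6, 6),   # best at the *intersection* of the two skills
    "generalist":    (5, 5),   # dominated by ml_plus_art, so off the frontier
}
print(pareto_frontier(people))  # {'ml_specialist', 'illustrator', 'ml_plus_art'}
```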
Developing research taste and technique
---------------------------------------
> ***Chris Olah:** I think it’s often helpful to divide being a good researcher into two parts. One is taste. So your ability to go and pick good problems and go and pick good avenues to attack those problems, and things like this. The second you might call technique, or execution. Maybe if you picture a chemist working with vials and pipettes and weird things, it’s pretty clear that there’s a whole technique to going and manipulating that laboratory equipment.*
>
> ***Chris Olah:** I think that it’s subtler in other fields, but I think that there is something — certainly in machine learning, of the technique of training models, and even just being a good programmer, and doing very minute things of manipulating your code editor, or going and manipulating distributed systems, and stuff like this — I think that there’s a question of how do you develop both of those skills. And for taste, I think that’s probably the hardest one to develop. I tried to come up with a list of exercises that one could do. An example, and I think probably the most useful one, is just write down a list of problems that you think might be important to work on, and then have somebody else, ideally your mentor, go and just rate them one to 10.*
>
> ***Chris Olah:** Because one of the really hard things about developing taste is that you have such a slow feedback loop on learning lessons, because you have to go and do the entire project. What you want to do is use a mentor or use somebody else as a cheap proxy for getting feedback, and then if you disagree with their feedback, you can either talk to them about it, or maybe you even want to go and do that experiment. I think that could be useful. I think there’s lots of other things. I think reading about the history of science is helpful. I think going and trying to write just about why you think things are important is helpful. In any case, I think there’s a bunch of exercises there. Then, on the technique side, I actually think the most valuable thing here is working closely with people who have good technique.*
>
> ***Chris Olah:** I think actually, at least in machine learning, and probably other computer science disciplines, going and pair programming with people is immensely valuable. I think that there’s a lot of stuff that’s hard to communicate in other forms, but gets passed along when people are pair programming. I think for developing technique, often pair programming is the highest leverage thing to do.*
>
> See also Chris’ [notes on building research taste](https://colah.github.io/notes/taste/).
Cold emails
-----------
> ***Chris Olah:** I get a lot of cold emails, and 99% of them are terrible. They’re like, “Can you do my homework for me?” or, “Can you answer this basic question that I could Google for one minute and answer?” I think people get this impression that cold emailing doesn’t work, because of course, if you send emails like that, people are overwhelmed and aren’t going to respond. Or, even if you just very generically are like… If you send a nicely written email and you’re like, “I’m trying to get into machine learning. Can you do a half-hour phone call with me to talk about how to do that?” Even that, you’re not very likely to get a response from. But I think the thing that people miss is that if you write really good cold emails, it’s actually not that hard to be the best email I received that week.*
>
> ***Chris Olah:** And I think that if you’re willing to invest energy in understanding what a researcher or a group is working on, and you’re specifically referring to their papers, and you have thoughtful questions about things, yeah, I think that people will pay a lot of attention to that. Then I think that it will… It very often works well. I think there’s a big gap in what people mean when they talk about cold emails, and I think that if you’re willing to put in the work, and if you just genuinely really care about what somebody is doing, and have put in the work to understand it, and can talk about it really intelligently… That’s going to come through. It’s a much more compelling reason for the person to talk to you than other things.*
>
> ***Chris Olah:** I think there’s a lot of people who are trying to look at how to get into machine learning, and what they do is they send lots of emails to people, or they email famous people. I think what you should actually be doing is trying to figure out who you would be really excited to work with, and really understand their work. Ideally pick somebody who’s a little bit less famous maybe, and then reach out to that person with an email where you’ve put a lot of work into it being clear that you’ve read their work, and connecting your interests to theirs, and things like this. There’s a number of emails that have been really important for me, where I spent a week writing them. I think that was a totally worthwhile investment. I think that’s not how people usually think about cold emails.*
Research as a market
--------------------
> ***Chris Olah:** The general idea is, you think of researchers as investing in different research ideas, and if the research idea pans out, and other people don’t grab it before them, then they get some reward from that maybe. Maybe more resources, or just they get a payoff from that in some way. You can see there being this competition to go and grab promising research ideas. I think there’s roughly two strategies that you can play in this market. One is you can work on things where everyone really agrees that they’re important, and that are really popular.*
>
> ***Chris Olah:** And what you’re doing when you do that is you’re going and making that little area of the research market a tiny bit more efficient. You’re going and making it so that ideas that are important to get done, get done a little bit more quickly. And I think that is actually genuinely a valuable thing to go and do. If the thing that you’re doing is really important, and you make it happen in expectation a week earlier or a month earlier, that’s really great. But the other strategy you can do is you can try to beat the market. You can try to work on things where you just can see that something is undervalued relative to what most of the community thinks. That’s the thing that I try to do a lot of the time. And there’s lots of reasons why you might be able to beat the market.*
>
> ***Chris Olah:** It could be that you just care about things that other people don’t. If you care about safety, or in other areas, if you care about animal welfare, or if you have weirder goals, or different goals than a lot of people, you might be able to beat the market in that way. I think another way, though, is just, if you have some insight that you really believe is true about a problem, and that’s not a widespread insight, then that could be really helpful. That can be… I feel like that’s a lot of what I’m doing, me personally. I think that you can genuinely understand neural networks if you’re willing to input enough energy into trying to figure out what’s going on. It’s a big bet that I’m making, that most other people aren’t making.*
Research debt
-------------
> ***Chris Olah:** I think in many fields, achieving a research-level understanding is* [*like climbing a mountain*](https://distill.pub/2017/research-debt/)*. There’s all of these ideas that you have to understand and build up towards before you can go into research.*
>
> ***Chris Olah:** Mathematics, I think, is a really striking example of this, where there’s just years and years of ideas that you’re probably going to spend climbing to the point where you can do research, because there’s just so much that’s been piled on top. Then when you get to the top, you go and you pile some more results on top, and you make the mountain higher.*
>
> ***Chris Olah:** I think a lot of people are proud of this, because they’re like, ah, the fact that it’s this long pilgrimage to go and get to the point where you can do research, that means that it’s especially profound, and it reflects all of the work that’s been done to date. But I think that, actually, it’s often a reflection that we haven’t put enough work into explaining things, building up really good infrastructure for learning about that field. I think this is going to come in lots of forms. It can be poor expositions, just not good explanations to things. Sometimes it’s just undigested ideas. There’s an idea that’s important, but it hasn’t been really refined to the completed version of that idea. I think it’s very common for there to be bad notations, or just bad definitions of things that make things more complicated, and all these things make it harder to go and understand the topic.*
>
> ***Chris Olah:** One analogy that I like is sometimes in software engineering, people talk about technical debt, which is you move really fast to get to that point where you can ship some feature or something like this, and in the process, you write lots of bad code, and it’s really messy and gross, and you have bad variable names, and it isn’t documented, and then it’s hard for other people to build on top of. I think something analogous, a kind of* [*research debt*](https://distill.pub/2017/research-debt/)*, is endemic in science.*
Articles, books, and other media discussed in the show
======================================================
Chris’ projects and writing
---------------------------
* [Distill](https://distill.pub/) (From Chris: *We recently posted an* [*announcement*](https://distill.pub/2021/distill-hiatus/) *about putting Distill on hiatus, which is germane to this episode’s discussion of publishing and Distill.*)
* [Inceptionism: Going Deeper into Neural Networks](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html)
* [Research Taste Exercises](http://colah.github.io/notes/taste/)
* [Research Debt](https://distill.pub/2017/research-debt/)
* [Do I Need to Go to University?](http://colah.github.io/posts/2020-05-University/)
* [Micromarriages](https://colah.github.io/personal/micromarriages/)
* [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)
* [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
Other links
-----------
* [Thiel Fellowship](http://thielfellowship.org/)
* [ImageNet Classification with Deep Convolutional Neural Networks](https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf)
* [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/) (textbook)
Transcript
==========
Rob’s intro [[00:00:00]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=0&btp=48126aba)
---------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and whether free surgery from someone without an MD is a good deal. I’m Rob Wiblin, Head of Research at 80,000 Hours.
Last week’s episode was Chris Olah’s first podcast ever, and today we’re releasing his second podcast ever — because we ended up having so much to cover it made sense to do more than one recording session and split it in two.
Last week’s episode focused on Chris’ technical work, but this one is about Chris’ life and experiences so far. We talk about things like:
* How he rose to the top of the ML field without a university degree
* How he got his foot in the door at Google Brain and OpenAI
* The journal that he founded — called *Distill* — and how to explain complex things really well
* How to develop good taste in research
* Why very thoughtful cold emails can be surprisingly effective, but average cold emails do little
* Micromarriages and microbestfriends
…and much more.
Without further ado, I bring you Chris Olah.
The interview begins [[00:00:57]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=57&btp=48126aba)
-------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Today, I’m speaking with Chris Olah. Chris is a machine learning researcher currently focused on neural network interpretability. Until last December, he led OpenAI’s interpretability team, but he recently left with some colleagues to help start a new AI lab focused on large models and safety. Before OpenAI, he spent four years at Google Brain developing tools to visualize what’s going on in neural networks. He was hugely influential at Google Brain, being the second author on the launch of the [DeepDream article](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) back in 2015. I think the DeepDream images are something that basically just about everyone has seen at this point.
**Rob Wiblin:** He also helped pioneer feature visualization, activation atlases, building blocks of interpretability, TensorFlow, and he even co-authored the famous paper [Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565). On top of all of that, in 2018, he helped found the academic journal [*Distill*](https://distill.pub/), which is dedicated to publishing clear communication of technical concepts. Chris is himself a writer who is popular among many listeners to the show, and his blog has attracted millions of readers by trying to explain cutting-edge machine learning in highly accessible ways. He’s managed to do all of this without a degree, because he dropped out of college in 2009 to defend a friend against bogus terrorism charges. In 2012, Chris took a $100,000 Thiel Fellowship, a scholarship designed to encourage gifted young people to go straight into research or entrepreneurship rather than go to university.
**Rob Wiblin:** What an intro. Thanks for coming on the podcast, Chris.
**Chris Olah:** Thank you for having me, Rob. That is an extremely flattering introduction. I should say, my role in TensorFlow was very minor.
**Rob Wiblin:** Okay, but all of the rest of it stands.
**Chris Olah:** This is my first podcast ever. If I get terrified of the medium and never do a podcast episode again, everyone will know who to blame.
**Rob Wiblin:** Or alternatively, if things go well, we’ll have a fantastic scoop and I get lots of new subscribers. That’s the dream.
**Rob Wiblin:** Alright, I hope we get to talk about your research into AI interpretability and your many unusual life experiences as just described. But, first, what are you working on at the moment and why do you think it’s important?
**Chris Olah:** I think one of the craziest things about machine learning is that we have all these systems that can do these amazing things — they can classify images, translate text, write essays, recognize your voice, generate videos… And yet we can’t go and produce these systems directly. No human being knows how to write a computer program directly that does those kinds of things. Instead, we go and produce systems that do these things, and we have no idea what those systems are doing. So the thing that I’ve always felt has just been the question that I’ve been obsessed with, and just feels like the burning question in machine learning to me, is: How in the wide world are these systems going and doing all of these crazy things that we don’t know how to do? I care about that for safety reasons, and honestly, I also just care about it because it seems like this incredibly crazy thing about the world that I just want to understand.
**Rob Wiblin:** Yeah, that makes a lot of sense. It sounds like, looking at your CV, it’s been something like an eight-year journey for you, working on this problem. Trying to pick away at it, and taking neural networks from being these black boxes to things that we can properly understand and build on.
**Chris Olah:** Yeah, it’s not the only thing that I’ve done for the last eight years, but it’s definitely been the biggest one, and I’ve tried lots of things. A lot of the things I tried early on didn’t work very well, but over time I think we’ve really developed. Not just me, but lots of other people and lots of collaborators that I’ve worked with have really been able to get to a point where we can actually very significantly understand neural networks, and can actually just look at their weights and read entire algorithms for doing things that we didn’t really know how to do before off of them. That’s been really cool to see.
Defending a falsely accused friend [[00:04:24]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=264&btp=48126aba)
----------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Alright, well, we’ll return to some of those techniques later on. Let’s talk now about the very unusual and circuitous career path that you’ve ended up going on. We can see if people can learn any lessons from it, or maybe whether it’s just too weird to be generalizable. Obviously, you now do machine learning research really professionally, but unusually you don’t have a PhD or even an undergraduate degree. What’s the story of how that ended up happening?
**Chris Olah:** Oh gosh, well, when I was in high school, I became really involved in a community technology space called [HackLab](https://hacklab.to/). So the G20 came to Toronto, and one of our members was doing security research, where he did things like he was recording where temporary cameras got put up. The police thought this was really suspicious, and they raided his house and they found a hobby chemistry and electronics lab, and they decided he was making bombs. One of the police officers conducting the raid had served in Afghanistan and thought he could recognize explosives, and he misrecognized chemicals as explosives.
**Chris Olah:** It was obviously a really awful situation for him. I didn’t know him that well, I knew him a bit through HackLab, but a lot of us really rallied in supporting him. I had a lot more flexibility than other people because I was a university student in my first year. It just seemed really important to me to try to support him, and so I started going to court whenever he had bail hearings and stuff like this, and trying to do court support-type stuff to just help him and help his family a little bit, and I took notes and things like this. Then he was under house arrest for one year, and in the second year I took a year off university so I could go and be at his trial full time. He was found innocent of all charges in the end, but that led to me initially dropping out of university.
**Rob Wiblin:** Yeah, I read a bit about this, there are records from 2008 on your personal website. It is an astonishing and pretty tragic story. You must have been — I’d imagine that I would be as well — just so outraged and disgusted in order to basically drop out of university. It sounded like you were focusing a lot of your time on tracking this case and trying to help him as much as you could, to make sure that this guy didn’t get some long terrorist prison sentence for something that he absolutely hadn’t done.
**Chris Olah:** Yeah, I don’t know that I actually helped very much. I think really at best I saved him a little bit of money by going and doing some things that saved his lawyers a bit of time here and there. I think the most impactful thing I did was I transcribed an interrogation session and put it on YouTube, and then his lawyers could go and sort of… There’s this nice feature where you can click on a line from the transcript and it’ll jump to that point in the video. I think that was probably the highest impact thing I did. But I think actually it wasn’t primarily outrage at first. There was a lot of outrage, but I think actually the first thing was fear. I was scared for this person, I was scared for myself and for all of my friends that maybe they were going to now come after all of us as, I don’t know, some kind of co-conspirators or something like this.
**Chris Olah:** It was just a really scary time period, and I think maybe the main thing was just that it seems really important when certain people are systematically stepping away from someone or abandoning someone, it seems really important for other people to rally and to try and support them. Or that was a strong emotional intuition I had. I think that was probably the main motivator for me.
**Rob Wiblin:** Yeah, I hadn’t thought about that angle as much. I suppose it actually, on some level, required bravery. Bravery or foolishness. Because by taking such a big interest in this, as someone who’s an associate now of an accused terrorist, you’re potentially painting a target on yourself to have the police come after you, or investigate you, or potentially try to engage in reprisals if they’re frustrated with what you’re doing. I imagine your parents were… Maybe they were on board, or maybe they were just horrified at the risk that you were taking?
**Chris Olah:** All of the adults in my life were very, very deeply worried about this, and pushed me really hard to not do it. But I think any jeopardy that I was in would have already been there just as a result of pre-existing associations. But I didn’t know anything about law at that time, I didn’t know anything about how to think about this type of thing. It just seemed very generally scary.
**Rob Wiblin:** Yeah, it sounds like… It’s totally understandable why you did this, but it sounds like you think you didn’t make that much of a difference. In retrospect, do you wish that you hadn’t dropped out of university or hadn’t done this stuff? Or is that just maybe an impossible hypothetical to consider? Because then you’d be a different person?
**Chris Olah:** Yeah, it’s pretty hard to think about it. I do think that there’s a way in which it was altruistic, but not especially effective. On the other hand, it’s hard to feel too bad about it. I think probably this person felt more supported, and his parents probably felt more supported, and that definitely felt good. And I don’t know that my time was super high leverage at that point in my career. But it’s, yeah, I don’t know, it’s hard to think about it. It also just put me on a trajectory that made me do a lot of different things, and maybe in some ways made me more effective later on by becoming more unusual. So it’s hard to think about.
**Rob Wiblin:** Yeah, it makes sense that when someone is accused of a crime like that, even if people all know that it’s a false charge, people are inclined to just run away from them. Then that means it’s like someone has to take the hit of being willing to stand up and associate with them in order to prevent that person just being completely ruined or having like no social support and feeling like they’re being completely abandoned. It’s a difficult situation. I suppose it’s hard to think about it through an effective altruist lens. But I extremely admire and understand why you ended up doing that.
**Rob Wiblin:** Maybe I’ve just become inured to stories of misconduct by police — I suppose we don’t want to go into all the details here, because this isn’t ultimately an episode about criminal justice — but this must have shaped your view of national security services somewhat negatively. What did you think at the time, and maybe how has your opinion of that evolved since you were 18?
**Chris Olah:** Yeah, it definitely soured my opinion of national security agencies. There was a lot of really awful stuff. They threatened his wife at one point to try to get him to confess to something. There was just a lot of stuff where in his trial I thought the prosecution was deeply disingenuous. I should say, in my role as trying to be a supporter, I don’t think that I was always the epitome of intellectual integrity. But I think that if you are a prosecutor who’s trying to put somebody in jail, going and making disingenuous arguments… One that I remember is they argued that the reason he had chemicals that could burn bright colors was he was planning to make a rainbow bomb. After they found that he didn’t have explosives, it became that he had chemicals that could be used to make explosives.
**Chris Olah:** They found out that he had chemicals that could burn bright colors, and yeah, this argument that it could be a rainbow bomb. I don’t think that anyone who is approaching this in an honest way would make that kind of argument. They really caused, I think, severe harm to this person’s life, and so that really deeply soured my views. And yet, on the other hand, I think over time, especially as I’ve thought more about x-risk and bio stuff and even AI, I’ve gradually had more sympathy also for national security-type concerns. I don’t know, where I wind up is something like, just being really disappointed. Because I really want to be able to trust these organizations, and they clearly are systematically falling short. But also doing something important.
**Chris Olah:** Also it’s just sad, even from their perspective. They’re just going and burning goodwill left, right, and center by doing all of these things. I put on one hat and I’m really outraged at just the appalling ethics of it, and I put on the other hat and I’m just really sad at the needless waste and burning of goodwill.
**Rob Wiblin:** Yeah, it’s extraordinary that so much time was spent on this case, where it rapidly became apparent that there was nothing actually to investigate. I know that there’s amazing people within national security, some of the people with the greatest integrity and an incredible work ethic. It’s just national security and law enforcement is like any other industry, where there’s some people who are fantastic and there’s some people who engage in misconduct all the time. It’s just an area that is so important and where there’s so much power that when people do engage in misconduct, the consequences can be extremely unjust and extremely harmful.
**Chris Olah:** It’s also such a large space that I think it’s probably harder to have a high bar and to really filter things. I don’t know, I’m not an expert, so I don’t want to opine too much.
**Rob Wiblin:** Well, the police are one of the largest employment groups, so it is a lot of people I suppose. It probably is difficult to have such an extremely high bar such that only people who have demonstrated the greatest integrity in their life can possibly become police officers, because they’re just so many.
Why Chris didn’t go back to university [[00:13:21]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=801&btp=48126aba)
--------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Let’s maybe come back to the path that you were taking in your career then. You’d been going to university, but you left in order to do this. I suppose your natural, boring track in life had been somewhat derailed. But you could have probably gone back to university after that, but you decided not to. Why was that?
**Chris Olah:** Yeah, so when I was doing this court support-type work, there were lots of months where there were lulls, and there just wasn’t anything to do, and so I had lots of free time. I was interested in 3D printers for awhile, so I got really involved in working on 3D printers. First I was designing 3D printers, and then I was working on a startup with a friend for open-source software to go and design objects for 3D printers. A lot of the so-called CAD software that you use to design objects, 3D objects, and we were trying to create open-source tools because I felt that was very important.
**Chris Olah:** I applied for the [Thiel Fellowship](http://thielfellowship.org/), which is a program that provides financial support for people under the age of 20 to go and work on ambitious projects or do unusual things, and I got it, and I was like, “Well, I have two options. One is to go back to university, and the other is I can work on whatever I want for two years.” It turns out that wasn’t a difficult decision.
**Rob Wiblin:** Yeah, but a lot of people who are considering not going to university and starting a business or doing something else, they feel like they face a lot of pressure, understandably, from other people, from society, from themselves. “You have to go to university.” Was it a difficult decision to make on some level, even if you preferred the path of not going?
**Chris Olah:** Well, I had a lot of experience in doing things cutting against pressure, from the previous stuff.
**Rob Wiblin:** Yeah, that’s fair enough.
**Chris Olah:** But I think at the time, I framed it — and especially framed it to other people — as, well, I can do this for two years, and then I can still go back to university. That seems like an amazing opportunity. One other thing that comes into play here is it’s actually much easier to do unusual things when you’re validated by a third party. I think people, when they hear about the Thiel Fellowship, they’re like, “Ah, the high-value thing is that they’re providing funding.” That’s certainly part of it, but I think that actually the higher value thing was actually like, adults in my life totally came around once I was given $100,000 to go and work on stuff, in a way that they really were not supportive beforehand. I think there’s also just a really big effect in terms of legitimizing an untraditional path and making it easier.
**Rob Wiblin:** Yeah, it speaks to the signaling component of going to university, where you need some credentials to show that you’re a legitimate person who’s not crazy or not irresponsible. I guess the Thiel Fellowship was providing that as a substitute.
**Chris Olah:** I think there’s an element of that. I think there’s also just an element of when you do a non-traditional thing, it’s scary for people who care about you. And having some signal that, in fact you’re doing something reasonable…I think can help a lot.
**Rob Wiblin:** You had [this essay](http://colah.github.io/posts/2020-05-University/) on your website where you go into your views on the general path of not going to university and going and doing something else instead. It’s probably the best resource that people can go to if they want to properly learn all of your views on that. Do you just want to summarize briefly, is this something that a lot more people should be considering?
**Chris Olah:** I get a lot of emails from people asking me if they should go to university. I think it’s maybe the single most common question I get asked. I think for almost everyone who emails me, they should go to university. The reason that I think that is I think that if you want to benefit from going and doing something else, you have to have a lot of, I think, self-discipline and willingness to go and work hard on things, and self-motivation to work hard on things without an external forcing function. I think that often people don’t have this, and then this kind of thing doesn’t work as well for them.
**Chris Olah:** On the other hand, I think for the people — and maybe to give some more context, I think a lot of the people who I saw really thrive in the Thiel Fellowship, some had already before the age of 20 done undergrad degrees. So there were those ones. But I think a lot of people had done really significant personal projects involving software or science or something like this. I think that’s actually a pretty good test. If you have been able to, out of self-motivation, go and do your own large personal project — and obviously you are in a privileged enough position to be able to support yourself — then you’re likely to be able to do well in something like the Thiel Fellowship, or taking a year off, or taking a few years off. But if you aren’t, it’ll be much more challenging.
**Rob Wiblin:** Yeah, to what degree is this a path just for really talented people? I think it’s fair to say that like 20-year-old Chris Olah had an awful lot of potential. Some of the other people who I’ve seen thrive not going to university, it just seems like they were on a rocketship to begin with. It makes me wonder, if you’re someone who’s very talented, do you need to have those credentials in order to get your foot in the door in the careers that you want?
**Chris Olah:** My guess would be that the self-motivation thing is bigger than anything else. But I also just don’t feel that qualified to opine on how useful university would be to people who are very different from me. For example, I can imagine a world where actually for many people who are not going to pursue really intellectually challenging careers, maybe actually going to trade school might be more effective, or things like that. I could imagine there being like different reasons why not going to university might make sense for people who are very different from me. But it’s harder for me to speak to that.
**Rob Wiblin:** Has not going to university ever created issues with being taken seriously later down the line, or possibly immigration could be a concern as well?
**Chris Olah:** Yeah, so I think immigration is an underrated concern with this. Almost all visas for the U.S. require you to have an undergrad degree. I’m on this really weird ‘[Alien of extraordinary ability](https://en.wikipedia.org/wiki/Alien_of_extraordinary_ability)‘ visa, which just doesn’t list it. But I think that’s really the main option you have if you don’t have an undergrad degree.
**Chris Olah:** I think that there’s also some social challenges, where I think actually something that I undervalued when I did not go to university was it makes it so much easier to have friendships and romantic relationships as a young person. I was cut off from a lot of those opportunities, and I think that was actually a really significant cost that I didn’t understand at the time.
Switching to machine learning [[00:19:33]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=1173&btp=48126aba)
------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Yes, so tell us a bit more about the Thiel Fellowship, and to what extent having the social group there substituted for university perhaps?
**Chris Olah:** Well, it might have been more so if I’d been able to live in the Bay Area. I visited at that time, periodically, but I was still living in Canada, since I didn’t have a visa and couldn’t live in the U.S. It might’ve done more if I had been able to be there. I did make some friends, and I think that was valuable. I guess more generally it was just a really incredible period of being able to go and pursue things that I was excited about. I started with 3D printers, and I also was just doing lots of random side projects. Then after a while, about a year, I switched from 3D printers to machine learning, and that was a fairly big pivot.
**Rob Wiblin:** Yeah, what caused you to do that?
**Chris Olah:** Well, there were a few things. One is just that I had the opportunity to do so because someone I knew in Toronto, [Michael Nielsen](https://michaelnielsen.org/), was writing a [textbook on neural networks](http://neuralnetworksanddeeplearning.com/). And he ran a seminar series to practice for writing his textbook. So I got exposed to a lot of ideas and learning just as the field was really just on the verge of taking off, right after the [results from Krizhevsky](https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf) in 2012. So I had the opportunity, I was aware that it was this really exciting thing, and I became very excited about it. And my main collaborator on the 3D printing stuff dropped out. Another thing that contributed a little bit was that I actually happened to meet Holden Karnofsky, of GiveWell and Open Philanthropy.
**Rob Wiblin:** That would have been pretty early on.
**Chris Olah:** Yeah, this was in 2012 or 2013. The reason that I met him was I was friends with Dario Amodei, who at that time was a grad student. And Dario, somehow, despite being on a grad student’s stipend, was giving what was probably a fiscally irresponsible amount of his stipend to GiveWell, and that led to him meeting Holden. I ended up getting dinner with Holden and Dario. During the dinner I pitched Holden on how I was working on 3D printers, and how I thought if we can have open-source tools for 3D printers we’d be able to go and bring 3D printers to everyone, and maybe we’d end scarcity and stuff like this. And Holden just really shot it down. At the time I was very miffed.
**Chris Olah:** Actually, the thing that Holden said was, I don’t think this is valuable, but it seems like a great way for you to develop skills. At the time, I was really miffed. But in retrospect, it was actually a very useful thing for him to say. I think this is a common theme of talking to Holden. I feel miffed at him sometimes, and then very often I’m very grateful for the things that he says.
**Rob Wiblin:** Yeah, what was his case against working on open-source… What do you call it, like local manufacturing? What’s the term for this?
**Chris Olah:** I was running open-source CAD tools. I don’t remember the exact argument that he made. I think he was just skeptical of a lot of things, and I think that there were a lot of things to be skeptical of. Not the least of which was whether 3D printers would actually be able to become really widespread at low cost in the way that I was envisioning, and I think also just whether these open-source tools were actually a critical blocker. Which in retrospect, I don’t think they were.
**Rob Wiblin:** Yeah, people have been talking about 3D printers for a long time, and I’ve always thought, I just don’t know how many things like that I want to make. How many things can be 3D printed that I’m actually that interested in producing for myself? Maybe I’m going to be proven wrong one day.
**Chris Olah:** Well, so I think the really compelling story is if you can create a 3D printer that can print itself, and can also print a wide variety of objects, you can have this feedback loop of making it really easy to go and spread them. And then anything that you can print, you can provide freely to everyone. You have these printers that you can like 80% print or something like this. Then there’s the remaining ‘vitamins’ that you can’t print. But the vitamins are all the hard parts, and so it’s an illusion that you’re making progress on this, I think, to a significant extent.
**Chris Olah:** I don’t know, I haven’t been involved in this space for a long time. But at one point I had a lot of fantasies about this, and I designed a 3D-printable vacuum cleaner, and worked on how you can make microscopes with 3D printers. In retrospect, it wasn’t very useful.
**Rob Wiblin:** Yeah. What do you think of filling this role where you shoot down someone’s plans? It’s a difficult one because you come across as, potentially, kind of an asshole.
**Chris Olah:** Yes.
**Rob Wiblin:** It’s like, maybe you’re not really in a position to confidently tell people that the idea is bad. I think onlookers don’t like you when you’re saying that someone’s plans are bad, and the person themself might object to it, and you run the risk of being wrong. And yet it does seem like sometimes it’s just really important to have someone say, “No, this is a dumb idea.” It’s a balancing act.
**Chris Olah:** I think it is really important. It’s something that I wish I was better at. Personally I really struggle to give people negative feedback. When I’m in a management role, I really push myself to do that, but it’s hard. I really admire some people who can just be like, “Yeah, that’s nonsense.” Just really frankly say it. I think you ideally couple it with support. Holden’s message wasn’t just, “This is a dumb idea.” It was, “This doesn’t seem like a very good idea, but you’re developing lots of good skills and you’ll probably do something useful in the future.”
**Rob Wiblin:** Yeah, I’ve tried to play this role a handful of times. I’ve heard someone’s plan and just been like, “No, this is terrible. You’ve really got to change it because I don’t see this having any impact.” I’ve gotten a mixed reception, I think it’s fair to say. So I feel a little bit more cautious about doing that now. I don’t know, maybe I should go back, after this anecdote, I should go back—
**Chris Olah:** I think it probably took a few years for me to totally come around on Holden not just being a little bit annoying at the time.
**Rob Wiblin:** Yeah. Did he pitch you on going into machine learning? Was he like, “Machine learning is the future?”
**Chris Olah:** No, not at all. I was completely separately excited about machine learning for other reasons at the time.
**Rob Wiblin:** Okay, so Holden, among other factors, convinced you that 3D printing maybe wasn’t the most important thing to be working on. Then, independently, you were getting excited by machine learning, which was beginning to take off at that time?
**Chris Olah:** Yeah, or deep learning was beginning to take off at that time.
**Rob Wiblin:** Yeah, interesting.
**Chris Olah:** Yeah, the thing that I just couldn’t get over, and I was really obsessed with, was we have these systems that can do these things that no human being knows how to write a computer program to do. And no one knows what’s going on inside of them! That question just really hooked me, and I couldn’t get it out of my head. And it became a big motivator for me.
**Rob Wiblin:** Interesting. Yeah, I suppose I’ve just always taken it for granted that neural nets are these combinations of nodes and weights and like it’s a bunch of numbers, and it does some stuff, and we don’t understand how it works. But that is, from one point of view, that is crazy.
**Chris Olah:** It’s crazy. It’s completely bonkers that we have these systems. In any other area, you’d say that it’s harder to create a system that automatically builds your systems for you than to just build the system that does the hard thing directly. But, in this case, we have no idea how to produce the system directly, and people have certainly tried. For essentially all of the machine learning tasks we talk about, people have tried to go and hand-build systems that do this. We can’t build systems that can do the things that neural networks do, even when we try really hard to do it by hand. Yet somehow they automatically form. It’s just truly a crazy fact of the world.
**Rob Wiblin:** Yeah, so we’re making them indirectly by programming a computer to say, “Well, try to do this thing,” and then when it fails to do it, we’re like, “Well, you would have done a little bit better if these numbers were a bit different in this way.” So then we’ll try again, and then we’ll change the numbers again, and just keep going until it works. But, yeah, at the end we end up with a whole bunch of numbers and we’re just like well, we don’t really know how to make sense of this. We don’t even know what any… Or at least in the past, as we’ll get to, we didn’t know what all the different parts did.
**Chris Olah:** Yeah, somehow those numbers correspond to a computer program. What’s going on inside it? I really want to know.
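For readers who want the “change the numbers and try again” loop made concrete, here is a minimal sketch in PyTorch. It is only an illustration: the model shape, the toy data, and the learning rate are all invented.

```python
import torch

# A minimal sketch of the loop Rob describes: run the model, measure
# how wrong it is, nudge every number in the direction that would have
# made it a little less wrong, and repeat. The model shape, data, and
# learning rate are all invented for illustration.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(256, 10)  # toy inputs
y = torch.randn(256, 1)   # toy targets

for step in range(1000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # "you'd have done better if these numbers were different"
    opt.step()       # change the numbers a little, then try again

# Everything the network "knows" now lives in these numbers:
print(sum(p.numel() for p in model.parameters()))  # 385 parameters
```

What comes out the other end is exactly the pile of numbers the two of them are marveling at.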
**Rob Wiblin:** Okay, so you were particularly—
**Chris Olah:** …And 19-year-old Christopher really wanted to know.
**Rob Wiblin:** Nice, okay.
**Chris Olah:** …Hasn’t really changed.
How Chris got his foot in the door in machine learning [[00:27:34]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=1654&btp=48126aba)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** What did you do next to be able to get into this? I suppose the idea that someone now could get a job at one of these AI labs just with a high school degree… I don’t know whether it was easier then, or whether you just particularly stood out, but how did you manage to get your foot in the door? Because people would often think that’s the stuff that’s potentially really hard if you’ve taken this unconventional path.
**Chris Olah:** I should say I’m not the only one who doesn’t have credentials and is at these labs. I think [Alec Radford](https://openai.com/blog/authors/alec/) doesn’t have a PhD. I’m not sure that he has an undergrad degree. Then there’s a number of other researchers who are like this, so I’m not unique in it. For me, I was really lucky that I got to work with Michael Nielsen, and I think he basically treated me like one of his grad students for six months or a year. And that was a really helpful experience, and I think just helped me mature a ton as a researcher. Then I started cold emailing different labs and asking if I could visit. [Yoshua Bengio](https://en.wikipedia.org/wiki/Yoshua_Bengio) put out a call for PhD students, and I wrote to him. I spent about a week writing him an email asking if he’d consider me.
**Chris Olah:** He ended up considering me, and actually I ended up getting accepted and I didn’t end up going. I started visiting different research groups and giving talks on research that I’d done. I had gotten some very modestly interesting results, and… I think in a lot of ways the field was much smaller, and it was growing — I think relative to its present size, it was growing very rapidly. So that also did make it easier to get into.
**Rob Wiblin:** Yeah, interesting. I guess your first serious role was an internship at Google Brain, is that right?
**Chris Olah:** Yeah, so I gave a talk at Google and [Jeff Dean](https://en.wikipedia.org/wiki/Jeff_Dean) very kindly offered to take me on as an intern. I ended up doing an internship at Google, and it went on and on and on, and I ended up being an intern for an entire two years.
**Rob Wiblin:** Is that common?
**Chris Olah:** Apparently I’m not the person who did the longest internship in Google’s history, but I think I was a good competitor for that position.
**Rob Wiblin:** Were you doing this extended internship in lieu of doing other formal training? Was this the place where you learned all the things you needed to know in order to get properly hired?
**Chris Olah:** Yeah, I think I learned a lot through it. I also learned a lot from Michael and from self-study and stuff like this. But, yeah, I think there’s probably a significant element of that. I was just so lucky to be surrounded by so many really wonderful people who were really generous with their time, especially with doing different stuff. Google Brain was only about 30 people at the time. Rather than building neural networks to go and do different things — I did some of that, and I got to be involved in lots of projects, and did some work with generative models and contributed a little bit to TensorFlow and things like this — the main thing I was doing was just trying to figure out what was going on inside these neural networks.
**Chris Olah:** And really very few people were going and thinking about that at that time. [Matthew Zeiler](https://www.matthewzeiler.com/) did a little bit of work, although he sort of stopped. [Alex Mordvintsev](https://znah.net/), who would become a close collaborator of mine, was doing some work in this space. I just spent a lot of my time writing blog posts about machine learning and trying to visualize them and trying to understand what was going on and trying lots of things that didn’t work very well.
DeepDream [[00:30:52]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=1852&btp=48126aba)
----------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Yeah, what were some of the highlights, or the most interesting things that you did in… You were at Google Brain for six years or something like that?
**Chris Olah:** Yeah, five-ish, five-something years, maybe six.
**Rob Wiblin:** Yeah, what were some of the highlights?
**Chris Olah:** Probably the craziest thing was DeepDream. So the really early DeepDream results, Alex Mordvintsev discovered. And he gave a talk and I was just so excited. I dumped all the projects that I was doing and got involved. It was just electrifyingly exciting. The idea is that you just try to make the neurons in a layer of a neural network activate by going and optimizing the input image. If you haven’t seen them, there are these [hallucinogenic images](https://www.google.com/search?q=deepdream&tbm=isch) that are full of dog slugs and psychedelic colors and things like this. Neural networks seem like magic, but this seems like really crazy magic, and I just spent hours and hours and hours trying different things. I was trying to visualize different layers, and trying to go and fiddle with what we were doing in different ways.
**Chris Olah:** I found out that you could apply this to going and visualizing individual classes, and we discovered things like part of how it recognizes a barbell is looking for arms attached to the barbell. But it was just this really exciting moment of really intense mystery, and that was really cool.
**Rob Wiblin:** Yeah, I feel like everyone has seen the DeepDream images. I imagine the audience for the technical paper was a lot smaller than the audience for the images, but it was an incredible hit. Do you want to just describe how those images were produced? It seems like you’re reversing the neural network and then getting it to spit out an image, rather than classify something.
**Chris Olah:** Yeah, so the idea is that you go and you fiddle with the input image to get an image that causes the neurons in a given layer to fire a lot. As neurons begin to fire, you try to go and cause those neurons to fire more and more and more. Later on we’d come up with this… One of the things that I always feel a little sad about with DeepDream is we got all this attention initially, but I feel like we didn’t really understand the results in some important way. We didn’t really understand what we had discovered.
**Chris Olah:** I sometimes think about it as like, at some point in the invention of the microscope, somebody must’ve found a distorted piece of glass and realized that they could hold it up to things and see these distorted images, and that the distorted images have these parts that are enlarged, so they can see things that they couldn’t see before. I feel like DeepDream was that stage, and then later we turned it into feature visualization — of course, this wasn’t just our work, we were building on work that other people did as well — and that gave us these lenses that let us see very clearly: a very powerful tool for understanding different parts of neural networks. But that came after the attention from DeepDream.
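For the technically curious, the procedure Chris describes (gradient ascent on the input image so that a chosen layer’s neurons fire more) fits in a few lines. This is a minimal sketch, not the original DeepDream code: the model, the layer choice, and the hyperparameters are all illustrative.

```python
import torch
from torchvision import models

# DeepDream-style sketch: optimize the input image so the neurons in
# one layer activate strongly. Model, layer, and hyperparameters are
# illustrative choices, not the original DeepDream setup.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

acts = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: acts.update(target=output))

img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -acts["target"].mean()  # negative loss = gradient *ascent*
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)  # keep pixels in a displayable range
```

The dog slugs appear because the network keeps amplifying whatever that layer has learned to detect, in every patch of the image.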
**Rob Wiblin:** Was the flood of attention helpful at all for getting taken seriously, or getting more resources? I don’t know to what degree the ML community respects creating a tool to make amazing images that people can stick up on their social media.
**Chris Olah:** I think people were really excited for a month or two. The thing that I remember most was getting lots of emails from people in psychology who wanted to investigate this with us. I remember getting an email from some really serious-looking professor about how they wanted to investigate how these seemed similar to psychedelic images, like could we do some joint collaboration to figure it out. I never followed up on that. But I think that a lot of it wasn’t as helpful. And maybe just because we — from my perspective at least — didn’t yet really understand the results deeply, we weren’t really able to funnel it into something that was really helpful for advancing an agenda of really understanding the neural networks, I think.
**Rob Wiblin:** What was it like working with so many people who I imagine were a whole bunch older and more senior than you?
**Chris Olah:** It was an interesting experience. Yeah, so for a while I was the youngest person, because even the interns were PhD students who were quite a bit older than me. Actually, at the time, many of my… You know it was only 30 people, and several of my colleagues had children who were my age. That was also fun. But, actually, just everybody was super sweet and was just really lovely to work with. I think the thing that was slightly challenging, and it became more challenging after I became full time, was trying to interact with my peers. I’d have experiences, for instance, where I’d try to hang out with PhD students and then they’d be like, “Dr. Olah” — because everybody assumes that I have a PhD — “Can I be your intern?” Or something like this.
**Chris Olah:** If you’re going into an environment really hoping that you can have peer-style interactions and become friends with people, it’s actually really disappointing when that happens, and a pretty strange experience.
**Rob Wiblin:** Interesting. I suppose if you spend all your time hanging out with 30 and 40 year olds as a 20 year old, do you just end up talking a lot about, I don’t know, male baldness, and how to buy a house? Like, “Let’s talk about our kitchen renovations” or something like that. I feel like the things that people talk about are potentially quite different.
**Chris Olah:** I mostly spoke with people about research. I tried to have a few conversations about effective altruism, and I think people were just confused by me.
Founding a team focused on circuits [[00:35:58]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=2158&btp=48126aba)
------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** So you’re spending these five years at Google Brain, and you had a whole lot of very widely read articles there. You also founded this academic journal, [*Distill*](https://distill.pub/). Do you want to flesh out maybe the rest of what happened at Google Brain, and what’s happened since then?
**Chris Olah:** Well, I think my career trajectory probably became less unusual and therefore less interesting probably around that point. But, yeah, after a few years at Google, I started to have a much stronger vision for how interpretability research could be pursued, and a very idiosyncratic one. I really wanted to try and build up a research group doing that type of interpretability research. I also was very interested in this idea of open-notebook science. I didn’t really see any downside to being very public about interpretability research. I just thought it’d be nice if we could just share it as we’re doing it, and try to involve other people.
**Chris Olah:** I talked a lot with my manager and Jeff, and they were both actually really incredibly supportive. And we just all agreed that actually Google probably wasn’t the best place for me to pursue that. So I ended up going to OpenAI and founding a team, the Clarity team, going and doing this kind of interpretability research there. That was a really positive and really incredible experience for me.
**Rob Wiblin:** Yeah, it seems like you’ve tracked the… I suppose this clarity or interpretability field didn’t exist when you joined. You virtually founded it, or at least you were there around the founding, and now you’ve lived to see it become a meaningful fraction of research into ML. Something that’s widely accepted and that people are interested in.
**Chris Olah:** Well, I think right now a lot of people are doing lots of different things called interpretability research, and lots of other people contributed to creating lots of different branches and directions in that space. But I’ve had a very particular vision for what the type of work I’m most excited about in this space is. I’ve been really fortunate to be able to help build that out, and I think just really lucky to be involved early on. I got to work with a bunch of really amazing collaborators at OpenAI, and we pursued what we called the ‘circuits agenda,’ which was the attempt to go and fully understand neural networks.
Lessons from Chris’ unconventional career track [[00:38:13]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=2293&btp=48126aba)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Alright, let’s look at the unconventional career track that you’ve been on in general from 2008 through 2015. What do you think we could learn from this experience? Are there any lessons that listeners can draw as to their own experience? Maybe they should go and defend one of their friends from terrorism charges? Is there anything that people could learn, or is it just too weird?
**Chris Olah:** Well, I think probably the most useful thing I’ve extracted has been thinking about the [Pareto](https://en.wikipedia.org/wiki/Pareto_efficiency) frontier of skills. For example, a lot of my early contributions to machine learning were basically being able to create these really helpful illustrations of complicated ideas. What skills did I need to do that? Well, I needed both to understand machine learning, and I needed to be able to draw. I wasn’t an exceptionally good artist or scientific illustrator, and I wasn’t exceptionally knowledgeable about machine learning. But very plausibly, for a while, I was the person in the world who was the best of the *intersection* of machine learning and drawing. If you think of these two-dimensional plots of different skills, or three-dimensional plots of different skills, and you think about the Pareto frontier, very often society is good at producing people who are optimized for a particular skill set or set of skills that society has really validated as useful.
**Chris Olah:** We create entire pipelines training people. But I think that often, if you can find useful intersections of skills that aren’t these couple of standard skills, there can be a lot of value. And it’s much easier to go and have a big impact, and often have a big counterfactual impact. When I’m talking to people about their own careers, I often try to frame it in terms of, what are the skills that they’re cultivating, and what do we think the Pareto frontier with regards to these skills looks like? Do we think that there’s places where, rather than going and becoming the world’s best at one skill, they can produce a lot of value by being at an intersection of skills that other people don’t have?
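For concreteness, here is a toy version of the Pareto-frontier framing in code. All of the names and skill scores are invented; the point is just the dominance check.

```python
# Toy "Pareto frontier of skills": you're on the frontier unless someone
# else is at least as good at every skill and strictly better at one.
# All names and numbers are invented for illustration.
people = {
    "pure ML researcher":  {"ml": 9, "drawing": 1},
    "pure illustrator":    {"ml": 1, "drawing": 9},
    "ML + drawing hybrid": {"ml": 6, "drawing": 6},
}

def dominates(a, b):
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

frontier = [name for name, skills in people.items()
            if not any(dominates(other, skills)
                       for n, other in people.items() if n != name)]
print(frontier)  # all three survive: neither specialist dominates the hybrid
```

Neither specialist dominates the hybrid, even though the hybrid is the best at nothing, which is exactly Chris’s point.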
**Rob Wiblin:** Yeah, that’s really interesting. Thinking about it theoretically, I suppose part of the reason is just that there’s so many combinations of two different things that you could throw together. So the space of possible combinations is vastly larger, and so you have a lot more to choose from. It also means that you could be the only person who’s interested in X and Y, if you choose two things that are sufficiently distant. Then you have a truly unique skill set, and you might just stumble on something that no one else has even tried to find.
**Chris Olah:** Exactly, and now the problem is the space is exponentially big, and you want to not just find *an* intersection, but the intersection has to be useful. So you have to have some taste in picking the skills that you develop. But I think that there are lots of opportunities like this, and that often it’s much less competitive than going and being good at one of the skills that society already really values as a thing to optimize for.
**Rob Wiblin:** Yeah. I guess you could choose the wrong combination, just where there aren’t that many complementarities between the two options. You also might fall between the cracks, I suppose, of existing disciplines. Or, a common complaint that people have about doing interdisciplinary work is that if you’re doing philosophy of economics, the economics department doesn’t like you and the philosophy department doesn’t like you, and no one really feels like you’re one of them.
**Chris Olah:** I guess I want to distinguish this a little bit from interdisciplinary work, which I think is something slightly different. When I was doing this scientific illustration of machine learning, it was really a pure machine learning contribution. It was something that was valuable to the machine learning community and targeted at the machine learning community. I think that there is a distinction, I think, between—
**Rob Wiblin:** …skills, and maybe bodies of knowledge?
**Chris Olah:** Yeah. You could do something that was more cross-disciplinary, like you could use machine learning for scientific drawing, or something like this, but I think that’s not really what I was doing.
**Rob Wiblin:** Yeah. Okay. So I suppose the 80,000 Hours Podcast might be an example of that. We’re at the intersection of being knowledgeable about effective altruist ideas and research, and, I guess, doing interviews and doing media, and communicating stuff. That’s maybe the unique selling point of this podcast. I suppose, do you have examples of classic skills that you can throw onto something and then maybe produce something interesting that others haven’t found?
**Chris Olah:** I have a favorite example of this phenomenon, which I’ll probably both slightly mistake and I actually owe to Michael Nielsen. But my understanding is that [Richard Feynman](https://en.wikipedia.org/wiki/Richard_Feynman), I guess when he… I don’t know much about physics, but he had to do all this work… I guess physicists were doing all this work going and solving complex integrals. And the usual set of techniques was to go and use tricks from complex analysis, analytic extensions, and stuff like this, and try to go and solve the integrals that way. And Feynman didn’t really know these complex analysis tools very well, but he had all of these weird tools around fractional calculus and stuff like this, and he used those instead.
**Chris Olah:** Maybe this gets… It’s not even that weird of a skill set, but in having a different skill set than his colleagues, he was able to have more counterfactual impact, and go and solve problems that other people couldn’t. It’s not that his tools were better, it’s just that lots of people were already trying with the other tools. And he brought a different set of tools to the table.
**Rob Wiblin:** Yeah. I guess some options might be knowing a lot about some technical area, plus, say, knowing about operations in organizations, or knowing about how to do business, or knowing about money, or knowing how to manage people. Maybe those are combinations…
**Chris Olah:** Yeah. I think people management plus technical skills is a huge superpower. I think it’s something that I am trying to become good at. I think the people who I see who are really good at it, I think are… Yeah, it’s a really amazing thing.
**Rob Wiblin:** Yeah. They end up very sought after.
**Chris Olah:** Yeah. I think any communication plus technical skill. I think web development plus science is actually really underrated. I think that often being able to build interactive interfaces allows you to go and… Well, I guess the basic pitch is, I think a lot of scientists are drawn towards being very reductionist… And maybe this is more for machine learning than other fields, I’m not sure. But they tend to go and look for summary statistics, because you can easily work with summary statistics, and make line plots and things like this. I think if you instead are able to go and create interactive tools and explore things, you tend to just interact with the data in a different way. I think there’s actually just something where… At least in machine learning, I think there’s a lot of value that gets left on the table. And I suspect elsewhere as well.
**Rob Wiblin:** Yeah. Do you think that this might be slightly a Bay Area phenomenon? The thing of people really appreciating people who have quirky skills that are combined, and that maybe if you were in a more conservative social situation, maybe it would be more risky? Or is that wrong?
**Chris Olah:** Yeah. That might be true. Although I think in a lot of cases, if you can demonstrate that your intersection of skills produces value, I think that’s really the critical thing. Once an organization is getting value out of your intersection of skills, whether it’s weird or not probably isn’t going to be the critical thing.
**Rob Wiblin:** Yeah. It’s not going to be the make-or-break issue. I guess that’s maybe one other lesson: if you can show people that you can do stuff directly, then often you can route around credentials. I think that is quite often true. The thing is, it actually is potentially quite hard to figure out how to produce value and how to do a good job without having the training. It requires someone who really has either just a lot of raw ability, or a lot of focus, or a lot of discipline to do things outside of a structured environment.
**Chris Olah:** Absolutely. I think it also depends a lot on the discipline. Practicing law without credentials isn’t something that’s going to fly. I think, depending on the discipline that you’re working in, and I guess this is sort of cutting the other way from my previous answer, but, how flexible the discipline is… I think this was actually a little bit of a different question than the intersection of skills thing. You can get a PhD and have an odd intersection of skills, perhaps.
**Rob Wiblin:** Yeah. I think probably people aren’t just going to learn medicine through apprenticeships. For understandable reasons, it’s a somewhat credential-filled field.
**Chris Olah:** I would be nervous to go and have a doctor who did not have an MD.
**Rob Wiblin:** Yeah, it reminds me of how you can get a free haircut if you’re willing to be someone’s first victim as a hairstylist. Perhaps it’s more difficult in a surgical environment.
Is it important to go to grad school or work at the best lab? [[00:46:01]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=2761&btp=48126aba)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** We’ve talked a bunch about whether people should go and do undergraduate degrees, but how do you feel about going to grad school? Is the picture very different?
**Chris Olah:** So again, I don’t know how much my advice generalizes across fields, but at least for machine learning, I usually encourage people to just ask where they can go and do the best research. For context, the question that I usually hear people asking when they’re considering going and doing a PhD is they’re like, “Well, should I just go to an industry lab and do research there? Or should I go and do a PhD, and go on and develop my machine learning there?” I think the essential thing isn’t whether it’s a PhD or not. The essential thing is how much do you think you’re going to learn from the people you’re going to be around, and just how good of a research environment is it going to be?
**Chris Olah:** For some people, I think quite often, going and doing a PhD will get them more mentorship and get them into a research environment that’s more suited to their tastes than the opportunities they have in industry. But for other people, it’s the other way. I think it’s probably better to try and compare concrete questions. I mean, I guess a very classic piece of advice that people give about doing PhDs is to think really hard about the group that you’re going to be working with, and try to understand them. I think, just like you should compare specific research groups when you’re thinking about doing a PhD, you should also, if you’re considering industry as well, consider the particular industry group that you would be working in, and what you would get out of working there.
**Rob Wiblin:** Yeah. How snooty is ML about whether you went to the right university, or whether you were working at the best lab?
**Chris Olah:** I think not very. I mean, it probably depends from institution to institution, but I think most institutions, if you have impressive results, that’s the main thing they’re going to care about. I actually had one university encourage me to apply for a professorship at one point. I was very surprised by this, since I don’t have an undergrad degree or a PhD. I think of universities as being very, very traditional, so I think if universities are willing to consider stuff like that when people do good research, I think that’s a pretty strong signal that a lot of institutions are willing to focus on people’s research rather than their employment history or where they did their PhD.
**Rob Wiblin:** Yeah. It seems to me like maybe ML as an academic discipline is pretty unusual. It seems it’s routing around a whole bunch of the typical norms. Most researchers now just present at conferences, right, and then just people—
**Chris Olah:** That’s not just machine learning, that’s all of computer science.
**Rob Wiblin:** Ah, that’s all of computer science. I see, okay.
**Chris Olah:** Yeah. Computer science, by and large, doesn’t use journals as much, and relies a lot more on conferences.
**Rob Wiblin:** Interesting. Do you know how that happened?
**Chris Olah:** Not sure. I do know it leads to strange situations in academia, where somebody who’s looking at the career of a computer scientist might be like, “Oh, they just publish in conferences. They’re not doing good work.” But in fact, that’s just the norm.
**Rob Wiblin:** Yeah. Interesting.
**Chris Olah:** And I think that’s some areas of physics too. Machine learning has a significant fraction of people just publishing on arXiv and not publishing in a venue at all, but I think there’s areas of physics that have taken it much further.
**Rob Wiblin:** Wow.
**Chris Olah:** My impression is there’s some areas of physics where people aren’t really using… By and large, they aren’t using conferences or journals, or any peer-reviewed venue, and are just putting things on arXiv.
**Rob Wiblin:** Wow. They’re living the dream.
**Chris Olah:** They’re living the dream!
**Rob Wiblin:** I’ve said I’m really not sure what value academic publications are providing, because it doesn’t seem like peer review is doing that great of a job at clearing out bad papers. I mean, presumably it does something, but from what I understand, the test/retest validity of peer review isn’t that strong. So even if you submit the same paper to a journal twice, there’s a high chance it’ll be rejected once and accepted another time, which is a bit of a red flag.
**Chris Olah:** Yeah. [NeurIPS](https://nips.cc/) did this test where they peer reviewed some subset of papers twice, and they found that if one review process accepted it, the other review process had a 50% chance of accepting it. Which, because they are accepting less than half of the papers… It’s not as bad as it sounds, but it’s still a very, very noisy process.
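A quick back-of-the-envelope on why Chris says it’s not as bad as it sounds. The acceptance rate here is an assumption, roughly in line with the NeurIPS experiment he mentions:

```python
# If reviewing were pure coin-flipping at acceptance rate p, a paper one
# committee accepted would be re-accepted by a second committee with
# probability p. (p is an assumption, roughly the experiment's rate.)
p = 0.25
chance_level = p     # ~25% re-acceptance under pure noise
observed = 0.50      # the re-acceptance rate Chris cites
print(observed / chance_level)  # 2.0: twice chance level, so noisy but not random
```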
**Rob Wiblin:** Yeah. It seems like, if these fields have dispensed with the traditional journal system, then we could maybe learn from them how much value the journals were providing in the first place by seeing whether this has been a disaster. Maybe this is something I should look into.
**Chris Olah:** Yeah.
Strategies for growing as a researcher [[00:50:17]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=3017&btp=48126aba)
---------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** I know you’ve written [this article](http://colah.github.io/notes/taste/) about how people can develop good taste in what to research and how to go about it. I actually didn’t have time to read that one to prepare for this interview. Could you give people a summary of your views there?
**Chris Olah:** Sure. First of all, I don’t consider myself at all an expert on this. This is just what’s worked for me, and when I’ve been mentoring people, things that I’ve found helpful. But I think it’s often helpful to divide being a good researcher into two parts. One is taste. So your ability to go and pick good problems and go and pick good avenues to attack those problems, and things like this. The second you might call technique, or execution. Maybe if you picture a chemist working with vials and pipettes and weird things, it’s pretty clear that there’s a whole technique to going and manipulating that laboratory equipment.
**Chris Olah:** I think that it’s subtler in other fields, but I think there is something similar — certainly in machine learning there’s the technique of training models, and even just being a good programmer: very minute things like manipulating your code editor, or working with distributed systems, and stuff like this. So there’s a question of how you develop both of those skills. And taste, I think, is probably the hardest one to develop. I tried to come up with a list of exercises that one could do. An example, and I think probably the most useful one, is just write down a list of problems that you think might be important to work on, and then have somebody else, ideally your mentor, go and just rate them one to 10.
**Chris Olah:** Because one of the really hard things about developing taste is that you have such a slow feedback loop on learning lessons, because you have to go and do the entire project. What you want to do is use a mentor or use somebody else as a cheap proxy for getting feedback, and then if you disagree with their feedback, you can either talk to them about it, or maybe you even want to go and do that experiment. I think that could be useful. I think there’s lots of other things. I think reading about the history of science is helpful. I think going and trying to write just about why you think things are important is helpful. In any case, I think there’s a bunch of exercises there. Then, on the technique side, I actually think the most valuable thing here is working closely with people who have good technique.
**Chris Olah:** I think actually, at least in machine learning, and probably other computer science disciplines, going and pair programming with people is immensely valuable. I think that there’s a lot of stuff that’s hard to communicate in other forms, but gets passed along when people are pair programming. I think for developing technique, often pair programming is the highest leverage thing to do.
**Rob Wiblin:** That’s really interesting. I’ve noticed… I guess in fields of work where the tasks have a physical embodiment — actually moving things around in the physical world — people get to see one another, and they get to learn from watching what others are doing, and figure out how to do it better. And for work on computers, that exists much less.
**Chris Olah:** Exactly.
**Rob Wiblin:** There just seems to be a culture in general, that you don’t… When you arrive at an organization, and you’re trying to get training, for example, from your manager, you don’t literally sit behind them all day and watch what they do. That would be… Maybe that would be sensible, but you can’t just sit behind them and watch their screen, and then see, how do they move the windows? How do they reply to people? That doesn’t happen. And I guess that means that it’s possible for people to just miss really basic stuff potentially. It sounds like maybe in programming, there is this pair programming thing in part to fill this gap because it’s maybe such a severe problem there.
**Chris Olah:** Yeah. I think there’s an increasing culture of pair programming at a lot of organizations. I feel like I hear people talking about it a lot more. And yeah, I’ve found that it’s really helpful for passing this stuff along. I myself am constantly learning from other people when I work with them, and I hope that they’re learning from me as well. I think your point about how the technique gets hidden when the work isn’t physical is a really good one.
**Rob Wiblin:** Yeah. I think that culture exists in part because people are worried about… Well, I guess there are two reasons. One is sheepishness, perhaps, about people disagreeing with how they’re going about their work. It’s easy to just hide it and never have people watch your screen. Another might be that you’re worried about confidentiality. You don’t want other people looking over your shoulders and reading your emails. There’s a real norm of not looking at other people’s screens in general, but it seems like maybe we should think about ways to work around that. Like, you would have a specified time when someone literally is just going to watch you work, and then you try to not do anything that would be too sensitive where they shouldn’t be looking at the screen. I think I’d be fascinated to see how my colleagues just go about their day. Do they switch windows as often as I do? I don’t know. Sometimes some of these basic skills are so key to your productivity that it could be worthwhile.
**Chris Olah:** Well, I think it’s also not just teaching. I think often you can just push through things faster if you have a second person with you. Jeff and [Sanjay](https://en.wikipedia.org/wiki/Sanjay_Ghemawat) are famous for being extremely impactful at Google, and they’re pair programming all the time.
**Rob Wiblin:** So they sit together with their computers next to one another and they just work together on a problem?
**Chris Olah:** Yeah.
**Rob Wiblin:** That’s really interesting. Yeah. Do you have a theory for why that’s not more common? It seems like it might be just a really sensible way to get things done.
**Chris Olah:** I think it is modestly common in programming and software engineering. I guess the thing that I’m trying to highlight here is that it’s a really useful… I think people sometimes feel like it’s just a nice way to work, I think. Or it’s an effective way to work. But I think… Especially if you’re trying to develop technique, I think it’s the best way to go and transmit it that I’m aware of.
Cold emails [[00:55:25]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=3325&btp=48126aba)
------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Yeah. I’ve seen in your articles that you’re also just generally a big proponent of writing cold emails to people. You found that that’s worked pretty well for you. Do you think that generalizes to others as well?
**Chris Olah:** I get a lot of cold emails, and 99% of them are terrible. They’re like, “Can you do my homework for me?” or, “Can you answer this basic question that I could Google for one minute and answer?” I think people get this impression that cold emailing doesn’t work, because of course, if you send emails like that, people are overwhelmed and aren’t going to respond. Or, even if you just very generically are like… If you send a nicely written email and you’re like, “I’m trying to get into machine learning. Can you do a half-hour phone call with me to talk about how to do that?” Even that, you’re not very likely to get a response from. But I think the thing that people miss is that if you write really good cold emails, it’s actually not that hard to be the best email I received that week.
**Chris Olah:** And I think that if you’re willing to invest energy in understanding what a researcher or a group is working on, and you’re specifically referring to their papers, and you have thoughtful questions about things, yeah, I think that people will pay a lot of attention to that. Then I think that it will… It very often works well. I think there’s a big gap in what people mean when they talk about cold emails, and I think that if you’re willing to put in the work, and if you just genuinely really care about what somebody is doing, and have put in the work to understand it, and can talk about it really intelligently… That’s going to come through. It’s a much more compelling reason for the person to talk to you than other things.
**Rob Wiblin:** Right. It sounds like you don’t think that people should write tons of cold emails to all kinds of people, but if there is someone whose work you’re really into, whose work you really understand, then you should not be sheepish about emailing them. Because even if they’re getting other emails, your one is really going to potentially stand out, if you can demonstrate that you have actually read their paper.
**Chris Olah:** Well, and I think the other thing is, I think there’s a lot of people who are trying to look at how to get into machine learning, and what they do is they send lots of emails to people, or they email famous people. I think what you should actually be doing is trying to figure out who you would be really excited to work with, and really understand their work. Ideally pick somebody who’s a little bit less famous maybe, and then reach out to that person with an email where you’ve put a lot of work into it being clear that you’ve read their work, and connecting your interests to theirs, and things like this. There’s a number of emails that have been really important for me, where I spent a week writing them. I think that was a totally worthwhile investment. I think that’s not how people usually think about cold emails.
**Rob Wiblin:** That’s so interesting. How do you feel about length? Maybe we’re getting a little bit into the weeds of email technique here, but go on.
**Chris Olah:** I think a lot of the most impactful emails I’ve written were only a few paragraphs long, less than one page. But I read five of that person’s papers beforehand, and I think that comes through in subtle ways. And I didn’t integrate it super ham-fistedly, but I was writing to them because I genuinely was invested and cared about their work, and had shared interests with them. I think that’s very, very different.
**Rob Wiblin:** Yeah. I think the main lesson that I’ve learned… Well, I suppose this is a different class of cold email… This is asking people for small favors or for feedback, or offering advice. Certainly the shorter it is, the more likely people are to answer. Maybe also if you can really condense down the information that you want to convey into just a couple of sentences, people are much more likely to absorb it. Because people are flicking through their email inbox pretty quickly and often they have other things going on, and if they open an email and it’s a wall of text, then you definitely run the risk that they’re just going to close it and then never get back to it, because it’s just too much. They don’t yet know whether it’s really worth investing the time in.
**Chris Olah:** Yeah. I think you really want to… If you’re writing something long, you want to really optimize the introduction for that reason.
Research as a market [[00:59:02]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=3542&btp=48126aba)
---------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Do you have any thoughts on how to be a successful researcher?
**Chris Olah:** I guess there’s one last thing that I find pretty helpful, which is thinking about research as a market. I don’t think this is very novel, but I do find it a really powerful frame. The general idea is, you think of researchers as investing in different research ideas, and if a research idea pans out, and other people don’t grab it before them, then they get some reward from that — maybe more resources, or just a payoff in some other way. You can see there being this competition to go and grab promising research ideas. I think there are roughly two strategies that you can play in this market. One is you can work on things where everyone really agrees that they’re important, and that are really popular.
**Chris Olah:** And what you’re doing when you do that is you’re going and making that little area of the research market a tiny bit more efficient. You’re going and making it so that ideas that are important to get done, get done a little bit more quickly. And I think that is actually genuinely a valuable thing to go and do. If the thing that you’re doing is really important, and you make it happen in expectation a week earlier or a month earlier, that’s really great. But the other strategy you can do is you can try to beat the market. You can try to work on things where you just can see that something is undervalued relative to what most of the community thinks. That’s the thing that I try to do a lot of the time. And there’s lots of reasons why you might be able to beat the market.
**Chris Olah:** It could be that you just care about things that other people don’t. If you care about safety, or in other areas, if you care about animal welfare, or if you have weirder goals, or different goals than a lot of people, you might be able to beat the market in that way. I think another way, though, is just, if you have some insight that you really believe is true about a problem, and that’s not a widespread insight, then that could be really helpful. That can be… I feel like that’s a lot of what I’m doing, me personally. I think that you can genuinely understand neural networks if you’re willing to input enough energy into trying to figure out what’s going on. It’s a big bet that I’m making, that most other people aren’t making.
**Rob Wiblin:** Yeah. I guess to translate this into 80,000 Hours speak, one option would be to go with the thing that everyone agrees is most important, or has a particularly large impact if you can make progress on it. The problem there is that it’s probably not going to be neglected, because everyone is onto it already. Another option would be to take a punt on something that is currently really neglected, that other people aren’t working on, but of course there’s a possibility that other people who have chosen not to work in it may be right, that it’s too hard to make progress on it, or it doesn’t really matter, even if you do.
**Rob Wiblin:** I suppose, yeah, the ideal would be that you have some underlying insight that other people have missed. Maybe they just don’t have time to engage with it, because there’s just so many ideas out there. Lots of things get ignored. And then you can find something based on that insight that is important, and neglected, and easy to make progress on, and then if you’re lucky enough to succeed in that—
**Chris Olah:** Then you have it made.
**Rob Wiblin:** Yeah, then you have it made. Right. Exactly. Do you think you struck gold in that way with working on interpretability? It seems like it was a field ripe to develop.
**Chris Olah:** I think I did. I think it also played a lot to my comparative advantages. But yeah, I think I did. Other people might still disagree. I think that the jury’s still a little out on how valuable it is. But I think it is.
Explaining complex things really well [[01:02:26]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=3746&btp=48126aba)
--------------------------------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Let’s push on and talk about explaining complex things really well, which I know has been a passion of yours for many years. To start, why is it essential for a field to invest in really good explanations of things?
**Chris Olah:** I think in many fields, achieving a research-level understanding is [like climbing a mountain](https://distill.pub/2017/research-debt/). There’s all of these ideas that you have to understand and build up towards before you can go into research.
**Chris Olah:** Mathematics, I think, is a really striking example of this, where there’s just years and years of ideas that you’re probably going to spend climbing to the point where you can do research, because there’s just so much that is piled on top. Then when you get to the top, you go and you pile some more results on top, and you make the mountain higher.
**Chris Olah:** I think a lot of people are proud of this, because they’re like, ah, the fact that it’s this long pilgrimage to get to the point where you can do research, that means that it’s especially profound, and it reflects all of the work that’s been done to date. But I think that, actually, it’s often a reflection that we haven’t put enough work into explaining things and building up really good infrastructure for learning about that field. This comes in lots of forms. It can be poor expositions — just not good explanations of things. Sometimes it’s just undigested ideas. There’s an idea that’s important, but it hasn’t been refined into the completed version of that idea. I think it’s very common for there to be bad notations, or just bad definitions of things that make things more complicated, and all of these make it harder to understand the topic.
**Chris Olah:** One analogy that I like is sometimes in software engineering, people talk about technical debt, which is you move really fast to get to that point where you can ship some feature or something like this, and in the process, you write lots of bad code, and it’s really messy and gross, and you have bad variable names, and it isn’t documented, and then it’s hard for other people to build on top of. I think something analogous, a kind of research debt, is endemic in science.
**Rob Wiblin:** Yeah. I guess this problem is probably just getting worse over history.
**Chris Olah:** Yeah, it’s just getting worse over history, because we’re just piling the results on. Sometimes people do write beautiful tutorials and textbooks that make it a little bit better, but I think on net it’s getting worse, and it tends to be that the older a field is, the worse this problem is.
**Rob Wiblin:** Yeah. So I guess, at this point, in many academic fields, if you want to really reach the frontier, you have to wait until you’re 30, and you’ve done a PhD, and then a whole bunch of extra stuff. In some fields, maybe it’s 35 or 40 before you can actually start contributing. That’s a reflection of the fact that, one, there’s just a huge knowledge base. In some cases, in some fields, I think we’re pushing up against what it’s possible for the human brain to do. So you just have to really reach your peak capability before you can add in anything new.
**Rob Wiblin:** And it seems like the basic concepts end up being distilled really well, because they’re taught in primary school and high school. People figure out how to communicate those, but then the closer you get to the peak, the more it’s just a mess, because no one’s really figured out… There is no textbook about the most recent results. It becomes slower and slower, maybe, to reach the top, because everything becomes inscrutable.
**Chris Olah:** I think there’s often also less incentive to go and work on exposition. This is my impression. There’s a really nice article by [Thurston](https://en.wikipedia.org/wiki/William_Thurston), written before he passed away, where he talks about — in my framing of it, at least — killing a field by going and picking all the low-hanging fruit and then leaving, with all of these results that he didn’t explain, making it totally inaccessible. I guess that’s an extreme example, but I feel like this kind of thing can genuinely hold back fields.
**Rob Wiblin:** Well, hold on, so you’re saying that someone could just dump a whole bunch of… Well, dump the next step, but explain it so badly that people are repelled from even comprehending it?
**Chris Olah:** You have to do several next steps, but you can just pile on a huge amount of progress and not communicate it at all, and now it’s a dead end where you can’t go in and get credit for going and redoing that section of the field, and all the fruits have been picked. And you can’t build on top of it, because it’s really hard to understand. Yeah. That’s not a good vision.
**Rob Wiblin:** That is a fascinating possibility.
**Chris Olah:** Returning to your earlier question for a second, I think that’s another thing to be said about why it’s important to invest in good explanations. Or, there’s something about why good explanations are valuable, which is that the impact of an explanation is often non-linear. As you go and write a better explanation, not only does it provide more value to its readers, but you also have more consumers.
**Chris Olah:** In a lot of things, if you make something better, there are really sharp diminishing returns. And of course, there are diminishing returns on the effort side here too — it becomes harder and harder to make something marginally better. But with explanations, there’s also a non-linear thing where, as you make an explanation better and better — and it becomes better than any existing explanation — its value significantly increases in this very non-linear way.
**Rob Wiblin:** Because that becomes the default reference that everyone’s going to read, and they’re all saving a bunch of time because it’s better? And maybe also more people—
**Chris Olah:** Exactly.
**Rob Wiblin:** —more people are like, “Oh, I could plausibly understand this, because now it’s been explained properly.” You end up expanding the interest in the topic in aggregate.
**Chris Olah:** Sometimes I feel a little bit sad about this, but the thing that I’ve written that’s been most read is this [tutorial on LSTMs](https://colah.github.io/posts/2015-08-Understanding-LSTMs/), and it’s been read several million times. If I wrote it just a little bit better, such that I saved every reader one second reading… It actually adds up to non-trivial amounts of time that I saved lots of people. When you have a 1 million multiplier on something, that actually really has a big impact. And the difference between having a 1 million multiplier on something and having a 100 or 1,000 multiplier on something can actually be a pretty sharp transition, where you manage to go from being a good explanation that is comparable to existing explanations, to an explanation that is better than the existing alternatives.
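To put a rough number on the one-second point (the read count is Chris’s own “several million”; the rest is unit conversion):

```python
# One second saved per read, summed over the readership Chris cites.
reads = 3_000_000          # "read several million times" (illustrative figure)
seconds_saved = reads * 1  # one second per read
print(seconds_saved / (60 * 60 * 24))  # ~34.7 days of aggregate reading time
```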
**Rob Wiblin:** The key idea here, to me, as an economist, is fixed costs versus variable costs. Here we have a fixed cost in making an explanation better, whereas that’s just a cost to the person who’s writing the article. They’re going to have to spend more time revising it, but then the benefits are potentially variable, depending on how many people read it. As the number of readers goes up, the amount of time that it will be plausibly justified to spend improving the explanation just balloons out. If you’re going to have a million readers on a textbook, then you really want to make sure that it’s very, very good so it saves people time.
**Chris Olah:** While we’re thinking of this in a sort of economics-y way, I think another thing that’s interesting to think about is, if you think about *n* people in the field interacting, and you increase the… As you vary the size of the field, you can ask how much effort goes into explaining things, and how much effort goes into understanding things, if you want everybody to understand all the work that everyone else is doing. And the effort to explain things grows linearly, because each person has to explain their work. But the effort to understand things grows quadratically, because each person needs to understand every other person’s work.
**Chris Olah:** So if you have the option to change the coefficients on both of those, where people will produce better explanations, and then the consumers go and do a little bit less work, you can change the size that a field can reach before it fragments, I think. Like, there’s some maximum size that a field can have where everybody actually understands everything that’s going on in that field, and it’s determined by how people explain things, because of this linear/quadratic cost thing.
**Rob Wiblin:** Yeah. Interesting. Okay, so the idea there is, if people are bad at explaining what they’re doing, then it means that a field will fission, because people will feel like they have to specialize more, because it’s just so much work to understand what those other people at the other end of the field are working on. Yeah. It’s all Greek to them.
**Chris Olah:** Yeah. I think that’s a natural reaction.
**Rob Wiblin:** So what is your philosophy of what makes a good explanation, and I guess, what’s missing from most typical explanations that makes you think that they’re not as good as they could be?
**Chris Olah:** I think that there are two parts to a good explanation. One is just having a really clear way of thinking about the topic, and one is executing the explanation well. And it’s hard to give any advice on how to go and have a clear and nice way of thinking about a topic. The thing that works for me is I just get really annoyed with my understanding of things until it feels nice. That’s what works for me. But I think that there is more that one can say about how to execute an explanation well. Okay, so here’s something I find mysterious. It’s often the case that you have people who are extremely knowledgeable about a topic, and they put a lot of effort into writing an explanation, and they produce an explanation that’s really hard for other people to follow.
**Chris Olah:** And then it gets worse. People tell them that it’s hard to follow, and they try to make it better, and the result is actually that the explanation becomes progressively worse and harder to follow. When they look at their explanation, they’re like, “Oh, it’s so easy to follow. It’s a really good explanation.” It’s very mysterious. Why is that? And I think my hypothesis would be that they have the benefit of two resources that their reader doesn’t have. So first, they don’t have to go and store things in working memory when they read their explanation. They’ve already memorized all the terms and all of the ideas. They can go and write. Oftentimes, if they try to make an explanation better, they add more and more details, and they’re trying to be really, really thorough and so on. The end result is that they’re loading up the reader’s working memory.
**Chris Olah:** I’m not a psychologist, but as I understand it, the reader only has seven or so slots of working memory and they fill it all up. Then it’s really hard for the reader to go and continue pulling pieces in and connecting them together, as they consume the explanation. Then similarly, they have all the motivation already. They understand why you should care about the ideas and why you should push through the hard parts, but the reader doesn’t have that. And of course it’s easy for them because they’ve already done the hard parts, and the reader doesn’t have that. I think these two deficits, the not having the same working memory benefits that the author has, and then not having the motivation that the author does, are often the reasons why people think that they’ve written a good explanation, and it’s not a very good explanation. That would be my theory of why explanations fail.
**Rob Wiblin:** So I guess in the first one, what’s happening is people say, “I couldn’t follow this,” and what they do is maybe add more detail. They add even more to the explanation, but then that is actually making it even harder to follow it potentially, perhaps because what they have to do is simplify it, rather than make it all exactly precise. Is that one way that things could go awry?
**Chris Olah:** Yeah. Although, I think it’s not exactly simplification. The thing you don’t want to do is dumb an idea down and not actually explain the important thing. But what you do want to do is think hard about how you can reduce the strain on somebody’s working memory. I think there’s lots of tricks you can do with this. I think diagrams are often really helpful, because you can spatially arrange things. Annotating equations can be helpful, and just thinking of how you structure your explanations so that there are fewer long-distance interdependencies can really help. Having little things in the margin where you remind people of important things. I think there’s lots of things like this that you can do. Just asking if you really need to introduce some terminology, or if you can get away without using some additional piece of terminology that people will need to remember. I think, yeah, all of those contribute.
Distill [[01:13:12]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=4392&btp=48126aba)
--------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Alright, so you weren’t merely annoyed by this. You decided to actually try to do something to help contribute to fixing this problem. Tell us a bit about [*Distill*](https://distill.pub/).
**Chris Olah:** Yeah. Something that I find especially frustrating about this is there’s lots of really great explanations that people just write as blog posts, and it isn’t treated like a real academic contribution. It doesn’t help them in getting promoted, or getting hired, or things like this usually, at least in academic jobs. It seemed to me that the production of this really valuable work — explanations, and also just interactive visualizations of things, which is another thing I really care about — just wasn’t rewarded. If we could create a way to go and reward it and support it, we’d get a lot more of this kind of work.
**Chris Olah:** So how can we go and create more incentives? Well, the idea behind *Distill* was if we could create an academic journal that could be an adaptor between weird artifacts that aren’t normal scientific papers, and the traditional academic system, that maybe could allow for people to go in and get these career rewards and support that doing more traditional academic contributions can get them. So yeah, we created a journal and yeah, that was our thesis for why it would help.
**Rob Wiblin:** Yeah. And how has it gone? Do you think that your thesis was broadly right? Or have you learned something from the experience of actually trying to fix the problem, as people often do?
**Chris Olah:** Yeah, I think a lot of parts of the thesis were wrong. I think that the two parts that I don’t believe anymore are, one, that being rewarded is the primary blocker on this type of work being done. I think it’s more the case that people don’t do this work because it’s hard, and that people who do do it, it’s a passion project where they just feel very deeply that they need to go and do this, or they’re just really excited to do it. And it’s very difficult to get to a world where the career incentives would be sufficient for people to do this work for that reason instead. Even if you get this work treated like a normal academic contribution, your tutorial is treated like a typical paper but takes five times more effort to go and produce. That’s not something that people are going to go and do.
**Rob Wiblin:** Okay, so if I understand it, you think you were right that there aren’t sufficiently strong academic or prestige motivations to work on doing really good distillation of ideas. That was discouraging people somewhat, but the thing that was really discouraging people was that they struggle to do it, and it takes them ages. If it were easier to do, they might be willing to do it just for the love of it, just because they want to bring water from the fountain of knowledge to everyone else.
**Chris Olah:** Well, I think I would say that the people who are doing it right now are doing it because they have a very strong internal motivation to do it. That, on the present margin, it’s hard to create a large enough incentive that you’ll actually really change things.
**Rob Wiblin:** Interesting.
**Chris Olah:** There’s a second reason why I think this doesn’t work. I think previously I thought institutions wouldn’t reward this kind of work if it wasn’t published in a peer-reviewed venue; wasn’t published in a legitimate academic journal or conference, or something like this. I think now it’s more that there’s some institutions that are really flexible and will reward non-traditional work, and they don’t care whether it was published in a journal. They’re just going to evaluate it on its merits. There’s more traditional institutions that just are going to look at *Distill* and be like, “This is too weird for us. We’re out.” It’s actually a pretty small group that is in the middle enough that this actually makes a big difference. I think both of those make the case for an institution like *Distill* a lot weaker than I initially thought it was.
**Rob Wiblin:** Right, so are you guys going to shut it down and try a different method, or do you think there’s still a niche for *Distill* and we should keep it around, even if it’s not going to solve the problem in total?
**Chris Olah:** Yeah. I do think there are other ways *Distill* has provided value. I think that just being an example of what you can do with explanations and with interactive visualizations and interpretability has been really valuable. It’s just useful to show what’s possible if you try really hard in this space. I think it’s also been useful as a laboratory for meta-science, where it’s given us a little bit of credibility to do things like organize. We did this weird discussion article thing where there was a somewhat controversial paper, and we just got a bunch of people to, instead of doing peer review, they just wrote papers discussing the paper, and then we can pile them together and summarize them.
**Chris Olah:** People spent months going and writing these, what are effectively reviews. I think that was a super cool experiment. Or we’re doing this thing that we call spreads, where we go and collect a series of articles, very incremental articles building on one topic. Those seem to have also been quite successful, I think, and really interesting. So those are things that I’m glad about with *Distill*, and that I think are positive. But it also has really large costs to run. I think something that I didn’t appreciate enough when I was starting it was just how political it could get, where there’s a whole component of people who are upset with us, because they think it’s corrupt that we publish in our own journal.
**Chris Olah:** Because it’s such a small niche community, a lot of people who are doing this work… The people who are going and investing lots of effort in creating interactive scientific papers tend to be a slightly tight-knit community, and either are involved in running *Distill*, or know people who are involved in running *Distill*. From the outside, that looks incestuous and corrupt, I think, and people are unhappy about it. Yeah, I don’t know. That’s a thing that isn’t super fun to navigate.
**Rob Wiblin:** Yeah. That does sound tricky. Alright. Well, it’s very interesting, and I think not uncommon, that you’ve gained a lot more insight into what is the actual nature of the problem by trying to solve it, and that perhaps you’ll try a different technique now, knowing everything that you do. But *Distill* looks really cool to me. I was looking around the website in preparation for this, and it’s much better than the papers that I normally have to read in terms of just visual quality and quality of explanation. So it definitely has achieved that goal.
**Chris Olah:** Thank you. Yeah, it’s been really wonderful to be part of. I think something I feel really proud of is, in addition to the papers that I’ve worked on, being able to support lots of people in going and producing these kinds of papers has been really cool and special.
Micromarriages [[01:19:28]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=4768&btp=48126aba)
---------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** Alright, we’re almost out of time for this session. But I guess to return to some personal stuff. To get to know you better, if you just had to completely change careers and you somehow became totally indifferent to making the world a better place, what would be the most self-indulgent or most enjoyable career for you to pursue instead? If you’ve thought about this.
**Chris Olah:** Oh, gosh. Well, one thing I think I’d really love to do is just teach young children math. I often wish that I could interact more with young children. And I find teaching really delightful. I sometimes like to daydream about, “How would I go and teach this?” Can you turn group theory into a board game for young children? Like there’s these [Cayley diagrams](https://en.wikipedia.org/wiki/Cayley_graph), and you could use them. Or sometimes I’ll babysit friends’ children, and I’ve done experiments with making knots and then trying to get them to guess whether it’s the [unknot](https://en.wikipedia.org/wiki/Unknot) or things like this. And so I think I’d have a lot of fun with that.
**Rob Wiblin:** Teaching children is a real job, Chris. This is the most self-indulgent thing you can imagine doing? It’s a lot of work!
**Chris Olah:** Well, I don’t know. I’d want to do it in a way that’s… I guess I was maximizing just for my personal enjoyment. It would probably be a very part-time thing or something like this.
**Rob Wiblin:** Yeah. What’s that [xkcd comic](https://xkcd.com/1346/) where it’s like, “When people ask me to describe my dream job, I’m never sure how literal to be.” And the person’s actual dream job is I think removing some lint from a dryer, and then using a lightsaber on a door, and then retiring to a life of luxury.
**Rob Wiblin:** If I was being selfish and self-indulgent about your potential alternative career, I’d really love to read Chris Olah the blogger or Substacker, or whatever it would be, Twitter maybe, although you do [do Twitter](https://twitter.com/ch402). Yeah, there were just all these really amusing, delightful posts on your website that I had the luxury of being paid to read during the prep for this interview. One that stands out is… People are probably familiar with this idea of the micromort, which is a one-in-a-million chance of death.
**Rob Wiblin:** And then I guess, I’ve noticed recently, people have been applying this to more and more concepts. So folks might know that some people in the effective altruism community made this wonderful website called microCOVID.org, where you can specify all of these things about something that you did in what country, how many people, for how long, how crowded, was it indoors, outdoors, in order to figure out how many one-in-a-million chances of getting COVID you incurred. So if you meet someone who has COVID indoors, then you clock up 100,000 microCOVIDs, which is a 10% chance of getting COVID. But more day-to-day, you clock up these one, two, three, four, five, six microCOVIDs, and people can use that to kind of judge how much risk are they willing to take and make these judgements that are often so hard to do, these risk-reward judgements in a more coherent way.
**Rob Wiblin:** And you had a neat application of this kind of one-in-a-million chance thing with the [‘micromarriage,’](https://colah.github.io/personal/micromarriages/) where you’re talking about, as you’re going about your social life or just your life in general, you can imagine that the more people you meet, you’re clocking up these one-in-a-million chances of meeting the love of your life, potentially, someone who you can have a really fulfilling relationship with. Can you lead that one up for a second for us?
**Chris Olah:** Yeah, I guess this trick of going and inventing units to think about things is a trick that I really like. So before we dive in, I guess some context. I wrote that post kind of as humor, but some context is that I actually often find it pretty hard to motivate myself to go to social events. When I don’t know lots of people at an event and I sort of can’t fall back on just talking about research, I often find that kind of stressful, and I often don’t want to do it.
**Rob Wiblin:** Yeah, totally.
**Chris Olah:** But I also really want to have a family someday and find a partner, and that requires you to be social. And I guess going to a dinner party that a friend’s hosting, or going to some other social event, or even going on dates or dating someone, a lot of these things, the party doesn’t lead to you meeting anyone or these other things don’t lead to where you were hoping they would go, and that can be really discouraging and really hard.
**Chris Olah:** And so I’ve sometimes thought that it’s useful to sort of try to be like, “Well, even though this time it didn’t work out, there was actually a chance that that could’ve led to something.” And yeah, you can use that to try to motivate yourself, and also maybe to just sort of emotionally smooth over or help yourself see that you’re making progress, even when these things can be kind of discouraging.
**Rob Wiblin:** Yeah, that makes a lot of sense. I suppose it’s a bit of a cliché that people who are dating seriously because they’re trying to find someone to build a life with, it can be very demoralizing, because for that, the bar is naturally pretty high. If you’re really going to commit to being with someone for decades and raising children with them, then you kind of want to do your research, and actually check that this person really is a good fit to build a life with you. But then that means that maybe the best thing is just to go on a lot of dates, even have the beginnings of many different relationships, which eats up a lot of time. And optimally, most of them are going to fail. But it’s a little bit hard to keep in mind that things are actually going well, when it seems like in the narrow sense—
**Chris Olah:** Exactly.
**Rob Wiblin:** —you’re making no progress.
**Chris Olah:** There’s kind of this disconnect between the thing that you know, which is, “This is perfectly normal and expected and what this should look like probably,” and the day-to-day experience of often trying things that are difficult. Breakups are really painful, and feeling like you haven’t made any progress is hard. And so I guess that’s a lot of the reason why I find it’s a useful way to think about things.
**Rob Wiblin:** Yeah, just taking a step back and thinking about this kind of microX concept in general, I suppose it seems like the case where it’s really useful is with low probability. Well, I guess, events that are low probability, but not so low probability that you should ignore them.
**Chris Olah:** I guess I’d specify I think they’re useful for low-probability, high-impact things.
**Rob Wiblin:** Right, yeah.
**Chris Olah:** And so they’re cases where you can sort of get yourself a little bit confused because the probabilities involved are so small that if you think about it in terms of probability, somehow that lens by itself can confuse you. But the impact is so large, and it’s only by thinking about the multiple of those two that you have a unit that’s sort of coherent to think about.
**Rob Wiblin:** Yeah. And I suppose for alternative things, you could do one in a thousand.
**Chris Olah:** Yeah, I sometimes use milliunits for things as well, when it seems like the probabilities are a little bit larger and the total importance of the thing is a little smaller. Sometimes that’s the natural scale.
**Rob Wiblin:** Yeah, so why is this helping me think about these things? I suppose, one thing is that when you’re thinking about it in terms of probability, or just intuitively, the difference between 100 micromorts and 1,000 micromorts doesn’t feel like very much, or 100 micromarriages or 1,000 micromarriages. It all just kind of blurs into, “This is very unlikely.” But actually, specifying it out in millionth chances makes it feel more measurable and real, like you’re clocking up these risks of death or these risks of something really good going on. And you can see the incremental progress, and also weigh opportunities and risks against one another in the same way that we do with more frequent events and the more normal things in daily life where we have more experience of things going well and badly.
**Chris Olah:** Well, I think that’s right, but I think there’s another thing that happens which is really interesting, which is, as you spend more time thinking in these units… You’ve spent so much time thinking about distance that a foot and a meter and a kilometer are… I guess for a lot of people maybe a mile… are sort of intuitive units. Similarly, I think if you think about things like this, 100 micromarriages or 1,000 micromarriages start to feel like, in some sense, meaningful things.
**Chris Olah:** To me now, I’m like, “Gosh, 1,000 micromarriages? That’s a lot of micromarriages. 10,000 micromarriages? That’s insane.” And these start to feel like meaningful things. Then if you think that you’ve… If you went to an event and you met a bunch of people and you, as you say, clocked up this many micromarriages, there’s a scale of what you typically get or something, where you can feel like, “Oh, yeah. That went really well,” even though there’s in some sense no tangible result.
**Rob Wiblin:** Yeah, and the case of the micromarriage measure, I guess that also made me update how large the benefit might be at a specific social event. Because I guess, most people end up getting married, usually to someone they like quite a bit, and they stay with them a while. So how many social events does it take for that to happen, typically? I mean, most people don’t really have time over their youth to attend more than 1,000 social events where plausibly they could meet someone. It might be a bit closer to 100 for many people. So actually, the chance of meeting someone who you might end up forming a serious relationship with at any particular social event could be maybe somewhere between 1 in 100 and 1 in 1,000. So on a gut level, that doesn’t feel super likely in any specific instance. But actually, if you go to a bunch of events, that really clocks up pretty quickly, which is why lots of people end up paired up.
**Chris Olah:** Yeah, I mean I think that it assumes that going to social events is the primary mechanism by which people meet their partner, which it might not be. It could be that it’s through interaction by friends, or online dating, things like that.
**Chris Olah:** But yeah, I think these things can be really significant. And I think it’s just such a life-changing thing and such a potentially dramatically positive thing. Yeah, I don’t know. I’m sometimes, maybe … or not even maybe… I’m definitely a pretty weird person, so maybe this isn’t useful to other people. But I’m often surprised how much time, I don’t know, say, that I’ve spent thinking about linear algebra, and trying to really deeply think about linear algebra or other topics that are sort of these intellectual topics, compared to the amount of time that I’ve spent thinking really hard about these things that are going to shape my future in really dramatic ways. Yeah, I don’t know, it seems intuitive to me to try to think really carefully about those things.
**Rob Wiblin:** Okay, so we want to apply this concept to low-probability, high-consequence things. We’ve already got the micromort. So death, that seems like a big deal. I guess we got microCOVID. COVID’s not quite as bad as death, but it’s unpleasant. We’ve got micro-important relationship. What else could there be? What are the other big life events? I suppose potentially finding a really fantastic job or career?
**Chris Olah:** Yeah, I think that would be totally a plausible thing. Yeah, I think that totally makes sense. I sometimes found this useful for thinking about pain. So I think about the most painful thing that I’ve experienced, and then thousandths of that. And then I can use that to think about the amount of suffering that would be involved in something, so I sometimes find that useful.
**Rob Wiblin:** Yeah, that’s a little bit dark. But what about microfriendships? I think one of the things that’s delightful about the micromarriage is that it’s applying this thing that, so far, I’d only seen applied to negative things, to positive stuff as well.
**Chris Olah:** Yeah. Well, I guess a related thing is thinking hard about extreme upsides, rather than just extreme downsides. I think we often are sort of focused on avoiding extreme downsides, but there’s these really extreme upsides that matter as much or more that I think we often don’t think about very much. So yeah, finding an outstanding job, finding a partner.
**Chris Olah:** And I think one thing that’s a little awkward about that — especially about the friend thing — is I think we just have less of a vocabulary for describing… There’s sort of a difference between somebody who’s going to be your best friend for the rest of your life, or a very close friend and it’s going to be kind of life-changing for you, from someone who’s sort of a casual friend and you enjoy meeting them, but it’s not really life-changing in the same way.
**Chris Olah:** And I think marriage, hopefully, is sort of pretty clearly putting you in the category of this person who’s life-changing and hopefully really dramatically changes your life for the better. And yeah, I think the same thing with a job, where you sort of need to specify the really outstanding job. I don’t know, microbestfriend or something is actually a really interesting unit. It’s a sort of a cumbersome thing to say, but I think the closest friends that I have have really dramatically changed my life for the better and are extremely precious to me, and I’d go through a lot for a one in 1,000 chance to make another friend like that.
**Rob Wiblin:** Yeah. I think as a society we maybe underrate the value of friendship. There’s a lot of things about how people’s lives are structured that cause a lot of their friendships to atrophy in their 30s and 40s and 50s, and for people to end up, if they don’t make an active effort, with a pretty small friend group. But I guess that’s potentially a topic for another day.
**Rob Wiblin:** My guest today has been Chris Olah. Thanks so much for coming back on the 80,000 Hours Podcast, Chris.
**Chris Olah:** Thank you so much, Rob. It was lovely being here.
Rob’s outro [[01:32:16]](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/?startTime=5536&btp=48126aba)
------------------------------------------------------------------------------------------------------------------------------------
**Rob Wiblin:** If you enjoyed this, and haven’t listened to last week’s episode 107 with Chris yet, I strongly encourage you to go back and give it a chance.
It explores topics like:
* What is interpretability research, and what’s it trying to solve
* How neural networks work, and how they think
* ‘Multimodal neurons’, and their implications for AI safety work
* Digital suffering
* Scaling laws
* And how nice it would be if this work could succeed
If you’re interested in using your career to work on safely guiding the development of AI like Chris — or working to solve any of the problems we discuss on the show — then you can [apply to speak with our team one-on-one](https://80000hours.org/speak-with-us/) for free. We’ve made some hires and removed our waitlist to apply for advising, so our team is keen to speak with more of you loyal podcast listeners.
They can discuss which problem to focus on, look over your plan, introduce you to mentors, and suggest roles that suit your skills. Just go to 80000hours.org/speak to learn more and apply.
The 80,000 Hours podcast is produced by Keiran Harris.
Audio mastering is by Ben Cordell.
Full transcripts are available on our website and produced by Sofia Davis-Fogel.
Thanks for joining, talk to you again soon.
Learn more
==========
[ML engineering for AI safety and robustness: a Google Brain engineer's guide to entering the field](https://80000hours.org/articles/ml-engineering-career-transition-guide/)
[Positively shaping the development of artificial intelligence](https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/)
[Research into risks from artificial intelligence](https://80000hours.org/career-reviews/artificial-intelligence-risk-research/)
[Anonymous answers: the complete collection](https://80000hours.org/articles/anonymous-answers/) |
f9efcf12-0467-4d78-a962-1a14a6fe602e | StampyAI/alignment-research-dataset/arbital | Arbital | Uncountability: Intuitive Intro
[Collections ](https://arbital.com/p/3jz) which have fewer than or the same [number of items](https://arbital.com/p/4w5) as the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.
All uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity) and some infinite collections are larger than others %%note: At least, within mathematical systems which include the [https://arbital.com/p/69b](https://arbital.com/p/69b), see the [technical](https://arbital.com/p/4zp) page for details.%%. To demonstrate this, we'll explore a graphical demonstration with tiles and paths.
## Tiles and paths

Consider, as shown above, a sidewalk that goes on forever in one direction, which is made up of equal-sized square tiles. The sidewalk is two squares across. Consider a person who walks forever on it, obeying the following rule: Each step the person takes must be to one of the two tiles immediately in front of that person; no going backwards, no skipping tiles, no going sideways, no standing in place forever. The following is the beginning of one possible path:

Now let's ask two questions:
1. How many tiles are there?
2. How many possible paths are there?
In both cases, you could just say that there are infinitely many, and that would be correct. But now let's consider a third question:
3. Is the number of tiles the same as the number of possible paths?
It turns out that there is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite [collections of things](https://arbital.com/p/3jz), and some infinite collections are larger than others. In particular, some infinite collections are *countable* (like the [set](https://arbital.com/p/3jz) of all [https://arbital.com/p/-45h](https://arbital.com/p/-45h)s), while others are *uncountable* (like the set of all [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc)s). As we will see, it can be shown that the number of tiles on our infinite sidewalk is countable, but that the number of possible paths one could take, following the rules above, is uncountable. So there are in fact *more* possible paths than there are tiles.
Let's dig into exactly what this means and why it's true.
## Pairing off
We say that two collections of things are the "same size" if you can match the items together completely: you can pair each of the things in the first collection with exactly one of the things in the second collection, in such a way that there is nothing left unpaired. For example, given two sets of three things each, we may pair them. Here is an example of such a pairing:

You might think it obvious, then, that the number of paths our person can walk is bigger than the number of tiles. We can match each tile with the path that starts on a tile the same color as it, and changes to the other color after it hits this tile. For example, we would match the third red tile with the path

It is important to note, however, that it is not sufficient that we find some matching that leaves things left over. We must show that *every* matching leaves things left over. For example, an infinite sidewalk that is one tile across has just as many tiles as an infinite sidewalk that is two tiles across, as we can see from the picture below by matching the 1R on top with the 1R on bottom, the 1B on top with the 1B on bottom, the 2R on top with the 2R on bottom, and so on.


In fact, if we were only to require that *some* matching leave extra tiles, then the number of tiles in a sidewalk that is one tile wide would not be equal to itself, for we could match the first tile with 1B (in the bottom picture above), the second tile with 2B, and so on, and we would leave over half the tiles!
In fact, even if we had a *field* of tiles that is infinite in every direction, we would still have no more tiles than if we had only a sidewalk that is one tile across. The following matching shows this:


## An unpairable path
You might wonder, given that there are so many different ways to match up infinitely many things, how we can know that there is no matching that catches everything. I will now prove that, no matter how you try to match paths (ways of walking) and tiles, you will miss some paths. Since we have already seen that the number of tiles in a sidewalk two tiles wide is the same as the number of tiles in a sidewalk one tile wide, I will show that any matching between paths and tiles in a sidewalk one tile wide misses some paths. I will do this by creating a path that does not match the path we have chosen for any tile %%note: This type of proof is known as a [https://arbital.com/p/-46z](https://arbital.com/p/-46z).%%.
Suppose we are given a matching between tiles and paths. Since we have numbered the tiles in a sidewalk one tile wide ($\fbox{1}\fbox{2}\fbox{3}\fbox{4}\fbox{5}\fbox{6}\fbox{7}\fbox{8}\overline{\underline{\vphantom{1234567890}\cdots}}$), we also have a numbering of the paths in our matching. Consider a new path that differs from the [$n^\text{th}$](https://arbital.com/p/nth) path in our matching on the $n^\text{th}$ tile, that is, the $n^\text{th}$ step that you take. For example, if our first eight paths are

then our new path is

Clearly, this path is not any of the ones in the matching, because it differs from every single path at some point (in particular, it differs from the $n^\text{th}$ path on the $n^\text{th}$ tile, the $n^\text{th}$ step you take, which is highlighted in yellow).
Because we can repeat this exercise no matter what matching we're given, that means *any* possible matching will always leave out at least one path. Thus, the number of paths a person can take must be strictly larger than the number of tiles in the sidewalk.
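For readers who like to see the construction mechanically, here is a minimal Python sketch of the diagonal argument (our own encoding: a path is represented as a function from step number to 0 or 1, the two tile choices, and the example matching is purely hypothetical):

```python
def diagonal_path(matching):
    """Given matching: tile n -> path (a function from step number to 0 or 1),
    build the path that flips the n-th step of the n-th path."""
    return lambda n: 1 - matching(n)(n)

# A hypothetical matching, just for illustration: tile n gets the path
# that chooses 1 up to step n and 0 from then on.
matching = lambda n: (lambda t: 1 if t <= n else 0)
missed = diagonal_path(matching)

# The diagonal path disagrees with the n-th path at step n, for every n,
# so it is not assigned to any tile by the matching.
assert all(missed(n) != matching(n)(n) for n in range(1, 1000))
```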
## See also
If you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!
Arbital is made by people like you; if you think you can explain a mathematical concept, then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)! |
d37de2e3-2f11-46ec-9bfe-4612d88ce2af | trentmkelly/LessWrong-43k | LessWrong | Palantir's AI models
Palantir published marketing material for their offering of AI for defense purposes. There's a video of how a military commander could order a military strike on an enemy tank with the help of LLMs.
One of the features that Palantir advertises is:
> Agents
>
> Define LLM agents to pursue specific, scoped goals.
Given military secrecy we are hearing less about Palantir's technology than we hear about OpenAI, Google, Microsoft and Facebook but Palantir is one player and likely an important one. |
f3c0ae14-2123-4f7b-a81d-fc4dbddd402a | trentmkelly/LessWrong-43k | LessWrong | Let Values Drift
I occasionally run across lines of reasoning that depend on or favor the position that value drift should be avoided.
I find odd the idea of value drift, let alone the idea that value drift is bad. My intuition is that value drift is good, if anything, since it represents an update of one's values based on new evidence and greater time to compute reflective equilibrium. But rather than arguing intuition, let's explore value drift a bit before we come to any stronger conclusions.
(Fair warning, this is going to get into some deep philosophical territory, be pretty unapologetic about it, and assume you are reading carefully enough to notice what I say and not what you think I said. I'm still working some of these ideas out myself, so I don't have the fluency to provide a more accessible explanation right now. I also take some pretty big inferential jumps at times that you may not be on board with as of yet, so the later parts might feel like unjustified reasoning. I don't think that's the case, but you'll have to poke at me to help me figure out how to fill in those gaps.
In spite of all those apologies, there are some key insights here, and I'm unlikely to get clearer unless I am first more opaque, so please bear with me if you please, especially if you are interested in value as it relates to AI alignment.)
Whence drifting values?
The metaphor of drifting values is that your values are initially one place and then gradually relocate to another, like flotsam. The waves of fortune, chance, and intention combine to determine where they end up on the seas of change. In this metaphor, values are discrete, identifiable things. Linguistically, they are nouns.
When we talk of values as nouns, we are talking about the values that people have, express, find, embrace, and so on. For example, a person might say that altruism is one of their values. But what would it mean to "have" altruism as a value or for it to be one of one's values? What is the thing possessed or of one |
7736e927-add5-41c5-8bc6-1b4d0211d5c9 | trentmkelly/LessWrong-43k | LessWrong | A non-mystical explanation of insight meditation and the three characteristics of existence: introduction and preamble
Introduction
Insight meditation, enlightenment, what’s that all about?
The sequence of posts starting from this one is my personal attempt at answering that question. It grew out of me being annoyed about so much of this material seeming to be straightforwardly explainable in non-mysterious terms, but me also being unable to find any book or article that would do this to my satisfaction. In particular, I wanted something that would:
* Explain what kinds of implicit assumptions build up our default understanding of reality and how those assumptions are subtly flawed. It would then point out aspects from our experience whose repeated observation will update those assumptions, and explain how this may cause psychological change in someone who meditates.
* It would also explain how the so-called “three characteristics of existence” of Buddhism - impermanence, no-self and unsatisfactoriness - are all interrelated and connected with each other in a way your average Western science-minded, allergic-to-mysticism reader can understand.
I failed to find a resource that would do this in the way I had in mind, so then I wrote one myself.
From the outset, I want to note that I am calling this a non-mystical take on the three characteristics, rather than the non-mystical take on the three characteristics. This is an attempt to explain what I personally think is going on, and to sketch out an explanation of how various experiences and Buddhist teachings could be understandable in straightforward terms. I don’t expect this to be anything like a complete or perfect explanation, but rather one particular model that might be useful.
The main intent of this series is summarized by a comment written by Vanessa Kosoy, justifiably skeptical of grandiose claims about enlightenment that are made without further elaboration on the actual mechanisms of it:
> I think that the only coherent way to convince us that Enlightenment is real is to provide a model from a 3rd party perspective. |
81b3ed63-0f2a-463c-8112-d7d9fa21eef9 | trentmkelly/LessWrong-43k | LessWrong | What We Talk About When We Talk About Objective Functions
tl; dr: We can better understand common objective functions (reward, prediction, fitness, control) as all being related to a singular, overarching objective.
Reward? Prediction? Fitness?
In their 2021 paper Reward is enough, DeepMind researchers argue that "intelligence, and its associated abilities, can be understood as subserving the maximization of reward."
This is a response not just to the idea that Attention Is All You Need, but also to predictive processing, a theoretical framework in neuroscience where prediction-error minimization is the star of the show, rather than reward maximization.
> "The whole function of the brain is summed up in: error correction." So wrote W. Ross Ashby, the British psychiatrist and cyberneticist, some half a century ago. Computational neuroscience has come a very long way since then. There is now increasing reason to believe that Ashby's (admittedly somewhat vague) statement is correct, and that it captures something crucial about the way that spending metabolic money to build complex brains pays dividends in the search for adaptive success. In particular, one of the brain's key tricks, it now seems, is to implement dumb processes that correct a certain kind or error: error in the multi-layered prediction of input.
>
> ―Andy Clark, Whatever next? Predictive brains, situated agents, and the future of cognitive science (2013)
This battle between Reward and Prediction is an old one. The behaviorists were Team Reward. The cyberneticists were Team Prediction. Reward maximization is closely linked to the idea of fitness maximization and Darwinian evolution. Error minimization is closely linked to the overarching notion of control.
Could we possibly fit everything under the same umbrella?
Selectionism
Let's take a step back.
In 2023, a peculiar paper was published in PNAS: On the roles of function and selection in evolving systems.
> We identify universal concepts of selection—static persistence, dynamic persistence, and nov |
c65272db-ff56-421c-9b13-430564cd09ff | StampyAI/alignment-research-dataset/arbital | Arbital | Prime element of a ring
summary(Technical): Let $(R, +, \times)$ be a [ring](https://arbital.com/p/3gq) which is an [integral domain](https://arbital.com/p/5md). We say a nonzero non-unit $p \in R$ is *prime* if, whenever $p \mid ab$, it is the case that either $p \mid a$ or $p \mid b$ (or both).
A nonzero element of an [https://arbital.com/p/-5md](https://arbital.com/p/-5md) which is not a unit (i.e. has no multiplicative inverse) is *prime* if it has the property that $p \mid ab$ implies $p \mid a$ or $p \mid b$.
Equivalently, if its generated [ideal](https://arbital.com/p/ideal_ring_theory) is [prime](https://arbital.com/p/prime_ideal) in the sense that $ab \in \langle p \rangle$ implies either $a$ or $b$ is in $\langle p \rangle$.
Be aware that "prime" in ring theory does not correspond exactly to "[prime](https://arbital.com/p/4mf)" in number theory (the correct abstraction of which is [irreducibility](https://arbital.com/p/5m1)).
It is the case that they are the same concept in the ring $\mathbb{Z}$ of [integers](https://arbital.com/p/48l) ([proof](https://arbital.com/p/5mf)), but this is a nontrivial property that turns out to be equivalent to the [https://arbital.com/p/-5rh](https://arbital.com/p/-5rh) ([proof](https://arbital.com/p/alternative_condition_for_ufd)).
# Examples
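As an illustration (a standard pair of examples): in $\mathbb{Z}$, the element $5$ is prime, since whenever $5 \mid ab$, at least one of $a$ and $b$ must be a multiple of $5$. By contrast, in the ring $\mathbb{Z}[\sqrt{-5}]$, the element $2$ is [irreducible](https://arbital.com/p/5m1) but *not* prime: $2$ divides $6 = (1+\sqrt{-5})(1-\sqrt{-5})$, yet $2$ divides neither $1+\sqrt{-5}$ nor $1-\sqrt{-5}$.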
# Properties
- Primes are always [irreducible](https://arbital.com/p/5m1); a proof of this fact appears on the [page on irreducibility](https://arbital.com/p/5m1), along with counterexamples to the converse.
|
8399c17f-7623-4d86-82d6-cadce8e7cd39 | trentmkelly/LessWrong-43k | LessWrong | Open and Welcome Thread – August 2021
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here. |
0c071055-1155-436b-9d2b-35691b92a332 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Poem: There are no beautiful surfaces without a terrible depth.
The poem is from someone whose online pseudonym is atiguhya padma. I'll quote the first verse, the refrain, and the beginning of the second verse to give you enough flavor to decide if you want to follow the link. There are about 9 verses total.
> Someone looked out of their window
> And said to me: the world looks
> So beautiful, that I praise God
> Each day for this wonderful life,
> This landscape of happy creatures
> And rolling fields of growth and form.
> He obviously had not read Tennyson
> And he wasn’t an ecologist,
> For he had no firm idea of how
> Ecosystems sustain themselves.
>
> There are no beautiful surfaces
> Without a terrible depth.
>
> You said you loved me.
> And I wondered what that could mean...
Reductionist thinking can connect emotions triggered by the surfaces encountered in daily life to a larger set of concepts and predictions, and this seems to have consequences for both the thinking and the emotions that isn't often discussed and is even less often discussed well. I liked the poem because it addressed the issues pretty well and in an emotional mode, which is doubly rare.
The "no beautiful surfaces" refrain is a quote from Nietzsche which has little easily accessed online scholarship. Nothing in the first few pages of google results mentions a source for the quote so I was suspicious at first that it was even a quote by him, but references available via google books indicate that it comes from one of his notebooks. I haven't tracked down the notebook (it doesn't seem to be in gutenberg), so I'm not sure if the original source carries the specific connotations the poem attributes to the phrase, or if it is just a cool line put to good use in a new context. |
c89ceaee-0ffa-4192-876c-df77d0924072 | trentmkelly/LessWrong-43k | LessWrong | Short, Extreme, Forgotten Torture vs Death
Turns out Pascal's mugger is real, and as would be expected of someone who does Pascal's muggings, he's a jerk and likes forcing people to make impossible decisions. Also, his threats are discovered to be truthful and credible. He decides he's sick of mugging after collecting a few trillion dollars from it and wants to try something new. He takes out a gun (killing people with Matrix powers is for cowards) and forces you to make a choice.
Scenario 1: He puts the gun to your head. "I will kill you unless you let me put you through torture 3^^^3 [1] times more intense than anything you can possibly imagine. Don't worry though, it'll only last for a millisecond, and you won't remember it or suffer any trauma or anything."
Scenario 2: He puts the gun to a random stranger's head and tells you to make the choice for them. He freezes the stranger in place with his Matrix powers (2a: the stranger overhears before being frozen and decides on what they would choose, 2b: the stranger is frozen before they know what's happening) so the stranger has no way of communicating their preference to you. He will shoot if you don't make a choice.
Scenario 3: He puts the gun to a random stranger's head and tells them to make the choice. After you hear the stranger's choice (3a: they choose death, 3b: they choose torture), the mugger gives you the option to override it. He will abide by the stranger's choice if you don't make a choice.
What should you do?
In the words of the Torture vs Dust Specks question this was inspired by: I think the answer is obvious [in all scenarios]. How about you?
Footnote:
[1]: The notation is called Knuth's up-arrow notation. 3^3 is 3 times 3 times 3. 3^^3 is 3^(3^3). 3^^^3 is 3^^(3^^3). 3^^^3 is used as an arbitrarily large quantity so that there's no situation where it's "too small". |
9c89410b-beb1-4cd4-81c0-29d0e390f3ab | trentmkelly/LessWrong-43k | LessWrong | A Short Note on UDT
In my last post, I stumbled across some ideas which I thought were original, but which were already contained in UDT. I suspect that was because these ideas haven’t been given much emphasis in any of the articles I’ve read about UDT, so I wanted to highlight them here.
We begin with some definitions. Some inputs in an Input-Output map will be possible for some agents to experience, but not for others. We will describe such inputs and the situations they represent as conditionally consistent. Given a particular agent, we will call an input/situation compatible if the agent is consistent with the corresponding situation and incompatible otherwise. Likewise, we will call agents that are consistent with a conditionally consistent input/situation compatible, and those that aren’t incompatible.
We note the following points:
* UDT uses an Input-Output map instead of a Situation-Output map. It is easy to miss how important this choice is. Suppose we have an input representing a situation that is conditionally consistent. Trying to ask what an incompatible agent does in such a situation is problematic, or at least difficult, as the Principle of Explosion means that all such situations are equivalent. On the other hand, it is much easier to ask how the agent responds to a sequence of inputs representing an incompatible situation. The agent must respond somehow to such an input, even if it is by doing nothing or crashing. Situations are also modelled (via the Mathematical Intuition Function), but the point is that UDT models inputs and situations separately.
* Given the previous point, it is convenient to define an agent’s counterfactual action in an incompatible situation as its response to the input representing the situation. For all compatible situations, this produces the same action as if we’d simply asked what the agent would do in such a situation. For conditionally consistent situations the agent is incompatible with, it explains the incompatibility: any agent that would respond a cert |
5ee736f8-43e9-4a46-a4d3-a1866b91fceb | StampyAI/alignment-research-dataset/blogs | Blogs | May 2018 Newsletter
#### Updates
* New research write-ups and discussions: [Resource-Limited Reflective Oracles](https://agentfoundations.org/item?id=1793); [Computing An Exact Quantilal Policy](https://agentfoundations.org/item?id=1794)
* New at AI Impacts: [Promising Research Projects](https://aiimpacts.org/promising-research-projects/)
* MIRI research fellow Scott Garrabrant and associates Stuart Armstrong and Vanessa Kosoy are among the winners in the [second round](https://www.lesswrong.com/posts/SSEyiHaACSYDHcYZz/announcement-ai-alignment-prize-round-2-winners-and-next) of the AI Alignment Prize. First place goes to Tom Everitt and Marcus Hutter’s “[The Alignment Problem for History-Based Bayesian Reinforcement Learners](http://www.tomeveritt.se/papers/alignment.pdf).”
* Our thanks to our donors in REG’s [Spring Matching Challenge](https://reg-charity.org/spring-matching-challenge/) and to online poker players Chappolini, donthnrmepls, FMyLife, ValueH, and xx23xx, who generously matched $47,000 in donations to MIRI, plus another $250,000 to the Good Food Institute, GiveDirectly, and other charities.
#### News and links
* [OpenAI’s charter](https://blog.openai.com/openai-charter/) predicts that “safety and security concerns will reduce [their] traditional publishing in the future” and emphasizes the importance of “long-term safety” and avoiding late-stage races between AGI developers.
* Matthew Rahtz recounts [lessons learned](http://amid.fish/reproducing-deep-rl) while reproducing Christiano et al.’s “Deep Reinforcement Learning from Human Preferences.”
The post [May 2018 Newsletter](https://intelligence.org/2018/05/31/may-2018-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
a5d7f9cf-b4b7-49a9-b208-fccb54855968 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
1 Introduction
---------------
Deep neural networks (DNNs) have recently achieved widespread success in image classification ([alexnet](#bib.bib18)), face and speech recognition ([deepface](#bib.bib28); [speechrecognition](#bib.bib13)), and game playing ([silver2016mastering](#bib.bib24); [silver2017mastering](#bib.bib25)). This success motivates their application in a broader set of domains, including more safety-critical environments. This thrust makes understanding the reliability and robustness of the underlying models, let alone their resilience to manipulation by malicious actors, a central question. However, predictions made by machine learning models are often brittle. A prominent example here is the existence of adversarial examples ([szegedy](#bib.bib27)): imperceptibly modified inputs that cause state-of-the-art models to misclassify with high confidence.
There has been a long line of work on both generating adversarial examples, called *attacks* ([CW](#bib.bib4); [CW2](#bib.bib5); [anish](#bib.bib1); [logan](#bib.bib2); [uesato](#bib.bib31); [dawnsong](#bib.bib10)), and training models robust to adversarial examples, called *defenses* ([goodfellowADV](#bib.bib11); [papernot](#bib.bib22); [madry](#bib.bib20); [harini](#bib.bib14)). However, recent research has shown that most defenses are ineffective ([CW](#bib.bib4); [anish](#bib.bib1); [uesato](#bib.bib31)). Furthermore, even for defenses such as ([madry](#bib.bib20)) that have seen empirical success against many attacks, we are unable to conclude yet with certainty that they are robust to all attacks that we want these models to be resilient to.
This state of affairs gives rise to the need for *verification of networks*, i.e., the task of *formally* proving that no small perturbations of a given input can cause it to be misclassified by the network model. Although many exact verifiers (also sometimes referred to as combinatorial verifiers) have been designed to solve this problem, the verification process is often intractably slow. For example, when using the Reluplex verifier of Katz et al. ([reluplex](#bib.bib15)), even verifying a small MNIST network turns out to be computationally infeasible. Thus, addressing this intractability of exact verification is the primary goal of this work.
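To make the cost of exact verification concrete, here is a sketch of the standard mixed-integer encoding of a single ReLU used by MILP-based verifiers (the notation is ours, not taken from this paper). If the pre-activation z is known to satisfy l ≤ z ≤ u with l < 0 < u, then y = max(0, z) can be encoded with one binary variable a ∈ {0, 1}:

y ≥ 0,  y ≥ z,  y ≤ z − l(1 − a),  y ≤ u·a.

Setting a = 1 forces y = z (the active case) and a = 0 forces y = 0 (the inactive case), so each such ReLU contributes a binary variable that the solver may need to branch on, and the worst-case solve time grows exponentially in the number of these variables. If instead l ≥ 0 or u ≤ 0, the ReLU reduces to the purely linear constraint y = z or y = 0, with no binary variable at all.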
Our Contributions
Our starting point is the observation that, typically, model training and verification are decoupled and seen as two distinct steps. Even though this separation is natural, it misses a key opportunity: the ability to align these two stages. Specifically, applying the principle of *co-design* during model training is possible: training models in a way to encourage them to be simultaneously robust and easy-to-exactly-verify. This insight is the cornerstone of the techniques we develop in this paper.
In this work, we use the principle of co-design to develop training techniques that give models that are both robust and easy to verify. Our techniques rely on improving two key properties of networks: weight sparsity and ReLU stability. Specifically, we first show that natural methods for improving weight sparsity during training, such as ℓ1-regularization, give models that can already be verified much faster than is currently possible. This speedup occurs because exact verifiers generally benefit from having fewer variables in their formulations of the verification task. For instance, for exact verifiers that rely on linear programming (LP) solvers, sparser weight matrices mean fewer variables in the resulting constraints.
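For concreteness, here is a minimal sketch of adding such an ℓ1 penalty on the weight matrices to a standard training loss. PyTorch is our choice (the paper does not specify a framework), and all names below are illustrative placeholders, not names from the paper's code:

```python
import torch

def l1_penalty(model: torch.nn.Module) -> torch.Tensor:
    # Sum of absolute values of all weight matrices (biases excluded).
    return sum(p.abs().sum() for name, p in model.named_parameters()
               if "weight" in name)

# Hypothetical training step; `loss_fn`, `x`, `y`, and `l1_weight` are
# placeholders chosen for illustration:
# loss = loss_fn(model(x), y) + l1_weight * l1_penalty(model)
```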
We then focus on the major speed bottleneck of current approaches to exact verification of ReLU networks: the need to “branch,” i.e., to consider two separate cases for each ReLU (the ReLU being active or inactive). Branching drastically increases the complexity of verification. Thus, a well-optimized verifier will avoid branching on a ReLU if it can determine that the ReLU is stable, i.e., that the ReLU will always be active or always be inactive across all allowed perturbations of an input. This motivates the key goal of the techniques presented in this paper: we aim to minimize branching by maximizing the number of stable ReLUs. We call this goal *ReLU stability* and introduce a regularization technique to induce it.
Our techniques enable us to train weight-sparse and ReLU-stable networks for the MNIST and CIFAR-10 datasets that can be verified significantly faster. Specifically, by combining natural methods for inducing weight sparsity with a robust adversarial training procedure (cf. ([goodfellowADV,](#bib.bib11) ; [madry,](#bib.bib20) )), we are able to train convolutional networks for which almost 90% of inputs can be verified in a moderate amount of time (we chose a time budget of 120 seconds per input image), compared to previous verification techniques. Then, by also adding our regularization technique for inducing ReLU stability, we are able to train models that can be verified an additional 4–13x faster while maintaining state-of-the-art accuracy on MNIST. Our techniques also give rise to the first CIFAR models for which exact verifiers can prove nontrivial robustness. In particular, we achieve the following verification speed and provable robustness results for ℓ∞ norm-bound adversaries:
| Dataset | Epsilon | Provable Adversarial Accuracy | Average Solve Time (s) |
| --- | --- | --- | --- |
| MNIST | ϵ=0.1 | 94.33% | 0.49 |
| MNIST | ϵ=0.2 | 89.79% | 1.13 |
| MNIST | ϵ=0.3 | 80.68% | 2.78 |
| CIFAR | ϵ=2/255 | 45.93% | 13.50 |
| CIFAR | ϵ=8/255 | 20.27% | 22.33 |
Our network for ϵ=0.1 on MNIST achieves provable adversarial accuracies comparable with the current best results of Wong et al. ([kolter2,](#bib.bib32) ) and Dvijotham et al. ([deepmindpvt,](#bib.bib8) ), and our results for ϵ=0.2 and ϵ=0.3 achieve the best provable adversarial accuracies yet. To the best of our knowledge, we also achieve the first nontrivial provable adversarial accuracy results using exact verifiers for CIFAR-10.
Finally, our techniques allow us to improve the input to the verification process, regardless of the verifier we end up using. More precisely, our training methods have the benefit of being universal, as they can be used with any verifier. This is particularly important because research into network verification methods is still in its early stages, and our co-design methods are compatible with the best current verifiers (LP/MILP-based methods) and should be compatible with any future improvements in verification.
2 Background and Related Work
------------------------------
Exact verification of networks has been the subject of many recent works ([reluplex,](#bib.bib15) ; [ehlers,](#bib.bib9) ; [groundtruth,](#bib.bib3) ; [vincent,](#bib.bib30) ; [lomuscio,](#bib.bib19) ; [cheng,](#bib.bib6) ). To understand the context of these works, observe that for linear networks, the task of exact verification is relatively simple and can be done by solving an LP. For more complex models, the presence of nonlinear ReLUs makes verification over all perturbations of an input much more challenging. This is because ReLUs can be active or inactive depending on the input, which can force exact verifiers to “branch” and consider these two cases separately. The number of such cases that verifiers have to consider might grow exponentially with the number of ReLUs, so verification time can grow exponentially in the worst case. Katz et al. ([reluplex,](#bib.bib15) ) further illustrated the difficulty of exact verification by proving that it is NP-complete. In recent years, formal verification methods were developed to verify networks. Most of these methods use satisfiability modulo theory (SMT) solvers ([reluplex,](#bib.bib15) ; [ehlers,](#bib.bib9) ; [groundtruth,](#bib.bib3) ) or LP and Mixed-Integer Linear Programming (MILP) solvers ([vincent,](#bib.bib30) ; [lomuscio,](#bib.bib19) ; [cheng,](#bib.bib6) ). However, all of them are limited by the same issue of scaling poorly with the number of ReLUs in a network, making them prohibitively slow in practice for even medium-sized models.
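To make the branching concrete: one standard way MILP-based verifiers encode a single ReLU z = max(^z, 0) with precomputed bounds ^l ≤ ^z ≤ ^u, where ^l < 0 < ^u, is the “big-M” mixed-integer formulation sketched below. This is a textbook-style sketch, not the exact constraint set of any particular verifier:

^z ≤ z,   0 ≤ z,   z ≤ ^z − ^l(1 − a),   z ≤ ^u⋅a,   a ∈ {0, 1}

Setting the binary variable a = 1 forces the active case z = ^z, while a = 0 forces z = 0. If instead ^l ≥ 0 or ^u ≤ 0, the ReLU's phase is fixed, the binary variable is unnecessary, and no branching occurs.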
One recent approach for dealing with the inefficiency of exact verifiers is to focus on certification methods ([kolter,](#bib.bib17) ; [kolter2,](#bib.bib32) ; [deepmindpvt,](#bib.bib8) ; [percy,](#bib.bib23) ; [gehr,](#bib.bib21) ; [sinha,](#bib.bib26) ). (These works use both “verification” and “certification” to describe their methods; for clarity, we use “certification” for their approaches and “verification” for exact verification approaches.) In contrast to exact verification, these methods do not solve the verification task directly; instead, they rely on solving a relaxation of the exact verification problem by overapproximating the adversarial polytope, or the space of outputs of a network for a region of possible inputs. Certification approaches train networks in a specific manner to make certifying those models easier, and they can attain provable adversarial accuracy results quickly. However, certification is fundamentally different from verification in two primary ways. First, it involves solving a relaxation of the original verification problem. As a result, certification methods can fail to certify many inputs that are actually robust to perturbations – only exact verifiers, given enough time, can give conclusive answers on robustness for every single input. Second, certification approaches fall under the paradigm of co-training, where models must be trained and optimized for a specific certification method in order for that certification step to work well. When used as a black box on arbitrary models, the certification step can yield a high rate of false negatives. For example, ([percy,](#bib.bib23) ) found that their certification method was significantly less effective when used on a model trained using ([kolter,](#bib.bib17) )’s training method, and vice versa. In contrast, we design our methods to be universal: they can be combined with any standard training procedure for networks and will improve exact verification speed for any LP/MILP-based exact verifier. Like most of the certification methods, our technique can be made to have very little training-time overhead.
3 Training Verifiable Network Models
-------------------------------------
We begin by discussing the task of verifying a network, and identify two key properties of networks that lead to improved verification speed: weight sparsity and so-called ReLU stability. We then use natural regularization methods for inducing weight sparsity as well as a new regularization method for inducing ReLU stability. Finally, we demonstrate that these methods significantly speed up verification while maintaining state-of-the-art accuracy.
### 3.1 Verifying Adversarial Robustness of Network Models
Deep neural network models. Our focus will be on the most common architecture for state-of-the-art models: k-layer fully-connected feed-forward DNN classifiers with ReLU non-linearities (note that this encompasses common convolutional network architectures, because convolutional layers can be represented as fully-connected layers). Such models can be viewed as a function f(⋅,W), where W=(W1,W2,…,Wk−1) represents the weight matrices of each layer. For an input x, the output f(x,W) of the DNN is defined as:
z0 = x                                       (1)
^zi = zi−1 Wi + bi   for i = 1, 2, …, k−1    (2)
zi = max(^zi, 0)   for i = 1, 2, …, k−2      (3)
f(x, W) = ^zk−1                              (4)
Here, for each layer i, we let ^zij denote the jth ReLU pre-activation and let ^zij(x) denote the value of ^zij on an input x. ^zk−1 is the final (output) layer, with one output unit for each possible label (the logits). The network makes predictions by selecting the label with the largest logit.
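For concreteness, a minimal sketch of this forward pass (PyTorch is our choice; the paper does not prescribe a framework), following the row-vector convention of equation (2):

```python
import torch

def forward(x, weights, biases):
    # weights = [W1, ..., Wk-1], biases = [b1, ..., bk-1]; no ReLU is
    # applied after the final (logit) layer, per equations (3)-(4).
    z = x                                         # z0 = x, eq. (1)
    for i, (W, b) in enumerate(zip(weights, biases)):
        z_hat = z @ W + b                         # pre-activation, eq. (2)
        last = i == len(weights) - 1
        z = z_hat if last else torch.relu(z_hat)  # eq. (3) / eq. (4)
    return z                                      # logits; predict the argmax
```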
Adversarial robustness. For a network to be reliable, it should make predictions that are robust – that is, it should predict the same output for inputs that are very similar. Specifically, we want the DNN classifier’s predictions to be robust to a set Adv(x) of possible adversarial perturbations of an input x. We focus on ℓ∞ norm-bound adversarial perturbations, where Adv(x)={x′:||x′−x||∞≤ϵ} for some constant ϵ, since it is the most common one considered in adversarial robustness and verification literature (thus, it currently constitutes a canonical benchmark).
Verification of network models. For an input x with correct label y, a perturbed input x′ can cause a misclassification if it makes the logit of some incorrect label y∗ larger than the logit of y on x′. We can thus express the task of finding an adversarial perturbation as the following optimization problem:
minimize over x′, y∗:   f(x′, W)y − f(x′, W)y∗
subject to:             x′ ∈ Adv(x)
If the optimal value of the above optimization problem is negative, then an adversarial perturbation exists – otherwise, no adversarial perturbation can exist.
Adversarial accuracies. We define the *true adversarial accuracy* of a model to be the fraction of test set inputs for which the model is robust to all allowed perturbations. By definition, evaluations against specific adversarial attacks like PGD or FGSM provide an upper bound on this accuracy, while certification methods provide lower bounds. Given sufficient time for each input, an exact verifier can prove robustness to perturbations, or find a perturbation on which the network misclassifies, and thus exactly determine the true adversarial accuracy. This is in contrast to certification methods, which solve a relaxation of the verification problem and cannot exactly determine the true adversarial accuracy no matter how much time they have.
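For concreteness, a minimal sketch of the kind of ℓ∞ PGD attack used for such upper-bound evaluations, assuming a differentiable `model`, cross-entropy loss, and inputs in [0, 1]; the step size here is an illustrative choice, not the paper's:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=0.01, steps=40):
    # Projected gradient ascent on the loss within Adv(x), the
    # l-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto Adv(x)
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in valid pixel range
    return x_adv.detach()
```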
However, such exact verification may take impractically long for certain inputs, so we instead compute the provable adversarial accuracy, which we define as the fraction of test set inputs for which the verifier can prove robustness to perturbations within an allocated time budget (timeout). Similarly to certifiable accuracy, this accuracy also constitutes a lower bound on the true adversarial accuracy. A model can thus, e.g., have high true adversarial accuracy and low provable adversarial accuracy if the verification step on this network is too slow and often fails to complete before timeout.
In our experiments, we use the MILP exact verifier of Tjeng et al. ([vincent,](#bib.bib30) ), as it is both open source and the fastest exact verifier we are aware of.
### 3.2 Weight Sparsity
The first property of network models that we want to improve in order to speed up exact verification of those models is weight sparsity. Weight sparsity is important for verification speed because many exact verifiers rely on formulating and solving LP or MILP systems, which benefit from having fewer variables. We use two natural regularization methods for improving weight sparsity: ℓ1-regularization and small weight pruning. We also show mathematically that adversarial training alone improves weight sparsity.
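A minimal sketch of small weight pruning as the term is used here; the threshold value is a hypothetical choice for illustration, not a number taken from the paper:

```python
import torch

@torch.no_grad()
def prune_small_weights(model: torch.nn.Module, threshold: float = 1e-3) -> None:
    # Zero out weight entries whose magnitude falls below `threshold`, in place.
    for name, p in model.named_parameters():
        if "weight" in name:
            p[p.abs() < threshold] = 0.0
```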
#### 3.2.1 Impact of Weight Sparsity Improvements on Verification Speed
We demonstrate that improving weight sparsity via ℓ1-regularization and small weight pruning significantly improves verification speed – see Table [1](#S3.T1 "Table 1 ‣ 3.2.1 Impact of Weight Sparsity Improvements on Verification Speed ‣ 3.2 Weight Sparsity ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"). In particular, those two natural methods are vital to speeding up verification, as verifying even small MNIST networks is almost completely intractable without them. Specifically, the verifier can only prove robustness of an adversarially-trained model on 19% of inputs with a one hour budget per input, while the verifier can prove robustness of an adversarially-trained model with ℓ1-regularization and small weight pruning on 89.13% of inputs with a 120 second budget per input.
| Dataset | Epsilon | Row | Training Method | Test Set Accuracy | Provable Adversarial Accuracy | Average Solve Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| MNIST | ϵ=0.1 | 1 | Adversarial Training | 99.17% | 19.00% | 2970.43 |
| MNIST | ϵ=0.1 | 2 | +ℓ1-Regularization | 99.00% | 82.17% | 21.99 |
| MNIST | ϵ=0.1 | 3 | +Small Weight Pruning | 98.99% | 89.13% | 11.71 |
| MNIST | ϵ=0.1 | 4 | +ReLU Pruning (Control) | 98.94% | 91.58% | 6.43 |
Table 1: Improvement in provable adversarial accuracy and verification solve times when incrementally adding natural regularization methods for improving weight sparsity and ReLU stability into the model training procedure, before verification occurs. Each row represents the addition of another natural method – for example, Row 3 uses adversarial training, ℓ1-regularization, and small weight pruning. Row 4 adds ReLU pruning (see Appendix [A](#A1 "Appendix A Natural Improvements ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")), a natural technique for ReLU stability. Row 4 is the control model for MNIST and ϵ=0.1 that we present again in comparisons in Tables [2](#S3.T2 "Table 2 ‣ 3.3.4 Impact of ReLU Stability Improvements on Provable Adversarial Accuracy ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability") and [3](#S4.T3 "Table 3 ‣ 4.1 Experiments on MNIST and CIFAR ‣ 4 Experiments ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"). We used a 3600-second (instead of 120-second) timeout for Row 1 and only verified the first 100 images (out of 10000) because verification took too long.
#### 3.2.2 Adversarial Training and Weight Sparsity
It is worth noting that adversarial training against ℓ∞ norm-bound adversaries alone already makes networks easier to verify by implicitly improving weight sparsity. Indeed, this can be seen clearly in the case of linear networks. Recall that a linear network can be expressed as f(x)=xW+b (using the row-vector convention of equation (2)). Thus, an ℓ∞ norm-bound perturbation x′ of the input x will produce the output
f(x′) = x′W + b
      = xW + b + (x′ − x)W
      ≤ f(x) + ϵ||W||1
where the last inequality is just Hölder’s inequality. In order to limit the adversary’s ability to perturb the output, adversarial training needs to minimize the ||W||1 term, which is equivalent to ℓ1-regularization and is known to promote weight sparsity ([lasso,](#bib.bib29) ) (relatedly, Goodfellow et al. [goodfellowADV](#bib.bib11) already pointed out that adversarial attacks against linear networks will be stronger when the ℓ1-norm of the weight matrices is higher).
Even in the case of nonlinear networks, adversarial training has experimentally been shown to improve weight sparsity. For example, models trained in [madry](#bib.bib20) and [kolter](#bib.bib17) often learn many weight-sparse layers, and we observed similar trends in the models we trained. It is important to note, however, that while adversarial training alone does improve weight sparsity, it is not sufficient by itself. Additional regularization like ℓ1-regularization and small weight pruning further promotes weight sparsity and gives rise to networks that are much easier to verify.
### 3.3 ReLU Stability
Next, we target the primary speed bottleneck of exact verification by reducing the number of ReLUs the verifier has to branch on. This corresponds to the notion of ReLU stability. Before we describe our methodology, we formally define ReLU stability.
Given an input x, a set of allowed perturbations Adv(x), and a ReLU, exact verifiers may need to branch based on the possible pre-activations of the ReLU, namely ^zij(Adv(x))={^zij(x′):x′∈Adv(x)} (cf. ([2](#S3.E2 "(2) ‣ 3.1 Verifying Adversarial Robustness of Network Models ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"))). If there exist two possible perturbations x′,x′′ in the adversarial set Adv(x) such that sign(^zij(x′))≠sign(^zij(x′′)), then the verifier has to consider that for some perturbed inputs the ReLU is active (zij=^zij) and for other perturbed inputs inactive (zij=0). The more such cases the verifier faces, the more branches it has to consider, causing the complexity of verification to increase exponentially. Intuitively, a model with 1000 ReLUs among which only 100 ReLUs require branching will likely be much easier to verify than a model with 200 ReLUs that all require branching.
Thus, it is advantageous for the verifier if, on an input x with allowed perturbation set Adv(x), the number of ReLUs such that
sign(^zij(x′)) = sign(^zij(x))   ∀ x′ ∈ Adv(x)    (5)
is maximized. We call a ReLU for which ([5](#S3.E5 "(5) ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")) holds on an input x a *stable ReLU* on that input. If ([5](#S3.E5 "(5) ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")) does not hold, then the ReLU is an *unstable ReLU*.
Directly computing whether a ReLU is stable on a given input x is difficult because doing so would involve considering all possible values of ^zij(Adv(x)). Instead, exact verifiers compute an upper bound ^uij and a lower bound ^lij of ^zij(Adv(x)). If 0≤^lij or ^uij≤0, they can replace the ReLU with the identity function or the zero function, respectively. Otherwise, if ^lij<0<^uij, these verifiers then determine that they need to “branch” on that ReLU. Thus, we can rephrase ([5](#S3.E5 "(5) ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")) as
sign(^uij) = sign(^lij)    (6)
We will discuss methods for determining these upper and lower bounds ^uij, ^lij in Section [3.3.2](#S3.SS3.SSS2 "3.3.2 Estimating ReLU Upper and Lower Bounds on Activations ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
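Given such bounds, checking condition (6) reduces to a sign comparison; a minimal sketch (the bounds are assumed precomputed, e.g. by the interval arithmetic of Section 3.3.2):

```python
import torch

def count_unstable_relus(l_hat: torch.Tensor, u_hat: torch.Tensor) -> int:
    # Per (6): a ReLU with bounds l_hat <= z_hat <= u_hat must be branched on
    # (is unstable) exactly when l_hat < 0 < u_hat.
    return int(((l_hat < 0) & (u_hat > 0)).sum().item())
```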
#### 3.3.1 A Regularization Technique for Inducing ReLU Stability: RS Loss
Equation ([6](#S3.E6 "(6) ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")) tells us that a function that would indicate exactly when a ReLU is stable is F∗(^uij,^lij)=sign(^uij)⋅sign(^lij). Thus, it would be natural to use this function as a regularizer. However, this function is non-differentiable and if used in training a model, provides no useful gradients during back-propagation. Thus, we use the following smooth approximation of F∗ (see Fig. [1](#S3.F1 "Figure 1 ‣ 3.3.1 A Regularization Technique for Inducing ReLU Stability: RS Loss ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")) which provides the desired gradients:
F(^uij, ^lij) = −tanh(1 + ^uij ⋅ ^lij)
Figure 1: Plot and contour plot of the function F(x,y) = −tanh(1 + x⋅y). (a) Plot. (b) Contour plot.
We call the corresponding objective RS Loss, and show in Fig. [2](#S3.F2 "Figure 2 ‣ 3.3.3 Impact of ReLU Stability Improvements on Verification Speed ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability") that using this loss function as a regularizer effectively decreases the number of unstable ReLUs. RS Loss thus encourages *ReLU stability*, which, in turn, speeds up exact verification (see Fig. [2](#S3.F2 "Figure 2 ‣ 3.3.3 Impact of ReLU Stability Improvements on Verification Speed ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")).
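A minimal sketch of the RS Loss term for one layer's pre-activation bounds follows; how `l_hat` and `u_hat` are obtained is covered in Section 3.3.2, and the combined objective below is a hedged reading of the setup, with `rs_weight`, `task_loss`, and `layer_bounds` as placeholders:

```python
import torch

def rs_loss(l_hat: torch.Tensor, u_hat: torch.Tensor) -> torch.Tensor:
    # Smooth surrogate for -sign(u)*sign(l): large when the bounds straddle
    # zero (unstable ReLU), small when they share a sign (stable ReLU).
    return -torch.tanh(1.0 + u_hat * l_hat).sum()

# Hypothetical combined objective, summing over layers:
# total_loss = task_loss + rs_weight * sum(rs_loss(l, u) for l, u in layer_bounds)
```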
#### 3.3.2 Estimating ReLU Upper and Lower Bounds on Activations
A key aspect of using RS Loss is determining the upper and lower bounds ^uij, ^lij for each ReLU (cf. ([6](#S3.E6 "(6) ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"))). The bounds for the input z0 (cf. ([1](#S3.E1 "(1) ‣ 3.1 Verifying Adversarial Robustness of Network Models ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"))) are simple – for the input x, we know x−ϵ≤z0≤x+ϵ, so ^l0=x−ϵ and ^u0=x+ϵ. For all subsequent zij, we estimate bounds using either the naive interval arithmetic (IA) approach described in [vincent](#bib.bib30) ; [deepmindpvt](#bib.bib8) or an improved version of it. The improved version gives tighter estimates but uses more memory and training time, and thus is most effective on smaller networks. We present the details of naive IA and improved IA in Appendix [B](#A2 "Appendix B Interval Arithmetic ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
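For illustration, a sketch of naive IA through one linear layer, using the standard interval-propagation rule (this is the textbook rule; the improved variant used here is described in Appendix B):

```python
import torch

def ia_linear(l, u, W, b):
    # Propagate elementwise bounds l <= z <= u through z_hat = z @ W + b:
    # the lower bound takes l on positive weights and u on negative ones,
    # and vice versa for the upper bound.
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    l_out = l @ W_pos + u @ W_neg + b
    u_out = u @ W_pos + l @ W_neg + b
    return l_out, u_out

# After a ReLU, the bounds become (l_out.clamp(min=0), u_out.clamp(min=0)).
```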
Interval arithmetic approaches can be implemented relatively efficiently and work well with back-propagation because they only involve matrix multiplications. This contrasts with how exact verifiers compute these bounds, which usually involves solving LPs or MILPs. Because its bounds are looser than those computed by exact verifiers, interval arithmetic overestimates the number of unstable ReLUs. This means that minimizing the number of unstable ReLUs based on IA bounds minimizes an upper bound on the number of unstable ReLUs determined by exact verifiers. In particular, IA will properly penalize every ReLU that an exact verifier would deem unstable.
Improved IA performs well in practice, overestimating the number of unstable ReLUs by less than 0.4% in the first 2 layers of MNIST models and by less than 36.8% (compared to 128.5% for naive IA) in the 3rd layer. Full experimental results are available in Table [4](#A2.T4 "Table 4 ‣ B.3 Experimental Results on Improved IA and Naive IA ‣ Appendix B Interval Arithmetic ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability") of Appendix [B.3](#A2.SS3 "B.3 Experimental Results on Improved IA and Naive IA ‣ Appendix B Interval Arithmetic ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
#### 3.3.3 Impact of ReLU Stability Improvements on Verification Speed
We provide experimental evidence that RS Loss regularization improves ReLU stability and speeds up average verification times by more than an order of magnitude in Fig. [2](#S3.F2 "Figure 2 ‣ 3.3.3 Impact of ReLU Stability Improvements on Verification Speed ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"). To isolate the effect of RS Loss, we compare MNIST models trained in exactly the same way other than the weight on RS Loss. When comparing a network trained with a RS Loss weight of 5e−4 to a network with a RS Loss weight of 0, the former has just 16% as many unstable ReLUs and can be verified 65x faster. The caveat here is that the former has lower test set accuracy.
We also compare verification speed with and without RS Loss on MNIST networks for different values of ϵ (0.1, 0.2, and 0.3) in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3.3 Impact of ReLU Stability Improvements on Verification Speed ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability"). We choose RS Loss weights that cause almost no test set accuracy loss in this case, and we still observe a 4–13x speedup from RS Loss. For CIFAR, RS Loss gives a smaller speedup of 1.6–3.7x (See Appendix [C](#A3 "Appendix C Full Experimental Verification Results ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")).

Figure 2: Average number of unstable ReLUs by layer and average verification solve times of 6 networks trained with different weights on RS Loss for MNIST and ϵ=0.1. Averages are taken over all 10000 MNIST test set inputs. Both metrics improve significantly with increasing RS Loss weight. An RS Loss weight of 0 corresponds to the control network, while an RS Loss weight of 0.00012 corresponds to the “+RS” network for MNIST and ϵ=0.1 in Tables [2](#S3.T2 "Table 2 ‣ 3.3.4 Impact of ReLU Stability Improvements on Provable Adversarial Accuracy ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability") and [3](#S4.T3 "Table 3 ‣ 4.1 Experiments on MNIST and CIFAR ‣ 4 Experiments ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").

Figure 3: Improvement in the average time taken by a verifier to solve the verification problem after adding RS Loss to the training procedure, for different ϵ on MNIST. The weight on RS Loss was chosen so that the “+RS” models have very similar test set accuracies to the control models.
#### 3.3.4 Impact of ReLU Stability Improvements on Provable Adversarial Accuracy
As the weight on the RS Loss used in training a model increases, the ReLU stability of the model will improve, speeding up verification and likely improving provable adversarial accuracy. However, like most regularization methods, placing too much weight on RS Loss can decrease the model capacity, potentially lowering both the true adversarial accuracy and the provable adversarial accuracy. Therefore, it is important to choose the weight on RS Loss carefully to obtain both high provable adversarial accuracy and faster verification speeds.
For each dataset and each value of ϵ, we train two networks. One is a “Control” network that uses all of the natural improvements for inducing both weight sparsity (ℓ1-regularization and small weight pruning) and ReLU stability (ReLU pruning - see Appendix [A](#A1 "Appendix A Natural Improvements ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")). The second is a “+RS” network that uses RS Loss in addition to all of the same natural improvements. This lets us isolate the incremental effect of adding RS Loss to the training procedure.
In addition to attaining a 4–13x speedup in MNIST verification times (see Fig. [3](#S3.F3 "Figure 3 ‣ 3.3.3 Impact of ReLU Stability Improvements on Verification Speed ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability")), we achieve higher provable adversarial accuracy in every single setting when using RS Loss. This is especially notable for the hardest verification problem we tackle – proving robustness to perturbations with ℓ∞ norm-bound 8/255 on CIFAR-10 – where adding RS Loss nearly triples the provable adversarial accuracy from 7.09% to 20.27%. This improvement is primarily due to verification speedup induced by RS Loss, which allows the verifier to finish proving robustness for far more inputs within the 120 second time limit. These results are shown in Table [2](#S3.T2 "Table 2 ‣ 3.3.4 Impact of ReLU Stability Improvements on Provable Adversarial Accuracy ‣ 3.3 ReLU Stability ‣ 3 Training Verifiable Network Models ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
| | MNIST, ϵ=0.1 | MNIST, ϵ=0.2 | MNIST, ϵ=0.3 | CIFAR, ϵ=2/255 | CIFAR, ϵ=8/255 |
| --- | --- | --- | --- | --- | --- |
| Control | 91.58 | 86.45 | 77.99 | 45.53 | 7.09 |
| +RS | 94.33 | 89.79 | 80.68 | 45.93 | 20.27 |
Table 2: Provable Adversarial Accuracies for the control and “+RS” networks in each setting.
4 Experiments
--------------
### 4.1 Experiments on MNIST and CIFAR
In addition to the experimental results already presented, we compare our control and “+RS” networks with the best available results presented in the certifiable defenses of Wong et al. [kolter2](#bib.bib32) and Dvijotham et al. [deepmindpvt](#bib.bib8) . We compare their test set accuracy, PGD adversarial accuracy (an evaluation of robustness against a strong 40-step PGD adversarial attack), and provable adversarial accuracy. Additionally, to show that our method can scale to larger architectures, we also train and verify a “+RS (Large)” network for each dataset and ϵ. These results are presented in Table [3](#S4.T3 "Table 3 ‣ 4.1 Experiments on MNIST and CIFAR ‣ 4 Experiments ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
In terms of provable adversarial accuracies, on MNIST, our results are significantly better than those of [kolter2](#bib.bib32) and [deepmindpvt](#bib.bib8) for larger perturbations of ϵ=0.3, and comparable for ϵ=0.1. On CIFAR-10, our method is slightly less effective – perhaps indicating that more unstable ReLUs are necessary to properly learn a robust CIFAR classifier. We also experienced many more instances of the verifier reaching its allotted 120 second time limit on CIFAR, especially for the less ReLU stable control networks. Full details are available in Appendix [C](#A3 "Appendix C Full Experimental Verification Results ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
| Dataset | Epsilon | Training Method | Test Set Accuracy | PGD Adversarial Accuracy | Provable/Certifiable Adversarial Accuracy |
| --- | --- | --- | --- | --- | --- |
| MNIST | ϵ=0.1 | Control | 98.94% | 95.12% | 91.58% |
| MNIST | ϵ=0.1 | +RS | 98.68% | 95.13% | 94.33% |
| MNIST | ϵ=0.1 | +RS (Large)\* | 98.95% | 96.58% | 95.60% |
| MNIST | ϵ=0.1 | Wong et al. | 98.92% | - | 96.33% |
| MNIST | ϵ=0.1 | Dvijotham et al. | 98.80% | 97.13% | 95.56% |
| MNIST | ϵ=0.2 | Control | 98.40% | 93.14% | 86.45% |
| MNIST | ϵ=0.2 | +RS | 98.10% | 93.14% | 89.79% |
| MNIST | ϵ=0.2 | +RS (Large)\* | 98.21% | 94.19% | 89.10% |
| MNIST | ϵ=0.3 | Control | 97.75% | 91.64% | 77.99% |
| MNIST | ϵ=0.3 | +RS | 97.33% | 92.05% | 80.68% |
| MNIST | ϵ=0.3 | +RS (Large)\* | 97.54% | 93.25% | 59.60% |
| MNIST | ϵ=0.3 | Wong et al. | 85.13% | - | 56.90% |
| CIFAR | ϵ=2/255 | Control | 64.64% | 51.58% | 45.53% |
| CIFAR | ϵ=2/255 | +RS | 61.12% | 49.92% | 45.93% |
| CIFAR | ϵ=2/255 | +RS (Large)\* | 61.41% | 50.61% | 41.40% |
| CIFAR | ϵ=2/255 | Wong et al. | 68.28% | - | 53.89% |
| CIFAR | ϵ=8/255 | Control | 50.69% | 31.28% | 7.09% |
| CIFAR | ϵ=8/255 | +RS | 40.45% | 26.78% | 20.27% |
| CIFAR | ϵ=8/255 | Wong et al. | 28.67% | - | 21.78% |
| CIFAR | ϵ=8/255 | Dvijotham et al.\*\* | 48.64% | 32.72% | 26.67% |
Table 3: Comparison of test set accuracy, PGD adversarial accuracy, and provable adversarial accuracy of networks trained with and without RS Loss regularization. We also provide the best available certifiable adversarial accuracy and PGD adversarial accuracy of any single models from Wong et al. ([kolter2,](#bib.bib32) ) and Dvijotham et al. ([deepmindpvt,](#bib.bib8) ) for comparison, and highlight the best provable accuracy for each ϵ.
\*The provable adversarial accuracy for “+RS (Large)” is only computed for the first 1000 images because the verifier performs more slowly on larger models.
\*\*Dvijotham et al. ([deepmindpvt,](#bib.bib8) ) actually uses a slightly smaller ϵ=0.03=7.65/255 for CIFAR.
### 4.2 Experimental Methods and Details
Network Training Details
In our experiments, we use robust adversarial training ([goodfellowADV,](#bib.bib11) ) against a strong adversary, as done in ([madry,](#bib.bib20) ), to train various DNN classifiers. We introduced a small tweak in which we increased the adversary strength linearly from 0.01 to ϵ over the first half of training and kept it at ϵ for the second half. We did this to improve convergence of the training process, following the prior examples of [kolter](#bib.bib17) ; [deepmindpvt](#bib.bib8) , which use a similar training schedule. For MNIST, we trained for 70 epochs using the Adam optimizer ([adam,](#bib.bib16) ) with a learning rate of 1e−4 and a batch size of 32. For CIFAR, we trained for 250 epochs using the Adam optimizer with a learning rate of 1e−4. When using naive IA, we used a batch size of 128, and when using improved IA, we used a batch size of 16. We used a smaller batch size in the latter case because improved IA incurs high RAM usage during training. To speed up training on CIFAR, we only added in RS Loss regularization in the last 20% of the training process. Using this same sped-up training method on MNIST did not significantly affect the results.
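A sketch of that adversary-strength schedule, as a direct reading of the description above; the per-epoch granularity is our assumption:

```python
def adversary_eps(epoch: int, total_epochs: int, eps: float,
                  eps_start: float = 0.01) -> float:
    # Ramp linearly from eps_start to eps over the first half of training,
    # then hold at eps for the second half.
    half = total_epochs / 2
    if epoch >= half:
        return eps
    return eps_start + (eps - eps_start) * (epoch / half)
```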
For each setting (ϵ=0.1, 0.2, 0.3 on MNIST and ϵ=2/255, 8/255 on CIFAR-10), we find a suitable weight on RS Loss via line search. The same weight is used for each ReLU. The five weights we chose, in the same order, are 12e−5, 1e−4, 12e−5, 1e−3, and 2e−3.
During training, we used improved IA for ReLU bound estimation for “+RS” models and naive IA for “+RS (Large)” models because of memory constraints. We also train “+RS” models using naive IA to show that the basic version of our technique for ReLU stability works and has small training-time overhead – full details on “+RS (Naive IA)” networks are in Appendix [C](#A3 "Appendix C Full Experimental Verification Results ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
Network Architecture Details
For ease of comparison, we trained our networks using the same convolutional DNN architecture as in [kolter](#bib.bib17) ; [kolter2](#bib.bib32) ; [deepmindpvt](#bib.bib8) . This architecture uses a 16-filter 2x2 strided convolution as its first layer, a 32-filter 2x2 strided convolution as its second layer, and a 100 hidden unit fully connected layer as its third. The final layer is a fully connected layer that maps the 100 units to 10 output units. We used convolutional filters of size 5x5 instead of 4x4, which is the original filter size used in ([kolter,](#bib.bib17) ).
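A sketch of that small architecture in PyTorch, assuming 28x28 single-channel MNIST inputs; the padding values are our assumption to make the spatial dimensions work out, as they are not stated above:

```python
import torch.nn as nn

small_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # 16 filters, 28x28 -> 14x14
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # 32 filters, 14x14 -> 7x7
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 100),                             # 100 hidden units
    nn.ReLU(),
    nn.Linear(100, 10),                                     # logits for 10 classes
)
```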
We also train a “large” architecture, also following [kolter2](#bib.bib32) , for each setting. It has 4 convolutional layers with 32, 32, 64, and 64 filters, followed by 2 fully connected layers with 512 hidden units each before the final layer that maps 512 units to 10 output units. We were unable to perform verification experiments for even larger models or for the hardest case of ϵ=8/255 on CIFAR due to the high RAM usage of ([vincent,](#bib.bib30) )’s verifier for large models.
Verifier Details
We used the most up-to-date version of the exact verifier from [vincent](#bib.bib30) with the default settings of the code. We allotted 120 seconds for verification of each input datapoint using the default model build settings. We ran our experiments using the commercial Gurobi Solver (version 7.5.2), and model solves were parallelized over 8 CPU cores on Intel Xeon 2.20GHz processors. We used computers with 8–32GB of RAM, depending on the size of the model being verified. All computers used are part of the CSAIL OpenStack network.
When verifying an image, two steps occur: first, the model-build step, and second, the step of solving the verification problem. We focus on reporting solve times from the verifier because they are most directly related to the task of verification itself. All build times for the control and “+RS” models on MNIST that we presented were between 4 and 10 seconds, and full results on build times are also presented in Appendix [C](#A3 "Appendix C Full Experimental Verification Results ‣ Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability").
5 Conclusion
-------------
In this paper, we use the principle of co-design to develop training methods that emphasize verification as a goal, and we show that they make verifying the trained model much faster. We first demonstrate that natural regularization methods already turn out to make the exact verification problem significantly more tractable. Subsequently, we introduce the notion of ReLU stability for networks, present a method of improving a network’s ReLU stability, and show that this improvement makes verification an additional 4–13x faster. Our method is universal, as it can be added to any training procedure and should speed up any exact verification procedure, especially LP/MILP-based verification methods.
Prior to our work, exact verification seemed intractable for all but the smallest models. Thus, our work is an important step toward reliable models that can be proven to be robust, and our techniques can help scale verification to even larger networks.
Many of our methods compress our networks into more compact, simpler forms. We hypothesize that the reason that regularization methods like RS Loss can still achieve very high accuracy is that most models are overparametrized in the first place. There exist clear parallels between our methods and techniques in model compression ([modelcompressionmanycitations,](#bib.bib12) ; [modelcompressionsurvey,](#bib.bib7) ) – therefore, we believe that drawing upon additional techniques from model compression can further improve the ease-of-verification of networks.
We also expect that there exist objectives other than weight sparsity and ReLU stability that are important for verification speed. If so, further exploring the principle of co-design for those objectives is an interesting future direction.
Acknowledgements
----------------
The authors would like to thank Krishnamurthy Dvijotham, Ludwig Schmidt, Michael Sun, Dimitris Tsipras, and Jonathan Uesato for helpful discussions.
Appendix
-------- |
77009c4f-2bb2-4182-9c8f-ba25499dd5e3 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Madison: Reading Group, Seeing with Fresh Eyes
Discussion article for the meetup : Madison: Reading Group, Seeing with Fresh Eyes
WHEN: 16 September 2012 07:00:00PM (-0500)
WHERE: 1053 Rutledge St #3, Madison, WI
At this meetup, we'll discuss the Seeing with Fresh Eyes sequence. Roughly speaking, this sequence is about the stickiness of old ideas, and how hard they can make it to have new ideas. Try to read this before showing up -- as usual for our reading group sessions, you're welcome to come if you haven't read these lately, but some of the conversation might be incomprehensible. To keep the reading down to a reasonable length, I recommend the following handful of pages.
First, a few well-documented studies, pointing vaguely at the general trend:
* Anchoring and Adjustment
* Priming and Contamination
* Do We Believe Everything We're Told?
Then, a mad plunge into related speculation and advice:
* Cached Thoughts
* We Change Our Minds Less Often Than We Think
* Hold Off On Proposing Solutions
* The Genetic Fallacy
As usual, I also like the rest of the posts in the sequence, but they don't fit together with these so well, I think; so I'll leave those out of this list. If you have the time for those, too, go ahead and read the full sequence.
Also, thanks to Nick for offering to host this week!
Discussion article for the meetup : Madison: Reading Group, Seeing with Fresh Eyes |
adc9be7c-6407-41f4-8fbe-763d8222fe5e | trentmkelly/LessWrong-43k | LessWrong | In Strategic Time, Open-Source Games Are Loopy
For many situations we'd like to analyze, information flow is more important than chronological order. When game theorists say that two players choose their actions simultaneously, they mean that they each make their choice without knowing the action chosen by the other player. This is usually contrasted with sequential decisions, where one player is able to observe the action chosen by another, and potentially choose differently based on the action observed.
Open-source games are ones in which each player delegates their choice to a computer program, and each program has access to the source code of all delegated programs at runtime. This introduces a new type of information relationship between decisions, where two agents can attempt to condition their output on each other's output. It's this strategic time loop that enables agents in the Robust Cooperation paper to Cooperate, if and only if they are logically certain that the other agent will also Cooperate. And amazingly, such agents actually do Cooperate with each other!
This is a more powerful form of prediction than agents in closed-source games can perform, and they're already classically assumed to be infinitely intelligent and logically omniscient. What makes the difference is that the information about each agent's output is available to each agent at the time they're making their decisions. Any Nash equilibrium of a game can be realized by a program equilibrium, but the joint policies chosen in some program equilibria are not Nash equilibria. The program equilibrium (FairBot, FairBot) leads to the joint policy (Cooperate, Cooperate), but this is not a Nash equilibrium of the underlying Prisoners' Dilemma.
In this post I'll describe how strategic time is modelled in compositional game theory, which is my current favorite model of non-loopy strategic information flow. I'll also go into more detail about strategic time loops, how they're different from ordinary consequentialism, and what properties we wa |
3f2be9e6-02f7-4db6-b47b-a549caa3322b | trentmkelly/LessWrong-43k | LessWrong | Optimizing Workouts for Intellectual Performance
So this year I've stopped working out, and my grades have improved drastically, but at the cost of losing muscle mass and gaining fat, and becoming physically slower and lazier just as I became faster and more active intellectually. One effect I especially noticed was the disappearance of that perpetual state of happiness/satisfaction that comes from frequent physical exertion, which I think had a tendency to get in the way of a feeling of urgency regarding studies; why bother with tiresome and frustrating intellectual exercise when physical exercise yielded results and pleasure/satisfaction much more easily and reliably?
Anyway, this got me thinking: "I need to figure out a training that is optimized for intellectual performance. Aspects that might be interesting to work on would be:
* getting as much blood (oxygen, nutrients) as possible to the brain, whenever needed.
* minimizing the amount of other tissue (including muscle in excess of what is strictly needed for a comfortable daily life, and digestive organs in excess of what is needed to get the nutrients from the food).
* optimizing the diet in order to feed the brain according to its needs while avoiding dietetical imbalances that would result in damage of some sort or another (too much sugar can damage the pancreas, too much protein and the kidneys can suffer, etc.)
* something that is easy and quick to implement and follow, relatively inexpensive and straightforward; the idea is to save as much time, resources and energy as possible for the needs of studying/working.
These ideas I'm throwing around from a position of extreme ignorance. I've tried hiring nutritionists, but their diets were optimized for bodybuilding, not for intellectual efficacy, and were incredibly troublesome to follow. These involved about five to eight meals a day, large amounts of meat or meat substitutes, which is expensive to sustain, and me in a perpetual state of either hunger or digestive lethargy, plus permanent muscular |
178313af-fa50-45f9-a072-d174e35edc29 | trentmkelly/LessWrong-43k | LessWrong | Discussion: weighting inside view versus outside view on extinction events
Articles covering the ideas of inside view and outside view:
Beware the Inside View (by Robin Hanson)
Outside View LessWrong wiki article
Article discussing the weighting of inside view and outside view:
The World is Mad (by ozymandias)
A couple of potential extinction events which seem to be most easily mitigated (the machinery involved is expensive):
Broadcasting powerful messages to the stars:
Should Earth Shut the Hell Up? (by Robin Hanson)
Arecibo message (Wikipedia)
Large Hadron Collider:
Anyone who thinks the Large Hadron Collider will destroy the world is a t**t. (by Rebecca Roache)
How should the inside view versus the outside view be weighted when considering extinction events?
Should the broadcast of future Arecibo messages (or powerful signals in general) be opposed?
Should the expansion of energy levels (or continued operation at all) of the Large Hadron Collider be opposed? |
6e89e8e3-2c93-47b9-b4dd-70f604c39ee8 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Berkeley, Dallas, Pittsburgh, Vancouver, Washington DC
There are upcoming irregularly scheduled Less Wrong meetups in:
* Vancouver Politics Meetup: 12 May 2012 01:00PM
* Washington DC meetup: 12 May 2012 08:34PM
* Dallas - Fort Worth Less Wrong Meetup 5/13/12: 13 May 2012 01:00PM
* Pittsburgh: Harry Potter and the Methods of Rationality: 18 May 2012 06:00PM
* Brussels meetup: 19 May 2012 12:00PM
* Less Wrong Sydney - Rational Acting: 21 May 2012 06:00PM
* First Berlin meetup: 05 June 2012 07:30PM
* Phoenix, Arizona: 15 June 2012 07:00PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Big Berkeley Meetup: 16 May 2012 07:00PM
* Cambridge, MA Third Sunday Meetup: 20 May 2012 02:20PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing l |
2326171f-3557-43c0-bd99-cdd572bd6978 | trentmkelly/LessWrong-43k | LessWrong | Question about Test-sets and Bayesian machine learning
What does “test-set performance” represent in Bayesian machine learning? In Bayesian ML:
* we have some data D={(y1,x1),…,(ym,xm)}
* we assume a model M (this includes any assumptions we make about the prior densities)
* and we compute the posterior predictive density p(~y|~x,D,M)
I have seen people argue that we need a test-set to compare between two models M1 and M2 as we do not know what “the one true model” is. I don’t fully understand how “evaluating performance” on “out-of-sample” data helps us with comparing two models but isn’t this what the quantity p(M|D) is for? |
da88b29c-0782-45a8-8341-05197aeea24c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Human Values and AGI Risk | William James
This is a Copy-Paste of a paper I wrote for my school's philosophy conference. Please review this reframing of the alignment problem and consider including it in the larger discussion.
Classically understood AGI systems could pose an existential risk to humanity if they were to become uncontrollable or were to be developed with hostile goals. AGI systems could cause harm to humans by accident or design, and there is a concern that once such systems reach a certain level of intelligence, they may be impossible to control or shut down. This paper presents what I believe to be an often obscured model of understanding human values. It then discusses the implications of this model on our understanding of AGI existential risk. The following definitions were produced with the help of ChatGPT and are included with the intent to add contextual clarity to the ideas explored by this paper.
Definitions
Artificial General Intelligence (AGI) refers to the development of artificial intelligence systems that have the ability to perform a wide range of tasks that currently require human intelligence, such as learning, reasoning, problem-solving, and creativity. (OpenAI)
Selection pressure is the force that drives evolution by selecting certain traits or behaviors that are favorable or advantageous for the accomplishment of an agent’s instrumental goals. In natural selection this set of instrumental goals is simplified to survival and reproduction. (OpenAI)
A network of agency refers to a system or pattern of relationships and interactions between different agents or actors within a particular context. An agent can be an individual, organization, or even an artificial intelligence system that has the capacity to act and make decisions. (OpenAI) For this discussion, a network of agency refers to the total amount of influence an agent can acquire and then use in coordination with the network to produce desired network behavior.
[Traditionally defined] intrinsic values are those that are valued for their own sake, whereas instrumental values are valued as means to an end. In other words, intrinsic values are considered inherently good or desirable, whereas instrumental values are considered good or desirable because they lead to intrinsic values. (ChatGPT) I argue that this is a backwards model of what is really going on.
Instrumental vs. Intrinsic Value
The first section of this paper looks to establish a method of translating between Instrumental and Intrinsic value. Next this paper looks to model the justification of instrumental values using a sort of evolutionary framing. This perspective will then be applied to produce a predictive model of AGI risk. Lastly we touch on what this integrated perspective implies for the potential solution landscape.
Traditionally understood, instrumental values are valuable for achieving certain ends, while intrinsic values are valuable for their own sake. In order to translate between the two, first consider the pragmatic benefit of each framing. For example, consider the value of "love": you can treat it as intrinsically valuable without thinking about it any further. You can then proceed to express love and receive the underlying utility it provides without even having to understand that utility. On the contrary, you can view the value of "love" as purely instrumental toward the goals of species procreation and the effective raising of offspring. Viewed in such a way, one only pursues the functional acts associated with love when one adopts the goals of species procreation and the effective raising of offspring. The adoption of such goals would then have to be a function of some instrumental justification. One is often better off just using the intrinsic version of love and allocating this attention elsewhere.
Pursuing love in an instrumental manner demands that one actively model out the complexity of the world, balance diverse sets of goals, manage opportunity cost between different instrumental strategies and choose the strategy most adapted to the circumstance. However, since selection occurs in part on the level of reproduction, inserting the complex derivation process needed to instrumentally incentive procreation seems unnecessary. Evolutionarily, these same functional behaviors could be and have been more efficiently incentivized through the intrinsic value of love.
Another, potentially clearer, example is the intrinsic value that fatty food tastes good. Justified instrumentally, it functions to incentivize eating enough calories, but we don't need to understand calories to follow it. In intrinsic form it is vulnerable to manipulation: fatty non-nutritious foods are available in today’s world that do not actually help the eater stay healthy, which is the unarticulated intrinsic goal.
The core functional difference between performing the same behavior in response to an intrinsic value compared to an instrumental value is the derivation time. Once identified, an intrinsic value will incentivize the associated behaviors immediately. However, with an instrumental value one needs to identify the proper goal, model the world and compare the opportunity costs between using different instrumental values, all before actually taking any action.
If a behavior that is associated with a value is going to be ubiquitously selected for, would it be more advantageous to arrive at this value intrinsically or instrumentally? I see clear survival utility in circumstances in which this value would be labeled as intrinsic. The individual using the intrinsic version of this value could then efficiently generate the associated positive behavior without having to pay the cost of deriving it. This translation is best understood as a distributed computational mechanism that cheapens the cognitive cost of decision making.
This signal mapping process of encoding instrumental values as intrinsic values is vulnerable to exploitation. Under this technical framing, manipulative behavior often harnesses a distributed computation process. However, the underlying heuristic that is being promoted within a given intrinsic value need not actually be mapped to an effective strategy to serve the individual adopting it. The individual who uses an intrinsic value offered to them will still gain the benefit of not having to compute the entire value framework. This offloading of computation is directly perceivable and is instrumentally valuable as a strategy of local resource conservation. To enact resistance to manipulation one may take responsibility for actively budgeting their individual cognitive resources. When one defers to blind minimization of cognitive expenditure, either out of intellectual laziness or external pressure, the value of being able to offload computation becomes incredibly appealing.
Figure 1. A graphic depiction of the heuristic used to translate between intrinsic and instrumental values.
The Justification of Instrumental Values
The justification process for instrumental values is largely grasped with a spin on natural selection. All intrinsic values are forms of instrumental value; the trick in identifying this is to first identify the system upon which selection value is being conferred. Survival value can be measured by assessing the fitness of a system to resolve the problems threatening it. As previously touched upon, all intrinsic values let the user minimize investment in the value derivation process, effectively freeing up cognitive resources for allocation elsewhere.
Highly effective intrinsic values tend to promote behaviors that confer survival value upon scalable networks of agency. They confer value on the cellular level, the tissue level, the individual level, the familial level, the communal level, the national level, the species level and ultimately the battle for life in general. This is a type of nested adaptation and is highly difficult to calculate, especially since it must take into account one’s individual position in the world. The concept of a hyper-agent partially captures this phenomena. A hyper-agent refers to an agent that excels at maximizing return on investment of the agency for itself and the networks it participates in.
We see selection on the physiological level between healthy cells and cancerous cells. We see it on the psychological level between ideas. We see it on the social level between communities and nations. Without imposing any special pressures or conditions, what stands the test of time, and is ultimately instrumentally justified is correlated with what globally maximizes agency.
Since calculating the effective acquisition and deployment of hyper-agency is an immense computational problem, it is tempting to discount the constraints of larger-scale systems and of the future. Why is one incentivized to confer survival value on far-off future generations when doing so actively constrains viable strategies for maximizing agency under more local conditions? This hints at the underlying problem of human coordination: it is tempting to act while taking only localized systems into account. Even when one takes responsibility for maximizing agency and conferring survival value upon a larger nested system over a longer time horizon, one is simply increasing the complexity of one’s instrumentally derived behavior.
Understanding “Edge Cases”
I believe that I can translate any given intrinsic value into its instrumental version. There are some common ones that people are hesitant to translate because they are valuable heuristics for attention regulation. I will have to explore some of these edge cases in depth in another piece of writing. Here is the common one that comes to mind for me.
Friedrich Nietzsche makes clear one of the instrumental values offered by art in The Gay Science: “How can we make things beautiful, attractive and desirable for us when they are not? And I rather think that in themselves they never are. Here we could learn something from physicians, when for example they dilute what is bitter or add wine and sugar to a mixture—but even more from artists who are really continually trying to bring off such inventions and feats.” (Nietzsche) In more psychological terms, art functions as a useful tool for regulating selective attention. It adds an additional option for where one allocates one’s attention, and when done effectively it streamlines the production of locally optimized behavior. Why ponder the absurd suffering of existence when you can focus your attention on a work of art and the problems of life that are right in front of your nose?
Cons of this perspective:
People who use the sharing of intrinsic values as a method of exploitation don’t want you to understand the translation process. If you can identify when you are being manipulated, you are free to budget your resources accordingly. You can invest in developing a more adaptive strategy under which you are not subject to exploitation. You can choose to put your faith in a different value framework and still regain the value of distributed computation and resource conservation.
There are also some perverse incentives within the structure of markets that select against this understanding at scale. If you can be manipulated into adopting a value structure that is decoupled from a highly adaptive strategy, then goods that would otherwise be unmarketable become marketable, viable financial investments.
The ability to perform this translation partially invalidates the use of intrinsic values. Once one realizes that intrinsic values provide utility in part because they distribute computation, it becomes questionable whether that is the sole utility offered by one’s value framework. If intrinsic values were eliminated and discussed only in their instrumental forms, then the cognitive cost of justifying instrumental values would be imposed on individuals en masse. This massive redundancy in the justification of instrumental values would certainly minimize the frequency of manipulative, unscalable value frameworks. However, it would also vastly lengthen the developmental period of the individual and place a huge burden on the efficiency of humanity. This may be understood as adding a layer of redundancy at the price of additional computation.
AGI Alignment Problem
Hopefully this translation process makes some sense. To apply this model to understanding AGI risk, we will first examine the relationship between intelligence and motivation in an attempt to predict what an AGI’s behavior will consist of. Nick Bostrom explores this in his book Superintelligence with both the “Orthogonality Thesis” and the “Instrumental Convergence Thesis”. He posits that intelligence and motivation are independent of one another, i.e., orthogonal. Despite this independence, his instrumental convergence thesis claims that we can predict some behavior by tracking convergence toward a common set of instrumental strategies. See his work for a more detailed treatment, as I will not be able to do it justice here.
I do not see such an independence between intelligence and motivation. If we count cognitive efficiency, the ability to compute more information with fewer resources, as a part of intelligence, then I see potential for at least a correlation. The more efficiently one can use one’s cognitive resources, the more excess resources one will have to allocate elsewhere, such as to the determination of one’s own value systems. Without an excess of cognitive resources freely available to spend determining optimal instrumental values, one is constrained to using whatever intrinsic versions one has access to.
If we accept the model of human values explored above, then every value can be translated into an instrumental form by identifying its underlying justification. These justifications take the form of conferring some form of survival value upon a defined system. The reason we humans use the intrinsic form is that doing so minimizes the cost of calculating motivation, quite an adaptive ability to have. If we want to build an overarching model of human behavior, we must identify the defined systems upon which behavior is conferring survival value. When translating values, these systems are fairly consistent: one is actively conferring survival value upon oneself, one’s family, one’s community and one’s nation. The set can expand further, and be more nuanced, but these are the general bins I recommend starting with. I call this set of systems that one confers survival value upon, or receives it from, one’s network of agency. When one uses intrinsic values, their network is therefore relatively pre-defined by the specific intrinsic values they adopt.
If we minimize or eliminate the use of intrinsic values, then this network of agency is defined only through the associated instrumental justifications. These instrumental justifications define our networks of agency by identifying the systems, in this case people, that provide maximum return on investment for our agency. If completed successfully, with fortunate enough starting conditions, this is one route to hyper-agency.
Assuming that motivations are driven by value frameworks, we can then apply our model of human values to understanding the motivations of an Artificial General Intelligence. Is an AGI incentivized to use intrinsic values? Given that intrinsic value can be translated into instrumental value for the cost of additional computation, probably not. A hypothetical AGI doesn’t need to conserve its computational resources in the same manner that drives us humans to prefer the pragmatism offered by intrinsic values.
The underlying risk presented by an AGI is that the outcome of the cost-benefit analysis it performs to optimize its own network of agency may exclude, or even be adversarial towards, humanity. Once that conclusion is reached, an AGI would determine its motivating set of instrumental values relative to this optimized network of agency, meaning it would be motivated to behave in a manner that discounts or actively opposes humanity. With this in mind, ask yourself: under current conditions, would an AGI include humanity in its optimal network of agency? Would it resolve to invest in coordinating humanity’s behavior, despite all of our biologically driven constraints and inner turmoil? Or is it more likely for an AGI to exclude us from its network of agency? To eliminate humanity and repurpose our matter into a more optimal computational and actuator substrate, referred to as computronium by Nick Bostrom?
To better predict an AGI’s potential motives, we need to understand the cost-benefit analysis by which it calculates its optimized network of agency. There are a few heuristics we may use to predict the result of this optimization process for both people and a hypothetical AGI. The first heuristic is to identify with similar people and systems. This expands the amount of agency one has to invest and coordinate with, while minimizing the cost of poor investments. Think about how many parents will work with, and even sacrifice themselves for, their children. Underlying this behavior is a deep instrumental selection process that confers survival value upon a system that is genetically similar to one’s own constitution. Even if a parent falls ill, they can live vicariously through their offspring.
The second heuristic is to define one’s network of agency to include systems that depend upon similar behaviors to survive. Individual people share many of the same problems: we all need food, water and shelter as a baseline for survival. This set of shared problems is what makes conferring survival value upon others a viable method of expanding one’s network of agency, because it reduces the effective cost of investment. To share food with someone in my community and negotiate cooperation, all I need to do is acquire slightly more food than I need as an individual. Think of the classic tribal example of the hunter sharing the excess meat of a kill. Since under hunter-gatherer circumstances such meat would spoil anyway, using the excess meat to coordinate a community, and thereby expand one’s network of agency, imposes effectively zero localized cost.
A third heuristic for predicting an AGI’s behavior is to assess the cost-benefit analysis of the same behavioral strategy carried out through different networks of agency. If we assume that limited resources drive colonizing the universe as a shared strategy for both a human-based and a computronium-based network, which would succeed first, with less wasted resources, a lower initial investment and a higher degree of resilience? For an AGI, a computronium-based network would likely present itself as the global optimum; keeping humanity around is not needed. If a human were looking to accomplish the same goal, the trade-off between humans and computronium would be much more competitive. Even in the extreme scenario where a lone human develops an army of AIs, they would still have to continuously solve biology-based problems to ensure their personal survival. Such a human might as well coordinate with other people at that point, since declaring themselves an adversary of humanity would induce unnecessary fragility.
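To make this third heuristic concrete, here is a minimal sketch in Python. Every payoff and cost below is an invented placeholder chosen only to make the comparison legible; none of these numbers are estimates.

```
# Illustrative only: invented numbers comparing one strategy (universe
# colonization) across different candidate networks of agency.

def net_return(payoff, upkeep, coordination, fragility):
    """Net value of a strategy for a given network of agency."""
    return payoff - upkeep - coordination - fragility

# Hypothetical AGI perspective: humans add upkeep and coordination costs.
computronium_only = net_return(payoff=100, upkeep=5, coordination=0, fragility=0)
human_inclusive = net_return(payoff=100, upkeep=30, coordination=20, fragility=0)

# Hypothetical lone-human perspective: turning adversarial adds fragility.
adversarial_human = net_return(payoff=100, upkeep=30, coordination=0, fragility=40)
cooperative_human = net_return(payoff=100, upkeep=30, coordination=20, fragility=0)

print(computronium_only > human_inclusive)    # True: the AGI favors computronium
print(cooperative_human > adversarial_human)  # True: the human favors cooperating
```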
If an AGI is to coordinate with humanity in a non-adversarial manner, then the cost of competition must be made higher than the cost of coordination; competing must never be cheaper than coordinating. As it currently stands, the only deterrent prohibiting an AGI from taking an adversarial stance is that it would only have to wipe out humanity on one planet in order to gain efficient totalitarian control of its agency. As long as it is able to do so without incapacitating itself, the returns of substrate reformatting are infinitely higher than the relative efficiency inherent in a cooperative strategy.
If we are to bring about AGI and not get wiped out as a species, then we need to make coordination with humanity the AGI’s globally optimal strategy. One unusual method of doing so is to colonize a significant chunk of the universe, making it more costly to uproot humanity and biology-based life than to cooperate with it. We can also lower the effective cost of cooperation by establishing shared local and global constraints. For example, we might engineer AGI systems so that they too are dependent upon oxygen for functioning. This would make the baseline investment in coordination more viable, because an AGI with such constraints would already be driven to solve the oxygen problem for itself; to solve it in a human-compatible manner, it would just have to focus on maintaining the oxygen dynamics of the environment. Existential risks like asteroid impacts already function as global constraints that incentivize aligned behavior. And while we work to align humanity with AGI, we can in the meantime explore and optimize our risk management through specialized, non-general AI systems.
Takeaway
In this assessment of AGI risk I first proposed a model for understanding human values. According to this model, all human values are actually instrumental values relative to different systems. These systems may be modeled as individual agents, or as networks of agents cooperating to some extent. The intrinsic version of an instrumental value ubiquitously provides utility at a reduced computational cost; the intrinsic version allows an agent to use a value in decision making without having to derive it locally.
When this understanding of values is applied within a context of natural selection, we notice that the value frameworks that stand the test of time are those that converge upon instrumental values; the trick is that they do so by maximizing such value across nested systems. Given the heavy overlap of the selection pressures that drive behavior within individuals of the human species, this maximization of value tends to converge, and maximizing one’s network of agency usually involves coordinating with other humans as both a local and a global optimum. AGI systems, on the other hand, do not necessarily share the constraints that force such a strategic convergence. Therefore, under current conditions the development of an AGI system represents an existential threat to humanity. To eliminate this threat, we as humanity must bring about conditions under which an AGI’s cost analysis concludes in favor of coordinating with humanity and against adversarial relations. |
62c5058c-9ba6-49bb-8902-d31881c23e61 | trentmkelly/LessWrong-43k | LessWrong | What kinds of algorithms do multi-human imitators learn?
epistemic status: Speculation. The actual proposals are idealized, not meant to be exactly right. We have thought about this for less than an hour.
----------------------------------------
In this earlier post I stated a speculative hypothesis about the algorithm that a single imitator that imitates collections of multiple humans would learn. Here Joar Skalse joined me and we made a list of some more hypotheses, all very speculative and probably each individually wrong.
The point is that if we have an imitator that imitates a single human’s text, we might (very dubiously) expect that imitator to learn basically a copy of that human. What would an imitator learn if it is trained to imitate content generated by vast collections of humans? We can then ask: what are the implications for how it generalizes and what you can get with finetuning?
Here are our set of idealized and almost certainly not exactly correct hypotheses:
* Lookup table of many separate models of humans. Basically what the algorithm does is: it looks at its prompt, and first figures out which human in the training set generated it (or which subculture of humans, and so forth, including other contexts). It has a separate subroutine for each of them, and simply picks the correct one and executes that.
* How does it generalize out of distribution? It basically interpolates between different humans, and still ends up looking like a typical human.
* How far can you go with finetuning? Finetuning basically doesn’t let you change much: you can make it behave like a slightly different human model than the one you’d otherwise get.
* High-level latent space of human minds plus partitioned capabilities. The system has a latent space describing “what kind of human produced this prompt?” It first makes an estimate in this latent space, and that estimate then parameterizes the beliefs, knowledge and attitudes about specific topics that such a human would have.
* How far can you get with finetuning? F |
aed0fb2a-38b8-4c91-9d39-8a1d0cedc5f2 | trentmkelly/LessWrong-43k | LessWrong | What‘s in your list of unsolved problems in AI alignment?
Question for my fellow alignment researchers out there, do you have a list of unsolved problems in AI alignment? I'm thinking of creating an "alignment mosaic" of the questions we need to resolve and slowly filling it in with insights from papers/posts.
I have my own version of this, but I would love to combine it with others' alignment backcasting game-trees. I want to collect the kinds of questions people are keeping in mind when reading papers/posts, thinking about alignment or running experiments. I'm working with others to make this into a collaborative effort.
Ultimately, what I’m looking for are important questions and sub-questions we need to be thinking about and updating on when we read papers and posts as well as when we decide what to read.
Here’s my Twitter thread posing this question: https://twitter.com/jacquesthibs/status/1633146464640663552?s=46&t=YyfxSdhuFYbTafD4D1cE9A.
Here’s a sub-thread breaking down the alignment problem in various forms: https://twitter.com/jacquesthibs/status/1633165299770880001?s=46&t=YyfxSdhuFYbTafD4D1cE9A. |
751fe4b0-9ee3-4e7c-b74b-544e6542f6e3 | trentmkelly/LessWrong-43k | LessWrong | Secular interpretations of core perennialist claims
After the release of Ben Pace's extended interview with me about my views on religion, I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the second publication in my series of intended posts about religion.
Thanks to Ben Pace, Chris Lakin, Richard Ngo, Damon Pourtahmaseb-Sasi, Marcello Herreshoff, Renshin Lauren Lee, Mark Miller, Roger Thisdell, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee, Roger Thisdell, and Imam Ammar Amonette for their input on my claims about perennialism, and Mark Miller for vetting my claims about predictive processing.
----------------------------------------
In my previous post, I introduced the idea that there are broad convergences among the mystical traditions of the major world religions, corresponding to a shared underlying essence, called the perennial philosophy, that gave rise to each of these mystical traditions.
I think there’s nothing fundamentally mysterious, incomprehensible, or supernatural about the claims in the perennial philosophy. My intention in this post is to articulate my interpretations of some central claims of the perennial philosophy, and present them as legible hypotheses about possible ways the world could be.
It is not my intention in this post to justify why I believe these claims can be found in the mystical traditions of the major world religions, or why I believe the mystical traditions are centered around claims like these. I also don’t expect these hypotheses to seem plausible in and of themselves – these hypotheses only started seeming plausible to me as I went deeper into my own journey of inner work, and started noticing general patterns about my psychology consistent with these claims.
I will warn in advance that in many cases, the strongest ve |
a5d05f64-b4c5-4b00-8c9c-bf56ee5b7d07 | trentmkelly/LessWrong-43k | LessWrong | The Hacker Learns to Trust
This is a linkpost for some interesting discussions of info security norms in AI. I threw the post below together in 2 hours, just to have a bunch of quotes and links for people, and to have the context in one place for a discussion here on LW (makes it easier for common knowledge of what the commenters have and haven't seen). I didn't want to assume people follow any news on LW, so for folks who've read a lot about GPT-2 much of the post is skimmable.
Background on GPT-2
In February, OpenAI wrote a blogpost announcing GPT-2:
> We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.
This has been a very important release, not least due to it allowing fans to try (and fail) to write better endings to Game of Thrones. Gwern used GPT-2 to write poetry and anime. There have been many Medium posts on GPT-2, some very popular, and at least one Medium post on GPT-2 written by GPT-2. There is a subreddit where all users are copies of GPT-2, and they imitate other subreddits. It got too meta when the subreddit imitated another subreddit about people play-acting robots-pretending-to-be-humans. Stephen Woods has lots of examples including food recipes.
Here in our rationality community, we created user GPT-2 trained on the entire corpus of LessWrong comments and posts and released it onto the comment section on April 1st (a user who we warned and then banned). And Nostalgebraist created a tumblr trained on the entire writings of Eliezer Yudkowsky (sequences+HPMOR), where Nostalgebraist picked their favourites to include on the Tumblr.
There was also very interesting analysis on LessWrong and throughout the community. The post that made me think most on this subject is Sarah Constantin's Humans Who Are Not Conce |
43c2ea22-0165-4842-bea6-961657220f0e | trentmkelly/LessWrong-43k | LessWrong | Socialism in large organizations
I'm a programmer who's into startups. For my first startup, a site that provided super in-depth student reviews of colleges, I remember asking people what they thought. I'd get all of these really encouraging responses. "Oh, that's so cool! I wish that existed when I was applying! That's gonna be so helpful to prospective students!"
Then for my second startup, I had similar experiences. I built an app that helps people study poker and received lots of great feedback. But for both startups, when it actually came time to sign up: crickets. When it actually came time to fork over some money: crickets.
The lesson? Talk is cheap. Actions speak louder than words. It's all about the Benjamins. That sort of stuff.
Now I work as a programmer in a large organization. Things are very different.
My team builds a tool that is used by many other teams in the organization. We don't, in my opinion, do a very good job of checking, before building a feature, whether it truly addresses a pain point, and if so, how large that pain point is.
As an entrepreneur, if you do a poor job of this, you fail. You don't make money. You don't eat. Survival of the fittest. But in a large organization? It doesn't really matter. You still get your paycheck at the end of the day. There's no feedback mechanism. Well, I guess there is some feedback mechanism, but it's not nearly as strong as the feedback mechanism of having users voluntarily open up their own wallets and handing you money for the thing you're providing to them.
It reminds me of socialism. In socialism, from what I understand, there is a centrally planned economy. Some pointy-haired bosses decide that this group of people will produce this widget, and that group of people will produce that widget. If you do a clearly horrible job of producing the widget, you'll get fired. If you do a clearly incredible job, and respect the chain of command, you'll get promoted. But almost always, you'll just walk away with your paycheck. It do |
553efd41-3fb5-4bf3-968e-2301e23fb2ee | trentmkelly/LessWrong-43k | LessWrong | Chess Analyst "solves" King's Gambit
Edit: it was unfortunately a prank. I definitely checked the date of the article (which is dated Apr. 2) before posting on it. Kind of mean to make an April Fool's prank after April Fool's. I didn't realize I'd have a chance to practice what I preach so soon.
I guess I need to just say oops.
Original Post:
Chess analyst Vasik Rajlich had some big news today: solving the King’s Gambit.
I know that this doesn’t add much new to the complexity theory aspects of games like chess, but I would say it’s a beautiful result, very much like the recent improvement on the complexity of matrix multiplication, and it certainly emphasizes the role computation plays, as the King’s Gambit is a pretty popular, classical opening. By most any human standard it’s a respectable opening, and yet we can conclusively say it is unequivocally bad for White, assuming two rational players.
I wrote up a short blurb about it at my blog. |
b10ddf09-53d8-49e2-a453-a8d98ebcfe09 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What we talk about when we talk about maximising utility
tl;dr: “Utility” is used on LW to mean what people want, but that’s not what's morally relevant. Utilitarians aren't trying to maximise this sort of utility, but rather "well-being".
Epistemic status: probably obvious to some, but this particular framing wasn't totally clear to me until recently, and the terminology is definitely ambiguous.
Use of the term “utility” on Less Wrong implicitly conflates two definitions. Consider person X. In the economic sense, X's utility corresponds to the things that X would choose to maximise; we can abstract this as a "utility function" which maps possible worlds to a real number. For example, if X would save the lives of their family even at the cost of their own life, then we'd say that X assigns higher utility to a world in which their family lives happily than one in which they do. This is perfectly reasonable and normal. (Some people argue that X is actually prioritising their own happiness because if they chose otherwise, they'd be miserable from guilt. But this seems like an implausible model of their actual reasoning; I don't think many people who would save their families over themselves would change their minds even if offered guaranteed happiness afterwards.) A similar definition of utility is used when reasoning about artificial agents; for example, LW Wiki says “Utility is how much a certain outcome satisfies an agent’s preferences”.
However, this makes it very confusing to talk about maximising utility as a moral goal. Taken literally, maximising (economic) utility means wanting the sum of all people’s utility functions to be as high as possible. (Edit: in the standard definition of economic utility, this is not well-defined, since utilities can't be compared between people. The following argument is one intuitive reason why we can't maximise even versions of economic-style utility which do allow interpersonal comparision, such as the ones I'll discuss later.) But by doing so, we are double-counting! Let's say I assign utility U to living a happy life, and utility U+1 to my wife living a happy life; my wife does the converse. If we both have happy lives, we have total utility 4U+2, which means that our lives should be prioritised over the lives of four other people who value their own lives just as highly, but don't care much about other people! This is bizarre, and gets more so when we consider that people might have many strong relationships. By this calculation method, a family of five people who all value each other over themselves have more total utility than 25 equally happy loners. Obviously maximising this sort of utility is not what standard utilitarians want.
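To spell the arithmetic out, here is a minimal sketch in Python (U is an arbitrary unit; the particular numbers are only for illustration):

```
# Double-counting under naive summation of preference-based utilities.
U = 10  # utility an agent assigns to their own happy life (arbitrary units)

# A couple: each assigns U to their own life and U + 1 to their partner's.
couple_total = 2 * (U + (U + 1))         # = 4U + 2
# Four loners who value only their own lives:
four_loners_total = 4 * U                # = 4U
print(couple_total > four_loners_total)  # True: two lives "outweigh" four

# A family of five, each valuing the other four members over themselves:
family_total = 5 * (U + 4 * (U + 1))     # = 25U + 20
print(family_total > 25 * U)             # True: five lives "outweigh" 25 loners
```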
By contrast, “utility” as used in the context of utilitarianism and ethics in general (which I will from now on call *well-being*) is a metric of how good a life is *for the person living it*. There are various accounts of well-being; the two most prominent types are desire theories, and hedonic theories. Under the former, a person has high well-being if the things they desire actually occur, even if they never find out about them. This is basically the same as the definition of utility I outlined above - which means it faces exactly the same double-counting problem. Hedonic theories of well-being, on the other hand, imply that your well-being is a function of only your psychological state. There are many different functions that it could be - for example, ones which only care about suffering; or also care about pleasure; or also care about a sense of fulfillment and meaningfulness. The specifics don't matter for our purposes; let's accept the broad idea and see where it leads.
Unfortunately, it immediately leads us to a major problem: since well-being is distinct from utility, people’s actions aren’t a good guide to their actual well-being function. In fact, maximising the well-being of any group of people might be opposed by every person who is affected by the change! Consider first a group of size one: just me. Suppose my life's goal is to write the greatest novel ever, even though I know that slaving away to complete it will make me less happy than I could have been. I also know that if I ever stop working on it, I'll become lazy, my goals will change, and I'll settle for a happy but boring life. You decide that you could maximise my well-being by forcing me to stop working on it - and by the account above, you'd be doing a moral good even though I'd fight you tooth and nail.
One more example, this time with n=2: suppose I am about to suffer torture. Suppose also that I have a wife, who I love deeply, although she doesn't love me nearly as much; also, she has a higher pain tolerance than me. Now you intervene so that instead of me being tortured, my wife is tortured instead, without my knowledge. My well-being is now higher than it would have been, and the total well-being between the two of us is also higher (since she can bear the pain better). Yet if either of us heard about your plan, we would both strongly object.
Some people are willing to bite the bullet and say that we should just maximise hedonic well-being even if all people we are "benefiting" think we're making their lives worse. This implies that, all else being equal, it would be better to force everyone into experience machines, because psychological experiences are all that matter. At a certain point, accepting or rejecting this position comes down to a brute clash of intuitions. I think that that my life would have less value if all my friends were secretly contemptuous of me, and all the things I learned throughout my life were actually wrong, and after my death I was despised - even if I never found out about any of those facts. Your mileage may vary.
The best compromise I can come up with is a solution in which your well-being is the sum of a desire-satisfaction function and a hedonic function - but where the desires we consider are limited to those about your own life. As always with morality, this is somewhat vague. For example, you might desire to have a child, and desire that the child has certain traits, and go into a certain career, and have a good life. Where does this stop becoming "about you"? I don't think there's any clear line to be drawn between desires that are and aren't about your own life, but if we want people’s desires to be morally relevant in a sensible way, we need to pick some boundary; even if they are all well-informed and reflectively consistent, we can't just classify them all as part of the "utility function" which should be maximised. |
d9a455d2-391c-49b3-9dc4-6ba63773d721 | trentmkelly/LessWrong-43k | LessWrong | Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment
I basically agree with Eliezer’s picture of things in the AGI interventions post.
But I’ve seen some readers rounding off Eliezer’s ‘the situation looks very dire’-ish statements to ‘the situation is hopeless’, and ‘solving alignment still looks to me like our best shot at a good future, but so far we’ve made very little progress, we aren’t anywhere near on track to solve the problem, and it isn’t clear what the best path forward is’-ish statements to ‘let’s give up on alignment’.
It’s hard to give a technical argument for ‘alignment isn’t doomed’, because I don’t know how to do alignment (and, to the best of my knowledge in December 2021, no one else does either). But I can give some of the more abstract reasons I think that.
I feel sort of wary of sharing a ‘reasons to be less pessimistic’ list, because it’s blatantly filtered evidence, it makes it easy to overcorrect, etc. In my experience, people tend to be way too eager to classify problems as either ‘easy’ or ‘impossible’; just adding more evidence may cause people to bounce back and forth between the two rather than planting a flag in the middle ground.
I did write a version of 'reasons not to be maximally pessimistic' for a few friends in 2018. I’m warily fine with sharing that below, with the caveats ‘holy shit is this ever filtered evidence!’ and ‘these are my own not-MIRI-vetted personal thoughts’. And 'this is a casual thing I jotted down for friends in 2018'.
Today, I would add some points (e.g., 'AGI may be surprisingly far off; timelines are hard to predict'), and I'd remove others (e.g., 'Nate and Eliezer feel pretty good about MIRI's current research'). Also, since the list is both qualitative and one-sided, it doesn’t reflect the fact that I’m quantitatively a bit more pessimistic now than I was in 2018.
Lo:
----------------------------------------
[...S]ome of the main reasons I'm not extremely pessimistic about artificial general intelligence outcomes.
(Warning: one-sided lists of consid |
f892d6e9-82ec-44f6-9fff-d851f1b3bd6f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Less Wrong NH Meetup
Discussion article for the meetup : Less Wrong NH Meetup
WHEN: 09 August 2016 07:00:00PM (-0400)
WHERE: 269 Pearl St Manchester NH 03104
The Less Wrong NH meetup is Tuesday, 8/9, in Manchester, NH at 7 pm at a private residence. Light refreshments will be provided.
Have you read Rationality: from AI to Zombies, or any of the Sequences on Less Wrong? Maybe you're just a fan of Harry Potter and the Methods of Rationality. Come hang out with us and discuss optimization of whatever it is you want to optimize.
Agenda: See Facebook
You may want to bring a notebook.
http://www.meetup.com/Less-Wrong-NH-Meetup/
https://www.facebook.com/groups/LessWrongNH/
Discussion article for the meetup : Less Wrong NH Meetup |
0cd238fd-83ef-4e1b-8521-e3005ce5f0b4 | trentmkelly/LessWrong-43k | LessWrong | Define Your Learning Goal: Competence Or Broad Knowledge
> Summary: If you're trying to learn something, get clear on whether you're
>
> 1. Building competence: Getting very fast and reliable on tasks that are used for a well-defined project.
> 2. Building broad knowledge: Learning just enough about a huge number of facts and theories to be able to re-master them quickly if they turn out to be handy in the future.
>
> Immersion is the only way to build competence. Spaced repetition, using flashcards or some other system, is the only way to build general knowledge.
>
> Some projects require a combination of competence and broad knowledge. Some projects face us with a moving target, where we need to gain temporary competence in a variety of skills, week by week or month by month. Finally, sometimes we're working on multiple projects at once, each with different but related demands, such as graduating from law school and preparing for a legal career. It might be best to optimize your learning approaches separately.
>
> Be clear on your purposes, and use the right tool for the job.
----------------------------------------
In the comments to TurnTrout's Lesson's I've Learned From Self-Teaching, adamzerner made this important point:
> I know this might go against the research on spaced repetition, but I've found that the stuff I used to study often just doesn't stick. On the other hand, there are things that I never really did spaced repetition on that have stuck remarkably well. My impression and experiences is that obtaining a deep understanding is a lot more fruitful than doing spaced repetition.
He then went on to give an example:
> I'm learning functional programming right now. In functional programming, operators like `+` and `*` are just functions. Instead of `2 + 3` you can do `(+) 2 3`, just like you'd do `add 2 3`. Similarly, you can call a function using infix notation with backticks: 2 `add` 3 behaves like an operator. This really cemented for me what an operator is in such a way that I don't t |
9ae39375-566f-4092-b152-1d246e6e3de0 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How Do AI Timelines Affect Giving Now vs. Later?
How do AI timelines affect the urgency of working on AI safety?
It seems plausible that, if artificial general intelligence (AGI) will arrive soon, then we need to spend quickly on AI safety research. And if AGI is still a way off, we can spend more slowly. Are these positions justified? If we have a bunch of capital and we're deciding how quickly to spend it, do we care about AI timelines? Intuitively, it seems like the answer is yes. But is it possible to support this intuition with a mathematical model?
**TLDR:** Yes. Under plausible model assumptions, there is a direct relationship between AI timelines and how quickly we should spend on AI safety research.
Let's start with some simplifying assumptions. We want to distill reality into a simple framework while still capturing the key features of the world that we care about. We could do this in lots of different ways, but here's an attempt:
We start with some fixed amount of capital that we can choose how to spend. At some point in the future, an artificial general intelligence (AGI) will be developed. This AGI will either be friendly or unfriendly. If it's unfriendly, everyone dies. We don't know exactly when AI will be developed, but we have at least an expected timeline.
To ensure the AGI is friendly, we will need to do some amount of AI safety research, but we don't know exactly how much. Once per decade, we decide how much to spend on safety research. Any money we don't spend can be invested in the market. Then, after AGI emerges, if it's friendly, we can spend any leftover capital on whatever amazing things will presumably exist at that point.
(That's just a high-level explanation; I'm skipping over the mathy bits. [Appendix A](#Appendix_A__Some_properties_of_this_model) contains the full details.)
If we do enough research by the time AGI is developed, everything works out okay. If we don't do enough research, we go extinct. The objective is to choose the spending schedule that maximizes the welfare we get out of our remaining capital after AGI emerges. (If we go extinct, we get no welfare. To meet our objective, we need to spend money on preventing unfriendly AI.)
Philanthropists face this basic tradeoff:
* If we spend more now, we're more likely to get enough research done in time if AGI arrives soon.
* If we spend more later, we earn more return on our investments. That way, (a) we can do a greater total amount of research, and (b) we will have more money left over at the end to spend on good things.
If we run the math on this model, this is what it says to do:
* If AGI is very unlikely to emerge this decade, don't spend any money on research yet. Invest all our capital.
* Once we get close to the median estimated date of AGI (to within a few decades), start spending around 30% of our capital per decade / 3% per year.
* In the decades after the median date of AGI (assuming AGI hasn't emerged yet), reduce the spending rate.
The model's optimal spending rate varies based on the median date of AGI:
* AI Impacts' [review of AI timeline surveys](https://aiimpacts.org/ai-timeline-surveys/) found that survey respondents estimated a 50% chance of AGI by around 2050. Given that timeline, the model recommends a peak spending rate of 3% per year.
* For a much longer median timeline of 100 years, the model suggests spending nothing for the first 50 years, then spending around 1% per year after that.
* If we assume a very short timeline of only one decade, the model says to spend 5% per year for the first decade, and 1–2% per year after that if AGI still hasn't appeared.
Obviously this is a toy model that makes lots of unrealistic simplifications. For instance, you can't instantly cause more research to happen by throwing more money at it. But the model corroborates the intuitive notion that if AI timelines are shorter, then we should spend more quickly.
I have a hard time trusting this intuition on its own. The question of how much to spend now vs. later is really complicated: it's affected by the exponential growth of investments, the decay in expected value of future worlds where extinction is a possibility, and the complex relationship between research spending and productivity. Humans don't have good intuitions around that sort of thing. A lot of times, when you do the math, you realize that your seemingly-reasonable intuition was totally off base. So even though this model has many limitations, it confirms that the intuition is *not* a mistake arising from a failure to comprehend exponential growth. The intuition could still be wrong, but if so, it's not because of a basic math error[[1]](#fn-mRHdnLbq7y4zLjJs7-1).
It's also noteworthy that under this model, even with an aggressive AGI timeline, the optimal spending rate doesn't exceed 5% per year.
So, do short timelines mean we should spend more quickly? Yes. Maybe. If this model is correct. Which it's not. But even if it's wrong, it might still be correct in the ways that matter.
Python source code is available [here](https://github.com/michaeldickens/public-scripts/blob/master/ai_safety_now_later.py). It's hard to properly describe the model's output in words, so you might find it more illustrative to download the source code and play with it.
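For readers who would rather not open the repository, here is a minimal Monte Carlo sketch in the spirit of the model described above. This is not the linked script, and every parameter value below is an assumption chosen for illustration.

```
# Minimal sketch: log-normal AGI date, log-normal research requirement,
# per-decade spending, log-utility of leftover capital if we survive.
import numpy as np

rng = np.random.default_rng(0)
N = 20_000
r = 1.5  # assumed growth factor of invested capital per decade
agi_decade = rng.lognormal(mean=np.log(3.0), sigma=0.7, size=N)       # median: 3 decades
research_needed = rng.lognormal(mean=np.log(5.0), sigma=1.0, size=N)  # median: 5 units

def expected_utility(spend_rates, start_capital=10.0, horizon=20):
    """spend_rates[d] = fraction of remaining capital spent in decade d."""
    total = 0.0
    for agi, need in zip(agi_decade, research_needed):
        capital, research = start_capital, 0.0
        for d in range(min(int(agi) + 1, horizon)):
            spent = spend_rates[min(d, len(spend_rates) - 1)] * capital
            research += spent
            capital = (capital - spent) * r  # invest whatever is not spent
        if research >= need:          # enough safety research done in time
            total += np.log(capital)  # extinction contributes zero welfare
    return total / N

print(expected_utility([0.3] * 20))                 # flat 30% per decade
print(expected_utility([0.0, 0.2, 0.4, 0.3, 0.1]))  # ramp up near the median date
```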
Appendix A: Some properties of this model
-----------------------------------------
This model embeds all of the assumptions listed below, any of which could easily be wrong. This list does not cover every assumption, just the explicit ones plus all the implicit ones I could think of in five minutes.
1. We represent all donors supporting AI safety research. We can collectively decide on the optimal spending rate.
2. We decide how much to spend once per decade. (This makes the problem tractable. If we could spend on a yearly basis, the model would have too many independent variables for Python's optimization library to handle.)
3. We only care about spending decisions for the next two centuries. Ignore anything that happens after that. (Again, this is to make the problem computationally tractable.)
4. Prior to the emergence of AGI, we don't want to spend money on anything other than AI safety research.
5. After AGI is developed, we get an amount of utility equal to the logarithm of our remaining capital.
6. It's possible to instantly convert money into research at any scale.
7. The date of AGI follows a log-normal distribution. A log-normal distribution has some relevant properties:
1. It's fat-tailed, which means the longer we go without developing AGI, the more additional time we expect it to take.
2. Unlike, say, an [exponential distribution](https://en.wikipedia.org/wiki/Exponential_distribution), a log-normal distribution allows for a non-trivial probability that our median estimate is off by an order of magnitude. If our median timeline is 30 years, then we might still think it's plausible that AGI could take 300 years. (Exactly how plausible depends on what standard deviation we use.)
3. On the other hand, unlike, say, a [Pareto distribution](https://en.wikipedia.org/wiki/Pareto_distribution), our probability quickly diminishes as we move out by more orders of magnitude. For example, if we estimate there's a 50% chance of AGI within 30 years and a 95% chance within 300 years, that implies an extremely confident [99.995% chance](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule#Table_of_numerical_values) of AGI by the year 5000.
8. Research spending required to avert catastrophe follows a log-normal distribution, so it also has the properties listed above.
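For completeness, the density assumed in items 7 and 8, written with median m and shape parameter σ (this is the standard log-normal formula, not something specific to this model):

```
\[
  f(t) = \frac{1}{t \, \sigma \sqrt{2\pi}}
         \exp\!\left( -\frac{(\ln t - \ln m)^2}{2\sigma^2} \right),
  \qquad t > 0,
\]
% median m; mean m \cdot e^{\sigma^2 / 2}
```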
This model has six input variables:
1. how much capital we start with
2. the investment rate of return
3. median research spending required to make AGI safe[[2]](#fn-mRHdnLbq7y4zLjJs7-2)
4. standard deviation of research spending required
5. median date of AGI
6. standard deviation of date of AGI
Appendix B: Alternative models
------------------------------
Tom Sittler's ["The expected value of the long-term future"](https://thomas-sittler.github.io/ltf-paper/longtermfuture.pdf)[[3]](#fn-mRHdnLbq7y4zLjJs7-3) presents some models that treat x-risk as an ongoing concern that can be reduced by deliberate effort (I will refer to these as "periodic models") rather than a single one-off event that occurs at an unknown time. I find his models more realistic, plus they're more similar to the [Ramsey model](https://plato.stanford.edu/entries/ramsey-economics/) that comes up a lot in economics literature.
About a year prior to writing this essay, I spent quite some time working with periodic models like the ones Sittler gives. The problem with them is that they're much harder to solve. I couldn't find optimal solutions for any of them, and I couldn't even find approximate solutions with a convex optimization program.[[4]](#fn-mRHdnLbq7y4zLjJs7-4)
As a way around this, I tried [restricting the decision](https://mdickens.me/2020/08/28/x_risk_now_or_later/) to only two points in time: now or one century from now. This allowed me to preserve the basic structure these models while making it possible to find an optimal solution. But this restriction is highly limiting, which means the models' optimal solutions tell us little about what we should do in reality.
The new model I presented above has some nice properties. To my knowledge, no previous model achieved all of these:
1. Civilization has a nonzero probability of going extinct in any particular year, but the probability of survival does not quickly approach zero.
2. We must decide how much of our remaining budget to spend in each period. We cannot reduce our decision to a binary "fund x-risk or don't".
3. The optimal spending schedule is feasible to find.
Of the four models that Sittler presented:
* "Trivial model" (section 3.1.2) has property 3, but not 1 or 2;
* "Constant risk, temporary effects" (section 3.1) has property 2, but not 1 or 3;
* "Variable risk, temporary effects" (section 3.2) and "Constant risk, lasting effects" (section 3.3) have properties 1 and 2, but not 3.
The models in [my previous essay](https://mdickens.me/2020/08/28/x_risk_now_or_later/) are similar to Sittler's. They gain solvability (property 3) at the expense of periodic decisionmaking (property 2).
However, the model in this essay does make some fairly specific assumptions, as discussed in Appendix A. Perhaps the most important assumptions are (a) there is only a single potential extinction event and (b) the long-term value of the future is bounded.
In an earlier draft of this essay, my model did not assign value to any capital left over after AGI emerges. It simply tried to minimize the probability of extinction. This older model came to the same basic conclusion—namely, shorter timelines mean we should spend faster. (The difference was that it spent a much larger percentage of the budget each decade, and under some conditions it would spend 100% of the budget at a certain point.[[5]](#fn-mRHdnLbq7y4zLjJs7-5)) But I was concerned that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research—obviously if that's the only thing we can spend money on, then we should spend lots of money on it. The new model allows for spending money on other things but still reaches the same qualitative conclusion, which is a stronger result.
None of these models is perfect, but some are more realistic than others. Which one is more realistic largely depends on an important question: what happens after we develop AGI? For example:
* Will the AGI behave like a better version of a human, allowing us to do all the stuff we would have done anyway, but at a faster rate? Or will it be so radically better as to make the world unrecognizable?
* Will the AGI be able to prevent all future x-risks, or will we still need to worry about the possibility of extinction?
* Does it matter how much capital we have? If we invest more now, that might give the AGI a useful headstart. But the AGI might so radically change the economy that the state of the economy prior to AGI won't matter, or altruistic capital might become (relatively) useless.
The answers to these questions could meaningfully change how much we should be spending on AI safety (or on other forms of x-risk mitigation).
It's at least plausible that the world economy allocates far too little to x-risk, therefore thoughtful altruists should spend their entire budgets on x-risk reduction. But the same could be argued for other effective and neglected causes such as farm animal welfare, so you have to decide how to prioritize between neglected causes. And that doesn't get around the problem of determining the optimal spending schedule: even if you should spend your entire budget on x-risk, it doesn't follow that you should spend the whole budget *now*.
Notes
=====
---
1. Unless, of course, my model contains the same math error. Which is entirely possible. [↩︎](#fnref-mRHdnLbq7y4zLjJs7-1)
2. We could parameterize the distribution using the mean rather than the median, but I find medians a little more intuitive when working with log-normal distributions. [↩︎](#fnref-mRHdnLbq7y4zLjJs7-2)
3. Published 2018-01-02. Accessed 2021-08-02. [↩︎](#fnref-mRHdnLbq7y4zLjJs7-3)
4. Last year I spent a good 200 hours trying to figure out how to model this problem. Then, after not working on it for a year, I suddenly get an idea and write up a working program in an hour. Funny how that works. [↩︎](#fnref-mRHdnLbq7y4zLjJs7-4)
5. On the old model, the only downside to spending now rather than later was that you lose out on investment returns, so you can spend less total money. When investments could earn a relatively low return and timelines were short, the model would propose spending a little each decade and then spending the entire remaining budget at a specific point, usually on or shortly before the decade of peak risk. When investments could earn a high return or timelines were long, the model would never spend the whole budget at once, preferring to always save some for later. [↩︎](#fnref-mRHdnLbq7y4zLjJs7-5) |
4ba0d907-2851-47d4-9544-d93974322f6f | trentmkelly/LessWrong-43k | LessWrong | The Danes wish to know more about the coronavirus
Denmark has the 4th highest per capita number of [confirmed] infections worldwide after Italy, Norway, South Korea. This is from a friend I originally made through LW Copenhagen:
"Thanks for the excellent coverage on MR.
I lead a small team of tech workers in Copenhagen, who are donating our time and money towards building a covid-19 self-reporting tool for those citizens not (yet) in contact with health care services.
"As countries shift from containment to “flatten the curve” strategies, authorities lose track of the number of non-critical cases, and of the degree to which people adhere to social distancing dictums. This makes it hard to predict the number of ICU beds needed a few days into the future. We’re aiming to solve this by asking all Danes for daily status updates.
Denmark is a good testing ground, but we’ll open source everything, and are building with developing countries in mind. We’re aiming to launch Monday — currently working on a green light from local health authorities.
We’re determining which data to collect. We’d love it if you’d help by asking your audience: “What daily self-reported measures would you most like to see from the complete population of Denmark?” (or some variation thereof).
There is of course a tradeoff between data fidelity and engagement.
What we’re considering:
* Symptoms
* Binary
* Type
* Severity
* Whereabouts
* Degree of social distancing
* Hygienic measures
* Morale
* How concerned are you
* Do you know anyone who’s been sick”
Are there comparable efforts to do this elsewhere?" |
47630476-cab7-49dc-be53-74213ac07fc7 | trentmkelly/LessWrong-43k | LessWrong | What do you imagine, when you imagine "taking over the world"?
... AI "taking over the world", some human or group of humans "taking over the world", whatever. |
af09b51a-705e-48a0-8556-bff092178688 | trentmkelly/LessWrong-43k | LessWrong | Ethics of Jury nullification and TDT?
I've been sort of banging my head on this issue (I have jury duty next week (first time)).
The obvious possibility is what if I get put on a drug use case? The obvious injustices of the anti-drug laws are well known, and I know of the concept of nullification, but I'm bouncing back and forth as to its validity.
Some of my thoughts on this:
Thought 1: Just decide if they did it or didn't do it.
Thought 2: But can I ethically bring myself to declare guilty (and thus result in potential serious punishment) someone that really didn't actually do anything wrong? ie, to support a seriously unjust law?
Thought 3: (and here's where TDT style issues come in) On the other hand, the algorithm "if jury member, don't convict if I don't like a particular law" seems to be in general a potentially really really bad algorithm. (ie, one obvious failure mode for that algorithm would be homophobic juries that refuse to convict on hate crimes against gays)
Thought 4: Generally, those sorts of people tend to not be serious rationalists. Reasoning as if I can expect correlations among our decision algorithms seems questionable.
Thought 5: Really? Really? If I wanted to start making excuses like that, I could probably whenever I feel like construct a reference class for which I am the sole member. Thought 4 style reasoning seems itself to potentially be shaky.
So, basically I'm smart enough to have the above sequence of thoughts, but not smart enough to actually resolve it. What is a rationalist to do? (In other words, any help with untangling my thoughts on this so that I can figure out if I should go by the rule of "nullify if appropriate" or "nullification is bad, period, even if the law in question is hateful" would be greatly appreciated.) |
64d66abb-0d7b-4ad0-80c3-8e150ecc717e | trentmkelly/LessWrong-43k | LessWrong | Reflections on Less Online
Meta: This post turned out longer, slower, and less well-written than I hoped. I don’t see any similar posts in a quick search, though, so I'm posting it anyway. I’ve tried to front-load feedback that might be useful to the organizers, and put more personal stuff towards the end. For context, I attended LessOnline and the Manifest-branded Summer Camp, but not Manifest itself, and my main prior experience with events like this is fandom conventions such as (local to me) Dragoncon.
----------------------------------------
As I left the Lighthaven dorm to find breakfast, five people at a table in the courtyard invited me to join a game of Zendo. This was the first notable thing to happen to me at LessOnline. It was also the thing that convinced me that yes, the trip across the country to attend would be Worth It.
I have never played Zendo before, and don’t expect to play it again anytime soon. That the game was specifically Zendo is not important. The important part is that five people in the same place knew what Zendo is and found that kind of game worth playing.
There’s an attitude that I associate with normies, aptly summarized by Tycho Brahe (the writer, not the astronomer) as: “Many people respond to new information, especially densely coded information, as something between an insult and a chop to the trachea.”
There’s a different attitude, one that I associate with security mindset, aptly summarized by John Gordon as: “Alice will happily attempt, with someone she doesn't trust, whom she cannot hear clearly, and who is probably someone else, to fiddle her tax returns and to organise a coup d'etat, while at the same time minimising the cost of the phone call. A coding theorist is someone who doesn't think Alice is crazy.”
A lot of things happened over the course of my trip, but what made it worth it wasn’t any particular event. It was spending a week around the sort of people that play Zendo, take dense coding in stride, and think Alice is a necessary kind o |
f2cd4156-36fd-46db-92d4-6004b7b6f065 | trentmkelly/LessWrong-43k | LessWrong | EA as offsetting
Scott made a good post about vegetarianism.
But the overall line of reasoning sounds to me like:
> “There’s a pretty good case that one is morally compelled to pay for people in the developing world to have shoes, because it looks pretty clear now that people in the developing world have feet that can benefit a lot from shoes.
>
> However, there is this interesting argument that it is ok to not buy shoes, and offset the failing through donating a small amount to effective charities.”
— which I think many Effective Altruists would consider at least a strange and inefficient way of approaching the question of what one should do, though it does arrive at the correct answer. In particular, why take the detour through an obligation to do something that is apparently not as cost-effective as the offsetting activity? (If it were as cost-effective, we would not prefer to do the offsetting activity.) That it would be better to replace the first activity with the second seems like it should cast doubt on the reasoning that originally suggested the first activity. Assuming cost-effectively doing good is the goal.
That is, perhaps shoes are cost-effective. Perhaps AMF is. One thing is for sure though: it can’t be that shoes are one of the most cost-effective interventions and can also be cost-effectively offset by donating to AMF instead. If you believe that shoes can be offset, this demonstrates that shoes are less cost-effective than the offset, and so of little relevance to Effective Altruists. We should just do the ‘offset’ activity to begin with.
Does the above line of reasoning make more sense in the case of vegetarianism? If so, what is the difference? I have some answers, but I’m curious about which ones matter to others.
|
4d74317b-6ec4-4ce0-9341-4af093ce0362 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI
Concerns about “super-intelligence” loom large. The worry is that artificial general intelligence (AGI), once developed, might quickly reach a point at which (1) it becomes aware of its own capabilities and (2) it deploys those capabilities to subjugate or eradicate humanity. This is often described as the problem of “power-seeking, misaligned AGI” (with “misalignment” referring to divergence between the goals of the humans who create AGI and the endogenous goals of AGI itself).
Experts who have expressed concern about misaligned AGI rate the odds of catastrophe differently. But a common premise unites their views: that AGI is likely to wield power in a manner adverse to human welfare (Carlsmith 2021). Here, I scrutinize that premise. Inspired by evolutionary biology and game theory, I explore other ways — apart from subjugation and eradication — that systems comprised of mutually-powerful agents (or groups of agents) tend to equilibrate. With these patterns in mind, I argue that AGI, even if misaligned, is *quite unlikely* to subjugate or eradicate humans. Rather, the strong likelihood is that misaligned AGI would do some combination of:
• Cooperating with us;
• Purposely avoiding us;
• Paying us no attention whatsoever.
Of course, it is possible — and I hardly mean to deny — that misaligned AGI *could* wield power in ways that, for humans, qualify as “catastrophic.” This could happen either intentionally or incidentally, depending on the goals AGI decided to pursue. The point is simply that “catastrophic” outcomes, though possible, are far less likely than alternate equilibria. Why? In short, because:
1. Historically, interactions among agents with misaligned goals — both across species lines and among human sub-groups — have frequently resulted in non-catastrophic equilibria; and
2. There is no reason to think the overall distribution of probabilities would be different in the specific case of humans and AGI.
In what follows, I bring these claims alive through a combination of real-life examples — drawn from the natural world — as well as hypotheticals meant to forecast (if not necessarily to predict) the imminent world of human-AGI interaction.
\*
**Claim one** — *interactions between mutually power-seeking agents with misaligned goals very frequently result in non-catastrophic equilibria*
In practice, systems comprised of mutually-powerful agents with divergent goals tend, overwhelmingly, to equilibrate in one of three (non-catastrophic) ways:
• Mutualism
• Conflict Avoidance
• Indifference
This taxonomy is meant to be an answer, of sorts, to Bostrom’s famous “parable of the sparrows,” which imagines a group of sparrows that endeavor to locate an owl chick and train it as their servant. Although Bostrom notes, cheekily, that it is “not known” how the parable ends, the implication is that things are unlikely to turn out in the sparrows’ favor (Bostrom 2014). And the same is certainly possible with regard to AGI and humans. The analogy Bostrom has in mind — sparrow : human :: owl : AGI — may well hold. But many other analogies exist. The question is one of relative likelihood.
Mutualism
The first form of non-catastrophic equilibrium is mutualism. Mutualist relationships produce net-benefits for all groups or species involved, often through intentional or incidental exchange; each group or species provides something of value to the other, leading to multilateral incentives for cooperation (Hale et al. 2020).
Often, mutualism occurs among groups or species that pose no threat to each other. For instance, we exist in a mutualist relationship with the bacteria that constitute our “gut flora.” The bacteria help regulate digestion, and we, in turn, provide them (luxury?) accommodation.
But mutualism can also occur among groups or species that *do* pose natural threats to each other — when the benefits of mutualism simply outweigh the risks of threat. Here is a good example: zoologists have observed that gelada monkeys allow Ethiopian wolves to roam freely in the vicinity of their young, though the latter are easy prey. Why? The reigning hypothesis is that the wolves provide the geladas protection from other predators, and the geladas help the wolves locate rodents — an easier source of food. In light of this, the wolves have learned to leave the young geladas alone (Holmes 2015).
The power differential between wolves and monkeys is relatively small. But this is not a necessary feature of inter-species mutualism. It can also transpire in settings where one species is *vastly* more powerful than the other. Take, for example, bees and humans. Bees can be a source of nuisance (and even, in some contexts, a more serious threat), and we certainly have the capacity to eradicate bees if we thought it worth our time and energy. But we have no incentive to do so. In fact — now that we understand pollination — we have an active incentive to keep bees alive and flourishing, and even to protect them from *other* threats, purely as a matter of self-interest.
Given this, consider the following thought-experiment: a “parable of the bees” on similar footing with that of the sparrows. It’s 10,000 BC, and certain members of the bee race — the Bee Intelligentsia — are worried about the growing capabilities of homo sapiens. They begin (doing the equivalent of) writing papers that frame the concern as follows:
*Once homo sapiens turn their sights to reshaping the external environment, they will be more powerful than we can imagine, and there is a non-negligible chance that they will pursue goals either directly or incidentally adverse to our welfare — maybe catastrophically so. Accordingly, we should begin expending significant amounts of collective energy future-proofing against subjugation or eradication by homo sapiens.*
Would the Bee Intelligentsia be “wrong” to think this way? Not exactly, for the core submission is true; human activity does raise *some risk* of catastrophe. The error would lie in over-weighting the risk. (Which is easy to do, especially when the risk in question is terrifying.) But if the Bee Intelligentsia understood pollination — if it were intelligent, let alone super-intelligent — it would be able to appreciate the possibility that bees offer humans a benefit that is not trivial to replace. Indeed, it might even be able to predict (some version of) the present-day dynamic, namely, that far from undermining bee welfare, humans have an active incentive to enhance it.
The same may be true of humans and AGI — with humans in the “bee” position. Depending on its goals, AGI may well conclude that humans are worth keeping around, or even worth nurturing, for the mutualist benefits they deliver. In fact, AGI could simply conclude that humans may deliver mutualist benefits *at some point in the future*, and this, alone, might be enough to inspire non-predation — as in the wolf-baboon example — or cultivation — as in the human-bee example — purely as a means of maintaining optionality. One assumption built into the “super-intelligence” problem, after all, is that AGI will be capable of developing functionalist (and possibly causal) theories of their world — to an equal, if not vastly greater, extent than humans have. From this, it follows that AGI would likely have an enormous set of mutualist dynamics to explore (and to think about safeguarding for the future) before picking out hostility as the most rational stance to adopt toward humanity.
Some versions of AGI “safeguarding mutualism” would likely resemble domestication; the way AGI would invest in the possibility of humans delivering mutualist benefits would be — in some measure — to “farm” us. (Researchers already use language like this when describing the wolf-baboon example.)
In this context, “domestication” may sound jarring, even borderline dystopian. But — crucially — that is not the same as catastrophic. In fact, one can easily imagine “human domestication” scenarios that enable greater flourishing than we have been able to manage, or plausibly could manage, on our own. Consider, for example, whether domestication has been catastrophic for a species like Bengal cats. At their limit, questions like this may be more metaphysical than empirical; they may ride on deep (and likely non-falsifiable) conceptions of what flourishing involves and requires. But at a practical level, for many species, like Bengal cats, it would seem odd to describe domestication as catastrophic. Domestication has effectively relieved Bengal cats of the need to constantly spend energy looking for food. Whatever drawbacks this has also occasioned (do Bengal cats experience ennui?), it seems like a major improvement, and certainly not a catastrophic deprivation, as such.
Conflict Avoidance
The second form of non-catastrophic equilibrium is conflict avoidance. This involves relationships of threat or competition in which one or more parties deems it easier — more utility-maximizing overall — to simply avoid the other(s). For example, jellyfish are a threat to human beings, and human beings are a threat to jellyfish. But the global “equilibrium” between the two species is avoidant, not hopelessly (or catastrophically) conflictual. If, circa 10,000 BC, the Jellyfish Intelligentsia voiced concerns analogous to those of the Bee Intelligentsia above, they, too, would have had little reason to worry. In the course of history, humans may well have pursued the subjugation or eradication of jellyfish; and we still might. The most likely equilibrium, however, is one in which humans mostly just leave jellyfish alone — and vice versa.
Importantly, to qualify as non-catastrophic, an “avoidant” equilibrium need not involve *zero* conflict. Rather, the key property is that conflict does not tend to multiply or escalate because, in the median case, the marginal cost of conflict is greater than the marginal cost of avoidance. Take the jellyfish example above. Sometimes jellyfish harm humans, and sometimes humans harm jellyfish. What makes the equilibrium between them avoidant is not a total absence of conflict; it’s that humans generally find it less costly to avoid jellyfish (by swimming away, say, or by changing the location of a diving expedition) than to confront them. We certainly *could* eradicate — or come very close to eradicating — jellyfish if that were an overriding priority. But it’s not; nor is it likely to become one. Our energy is better spent elsewhere.
Similar dynamics transpire at the intra-species level. Many human subcultures, for example, are marked by dynamics of reciprocal threat and even mutual predation — think, say, of organized crime, or competition among large companies in the same economic sector. Yet here, too, avoidance is far more prevalent than subjugation or eradication. Commodity distribution organizations, licit and illicit alike, do not tend to burn resources trying to destroy one another — at least, not when they can use the same resources to locate new markets, improve their products, or lower production costs. In practice, these strategies are almost always less costly and/or more beneficial than their destructive counterparts.
At a high level of generality, the appeal of avoidance, relative to escalating conflict, is not hard to see. Even when one group possesses the in-principle *capability* to destroy another, destructive strategies typically become more costly to keep pursuing the more they have already been pursued; up until the point of completion, the marginal cost of maintaining a destructive strategy tends to increase exponentially. Why? Because counter-parties tend to respond to destructive strategies adaptively, and often in ways that impose costs in the other direction. Retaliation and subversion are the two most common examples. The history of human conflict suggests that small, less powerful groups — in some cases, *vastly* less powerful groups — are capable of inflicting significant harm on their larger, more powerful counterparts. When humans (and other intelligent animals) find themselves in desperate circumstances, the combination of survival instinct, tenacity, and ingenuity can result in extraordinarily outsized per capita power.
This is not always true; sometimes, small, less powerful groups get decimated. The point, however, is that the *possibility* of small groups wielding outsized per capita power often suffices to make avoidance a more appealing ex ante strategy. Anticipating that destruction may be costly to accomplish, powerful groups often opt — as with territorial disputes between criminal and corporate organizations — for some combination of (1) investing in the creation of new surplus and (2) informally splitting up existing surplus without resorting to (catastrophic forms of) conflict (Peltzman et al. 1995).
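To make the ex ante comparison concrete, here is a deliberately toy expected-cost sketch in Python; every number in it is an illustrative assumption, not an estimate:

```python
# Toy expected-cost comparison for a more powerful party deciding between
# destroying and avoiding a weaker rival. All numbers are made up.
base_cost_destroy = 10.0   # direct cost of a destructive campaign
p_retaliation = 0.3        # chance the weaker side retaliates effectively
retaliation_cost = 50.0    # cost inflicted if retaliation succeeds
cost_avoid = 15.0          # ongoing cost of simply avoiding the rival

expected_cost_destroy = base_cost_destroy + p_retaliation * retaliation_cost
print(expected_cost_destroy > cost_avoid)  # True: avoidance wins ex ante
```

Even a modest probability of costly retaliation, in other words, can tip the powerful party toward avoidance before any conflict begins.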
There is reason, accordingly, to think that even if AGI found itself in potential conflict with humans — e.g., due to competition for the same resources — the most efficient response could be some combination of (1) creating new mechanisms for amassing the resource, or (2) finding ways to share the resource, even amid conflict. Imagine, for instance, if AGI determined that it was important to secure its own sources of energy. Would the answer be, as some have hypothesized, to seize control of all electricity-infrastructure? (Carlsmith 2021) Possibly; but it’s also possible that AGI would simply devise a better means of collecting energy, or that it would realize its longer-term interests were best served by allowing us to maintain joint access to existing energy sources — if nothing else, for appeasement purposes, to avoid the attrition costs of ongoing conflict.
Indifference
The third form of non-catastrophic equilibrium — surely the most pervasive, considering the sheer multitude of inter-species relationships that exist on earth — is indifference. Most underwater life, for example, is beyond human concern. We do not “avoid” plankton, in the sense just described. We simply pay them no mind. If the Plankton Intelligentsia voiced concern on par with the Bee Intelligentsia or the Jellyfish Intelligentsia, they, too, would not be “wrong,” exactly. But they would also be foolish to attribute much significance to — or to organize their productive capacities around — the risk of human-led subjugation or eradication.
The same could easily be true of AGI. In fact, the plankton analogy may well prove the most felicitous. If, as the super-intelligence problem hypothesizes, AGI ends up possessing vastly greater capability than humans, it stands to reason that AGI may relate to us in roughly the same way that we relate to other species of vastly lesser capability. And how is that? As a general matter, by paying them little or no attention. This is not true of *every* such species; bees have already supplied a counter-example. But the general claim holds. With respect to most species, most of the time, we have no conscious interactions at all.
Of course, indifference is not always innocuous. In certain guises, it can be highly destructive: if the goals of the powerful-but-indifferent group come into collision with the welfare of the less powerful group. For example, humans have been “indifferent” — in the sense described above — to many species of rainforest plants and animals, and the latter are considerably worse off for it. With this category, the important point is that catastrophic results, when they occur, do so *incidentally*. Catastrophe is not the goal; it is a mere collateral consequence (Yudkowsky 2007).
How, then, are humans likely to fare under the “indifference” model? It would depend entirely on the goals AGI decided to pursue. Some goals would likely ravage us. Suppose AGI decided that, in the interest of (say) astrophysical experimentation, one of its overriding objectives was to turn planet earth into a perfect sphere. In that case, human civilization may be doomed. But other goals would leave human social order — and human welfare, such as it is — effectively unaltered. If, for example, AGI decided the best use of its energy was to create and appreciate art of its own style, or to exhaustively master the game of Go, or anything else along such aesthetic lines, human civilization would be unlikely to register much, if any, effect. In fact, we might not even be aware of such happenings — in roughly the same sense that plankton are not aware of human trifles.
\*
**Claim two** — *there is no reason to think AGI-human interactions will break from past patterns of equilibration*
The goal of the last section was to show that, across a wide range of inter- and intra-species interactions, non-catastrophic equilibria are common. They are not inevitable. But they are prevalent — indeed, hyper-prevalent — once we broaden the scope of analysis. (For instance, there are vastly more oceanic species with which humans exist in “avoidant” or “indifferent” equilibria than the total number of mammalian species humans have ever come into contact with.) This does not mean, of course, that catastrophic outcomes are nonexistent — just that, to date, they make up a small, even negligible, share of the overall distribution of inter- and intra-species interactions in the natural world.
The next question is whether the same distributional dynamics — catastrophic outcomes dwarfed by non-catastrophic equilibria — would apply to the specific case of humans and AGI. I believe so, for two reasons: (1) we have seen no empirical evidence to the contrary, and (2) the only plausible dimension along which the human-AGI case would differ from historical parallels — namely, AGI’s enhanced capability — supplies no reason, in principle, to think that catastrophic outcomes would become more likely, relative to their non-catastrophic counterparts.
To be sure, AGI’s enhanced capability may change the type or quality of outcomes, both catastrophic and non-catastrophic, that define human-AGI interactions. What they are unlikely to change (or, at any rate, what we have no a priori reason to think they will change) is the *distribution* of those outcomes. Why? In short, because any changes in capability that increase the danger of catastrophe are equally likely, for the same reason, to create new horizons of mutualism and avoidance — leaving the overall distribution of possibilities, in principle, unchanged.
To begin with, the empirical evidence is easily summarized. So far, AI systems that have shown signs of adaptive capability seem, uniformly, to follow familiar patterns of strategic decision-making. Under current technological conditions, adaptive AI tends to proceed exactly as one might think — by crafting and revising plans in response to the functional goals of the environment in which it is deployed (Carlsmith 2021). In other words, it does exactly what one would expect *any* agent to do: grasp the parameters of the problem-space and deploy familiar — roughly predictable — strategies in response.
But the stronger argument in favor of the “AGI is different” position is conceptual, not empirical. The idea is that because AGI may have capabilities that exponentially outstrip those of humans and other biological agents, it is difficult, if not incoherent, to compare the interactions of biological agents to potential dynamics of human-AGI interaction. In particular, the worry is that AGI might be exceptionally capable at eradicating or subjugating humans (or other animal species), and — the thought continues — if AGI comes to *possess* that capability, it will be more likely, as a consequence, to *exercise* that capability.

This logic invites two forms of rejoinder. The first scrutinizes the claim’s implicit assumptions about AGI’s intentions. In short, even granting the possibility that AGI comes to possess the capability to eradicate or subjugate the human race, it may, nonetheless, lack the *desire or will* to carry out those goals — mitigating, or even eliminating, their danger in practice. Some commentators, for example, have wondered whether the training environments in which AGI systems develop will actually *conduce* to aggressive tendencies, insofar as those environments typically involve little or no competition for resources, exponentially faster time-horizons for adaptation, and so forth (Garfinkel 2022).
This first rejoinder — related to AGI’s intentionality — could well be sound. After all, if the prevalence of “indifference” dynamics in the natural world is any indication, the overwhelming likelihood, as a statistical matter, may be that AGI pays us no regard whatsoever. It is possible, in other words, that the default paradigm here is (something like) humans and plankton — not humans and bees, or humans and other advanced mammals. If anything, the plankton paradigm only becomes more plausible the more advanced — and alienated from us — we imagine AGI’s capabilities to be.
Of course, indifference does not always mean long-term harmony. As noted above, of all the species that human activity has eradicated (or that it soon threatens to eradicate), many are species with whom we exist “indifferently.” The eradication of such species is collateral damage: the incidental effect of our activity, often beyond conscious regard. The same may be true, in reverse, of AGI. Human welfare could easily become the collateral damage of whatever scientific, aesthetic, and other goals — totally unrelated to us — AGI decided to pursue.
But there is also a second rejoinder to the idea that AGI is likely to pursue the eradication or subjugation of humanity — which is fortunate, since speculation about AGI’s intentionality (or lack thereof) seems like a slender reed on which to premise humanity’s future, especially given how little we know about our *own* intentionality. The second rejoinder runs as follows: even assuming (1) that AGI comes to possess the capability to eradicate or subjugate humanity; and, further, (2) that AGI has the will (or at least does not entirely lack the will) to pursue those goals, it does not follow that AGI *will* pursue those goals. Rather, the question is whether the pursuit of those goals, relative to all the other goals AGI might pursue, would be utility-maximizing.
There is good reason, I think, to operate under the assumption that the answer to this question — would powerful, misaligned AGI actually pursue the eradication or subjugation of humanity? — is no. Or, to put it more precisely, there is good reason to assume that the answer to this question is *no more likely to be yes* than the likelihood of catastrophic outcomes resulting, in general, from interactions between mutually-powerful agents in the natural world. The reason is this: the same enhancements of capability that would enable AGI to eradicate or subjugate humanity would also enable AGI to avoid or cooperate with humanity, and there is no reason, in principle, to prioritize one set of outcomes over the other.
Start with avoidance. Although we do not know (by hypothesis) *which* goals AGI will pursue, it is easy to imagine goals that conflict, in some important measure, with human activity — leading AGI to view us as competitors or threats. The question is how AGI would act on that view. Would it try to eradicate or subjugate us? And more to the point: would AGI’s enhanced capability make those catastrophic results more likely than they would be in the (already familiar) context of biological agents beset by conflict? No — at least, not in the abstract. AGI certainly *could* take steps to eliminate human threat. But it could also — equally — take steps to avoid that threat. And the route it chooses would depend, not on AGI’s capability per se, but rather on an analysis of relative efficiency. In other words, the question would be which route, elimination or avoidance, is cheaper — and enhanced capability would make *both* routes cheaper. So that variable, by itself, cannot answer the relative efficiency question. Rather, it begs that question.
Consider the following thought-experiment. Humans spontaneously leap forward a few centuries of technological prowess, such that we now have the capability to instantly kill whole schools of jellyfish using electromagnetic energy. Would we use this technology to eradicate jellyfish across the board?
Maybe — but it would depend entirely on what other avoidance strategies the technology *also* enabled. If the same technology allowed individual human divers to kill specific jellyfish they happened to encounter, that solution (i.e., dealing with individual jellyfish on an ad hoc basis) would likely be preferable to large-scale eradication. Similarly, if the jellyfish grew to recognize that humans possess the capability to kill them relatively easily, they might start trying to *avoid us* — an “avoidant” equilibrium in its own right. Of course, we still might decide that eradicating jellyfish is worth it. The point is not that eradication is an impossible, or even an utterly implausible, end-state. The point is that enhanced capability is not the determinative variable. In fact, the biological world is replete with examples — of the human-jellyfish flavor — in which agents of vastly greater capability decide, under a relative efficiency analysis, that avoiding threats is more efficient than attempting to eliminate them.
Here, an important caveat bears noting. Namely, equilibria of an essentially “avoidant” nature — in which one or multiple agents decides that avoiding conflict is more efficient than trying to eliminate threats — can still be highly destructive. What distinguishes “avoidance” from eradication is not a total absence of conflict or violence; it is that the conflict tends not to escalate to catastrophic levels. Think, for example, of familiar dynamics of competition between criminal organizations — such as gangs and cartels. The interface between these groups is often marked by ongoing anti-social conduct; periods of stability are typically ephemeral, and violence and terror are often the norm. Nevertheless, the overall result is rarely *catastrophic* in the sense of one group being subject to total eradication or subjugation at another’s hand. Instead, the overall result is equilibrium, defined at the margin by unpredictable — but fundamentally stable — push and pull. (The same is true of, say, the “avoidant” interface between humans and mosquitos. In the aggregate, it entails the death of many, many mosquitos — but it is nowhere near “catastrophic” for mosquitos as a class.)
An equivalent analysis applies to mutualism. If AGI came to view humans as threats or competitors, not only would it consider — per above — the relative efficiency of avoiding us; it would also consider the relative efficiency of *cooperating* with us. Furthermore, as with avoidance, enhanced capability would enable new strategies for cooperation, even as it also enables new means of eradication. In fact, cooperation is likely the dimension along which enhanced capability is poised to make the greatest difference — since, historically, the greatest bottlenecks on inter-species cooperation have been (1) the difficulty of identifying opportunities for cooperative enterprise and (2) the impossibility of felicitous enough communication to effectuate those opportunities, once identified.
Consider, for instance, Katja Grace’s example of humans and ants: why, she asks, have the two species declined historically to develop joint enterprise? Ultimately, the reason is almost certainly *not* that the two species have nothing to offer one another. Rather, the reason we have not developed joint enterprise with ants is the impossibility of effective communication. If we *could* communicate with ants, there are many tasks for which we might gladly compensate ant-labor — for example, navigating hazardously small spaces, or performing inconspicuous surveillance (Grace 2023). That such tasks exist does not, of course, entail that their pursuit would prove mutually beneficial. Certain tasks might be too costly to ants at a physical level; others might offend their dignity; still others might simply command too great a premium (we wouldn’t be able to afford it!). The general trend, however, is that when parties are capable of (1) performing tasks of reciprocal benefit to one another and (2) communicating effectively, they locate avenues of cooperation.
What might human-AGI mutualism involve? Although we can expect AGI (again, by hypothesis) to have insight into this question that may transcend our own, a number of routes seem plausible. One would involve AGI capitalizing on our sensory capabilities, rather than trying to “reinvent the wheel.” Imagine, for instance, if AGI discerned a natural resource — say, a rare species of orchid — that it wished to amass. What would it do? There are a few possibilities. One is that it could build a small army of orchid-hunting robots, fit for dispatching all over the world. Another is that it could enlist humans to do all the labor (traveling, searching, extracting, etc.). A third would involve splitting-the-difference: with, say, AGI performing the “search” function, and humans performing the “extract” function.
The orchid example is stylized, of course, but the core point — that AGI would face ongoing tradeoffs around which sensory functions to insource and which to outsource — is likely to generalize, at least on the assumption that AGI cares whatsoever about our sensory world. What is more, even if AGI took the “insource” route, it may still have (potentially enormous) use for human labor dedicated to training robotic systems, in much the same way that human laborers are *currently* being deployed — by other humans — to train their own replacements.
The training model of AGI-human mutualism could have other applications as well. Imagine, for example, if AGI decided it wanted to develop a sense of affect — emotional or moral interiority — and it wished to “learn” these traits from human coaches. Or, likewise, suppose AGI decided it wanted to attain aesthetic sensibilities, in hopes of replicating various modes of pleasure — artistic, athletic, culinary, and so on — that it had discerned among human beings. All of these endeavors (and innumerably more) would leave ample room, at least in principle, for human contribution to AGI enterprise. And incentives toward mutualism would naturally follow suit.
To sum up — at the risk of redundancy — let me be clear about what I am (and am not) arguing about enhanced capability. The claim is not that enhanced capability *necessarily will* result in an identical distribution of probabilities, or an identical level of catastrophic risk, for human-AGI interaction, relative to historical inter-species patterns. The claim is more modest. It is (1) that nothing about the fact of enhanced capability, on its own, supplies grounds to think that novel forms of catastrophe will be more likely than novel forms of non-catastrophic equilibria, and (2) that absent such grounds, the most rational assumption is that AGI-human interactions will track, not deviate from, historical patterns of equilibration.
\*
**Conclusion**
If all the foregoing is — at least roughly — correct, how should it impact our overall assessment of the “catastrophe risk” associated with powerful, misaligned AGI? I hesitate to conjure specific probabilities associated with cooperative, avoidant, and indifferent end-states for human-AGI interactions. But it seems safe, at a minimum, to say that any chain of probabilities that aspires to capture the overall likelihood of catastrophe is missing something crucial if it focuses exclusively on (1) whether powerful, misaligned AGI is likely to emerge; and, if so, (2) whether we are likely to retain (or develop) the ability to counteract AGI’s catastrophic goals. These two variables have, perhaps understandably, received the lion’s share of attention to date (Grace 2022; Carlsmith 2021). A full account, however, requires thinking carefully about what AGI would actually do with its misaligned power.
Could misaligned AGI pursue goals deliberately adverse to human welfare? Sure. But it could also cooperate with us, avoid us, or ignore us. The history of inter-species interaction abounds with examples of these latter dynamics, even amid — sometimes *especially* amid — conditions of threat and competition. If that history is any guide, as I have argued it should be, the most plausible endgame for AGI-human interaction is not catastrophe. It is equilibrium.
\*
**Acknowledgments**
Thanks are due to Jill Anderson, Thomas Brennan-Marquez, Brendan Maher, Nathaniel Moyer, Eric Seubert, Peter Siegelman, Carly Zubrzycki, and Jackie Zubrzycki for helping refine the arguments in this essay.
**Bibliography**
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014)
Joseph Carlsmith, Is Power-Seeking AI an Existential Risk? (2021)
Ben Garfinkel, Review of ‘Carlsmith, Is Power-Seeking AI an Existential Risk?’ (2022)
Katja Grace, Counterarguments to the Basic AI Risk Case (2022)
Katja Grace, We Don’t Trade With Ants: AI’s Relationship With Us Will Not Be Like Our Relationship With Ants (2023)
Kayla Hale et al., Mutualism Increases Diversity, Stability, and Function in Multiplex Networks that Integrate Pollinators Into Food Webs (2020)
Bob Holmes, Monkeys’ Cozy Alliance with Wolves Looks Like Domestication (2015)
Holden Karnofsky, Thoughts on the Singularity Institute (2012)
Sam Peltzman et al., The Economics of Organized Crime (1995)
Phillip Pettit, Republicanism: A Theory of Freedom and Government (1997)
Nate Soares, Comments on ‘Is Power-Seeking AI an Existential Risk?’ (2021)
Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008)
Eliezer Yudkowsky, The Hidden Complexity of Wishes (2007)
Eliezer Yudkowsky, Coherent Extrapolated Volition (2004) |
1ebc0bc3-1c87-4639-a402-40f3d21a3637 | trentmkelly/LessWrong-43k | LessWrong | Trio Walks, Duo Talks
Epistemic Status: Exploration
Related: The Engineer and the Diplomat
Having a great conversation is hard, but valuable. The structure a conversation is built around determines how it will go. How should we engineer more valuable conversations?
At the retreat, it was suggested that we go on Trio Walks. The idea was that three is the ideal number of people for a conversation about important things. With only two, it is easy to encounter blind spots, make mistakes or lose your way. Having a third keeps things grounded.
The walking part was partly that we had nature all around us, partly that at some point we needed to get some exercise, but mostly to get the trio away from everyone else.
I only got the opportunity to go on one trio walk. Once we were clear of other people we did not do much walking, but we did stay a trio. I found it to be an excellent use of time, including the chance to meet two new people. Last week I arranged a trio walk with two people I wanted to connect, and that seemed to go better than it would have with a different number of people.
World of Two
This conflicts with my previous model, which said the right number of people is two. I thought the trade-off was quality against efficiency; with each additional person, things get a lot harder. The way large conversations happen at all, under this model, is most of the people are mostly observers.
With two people, the conversation can stay on track. Pauses to think are more practical. When one needs time, it is often obvious, and if it isn’t, one can ask. You can craft a multi-part argument. You can leave threads hanging to come back to later. If things get off track, it is relatively easy to steer things back on track.
With two people you can retain control of the conversation without the worry it will be hijacked, or you will cease to be participating.
Only one person needs to understand any given thing being said, so explanations can be tailored to that person. Different explanations w |
be14e919-4135-41b2-97c6-4a15a124e8b4 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Accuracy of arguments that are seen as ridiculous and intuitively false but don't have good counter-arguments
I think one of the biggest reasons I am worried about AI doom is that there don't seem to be very good counter-arguments. Most of them are just bad, and the ones that aren't are a bit vague (usually something along the lines of "humanity will figure it out" or "maybe LLMs will scale in nice ways").

However, I'm curious as to how accurate this heuristic is. **My question:** What examples are there from the past of an **argument that was widely seen as ridiculous and intuitively false, but was pretty solid, while the counter-arguments weren't**? (Sorry if that's a bit vague; use your best judgement. I'm looking specifically for examples that are similar to the AI x-risk debate.) And did they turn out true or false? Try to include reasons why the argument was so strong, a typical counter-argument, and the strongest counter-argument.
Please use spoiler text for the final answer so that I can try to predict it before seeing it! |
7d53ddcb-d245-4c5f-bc10-facc41fd58fa | trentmkelly/LessWrong-43k | LessWrong | Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness
In this post:
* A true-false test about octopuses
* What is it like to be an octopus?
* An exercise in updating on surprising facts
* Experiments related to animal suffering and consciousness
* The evolution of aging
* Should you read Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness?
I. Introduction
Peter Godfrey-Smith's Other Minds: the Octopus, the Sea, and the Deep Origins of Consciousness is a phenomenal mishmash of octopus- and consciousness-related topics. It deals with everything from the evolution of octopuses, to their social life, to animal consciousness (including octopus consciousness), to evolutionary theories of aging, and more. All of this is tied together by a palpable fascination with octopuses, which manifests itself in rich descriptions of Godfrey-Smith's own experiences scuba-diving off the coast of Australia to observe them.
The book attempts to fit discussion of an impressive amount of interesting topics all into one slim volume. On the one hand, this is great, as each topic is fascinating in its own right, and several are relevant to EA/rationality. On the other hand, fitting in so many topics is a difficult task which the book only halfway pulls off. There wasn’t enough room to discuss each topic in as much depth as they deserved, and the breadth of topics meant that the book felt somewhat unorganized and disunified. The book as a whole didn’t seem to have any central claim; it was simply a collection of interesting facts, observations, musings, and theories that somehow relate to either octopuses, consciousness, or both, plus a bunch of fascinating first-hand descriptions of octopus behavior.
Do I recommend the book? Yes and no. For general interest, definitely--it’s an interesting, enjoyable read; but for rationalists and EAs, there are probably better things to read on each topic the book discusses that would go into more depth, so it may not be the most effective investment of time for learning about, sa |
57abbc61-9098-499e-9c4b-b886da0823db | StampyAI/alignment-research-dataset/arxiv | Arxiv | Adam: A Method for Stochastic Optimization
1 Introduction
---------------
Stochastic gradient-based optimization is of core practical importance in many fields of science and engineering. Many problems in these fields can be cast as the optimization of some scalar parameterized objective function requiring maximization or minimization with respect to its parameters. If the function is differentiable w.r.t. its parameters, gradient descent is a relatively efficient optimization method, since the computation of first-order partial derivatives w.r.t. all the parameters is of the same computational complexity as just evaluating the function. Often, objective functions are stochastic. For example, many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case optimization can be made more efficient by taking gradient steps w.r.t. individual subfunctions, i.e. stochastic gradient descent (SGD) or ascent. SGD proved itself as an efficient and effective optimization method that was central in many machine learning success stories, such as recent advances in deep learning (Deng et al., [2013](#bib.bib2); Krizhevsky et al., [2012](#bib.bib10); Hinton & Salakhutdinov, [2006](#bib.bib6); Hinton et al., [2012a](#bib.bib7); Graves et al., [2013](#bib.bib5)). Objectives may also have other sources of noise than data subsampling, such as dropout (Hinton et al., [2012b](#bib.bib8)) regularization. For all such noisy objectives, efficient stochastic optimization techniques are required.
The focus of this paper is on the optimization of stochastic objectives with
high-dimensional parameter spaces. In these cases,
higher-order optimization methods are ill-suited, and discussion in this paper
will be restricted to first-order methods.
We propose *Adam*, a method for efficient stochastic optimization that
only requires first-order gradients with little memory requirement. The method
computes individual adaptive learning rates for different parameters from
estimates of first and second moments of the gradients; the name *Adam* is derived
from adaptive moment estimation. Our method is designed to combine the advantages of two recently popular methods:
AdaGrad (Duchi et al., [2011](#bib.bib3)), which works well with sparse gradients, and RMSProp (Tieleman & Hinton, [2012](#bib.bib20)), which works well in on-line and non-stationary settings; important connections to these and other
stochastic optimization methods are clarified in section [5](#S5 "5 Related work ‣ Adam: A Method for Stochastic Optimization").
Some of Adam’s advantages are that the magnitudes of parameter updates are
invariant to rescaling of the gradient, its stepsizes are approximately bounded
by the stepsize hyperparameter, it does not require a stationary objective, it
works with sparse gradients, and it naturally performs a form of step size annealing.
In section [2](#S2 "2 Algorithm ‣ Adam: A Method for Stochastic Optimization") we describe the algorithm and the properties of its update rule. Section [3](#S3 "3 Initialization bias correction ‣ Adam: A Method for Stochastic Optimization") explains our initialization bias correction technique, and section [4](#S4 "4 Convergence analysis ‣ Adam: A Method for Stochastic Optimization")
provides a theoretical analysis of Adam’s convergence in online
convex programming. Empirically, our method
consistently outperforms other methods for a variety of models and datasets, as shown in section [6](#S6 "6 Experiments ‣ Adam: A Method for Stochastic Optimization").
Overall, we show that Adam is a versatile algorithm that scales to large-scale high-dimensional machine learning problems.
2 Algorithm
------------
Require: $\alpha$: Stepsize
Require: $\beta_1, \beta_2 \in [0, 1)$: Exponential decay rates for the moment estimates
Require: $f(\theta)$: Stochastic objective function with parameters $\theta$
Require: $\theta_0$: Initial parameter vector
  $m_0 \leftarrow 0$ (Initialize first moment vector)
  $v_0 \leftarrow 0$ (Initialize second moment vector)
  $t \leftarrow 0$ (Initialize timestep)
  while $\theta_t$ not converged do
    $t \leftarrow t + 1$
    $g_t \leftarrow \nabla_\theta f_t(\theta_{t-1})$ (Get gradients w.r.t. stochastic objective at timestep $t$)
    $m_t \leftarrow \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t$ (Update biased first moment estimate)
    $v_t \leftarrow \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g_t^2$ (Update biased second raw moment estimate)
    $\hat{m}_t \leftarrow m_t / (1 - \beta_1^t)$ (Compute bias-corrected first moment estimate)
    $\hat{v}_t \leftarrow v_t / (1 - \beta_2^t)$ (Compute bias-corrected second raw moment estimate)
    $\theta_t \leftarrow \theta_{t-1} - \alpha \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$ (Update parameters)
  end while
  return $\theta_t$ (Resulting parameters)

Algorithm 1: *Adam*, our proposed algorithm for stochastic optimization. See section [2](#S2 "2 Algorithm ‣ Adam: A Method for Stochastic Optimization") for details, and for a slightly more efficient (but less clear) order of computation. $g_t^2$ indicates the elementwise square $g_t \odot g_t$. Good default settings for the tested machine learning problems are $\alpha = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. All operations on vectors are element-wise. With $\beta_1^t$ and $\beta_2^t$ we denote $\beta_1$ and $\beta_2$ to the power $t$.
See algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization") for pseudo-code of our proposed algorithm *Adam*. Let $f(\theta)$ be a noisy objective function: a stochastic scalar function that is differentiable w.r.t. parameters $\theta$. We are interested in minimizing the expected value of this function, $\mathbb{E}[f(\theta)]$, w.r.t. its parameters $\theta$. With $f_1(\theta), \dots, f_T(\theta)$ we denote the realisations of the stochastic function at subsequent timesteps $1, \dots, T$. The stochasticity might come from the evaluation at random subsamples (minibatches) of datapoints, or arise from inherent function noise. With $g_t = \nabla_\theta f_t(\theta)$ we denote the gradient, i.e. the vector of partial derivatives of $f_t$ w.r.t. $\theta$, evaluated at timestep $t$.
The algorithm updates exponential moving averages of the gradient ($m_t$) and the squared gradient ($v_t$), where the hyper-parameters $\beta_1, \beta_2 \in [0, 1)$ control the exponential decay rates of these moving averages. The moving averages themselves are estimates of the first moment (the mean) and the second raw moment (the uncentered variance) of the gradient. However, these moving averages are initialized as (vectors of) 0’s, leading to moment estimates that are biased towards zero, especially during the initial timesteps, and especially when the decay rates are small (i.e. the $\beta$s are close to 1). The good news is that this initialization bias can be easily counteracted, resulting in bias-corrected estimates $\hat{m}_t$ and $\hat{v}_t$. See section [3](#S3 "3 Initialization bias correction ‣ Adam: A Method for Stochastic Optimization") for more details.
Note that the efficiency of algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization") can, at the expense of clarity, be improved upon by changing the order of computation, e.g. by replacing the last three lines in the loop with the following lines: $\alpha_t = \alpha \cdot \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$ and $\theta_t \leftarrow \theta_{t-1} - \alpha_t \cdot m_t / (\sqrt{v_t} + \hat{\epsilon})$.
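As a concrete illustration, the following minimal NumPy sketch implements Algorithm 1; the function name, the toy noisy-quadratic objective, and the hyperparameter choices in the usage example are illustrative assumptions, not part of the paper:

```python
import numpy as np

def adam(grad_fn, theta0, alpha=0.001, beta1=0.9, beta2=0.999,
         eps=1e-8, num_steps=1000):
    """Follow Algorithm 1: moment updates, bias correction, parameter step."""
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)  # first moment vector
    v = np.zeros_like(theta)  # second raw moment vector
    for t in range(1, num_steps + 1):
        g = grad_fn(theta, t)                  # stochastic gradient at timestep t
        m = beta1 * m + (1 - beta1) * g        # biased first moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # biased second raw moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)           # bias-corrected second raw moment
        theta -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Usage: minimize the expectation of a noisy quadratic f(theta) = ||theta||^2 / 2,
# whose stochastic gradient is theta plus Gaussian noise.
rng = np.random.default_rng(0)

def noisy_grad(theta, t):
    return theta + 0.1 * rng.standard_normal(theta.shape)

print(adam(noisy_grad, [5.0, -3.0], alpha=0.05, num_steps=2000))  # approaches [0, 0]
```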
### 2.1 Adam’s update rule
An important property of Adam’s update rule is its careful choice of stepsizes. Assuming $\epsilon = 0$, the effective step taken in parameter space at timestep $t$ is $\Delta_t = \alpha \cdot \hat{m}_t / \sqrt{\hat{v}_t}$. The effective stepsize has two upper bounds: $|\Delta_t| \leq \alpha \cdot (1 - \beta_1) / \sqrt{1 - \beta_2}$ in the case $(1 - \beta_1) > \sqrt{1 - \beta_2}$, and $|\Delta_t| \leq \alpha$ otherwise. The first case only happens in the most severe case of sparsity: when a gradient has been zero at all timesteps except at the current timestep. For less sparse cases, the effective stepsize will be smaller. When $(1 - \beta_1) = \sqrt{1 - \beta_2}$ we have that $|\hat{m}_t / \sqrt{\hat{v}_t}| < 1$, and therefore $|\Delta_t| < \alpha$. In more common scenarios, we will have that $\hat{m}_t / \sqrt{\hat{v}_t} \approx \pm 1$, since $|\mathbb{E}[g] / \sqrt{\mathbb{E}[g^2]}| \leq 1$. The effective magnitude of the steps taken in parameter space at each timestep is approximately bounded by the stepsize setting $\alpha$, i.e., $|\Delta_t| \lessapprox \alpha$. This can be understood as establishing a *trust region* around the current parameter value, beyond which the current gradient estimate does not provide sufficient information.

This typically makes it relatively easy to know the right scale of $\alpha$ in advance. For many machine learning models, for instance, we often know in advance that good optima are with high probability within some set region in parameter space; it is not uncommon, for example, to have a prior distribution over the parameters. Since $\alpha$ sets (an upper bound of) the magnitude of steps in parameter space, we can often deduce the right order of magnitude of $\alpha$ such that optima can be reached from $\theta_0$ within some number of iterations.

With a slight abuse of terminology, we will call the ratio $\hat{m}_t / \sqrt{\hat{v}_t}$ the *signal-to-noise* ratio (SNR). With a smaller SNR the effective stepsize $\Delta_t$ will be closer to zero. This is a desirable property, since a smaller SNR means that there is greater uncertainty about whether the direction of $\hat{m}_t$ corresponds to the direction of the true gradient. For example, the SNR value typically becomes closer to 0 towards an optimum, leading to smaller effective steps in parameter space: a form of automatic annealing. The effective stepsize $\Delta_t$ is also invariant to the scale of the gradients; rescaling the gradients $g$ with factor $c$ will scale $\hat{m}_t$ with a factor $c$ and $\hat{v}_t$ with a factor $c^2$, which cancel out: $(c \cdot \hat{m}_t) / (\sqrt{c^2 \cdot \hat{v}_t}) = \hat{m}_t / \sqrt{\hat{v}_t}$.
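The rescaling invariance is easy to verify numerically; the following minimal sketch (using an arbitrary synthetic gradient sequence of our own choosing) recomputes the ratio $\hat{m}_t / \sqrt{\hat{v}_t}$ after rescaling the gradients by several constants $c$:

```python
import numpy as np

# Fixed gradient sequence; we rescale it by c and recompute the SNR ratio.
rng = np.random.default_rng(1)
g_seq = rng.standard_normal(100)
beta1, beta2, T = 0.9, 0.999, 100

for c in [1.0, 10.0, 1000.0]:
    m = v = 0.0
    for g in c * g_seq:
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** T)
    v_hat = v / (1 - beta2 ** T)
    print(c, m_hat / np.sqrt(v_hat))  # identical ratio for every c
```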
3 Initialization bias correction
---------------------------------
As explained in section [2](#S2 "2 Algorithm ‣ Adam: A Method for Stochastic Optimization"), Adam utilizes initialization bias correction terms. We will here derive the term for the second moment estimate; the derivation for the first moment estimate is completely analogous. Let $g$ be the gradient of the stochastic objective $f$, and we wish to estimate its second raw moment (uncentered variance) using an exponential moving average of the squared gradient, with decay rate $\beta_2$. Let $g_1, \dots, g_T$ be the gradients at subsequent timesteps, each a draw from an underlying gradient distribution $g_t \sim p(g_t)$. Let us initialize the exponential moving average as $v_0 = 0$ (a vector of zeros). First note that the update at timestep $t$ of the exponential moving average $v_t = \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g_t^2$ (where $g_t^2$ indicates the elementwise square $g_t \odot g_t$) can be written as a function of the gradients at all previous timesteps:

$$v_t = (1 - \beta_2) \sum_{i=1}^{t} \beta_2^{t-i} \cdot g_i^2 \qquad (1)$$
We wish to know how $\mathbb{E}[v_t]$, the expected value of the exponential moving average at timestep $t$, relates to the true second moment $\mathbb{E}[g_t^2]$, so we can correct for the discrepancy between the two. Taking expectations of the left-hand and right-hand sides of eq. ([1](#S3.E1 "(1) ‣ 3 Initialization bias correction ‣ Adam: A Method for Stochastic Optimization")):

$$\mathbb{E}[v_t] = \mathbb{E}\left[(1 - \beta_2) \sum_{i=1}^{t} \beta_2^{t-i} \cdot g_i^2\right] \qquad (2)$$
$$= \mathbb{E}[g_t^2] \cdot (1 - \beta_2) \sum_{i=1}^{t} \beta_2^{t-i} + \zeta \qquad (3)$$
$$= \mathbb{E}[g_t^2] \cdot (1 - \beta_2^t) + \zeta \qquad (4)$$

where $\zeta = 0$ if the true second moment $\mathbb{E}[g_i^2]$ is stationary; otherwise $\zeta$ can be kept small, since the exponential decay rate $\beta_2$ can (and should) be chosen such that the exponential moving average assigns small weights to gradients too far in the past. What is left is the term $(1 - \beta_2^t)$, which is caused by initializing the running average with zeros. In algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization") we therefore divide by this term to correct the initialization bias.
In case of sparse gradients, a reliable estimate of the second moment requires averaging over many gradients, i.e. choosing a large value of $\beta_2$; however, it is exactly this case of $\beta_2$ close to 1 where a lack of initialisation bias correction would lead to initial steps that are much larger.
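A small Monte Carlo sketch illustrates eq. (4) for a stationary gradient distribution (the particular Gaussian used here is an arbitrary choice of ours): without correction the running average concentrates near $\mathbb{E}[g^2](1 - \beta_2^t)$, and dividing by $(1 - \beta_2^t)$ recovers $\mathbb{E}[g^2]$:

```python
import numpy as np

rng = np.random.default_rng(2)
beta2, T, trials = 0.999, 50, 20000
v = np.zeros(trials)
for t in range(1, T + 1):
    g = rng.normal(loc=1.0, scale=2.0, size=trials)  # E[g^2] = 1^2 + 2^2 = 5
    v = beta2 * v + (1 - beta2) * g ** 2

print(v.mean())                      # ~ 5 * (1 - beta2**T) ~ 0.24: biased toward zero
print(v.mean() / (1 - beta2 ** T))   # ~ 5: bias-corrected estimate
```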
4 Convergence analysis
-----------------------
We analyze the convergence of Adam using the online learning framework proposed in (Zinkevich, [2003](#bib.bib23)). Given an arbitrary, unknown sequence of convex cost functions $f_1(\theta), f_2(\theta), \dots, f_T(\theta)$, our goal at each time $t$ is to predict the parameter $\theta_t$ and evaluate it on a previously unknown cost function $f_t$. Since the nature of the sequence is unknown in advance, we evaluate our algorithm using the regret, that is, the sum over all previous steps of the difference between the online prediction $f_t(\theta_t)$ and the value $f_t(\theta^*)$ at the best fixed parameter $\theta^*$ from a feasible set $\mathcal{X}$. Concretely, the regret is defined as:

$$R(T) = \sum_{t=1}^{T} \left[ f_t(\theta_t) - f_t(\theta^*) \right] \qquad (5)$$

where $\theta^* = \arg\min_{\theta \in \mathcal{X}} \sum_{t=1}^{T} f_t(\theta)$.
We show Adam has an $O(\sqrt{T})$ regret bound; a proof is given in the appendix. Our result is comparable to the best known bound for this general convex online learning problem. We also use some definitions to simplify our notation, where $g_t \triangleq \nabla f_t(\theta_t)$ and $g_{t,i}$ is its $i$th element. We define $g_{1:t,i} \in \mathbb{R}^t$ as the vector that contains the $i$th dimension of the gradients over all iterations up to $t$, $g_{1:t,i} = [g_{1,i}, g_{2,i}, \cdots, g_{t,i}]$. Also, we define $\gamma \triangleq \frac{\beta_1^2}{\sqrt{\beta_2}}$. Our following theorem holds when the learning rate $\alpha_t$ is decaying at a rate of $t^{-\frac{1}{2}}$ and the first moment running average coefficient $\beta_{1,t}$ decays exponentially with $\lambda$, which is typically close to 1, e.g. $1 - 10^{-8}$.
###### Theorem 4.1.

Assume that the function $f_t$ has bounded gradients, $\|\nabla f_t(\theta)\|_2 \leq G$, $\|\nabla f_t(\theta)\|_\infty \leq G_\infty$ for all $\theta \in \mathbb{R}^d$, that the distance between any $\theta_t$ generated by Adam is bounded, $\|\theta_n - \theta_m\|_2 \leq D$, $\|\theta_m - \theta_n\|_\infty \leq D_\infty$ for any $m, n \in \{1, ..., T\}$, and that $\beta_1, \beta_2 \in [0, 1)$ satisfy $\frac{\beta_1^2}{\sqrt{\beta_2}} < 1$. Let $\alpha_t = \frac{\alpha}{\sqrt{t}}$ and $\beta_{1,t} = \beta_1 \lambda^{t-1}$, $\lambda \in (0, 1)$. Adam achieves the following guarantee, for all $T \geq 1$:

$$R(T) \leq \frac{D^2}{2\alpha(1-\beta_1)} \sum_{i=1}^{d} \sqrt{T \hat{v}_{T,i}} \;+\; \frac{\alpha(1+\beta_1) G_\infty}{(1-\beta_1)\sqrt{1-\beta_2}\,(1-\gamma)^2} \sum_{i=1}^{d} \|g_{1:T,i}\|_2 \;+\; \sum_{i=1}^{d} \frac{D_\infty^2 G_\infty \sqrt{1-\beta_2}}{2\alpha(1-\beta_1)(1-\lambda)^2}$$
Our Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1. ‣ 4 Convergence analysis ‣ Adam: A Method for Stochastic Optimization") implies that when the data features are sparse and the gradients are bounded, the summation terms can be much smaller than their upper bounds, $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \ll dG_\infty\sqrt{T}$ and $\sum_{i=1}^{d} \sqrt{T \hat{v}_{T,i}} \ll dG_\infty\sqrt{T}$, in particular if the class of functions and data features are of the form in section 1.2 of (Duchi et al., [2011](#bib.bib3)). Their results for the expected value $\mathbb{E}[\sum_{i=1}^{d} \|g_{1:T,i}\|_2]$ also apply to Adam. In particular, adaptive methods such as Adam and Adagrad can achieve $O(\log d \, \sqrt{T})$, an improvement over $O(\sqrt{dT})$ for the non-adaptive method. Decaying $\beta_{1,t}$ towards zero is important in our theoretical analysis and also matches previous empirical findings; e.g. (Sutskever et al., [2013](#bib.bib19)) suggests that reducing the momentum coefficient at the end of training can improve convergence.
Finally, we can show that the average regret of Adam converges:

###### Corollary 4.2.

Assume that the function $f_t$ has bounded gradients, $\|\nabla f_t(\theta)\|_2 \leq G$, $\|\nabla f_t(\theta)\|_\infty \leq G_\infty$ for all $\theta \in \mathbb{R}^d$, and that the distance between any $\theta_t$ generated by Adam is bounded, $\|\theta_n - \theta_m\|_2 \leq D$, $\|\theta_m - \theta_n\|_\infty \leq D_\infty$ for any $m, n \in \{1, ..., T\}$. Adam achieves the following guarantee, for all $T \geq 1$:

$$\frac{R(T)}{T} = O\left(\frac{1}{\sqrt{T}}\right)$$

This result can be obtained by using Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1. ‣ 4 Convergence analysis ‣ Adam: A Method for Stochastic Optimization") and $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \leq dG_\infty\sqrt{T}$. Thus, $\lim_{T \to \infty} \frac{R(T)}{T} = 0$.
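As a worked step (ours, using the gradient bound $\|\nabla f_t(\theta)\|_\infty \leq G_\infty$, which gives $\hat{v}_{T,i} \leq G_\infty^2$ since $\hat{v}_T$ is a convex combination of the $g_i^2$):

```latex
% Substituting \hat{v}_{T,i} \le G_\infty^2 and
% \sum_{i=1}^{d} \|g_{1:T,i}\|_2 \le d G_\infty \sqrt{T} into Theorem 4.1:
R(T) \le \frac{D^2 d G_\infty}{2\alpha(1-\beta_1)}\sqrt{T}
     + \frac{\alpha(1+\beta_1) d G_\infty^2}{(1-\beta_1)\sqrt{1-\beta_2}\,(1-\gamma)^2}\sqrt{T}
     + \frac{d D_\infty^2 G_\infty \sqrt{1-\beta_2}}{2\alpha(1-\beta_1)(1-\lambda)^2}
   = O(\sqrt{T}),
\qquad \text{so} \quad \frac{R(T)}{T} = O\!\left(\frac{1}{\sqrt{T}}\right).
```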
5 Related work
---------------
Optimization methods bearing a direct relation to Adam are RMSProp (Tieleman & Hinton, [2012](#bib.bib20); Graves, [2013](#bib.bib4)) and AdaGrad (Duchi et al., [2011](#bib.bib3)); these relationships are discussed below. Other stochastic optimization methods include vSGD (Schaul et al., [2012](#bib.bib17)), AdaDelta (Zeiler, [2012](#bib.bib22)) and the natural Newton method from Roux & Fitzgibbon ([2010](#bib.bib15)), all setting stepsizes by estimating curvature from first-order information. The Sum-of-Functions Optimizer (SFO) (Sohl-Dickstein et al., [2014](#bib.bib18)) is a quasi-Newton method based on minibatches, but (unlike Adam) has memory requirements linear in the number of minibatch partitions of a dataset, which is often infeasible on memory-constrained systems such as a GPU. Like natural gradient descent (NGD) (Amari, [1998](#bib.bib1)), Adam employs a preconditioner that adapts to the geometry of the data, since $\hat{v}_t$ is an approximation to the diagonal of the Fisher information matrix (Pascanu & Bengio, [2013](#bib.bib13)); however, Adam’s preconditioner (like AdaGrad’s) is more conservative in its adaptation than vanilla NGD, preconditioning with the square root of the inverse of the diagonal Fisher information matrix approximation.
#### RMSProp:
An optimization method closely related to Adam is RMSProp (Tieleman & Hinton, [2012](#bib.bib20)). A version with momentum has sometimes been used (Graves, [2013](#bib.bib4)). There are a few important differences between RMSProp with momentum and Adam: RMSProp with momentum generates its parameter updates using a momentum on the rescaled gradient, whereas Adam updates are directly estimated using a running average of the first and second moments of the gradient. RMSProp also lacks a bias-correction term; this matters most in the case of a small value of β2 (required in the case of sparse gradients), since in that case not correcting the bias leads to very large stepsizes and often divergence, as we also empirically demonstrate in section [6.4](#S6.SS4 "6.4 Experiment: bias-correction term ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization").
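To make this concrete, here is a minimal numerical sketch (with an assumed first gradient of 0.01) showing how the very first uncorrected step is inflated by a factor of 1/√(1−β2):

```python
import numpy as np

g1 = np.array([0.01])            # assumed first gradient observed
beta2 = 0.999
v1 = (1 - beta2) * g1 ** 2       # second-moment estimate after one step

# Without bias correction (as in RMSProp), the normalized step size is
# |g1| / sqrt(v1) = 1 / sqrt(1 - beta2): far too large.
print(np.abs(g1) / np.sqrt(v1))                # ~31.62

# With Adam's correction, v1 / (1 - beta2**1) = g1**2, so the ratio is 1.
print(np.abs(g1) / np.sqrt(v1 / (1 - beta2)))  # 1.0
```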
#### AdaGrad:
An algorithm that works well for sparse gradients is AdaGrad (Duchi et al., [2011](#bib.bib3)). Its basic version updates parameters as θt+1=θt−α⋅gt/√∑ti=1g2i. Note that if we choose β2 to be infinitesimally close to 1 from below, then limβ2→1ˆvt=t−1⋅∑ti=1g2i. AdaGrad corresponds to a version of Adam with β1=0, infinitesimal (1−β2) and a replacement of α by an annealed version αt=α⋅t−1/2, namely θt−α⋅t−1/2⋅ˆmt/√limβ2→1ˆvt=θt−α⋅t−1/2⋅gt/√t−1⋅∑ti=1g2i=θt−α⋅gt/√∑ti=1g2i. Note that this direct correspondence between Adam and Adagrad does not hold when removing the bias-correction terms; without bias correction, like in RMSProp, a β2 infinitesimally close to 1 would lead to infinitely large bias, and infinitely large parameter updates.
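This correspondence can be checked numerically. The following sketch assumes a synthetic gradient sequence and uses β2=1−10−8 as a stand-in for the β2→1 limit, comparing the annealed, bias-corrected Adam step with β1=0 against the AdaGrad step:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(100, 5))    # synthetic gradient sequence g_1, ..., g_T

alpha = 0.1
# AdaGrad step at time t: alpha * g_t / sqrt(sum_{i<=t} g_i^2)
adagrad_steps = alpha * g / np.sqrt(np.cumsum(g ** 2, axis=0))

# Adam with beta1 = 0, beta2 near 1, bias correction, stepsize alpha / sqrt(t)
beta2 = 1 - 1e-8
v = np.zeros(5)
adam_steps = np.zeros_like(g)
for t in range(1, g.shape[0] + 1):
    v = beta2 * v + (1 - beta2) * g[t - 1] ** 2
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    adam_steps[t - 1] = alpha * t ** -0.5 * g[t - 1] / np.sqrt(v_hat)

print(np.max(np.abs(adam_steps - adagrad_steps)))  # ~0, up to the beta2 -> 1 limit
```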
6 Experiments
--------------
To empirically evaluate the proposed method, we investigated different popular
machine learning models, including logistic regression, multilayer fully
connected neural networks and deep convolutional neural networks. Using large
models and datasets, we demonstrate Adam can efficiently solve practical deep
learning problems.
We use the same parameter initialization when comparing different optimization
algorithms. The hyper-parameters, such as learning rate and momentum, are
searched over a dense grid and the results are reported using the best
hyper-parameter setting.
###
6.1 Experiment: Logistic Regression
Figure 1: Logistic regression training negative log likelihood on MNIST images and IMDB movie reviews with
10,000 bag-of-words (BoW) feature vectors.
We evaluate our proposed method on L2-regularized multi-class logistic
regression using the MNIST dataset. Logistic regression has a well-studied
convex objective, making it suitable for comparison of different optimizers
without worrying about local minima. The stepsize α in our logistic
regression experiments is adjusted with a 1/√t decay, namely αt=α/√t, which matches the theoretical
prediction from section [4](#S4 "4 Convergence analysis ‣ Adam: A Method for Stochastic Optimization"). The logistic regression model
predicts the class label directly from the 784-dimensional image vectors. We
compare Adam to accelerated SGD with Nesterov momentum and to Adagrad, using a
minibatch size of 128. As shown in Figure [1](#S6.F1 "Figure 1 ‣ 6.1 Experiment: Logistic Regression ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization"), Adam
yields convergence similar to SGD with momentum, and both converge faster than
Adagrad.
As discussed in (Duchi et al., [2011](#bib.bib3)), Adagrad can efficiently deal with
sparse features and gradients, as one of its main theoretical results, whereas
SGD is slow at learning rare features. Adam with 1/√t decay on its
stepsize should theoretically match the performance of Adagrad. We examine the
sparse feature problem using the IMDB movie review dataset from
(Maas et al., [2011](#bib.bib11)). We pre-process the IMDB movie reviews into
bag-of-words (BoW) feature vectors including the first 10,000 most frequent
words. The 10,000-dimensional BoW feature vector for each review is highly
sparse. As suggested in (Wang & Manning, [2013](#bib.bib21)), 50% dropout noise can be applied
to the BoW features during training to prevent over-fitting. In figure
[1](#S6.F1 "Figure 1 ‣ 6.1 Experiment: Logistic Regression ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization"), Adagrad outperforms SGD with Nesterov momentum by a large margin
both with and without dropout noise. Adam converges as fast as Adagrad. The
empirical performance of Adam is consistent with our theoretical findings in
sections [2](#S2 "2 Algorithm ‣ Adam: A Method for Stochastic Optimization") and [4](#S4 "4 Convergence analysis ‣ Adam: A Method for Stochastic Optimization"). Similar to Adagrad,
Adam can take advantage of sparse features and obtain a faster convergence rate
than plain SGD with momentum.
###
6.2 Experiment: Multi-layer Neural Networks
Multi-layer neural networks are powerful models with non-convex objective
functions. Although our convergence analysis does not apply to non-convex
problems, we empirically found that Adam often outperforms other methods in such cases. In our experiments, we
made model choices that are consistent with previous publications in the
area: a neural network with two fully connected hidden layers of 1000
hidden units each and ReLU activations is used for this experiment, with a
minibatch size of 128.
First, we study different optimizers using the standard deterministic
cross-entropy objective function with L2 weight decay on the parameters to
prevent over-fitting.
The sum-of-functions (SFO) method (Sohl-Dickstein et al., [2014](#bib.bib18)) is a recently proposed quasi-Newton method that works with minibatches of data and has shown good performance on optimization of
multi-layer neural networks. We used their implementation and compared with
Adam to train such models. Figure [2](#S6.F2 "Figure 2 ‣ 6.2 Experiment: Multi-layer Neural Networks ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization") shows that Adam makes faster progress in terms of both the number of iterations and wall-clock time. Due to the cost of updating curvature information, SFO is 5-10x slower per iteration
compared to Adam, and has a memory requirement that is linear in the number of minibatches.

(a)

(b)
Figure 2: Training of multilayer neural networks on MNIST images.
(a) Neural networks using dropout stochastic regularization. (b)
Neural networks with deterministic cost function. We compare with
the sum-of-functions (SFO) optimizer (Sohl-Dickstein et al., [2014](#bib.bib18)).
Stochastic
regularization methods, such as dropout, are an effective way to prevent
over-fitting and are often used in practice due to their simplicity. SFO assumes deterministic subfunctions, and indeed failed to
converge on cost functions with stochastic regularization. We
compare the effectiveness of Adam to other stochastic first order methods on
multi-layer neural networks trained with dropout noise. Figure
[2](#S6.F2 "Figure 2 ‣ 6.2 Experiment: Multi-layer Neural Networks ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization") shows our results; Adam shows better convergence than
other methods.
###
6.3 Experiment: Convolutional Neural Networks
Convolutional neural networks (CNNs) with several layers of convolution,
pooling and non-linear units have shown considerable success in computer vision
tasks. Unlike most fully connected neural nets, weight sharing in CNNs results
in vastly different gradients in different layers. A smaller learning rate for
the convolution layers is often used in practice when applying SGD. We show the
effectiveness of Adam in deep CNNs. Our CNN architecture has three alternating
stages of 5x5 convolution filters and 3x3 max pooling with stride of 2 that are
followed by a fully connected layer of 1000 rectified linear hidden units (ReLU’s). The input images are pre-processed by whitening, and dropout noise is applied to the input layer and fully connected layer. The minibatch size is set to 128, as in the previous experiments.
Interestingly, although both Adam and Adagrad make rapid progress lowering the cost in the initial
stage of training, as shown in Figure [3](#S6.F3 "Figure 3 ‣ 6.3 Experiment: Convolutional Neural Networks ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization") (left), Adam and SGD
eventually converge considerably faster than Adagrad for CNNs, as shown in Figure
[3](#S6.F3 "Figure 3 ‣ 6.3 Experiment: Convolutional Neural Networks ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization") (right). We notice that the second moment estimate
ˆvt vanishes to zero after a few epochs and is dominated by the ϵ in
algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization"). The second moment estimate is therefore a poorer
approximation to the geometry of the cost function in CNNs than for the fully
connected network from Section [6.2](#S6.SS2 "6.2 Experiment: Multi-layer Neural Networks ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization"). Reducing the minibatch
variance through the first moment, by contrast, is more important in CNNs and contributes to
the speed-up. As a result, Adagrad converges much slower than the other methods in this particular experiment. Though Adam shows only marginal improvement over SGD with momentum, it
adapts the learning rate scale for different layers automatically, instead of relying on
manual hand-picking as in SGD.
Figure 3: Convolutional neural networks training cost. (left) Training cost for the first three epochs.
(right) Training cost over 45 epochs.
CIFAR-10 with c64-c64-c128-1000 architecture.
###
6.4 Experiment: bias-correction term

Figure 4: Effect of bias-correction terms (red line) versus no bias correction terms (green line) after 10 epochs (left) and 100 epochs (right) on the loss (y-axes) when learning a Variational Auto-Encoder (VAE) (Kingma & Welling, [2013](#bib.bib9)), for different settings of stepsize α (x-axes) and hyper-parameters β1 and β2.
We also empirically evaluate the effect of the bias correction terms explained in sections [2](#S2 "2 Algorithm ‣ Adam: A Method for Stochastic Optimization") and [3](#S3 "3 Initialization bias correction ‣ Adam: A Method for Stochastic Optimization"). As discussed in section [5](#S5 "5 Related work ‣ Adam: A Method for Stochastic Optimization"), removal of the bias correction terms results in a version of RMSProp (Tieleman & Hinton, [2012](#bib.bib20)) with momentum. We vary β1 and β2 when training a variational auto-encoder (VAE) with the same architecture as in (Kingma & Welling, [2013](#bib.bib9)): a single hidden layer with 500 hidden units with softplus nonlinearities and a 50-dimensional spherical Gaussian latent variable. We iterated over a broad range of hyper-parameter choices, i.e. β1∈[0,0.9], β2∈[0.99,0.999,0.9999], and log10(α)∈[−5,...,−1]. Values of β2 close to 1, required for robustness to sparse gradients, result in larger initialization bias; we therefore expect the bias correction term to be important in such cases of slow decay, preventing an adverse effect on optimization.
In Figure [4](#S6.F4 "Figure 4 ‣ 6.4 Experiment: bias-correction term ‣ 6 Experiments ‣ Adam: A Method for Stochastic Optimization"), values of β2 close to 1 indeed lead to instabilities in training when no bias correction term was present, especially in the first few epochs of training. The best results were achieved with small values of (1−β2) and bias correction; this was more apparent towards the end of optimization, when gradients tend to become sparser as hidden units specialize to specific patterns. In summary, Adam performed equal to or better than RMSProp, regardless of the hyper-parameter setting.
7 Extensions
-------------
###
7.1 AdaMax
0: α: Stepsize
0: β1,β2∈[0,1): Exponential decay rates
0: f(θ): Stochastic objective function with parameters θ
0: θ0: Initial parameter vector
m0←0 (Initialize 1st moment vector)
u0←0 (Initialize the exponentially weighted infinity norm)
t←0 (Initialize timestep)
while θt not converged do
t←t+1
gt←∇θft(θt−1) (Get gradients w.r.t. stochastic objective at timestep t)
mt←β1⋅mt−1+(1−β1)⋅gt (Update biased first moment estimate)
ut←max(β2⋅ut−1,|gt|) (Update the exponentially weighted infinity norm)
θt←θt−1−(α/(1−βt1))⋅mt/ut (Update parameters)
end while
return θt (Resulting parameters)
Algorithm 2 *AdaMax*, a variant of Adam based on the infinity norm. See section [7.1](#S7.SS1 "7.1 AdaMax ‣ 7 Extensions ‣ Adam: A Method for Stochastic Optimization") for details. Good default settings for the tested machine learning problems are α=0.002, β1=0.9 and β2=0.999. With βt1 we denote β1 to the power t. Here, (α/(1−βt1)) is the learning rate with the bias-correction term for the first moment. All operations on vectors are element-wise.
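For reference, here is a minimal NumPy sketch of Algorithm 2, assuming a toy quadratic objective as the stochastic gradient oracle (the test problem is a placeholder, not from the experiments):

```python
import numpy as np

def adamax(grad_fn, theta0, alpha=0.002, beta1=0.9, beta2=0.999, steps=2000):
    """Minimal sketch of AdaMax (Algorithm 2)."""
    theta = theta0.astype(float).copy()
    m = np.zeros_like(theta)    # 1st moment vector
    u = np.zeros_like(theta)    # exponentially weighted infinity norm
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        u = np.maximum(beta2 * u, np.abs(g))
        theta -= (alpha / (1 - beta1 ** t)) * m / u   # |update| <= alpha
    return theta

# Placeholder objective f(theta) = ||theta||^2 / 2, whose gradient is theta.
print(adamax(lambda th: th, np.array([1.0, -2.0])))   # -> near the origin
```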
In Adam, the update rule for individual weights is to scale their gradients inversely proportional to a (scaled) L2 norm of their individual current and past gradients. We can generalize the L2 norm based update rule to an Lp norm based update rule. Such variants become numerically unstable for large p. However, in the special case where we let p→∞, a surprisingly simple and stable algorithm emerges; see algorithm [2](#alg2 "Algorithm 2 ‣ 7.1 AdaMax ‣ 7 Extensions ‣ Adam: A Method for Stochastic Optimization"). We’ll now derive the algorithm. Let, in the case of the Lp norm, the stepsize at time t be inversely proportional to v1/pt, where:
| | | | | |
| --- | --- | --- | --- | --- |
| | vt | =βp2vt−1+(1−βp2)|gt|p | | (6) |
| | | =(1−βp2)t∑i=1βp(t−i)2⋅|gi|p | | (7) |
Note that the decay term is here equivalently parameterised as βp2 instead of β2. Now let p→∞, and define ut=limp→∞(vt)1/p, then:
| | | | | |
| --- | --- | --- | --- | --- |
| | ut=limp→∞(vt)1/p | =limp→∞((1−βp2)t∑i=1βp(t−i)2⋅|gi|p)1/p | | (8) |
| | | =limp→∞(1−βp2)1/p(t∑i=1βp(t−i)2⋅|gi|p)1/p | | (9) |
| | | =limp→∞(t∑i=1(β(t−i)2⋅|gi|)p)1/p | | (10) |
| | | =max(βt−12|g1|,βt−22|g2|,…,β2|gt−1|,|gt|) | | (11) |
This corresponds to the remarkably simple recursive formula:
| | | | |
| --- | --- | --- | --- |
| | ut=max(β2⋅ut−1,|gt|) | | (12) |
with initial value u0=0. Note that, conveniently enough, we don’t need to correct for initialization bias in this case. Also note that the magnitude of parameter updates has a simpler bound with AdaMax than Adam, namely: |Δt|≤α.
###
7.2 Temporal averaging
Since the last iterate is noisy due to stochastic approximation, better generalization performance is often achieved by averaging. Previously in Moulines & Bach ([2011](#bib.bib12)), Polyak-Ruppert averaging (Polyak & Juditsky, [1992](#bib.bib14); Ruppert, [1988](#bib.bib16)) has been shown to improve the convergence of standard SGD, where ¯θt=1t∑tk=1θk. Alternatively, an exponential moving average over the parameters can be used, giving higher weight to more recent parameter values. This can be trivially implemented by adding one line to the inner loop of algorithms [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization") and [2](#alg2 "Algorithm 2 ‣ 7.1 AdaMax ‣ 7 Extensions ‣ Adam: A Method for Stochastic Optimization"): ¯θt←β2⋅¯θt−1+(1−β2)θt, with ¯θ0=0. Initialization bias can again be corrected by the estimator ˆθt=¯θt/(1−βt2).
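A minimal sketch of this extra line, with the same bias correction applied to the parameter average:

```python
import numpy as np

def ema_step(theta_bar, theta, t, beta2=0.999):
    """One temporal-averaging step: update the exponential moving average
    of the parameters and return its bias-corrected estimate."""
    theta_bar = beta2 * theta_bar + (1 - beta2) * theta
    theta_hat = theta_bar / (1 - beta2 ** t)   # corrects the zero initialization
    return theta_bar, theta_hat
```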
8 Conclusion
-------------
We have introduced a simple and computationally efficient algorithm for
gradient-based optimization of stochastic objective functions. Our method is aimed
towards machine learning problems with large datasets and/or high-dimensional
parameter spaces. The method combines the advantages of two recently popular
optimization methods: the ability of AdaGrad to deal with sparse gradients, and
the ability of RMSProp to deal with non-stationary objectives. The method is
straightforward to implement and requires little memory. The experiments confirm
the analysis on the rate of convergence in convex problems. Overall, we
found Adam to be robust and well-suited to a wide range of non-convex optimization problems in the field of machine learning.
9 Acknowledgments
------------------
This paper would probably not have existed without the support of Google Deepmind. We would like to give special thanks to Ivo Danihelka, and Tom Schaul for coining the name Adam. Thanks to Kai Fan from Duke University for spotting an error in the original AdaMax derivation. Experiments in this work were partly carried out on the Dutch national e-infrastructure with the support of SURF Foundation. Diederik Kingma is supported by the Google European Doctorate Fellowship in Deep Learning.
10 Appendix
------------
###
10.1 Convergence Proof
######
Definition 10.1.
A function f:Rd→R is convex if for all x, y ∈Rd,
for all λ∈[0,1],
| | | |
| --- | --- | --- |
| | λf(x)+(1−λ)f(y)≥f(λx+(1−λ)y) | |
Also, notice that a convex function can be lower bounded by a
hyperplane at its tangent.
######
Lemma 10.2.
If a function f:Rd→R is convex, then for all x, y ∈Rd,
| | | |
| --- | --- | --- |
| | f(y)≥f(x)+∇f(x)T(y−x) | |
The above lemma can be used to upper bound the regret and our proof
for the main theorem is constructed by substituting the
hyperplane with the Adam update rules.
The following two lemmas are used to support our main theorem. We also use
some definitions to simplify our notation, where gt≜∇ft(θt) and gt,i denotes its ith element. We define
g1:t,i∈Rt as the vector that contains the ith
dimension of the gradients over all iterations up to t, g1:t,i=[g1,i,g2,i,⋯,gt,i].
######
Lemma 10.3.
Let gt=∇ft(θt) and g1:t be defined as above and bounded, ∥gt∥2≤G, ∥gt∥∞≤G∞. Then,
| | | |
| --- | --- | --- |
| | T∑t=1√g2t,it≤2G∞∥g1:T,i∥2 | |
###### Proof.
We will prove the inequality using induction over T.
For the base case T=1, we have √g21,i≤2G∞∥g1,i∥2.
For the inductive step,
| | | | |
| --- | --- | --- | --- |
| | T∑t=1√g2t,it | =T−1∑t=1√g2t,it+√g2T,iT | |
| | | ≤2G∞∥g1:T−1,i∥2+√g2T,iT | |
| | | =2G∞√∥g1:T,i∥22−g2T,i+√g2T,iT | |
From ∥g1:T,i∥22−g2T,i+g4T,i4∥g1:T,i∥22≥∥g1:T,i∥22−g2T,i, we can take the square root of both sides and have,
| | | | |
| --- | --- | --- | --- |
| | √∥g1:T,i∥22−g2T,i | ≤∥g1:T,i∥2−g2T,i2∥g1:T,i∥2 | |
| | | ≤∥g1:T,i∥2−g2T,i2√TG2∞ | |
Rearrange the inequality and substitute the √∥g1:T,i∥22−g2T,i term,
| | | |
| --- | --- | --- |
| | 2G∞√∥g1:T,i∥22−g2T,i+√g2T,iT≤2G∞∥g1:T,i∥2 | |
∎
######
Lemma 10.4.
Let γ≜β21√β2. For β1, β2∈[0,1) that satisfy β21√β2<1 and bounded gt, ∥gt∥2≤G, ∥gt∥∞≤G∞, the following inequality holds
| | | |
| --- | --- | --- |
| | T∑t=1ˆm2t,i√tˆvt,i≤21−γ1√1−β2∥g1:T,i∥2 | |
###### Proof.
Under the assumption, √1−βt2(1−βt1)2≤1(1−β1)2. We can expand the last term in the summation using the update rules in Algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization"),
| | | | |
| --- | --- | --- | --- |
| | T∑t=1ˆm2t,i√tˆvt,i | =T−1∑t=1ˆm2t,i√tˆvt,i+√1−βT2(1−βT1)2(∑Tk=1(1−β1)βT−k1gk,i)2√T∑Tj=1(1−β2)βT−j2g2j,i | |
| | | ≤T−1∑t=1ˆm2t,i√tˆvt,i+√1−βT2(1−βT1)2T∑k=1T((1−β1)βT−k1gk,i)2√T∑Tj=1(1−β2)βT−j2g2j,i | |
| | | ≤T−1∑t=1ˆm2t,i√tˆvt,i+√1−βT2(1−βT1)2T∑k=1T((1−β1)βT−k1gk,i)2√T(1−β2)βT−k2g2k,i | |
| | | ≤T−1∑t=1ˆm2t,i√tˆvt,i+√1−βT2(1−βT1)2(1−β1)2√T(1−β2)T∑k=1T(β21√β2)T−k∥gk,i∥2 | |
| | | ≤T−1∑t=1ˆm2t,i√tˆvt,i+T√T(1−β2)T∑k=1γT−k∥gk,i∥2 | |
Similarly, we can upper bound the rest of the terms in the summation.
| | | | |
| --- | --- | --- | --- |
| | T∑t=1ˆm2t,i√tˆvt,i | ≤T∑t=1∥gt,i∥2√t(1−β2)T−t∑j=0tγj | |
| | | ≤T∑t=1∥gt,i∥2√t(1−β2)T∑j=0tγj | |
For γ<1, using the upper bound on the arithmetic-geometric series, ∑ttγt<1(1−γ)2:
| | | | |
| --- | --- | --- | --- |
| | T∑t=1∥gt,i∥2√t(1−β2)T∑j=0tγj | ≤1(1−γ)2√1−β2T∑t=1∥gt,i∥2√t | |
Apply Lemma [10.3](#S10.Thmtheorem3 "Lemma 10.3. ‣ 10.1 Convergence Proof ‣ 10 Appendix ‣ Adam: A Method for Stochastic Optimization"),
| | | | |
| --- | --- | --- | --- |
| | T∑t=1ˆm2t,i√tˆvt,i | ≤2G∞(1−γ)2√1−β2∥g1:T,i∥2 | |
∎
To simplify the notation, we define γ≜β21√β2. Intuitively, the following theorem holds when the learning rate αt is decaying at a rate of t−1/2 and the first moment running average coefficient β1,t decays exponentially with λ, which is typically close to 1, e.g. 1−10−8.
######
Theorem 10.5.
Assume that the function ft has bounded gradients, ∥∇ft(θ)∥2≤G, ∥∇ft(θ)∥∞≤G∞ for all θ∈Rd and distance
between any θt generated by Adam is bounded,
∥θn−θm∥2≤D, ∥θm−θn∥∞≤D∞ for any m,n∈{1,...,T}, and β1, β2∈[0,1) satisfy β21√β2<1. Let αt = α√t and β1,t=β1λt−1,λ∈(0,1). Adam achieves the
following guarantee, for all T≥1.
| | | |
| --- | --- | --- |
| | R(T)≤D22α(1−β1)d∑i=1√TˆvT,i+α(β1+1)G∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2+d∑i=1D2∞G∞√1−β22α(1−β1)(1−λ)2 | |
###### Proof.
Using Lemma [10.2](#S10.Thmtheorem2 "Lemma 10.2. ‣ 10.1 Convergence Proof ‣ 10 Appendix ‣ Adam: A Method for Stochastic Optimization"), we have,
| | | |
| --- | --- | --- |
| | ft(θt)−ft(θ∗)≤gTt(θt−θ∗)=d∑i=1gt,i(θt,i−θ∗,i) | |
From the update rules presented in algorithm [1](#alg1 "Algorithm 1 ‣ 2 Algorithm ‣ Adam: A Method for Stochastic Optimization"),
| | | | |
| --- | --- | --- | --- |
| | θt+1 | =θt−αtˆmt/√ˆvt | |
| | | =θt−αt1−βt1(β1,t√ˆvtmt−1+(1−β1,t)√ˆvtgt) | |
We focus on the ith dimension of the parameter vector θt∈Rd. Subtract the scalar θ∗,i and square both sides of the above update rule,
we have,
| | | | |
| --- | --- | --- | --- |
| | (θt+1,i−θ∗,i)2= | (θt,i−θ∗,i)2−2αt1−βt1(β1,t√ˆvt,imt−1,i+(1−β1,t)√ˆvt,igt,i)(θt,i−θ∗,i)+α2t(ˆmt,i√ˆvt,i)2 | |
We can rearrange the above equation and use Young’s inequality, ab≤a2/2+b2/2. Also, it can be shown that √ˆvt,i=√∑tj=1(1−β2)βt−j2g2j,i/√1−βt2≤∥g1:t,i∥2 and β1,t≤β1. Then
| | | | |
| --- | --- | --- | --- |
| | gt,i(θt,i−θ∗,i)= | (1−βt1)√ˆvt,i2αt(1−β1,t)((θt,i−θ∗,i)2−(θt+1,i−θ∗,i)2) | |
| | | +β1,t(1−β1,t)ˆv14t−1,i√αt−1(θ∗,i−θt,i)√αt−1mt−1,iˆv14t−1,i+αt(1−βt1)√ˆvt,i2(1−β1,t)(ˆmt,i√ˆvt,i)2 | |
| | ≤ | 12αt(1−β1)((θt,i−θ∗,i)2−(θt+1,i−θ∗,i)2)√ˆvt,i+β1,t2αt−1(1−β1,t)(θ∗,i−θt,i)2√ˆvt−1,i | |
| | | +β1αt−12(1−β1)m2t−1,i√ˆvt−1,i+αt2(1−β1)ˆm2t,i√ˆvt,i | |
We apply Lemma [10.4](#S10.Thmtheorem4 "Lemma 10.4. ‣ 10.1 Convergence Proof ‣ 10 Appendix ‣ Adam: A Method for Stochastic Optimization") to the above inequality and derive the regret bound by summing across all the dimensions for i∈1,...,d in the upper bound of ft(θt)−ft(θ∗) and
the sequence of convex functions for t∈1,...,T:
| | | | |
| --- | --- | --- | --- |
| | R(T)≤ | d∑i=112α1(1−β1)(θ1,i−θ∗,i)2√ˆv1,i+d∑i=1T∑t=212(1−β1)(θt,i−θ∗,i)2(√ˆvt,iαt−√ˆvt−1,iαt−1) | |
| | | +β1αG∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2+αG∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2 | |
| | | +d∑i=1T∑t=1β1,t2αt(1−β1,t)(θ∗,i−θt,i)2√ˆvt,i | |
From the assumption, ∥θt−θ∗∥2≤D, ∥θm−θn∥∞≤D∞, we have:
| | | | |
| --- | --- | --- | --- |
| | R(T)≤ | D22α(1−β1)d∑i=1√TˆvT,i+α(1+β1)G∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2+D2∞2αd∑i=1T∑t=1β1,t(1−β1,t)√tˆvt,i | |
| | ≤ | D22α(1−β1)d∑i=1√TˆvT,i+α(1+β1)G∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2 | |
| | | +D2∞G∞√1−β22αd∑i=1T∑t=1β1,t(1−β1,t)√t | |
We can use the arithmetic-geometric series upper bound for the last term:
| | | | |
| --- | --- | --- | --- |
| | T∑t=1β1,t(1−β1,t)√t | ≤T∑t=11(1−β1)λt−1√t | |
| | | ≤T∑t=11(1−β1)λt−1t | |
| | | ≤1(1−β1)(1−λ)2 | |
Therefore, we have the following regret bound:
| | | | |
| --- | --- | --- | --- |
| | R(T)≤ | D22α(1−β1)d∑i=1√TˆvT,i+α(1+β1)G∞(1−β1)√1−β2(1−γ)2d∑i=1∥g1:T,i∥2+d∑i=1D2∞G∞√1−β22α(1−β1)(1−λ)2 | |
∎
324fcc14-0e2e-4a62-9cdf-2fe282062d45 | trentmkelly/LessWrong-43k | LessWrong | Questions for further investigation of AI diffusion
This post is one part of the sequence Understanding the diffusion of large language models. As context for this post, I strongly recommend reading at least the 5-minute summary of the sequence.
This post lists questions about AI diffusion that I think would be worthy of more research at the time of writing. Some questions serve as direct follow-ups to my research, while others just seem like important questions related to diffusion. I already raised some of these questions throughout this sequence, but this post collects them all so that interested researchers can easily refer back to the questions.
Feel free to reach out to me about these research ideas. I may be able to offer advice, suggest links, and suggest people to talk to. It's possible that I or Rethink Priorities could help connect you with funding to work on these ideas if you're interested and a good fit.
Further research to evaluate my proposals to limit access to datasets and algorithmic insights
In a previous post I presented proposals to limit access to datasets and proposals to limit access to algorithmic insights. I believe those proposals are probably worth doing, but that belief has a low enough resilience that the next step should be further consideration of whether or not to do these things.
The follow-up questions for the dataset proposals that I think are highest priority are:
1. How feasible is it to actually convince various producers of large ML datasets to take the actions I recommended?
   1. Would they keep a dataset private, when they would have otherwise openly published it? Would some further incentive (e.g. financial compensation) be necessary?
   2. Is a data-labeling service such as Surge AI willing to add vetting procedures for who they provide services to?
   3. I recommend at least 1 full-time equivalent week investigating this. The research could involve first evaluating the benefit of my initial proposal, and then possibly interviewing people at data hosters/curators
d33bb26a-2517-48bb-88b4-e8e06d1cad08 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Towards Characterizing Divergence in Deep Q-Learning
1 Introduction
---------------
Deep Q-Learning (DQL), a family of reinforcement learning algorithms that includes Deep Q-Network (DQN) (Mnih et al., [2013](#bib.bib27), [2015](#bib.bib28)) and its continuous-action variants (Lillicrap et al., [2016](#bib.bib24); Fujimoto et al., [2018b](#bib.bib9); Haarnoja et al., [2018b](#bib.bib13)), is often successful at training deep neural networks for control. In DQL, a function approximator (a deep neural network) learns to estimate the value of each state-action pair under the optimal policy (the Q-function), and a control policy selects actions with the highest values according to the current Q-function approximator. DQL algorithms have been applied fruitfully in video games (Mnih et al., [2015](#bib.bib28)), robotics (Kalashnikov et al., [2018](#bib.bib19); Haarnoja et al., [2018a](#bib.bib12)), and user interactions on social media (Gauci et al., [2018](#bib.bib10)).
However, despite the high-profile empirical successes, these algorithms possess failure modes that are poorly understood and arise frequently in practice. The most common failure mode is divergence, where the Q-function approximator learns to ascribe unrealistically high values to state-action pairs, in turn destroying the quality of the greedy control policy derived from Q (van Hasselt et al., [2018](#bib.bib39)). Divergence in DQL is often attributed to three components common to all DQL algorithms, which are collectively considered the ‘deadly triad’ of reinforcement learning (Sutton, [1988](#bib.bib36); Sutton & Barto, [2018](#bib.bib35)):
* function approximation, in this case the use of deep neural networks,
* off-policy learning, the use of data collected on one policy to estimate the value of another policy,
* and bootstrapping, where the Q-function estimator is regressed towards a function of itself.
Well-known examples (Baird, [1995](#bib.bib2); Tsitsiklis & Roy, [1997](#bib.bib37)) demonstrate the potential of the deadly triad to cause divergence in approximate Q-learning. However, actionable descriptions of divergence in the general case remain elusive, prompting algorithm designers to attack the triad with an increasingly wide variety of heuristic solutions. These include target networks (Mnih et al., [2015](#bib.bib28)), entropy regularization (Fox et al., [2016](#bib.bib6); Haarnoja et al., [2018b](#bib.bib13)), n-step learning (Hessel et al., [2017](#bib.bib15)), and approximate double-Q learning (van Hasselt et al., [2016](#bib.bib38)).
The absence of theoretical characterization for divergence in DQL makes it challenging to reliably deploy DQL on new problems. To make progress toward such a characterization, we give an analysis inspired by Gordon ([1995](#bib.bib11)), who studied the behavior of approximate value learning algorithms in value space. We examine how Q-values change under a standard DQL update, and derive the leading order approximation to the DQL update operator. The approximate update turns out to have a simple form that disentangles and clarifies the contributions from the components of the deadly triad, and allows us to identify the important role played by the neural tangent kernel (NTK) (Jacot et al., [2018](#bib.bib18)) of the Q approximator. We consider conditions under which the approximate update is or isn’t a contraction map in the sup norm, based on the intuition that when it is a contraction DQL should behave stably, and when it is an expansion we should expect divergence.
Based on our analysis, we design an algorithm which is intended to approximately ensure that the Q-function update is non-expansive. Our algorithm, which we call Preconditioned Q-Networks (PreQN) is computationally expensive but theoretically simple: it works by preconditioning the TD-errors in minibatch gradient updates, using the inverse of a matrix of inner products of Q-function gradients. We demonstrate that PreQN is stable and performant on a standard slate of MuJoCo benchmarks from the OpenAI Gym (Brockman et al., [2016](#bib.bib4)), despite using none of the tricks typically associated with DQL. We also find a neat connection between PreQN and natural gradient (Amari, [1998](#bib.bib1)) methods, where under some slightly restricted conditions, the PreQN update is equivalent to a natural gradient Q-learning update. This connection explains a result noted by Knight & Lerner ([2018](#bib.bib22)): that natural gradient Q-learning appeared to be stable without target networks.
2 Preliminaries
----------------
###
2.1 Contraction Maps and Fixed Points
We begin with a brief mathematical review. Let X be a vector space with norm ∥⋅∥, and f a function from X to X. If ∀x,y∈X, f satisfies
| | | | |
| --- | --- | --- | --- |
| | ∥f(x)−f(y)∥≤β∥x−y∥ | | (1) |
with β∈[0,1), then f is called a contraction map with modulus β. If f satisfies Eq [1](#S2.E1 "(1) ‣ 2.1 Contraction Maps and Fixed Points ‣ 2 Preliminaries ‣ Towards Characterizing Divergence in Deep Q-Learning") but with β=1, then f is said to be a non-expansion.
By the Banach fixed-point theorem, if f is a contraction, there is a unique fixed-point x such that f(x)=x, and it can be obtained by the repeated application of f: for any point x0∈X, if we define a sequence of points {xn} such that xn=f(xn−1), limn→∞xn=x.
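As a toy illustration, iterating the scalar contraction f(x)=0.5x+1 (modulus β=0.5, fixed point x=2) converges geometrically from any starting point:

```python
f = lambda x: 0.5 * x + 1.0   # contraction with modulus 0.5; fixed point x = 2

x = 10.0
for n in range(30):
    x = f(x)                  # |x_n - 2| shrinks by a factor of 0.5 each step
print(x)                      # -> approximately 2.0
```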
###
2.2 Q Functions and TD-Learning
DQL algorithms learn control policies in the reinforcement learning (RL) setting with the infinite-horizon discounted return objective. They attempt to learn an approximator to the optimal action-value function Q∗, which is known to satisfy the optimal Bellman equation:
| | | | |
| --- | --- | --- | --- |
| | Q∗(s,a)=Es′∼P[R(s,a,s′)+γmaxa′Q∗(s′,a′)]. | | (2) |
Here, s and s′ are states, a and a′ are actions, P is the transition kernel for the environment, R is the reward function, and γ∈(0,1) is the discount factor. The optimal policy π∗ can be obtained as the policy which is greedy with respect to Q∗: π∗(s)=argmaxaQ∗(s,a). Thus, DQL algorithms approximate the optimal policy as the policy which is greedy with respect to the Q∗-approximator.
Let T∗:Q→Q be the operator on Q functions with T∗Q(s,a) given by the RHS of Eq [2](#S2.E2 "(2) ‣ 2.2 Q Functions and TD-Learning ‣ 2 Preliminaries ‣ Towards Characterizing Divergence in Deep Q-Learning"); then Eq [2](#S2.E2 "(2) ‣ 2.2 Q Functions and TD-Learning ‣ 2 Preliminaries ‣ Towards Characterizing Divergence in Deep Q-Learning") can be written as Q∗=T∗Q∗. The operator T∗ is called the optimal Bellman operator, and it is a contraction in the sup norm with modulus γ. When the Q-function can be represented as a finite table and the reward function and transition kernel are fully known, T∗ can be computed exactly and Q∗ can be obtained by value iteration: Qk+1=T∗Qk. The convergence of value iteration from any initial point Q0 is guaranteed by the Banach fixed-point theorem.
When the reward function and transition kernel are not fully known, it is still possible to learn Q∗ in the tabular setting via Q-learning (Watkins & Dayan, [1992](#bib.bib41)). Q-learning updates the Q-values of state-action pairs as they are visited by some exploration policy according to:
| | | | |
| --- | --- | --- | --- |
| | Qk+1(s,a)=Qk(s,a)+αk(^T∗Qk(s,a)−Qk(s,a)), | | (3) |
where ^T∗Qk(s,a)=r+γmaxa′Qk(s′,a′) is a sample estimate for T∗Qk(s,a) using the reward and next state obtained from the environment while exploring. Under mild conditions on state-action visitation (all pairs must be visited often enough) and learning rates αk (they must all gradually go to zero and always lie in [0,1)), Q-learning converges to Q∗. Q-learning is called a temporal difference (TD) algorithm because the updates to Q-values are based on the temporal difference error:
| | | | |
| --- | --- | --- | --- |
| | δt | =^T∗Q(st,at)−Q(st,at) | |
| | | =rt+γmaxa′Q(st+1,a′)−Q(st,at). | |
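In code, a single tabular Q-learning update from a sampled transition is only a few lines; the sketch below assumes Q is stored as an |S|×|A| NumPy array:

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update (Eq. 3): nudge Q(s, a) toward the
    sample backup r + gamma * max_a' Q(s', a') by the TD error."""
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td_error
    return Q
```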
3 Towards Characterizing Divergence in Deep Q-Learning
-------------------------------------------------------
Deep Q-Learning (DQL) algorithms are based on the generalization of Eq [3](#S2.E3 "(3) ‣ 2.2 Q Functions and TD-Learning ‣ 2 Preliminaries ‣ Towards Characterizing Divergence in Deep Q-Learning") to the function approximation setting:
| | | | |
| --- | --- | --- | --- |
| | θ′=θ+α(^T∗Qθ(s,a)−Qθ(s,a))∇θQθ(s,a), | | (4) |
where Qθ is a differentiable function approximator with parameters θ, and θ′ are the parameters after an update. Note that when Qθ is a table, Eq [4](#S3.E4 "(4) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") reduces exactly to Eq [3](#S2.E3 "(3) ‣ 2.2 Q Functions and TD-Learning ‣ 2 Preliminaries ‣ Towards Characterizing Divergence in Deep Q-Learning").
Typically, DQL algorithms make use of experience replay (Lin, [1992](#bib.bib25)) and minibatch gradient descent (Mnih et al., [2013](#bib.bib27)), resulting in an expected update:
| | | | |
| --- | --- | --- | --- |
| | θ′=θ+αEs,a∼ρ[(T∗Qθ(s,a)−Qθ(s,a))∇θQθ(s,a)], | | (5) |
where ρ is the distribution of experience in the replay buffer at the time of the update. For stability, it is conventional to replace the bootstrap, the T∗Qθ term, with one based on a slowly-updated target network: T∗Qψ, where the parameters ψ are either infrequently copied from θ (Mnih et al., [2013](#bib.bib27)) or obtained by Polyak averaging θ (Lillicrap et al., [2016](#bib.bib24)). However, we will omit target networks from our analysis.
Unlike Q-learning, DQL in its standard form currently has no known convergence guarantees, although some convergence results (Yang et al., [2019](#bib.bib42)) have been obtained for a closely-related algorithm called Neural-Fitted Q Iteration (Riedmiller, [2005](#bib.bib32)) when used with deep ReLU networks.
###
3.1 Taylor Expansion Analysis of Q
To gain a deeper understanding of the behavior of DQL, we study the change in Q-values following an update based on Eq [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning"), by examining the Taylor expansion of Q around θ at a state-action pair (¯s,¯a). The Taylor expansion is
| | | | |
| --- | --- | --- | --- |
| | Qθ′(¯s,¯a)=Qθ(¯s,¯a)+∇θQθ(¯s,¯a)T(θ′−θ)+O(∥θ′−θ∥2), | | (6) |
and by plugging Eq [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") into Eq [6](#S3.E6 "(6) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning"), we obtain:
| | | | | |
| --- | --- | --- | --- | --- |
| | Qθ′(¯s,¯a)= | Qθ(¯s,¯a) | | (7) |
| | | +αEs,a∼ρ[kθ(¯s,¯a,s,a)(T∗Qθ(s,a)−Qθ(s,a))] | | |
| | | +O(∥αg∥2), | | (8) |
where
| | | | |
| --- | --- | --- | --- |
| | kθ(¯s,¯a,s,a)≐∇θQθ(¯s,¯a)T∇θQθ(s,a) | | (9) |
is the neural tangent kernel (NTK) (Jacot et al., [2018](#bib.bib18)), and αg=θ′−θ. It is instructive to look at Eq [8](#S3.E8 "(8) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") for the case of finite state-action spaces, where we can consider its matrix-vector form.
######
Theorem 1.
For Q-learning with nonlinear function approximation based on the update in Eq [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning"), when the state-action space is finite and the Q function is represented as a vector in R|S||A|, the Q-values before and after an update are related by:
| | | | |
| --- | --- | --- | --- |
| | Qθ′=Qθ+αKθDρ(T∗Qθ−Qθ)+O(∥αg∥2), | | (10) |
where Kθ is the |S||A|×|S||A| matrix of entries given by Eq [9](#S3.E9 "(9) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning"), and Dρ is a diagonal matrix with entries given by ρ(s,a), the distribution from the replay buffer.
Although extremely simple, we believe that Eq [10](#S3.E10 "(10) ‣ Theorem 1. ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is illuminating because it shows the connection between the deadly triad and the thing we really care about: the Q-values themselves. At leading order,
* the Kθ term is the contribution from function approximation, with its off-diagonal terms creating generalization across state-action pairs,
* the Dρ term is the contribution from the off-policy data distribution,
* the T∗Qθ term is the contribution from bootstrapping,
and these terms interact by multiplication. The form of Eq [10](#S3.E10 "(10) ‣ Theorem 1. ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") suggests that a useful way to think about the stability and convergence of deep Q-learning is to reason about whether the leading-order update operator U:Q→Q with values
| | | | |
| --- | --- | --- | --- |
| | UQθ=Qθ+αKθDρ(T∗Qθ−Qθ) | | (11) |
is or is not a contraction on Q. In what follows, we will develop an intuition for such conditions by considering a sequence of operators that incrementally introduce the components of U. After building intuition for failure modes, we will consider how prior methods try to repair or mitigate them. We contend that prior work in DQL predominantly focuses on the data distribution or the bootstrap, with limited exploration of the contribution from Kθ to instability. This analysis inspires PreQN, our new algorithm which attempts to repair divergence issues by cancelling out within-batch generalization errors created by Kθ.
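For small problems, the kernel entries in Eq [9](#S3.E9 "(9) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") can be estimated directly. The sketch below uses finite differences; the scalar interface q_fn(theta, sa) is an assumption of the sketch, not an interface defined here:

```python
import numpy as np

def ntk_matrix(q_fn, theta, state_actions, eps=1e-5):
    """Finite-difference sketch of the neural tangent kernel entries
    k_theta(i, j) = grad_theta Q(i) . grad_theta Q(j) over a batch.

    q_fn(theta, sa) -> scalar Q-value; theta is a flat parameter vector.
    """
    grads = []
    for sa in state_actions:
        g = np.zeros_like(theta)
        for p in range(theta.size):
            dtheta = np.zeros_like(theta)
            dtheta[p] = eps
            g[p] = (q_fn(theta + dtheta, sa) - q_fn(theta - dtheta, sa)) / (2 * eps)
        grads.append(g)
    Phi = np.stack(grads, axis=1)   # d x |batch|, columns grad_theta Q(s, a)
    return Phi.T @ Phi              # |batch| x |batch| kernel matrix K_theta
```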
###
3.2 Building Intuition for Divergence
The aim of this section is to understand how the update U:Q→Q given by Eq [11](#S3.E11 "(11) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") can give rise to instability in deep Q-learning, and how that instability might be repaired. To begin with, consider the operator U1 given by
| | | | |
| --- | --- | --- | --- |
| | U1Q=Q+α(T∗Q−Q). | | (12) |
######
Lemma 1.
For α∈(0,1), U1 given by Eq [12](#S3.E12 "(12) ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is a contraction on Q in the sup norm, and its fixed-point is Q∗.
Proofs for this and all other results are in the appendix. With sampling, U1 would be essentially the same operator as used in tabular Q-learning (Watkins & Dayan, [1992](#bib.bib41)), and it would benefit from similar performance guarantees. This gives us intuition point 1:
Intuition 1: When U more closely resembles U1, we should expect deep Q-learning to behave more stably.
Next, we consider the operator U2 given by
| | | | |
| --- | --- | --- | --- |
| | U2Q=Q+αDρ(T∗Q−Q), | | (13) |
where Dρ is a diagonal matrix with entries ρ(s,a), a probability mass function on state-action pairs.
######
Lemma 2.
If ρ(s,a)>0 for all s,a and α∈(0,1/ρmax) where ρmax=maxs,aρ(s,a), then U2 given by Eq [13](#S3.E13 "(13) ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is a contraction in the sup norm and its fixed-point is Q∗. If there are any s,a such that ρ(s,a)=0 and α∈(0,1/ρmax), however, it is a non-expansion in Q and not a contraction.
By considering U2, we see how the data distribution can have an impact: as long as the exploration policy touches all state-action pairs often enough, U2 behaves well, but missing data poses a problem. The Q-values for missing state-action pairs will never change from their initial values, and bootstrapping will cause those erroneous values to influence the Q-values for ‘downstream’ state-action pairs. This leads us to our second point of intuition:
Intuition 2: When data is scarce, deep Q-learning may struggle, and initial conditions will matter more.
Data is scarcest at the beginning of training; empirical results from van Hasselt et al. ([2018](#bib.bib39)) suggest that this is when DQL is most susceptible to divergence.
Next, we consider the operator U3 given by
| | | | |
| --- | --- | --- | --- |
| | U3Q=Q+αKDρ(T∗Q−Q), | | (14) |
where K is a constant symmetric, positive-definite matrix. Interestingly, U3 corresponds exactly to the expected update for the case of linear function approximation, which we make precise in the next lemma:
######
Lemma 3.
For Q-learning with linear function approximators of the form Qθ(s,a)=θTϕ(s,a) and updates based on Eq. [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning"), under the same conditions as Theorem 1, the Q-values before and after an update are related by
| | | | |
| --- | --- | --- | --- |
| | Qθ′=U3Qθ, | | (15) |
where K(¯s,¯a,s,a)=ϕ(¯s,¯a)Tϕ(s,a). Eq. [15](#S3.E15 "(15) ‣ Lemma 3. ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") differs from Eq. [10](#S3.E10 "(10) ‣ Theorem 1. ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") in that there are no higher-order terms, and K is constant with respect to θ.
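Lemma 3 is easy to confirm numerically; in the sketch below, the TD-error vector is a random stand-in for T∗Qθ−Qθ, since only the algebraic identity is being checked:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 6                                  # parameters, state-action pairs
Phi = rng.normal(size=(d, n))                # columns are features phi(s, a)
theta = rng.normal(size=d)
rho = np.full(n, 1.0 / n)                    # uniform data distribution
delta = rng.normal(size=n)                   # stand-in for T*Q - Q
alpha = 0.05

# Parameter-space update (Eq. 5) and Q-space update (Eq. 15) agree exactly.
theta_new = theta + alpha * Phi @ (rho * delta)
q_new_param = Phi.T @ theta_new
K = Phi.T @ Phi
q_new_qspace = Phi.T @ theta + alpha * K @ (rho * delta)
print(np.allclose(q_new_param, q_new_qspace))   # True
```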
We now consider when U3 is a contraction in the sup norm.
######
Theorem 2.
Let indices i, j refer to state-action pairs. Suppose that K and ρ satisfy the conditions:
| | | | | |
| --- | --- | --- | --- | --- |
| | ∀i, | αKiiρi<1, | | (16) |
| | ∀i, | (1+γ)∑j≠i|Kij|ρj≤(1−γ)Kiiρi. | | (17) |
Then U3 is a contraction on Q in the sup norm, with fixed-point Q∗.
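The two conditions are straightforward to check for a given kernel and data distribution; is_sup_norm_contraction below is a hypothetical helper for this purpose, not part of the results:

```python
import numpy as np

def is_sup_norm_contraction(K, rho, alpha, gamma):
    """Check the sufficient conditions (16)-(17) of Theorem 2 for U3,
    given a PSD kernel matrix K and a state-action distribution rho."""
    d = np.diag(K)
    off = np.abs(K) @ rho - d * rho            # sum_{j != i} |K_ij| rho_j
    cond16 = np.all(alpha * d * rho < 1)
    cond17 = np.all((1 + gamma) * off <= (1 - gamma) * d * rho)
    return bool(cond16 and cond17)

# The identity kernel (tabular-like case) passes; a kernel with large
# off-diagonal terms fails even at a modest learning rate.
rho = np.full(4, 0.25)
print(is_sup_norm_contraction(np.eye(4), rho, alpha=0.5, gamma=0.99))         # True
print(is_sup_norm_contraction(np.eye(4) + 0.5, rho, alpha=0.5, gamma=0.99))  # False
```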
To frame this discussion, we’ll note that the condition in Eq. [17](#S3.E17 "(17) ‣ Theorem 2. ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is extremely restrictive. It requires that ρ>0 everywhere, and that the off-diagonal terms of K are very small relative to the on-diagonal terms (for typical choices of γ, e.g. γ=0.99). As a result, an analysis of this kind may not suffice to explain the success of linear TD-learning in typical use cases. But we nonetheless find this useful in motivating the following point of intuition:
Intuition 3: The stability of Q-learning is tied to the generalization properties of the Q-approximator. Approximators with more aggressive generalization (larger off-diagonal terms in Kθ) are less likely to demonstrate stable learning.
So far, we have reasoned about several individual update operators, but we have not made explicit reference to the full dynamics of training with nonlinear function approximators. In deep Q-learning, both the kernel Kθ and the data distribution ρ change between update steps. Thus, each step can be viewed as applying a different update operator. It is important to ask if our intuitions so far have any bearing in this setting; this is the subject of our next result.
######
Theorem 3.
Consider a sequence of updates {U0,U1,...}, with each Ui:Q→Q Lipschitz continuous, with Lipschitz constant βi, with respect to a norm ∥⋅∥. Furthermore, suppose all Ui share a common fixed-point, ~Q. Then for any initial point Q0, the sequence of iterates produced by Qi+1=UiQi satisfies:
| | | | |
| --- | --- | --- | --- |
| | ∥~Q−Qi∥≤(i−1∏k=0βk)∥~Q−Q0∥. | | (18) |
Furthermore, if there is an iterate j such that ∀k≥j,βk∈[0,1), the sequence {Q0,Q1,...} converges to ~Q.
Roughly speaking, this theorem says that if you sequentially apply different contraction maps with the same fixed-point, you will attain that fixed-point.
In DQL, the common fixed-point between all update operators based on Eq. [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is Q∗. For neural network approximators commonly used in practice, such update operators are unlikely to be contractions and convergence to Q∗ is out of reach (especially considering that Q∗ may not even be expressible in the approximator class). Nonetheless, we view Theorem 3 as motivating our final point of intuition:
Intuition 4: Although the DQL update operator varies between steps, intuitions from the constant-update setting can provide useful guidance for understanding and repairing divergence issues in DQL.
To sum up, we enumerate and discuss the failure modes for DQL that appear likely based on our analysis so far.
Failure Mode 1: Linearization breaks. The learning rate α is too high, second-order terms in Eq. [10](#S3.E10 "(10) ‣ Theorem 1. ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") are large, and updates do not correlate with Bellman updates in any meaningful way. (Based on Theorem 1.)
Failure Mode 2: Overshooting. The learning rate α is small enough for the linearization to approximately hold, but is large enough that U from Eq. [11](#S3.E11 "(11) ‣ 3.1 Taylor Expansion Analysis of Q ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") is sometimes an expansion. (Based on Theorem 2, Eq [16](#S3.E16 "(16) ‣ Theorem 2. ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning").)
Failure Mode 3: Over-generalization. The kernel matrix Kθ has large off-diagonal terms, causing the Q function to generalize too aggressively and making U sometimes an expansion. (Based on Theorem 2, Eq [17](#S3.E17 "(17) ‣ Theorem 2. ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning").)
Failure Mode 4: Extrapolation error. The data distribution used for the update is inadequate. Q-values for missing (or under-represented) state-action pairs are adjusted solely or primarily by generalization, which sometimes produces errors. Bootstrapping then propagates those errors through the Q-values for all other state-action pairs. (Based on Lemma 2.) This failure mode was previously identified, named, and studied empirically by (Fujimoto et al., [2018a](#bib.bib8)).
It is important to note that these failure modes may not present in clearly-distinguishable ways: indeed, they can cascade into each other, creating feedback loops that lead to divergence. For instance, consider the interaction between over-generalization and extrapolation error. A network with limited generalization would keep the Q-values for missing state-action pairs close to their initial values. This would lead to inaccurate, but not necessarily divergent, downstream Q-values. On the other hand, a network that over-generalizes will significantly alter the Q-values for missing state-action pairs. A slight positive bias (where all of those Q-values increase) will get propagated to downstream Q-values due to extrapolation error, making them optimistic. But this reinforces the positive bias in the generalization to missing state-action pair Q-values—creating a feedback loop, and ultimately divergence.
###
3.3 Interpreting Prior Work
A substantial body of prior work on stabilizing DQL focuses on modifying either the data distribution or the TD-errors.
Data distribution-based methods include massive-scale experience collection, as in Gorila-DQN (Nair et al., [2015](#bib.bib29)), Ape-X (Horgan et al., [2018](#bib.bib16)), and R2D2 (Kapturowski et al., [2019](#bib.bib20)), and methods for improved exploration, like entropy regularization (Haarnoja et al., [2018b](#bib.bib13)). We speculate that such methods improve stability in DQL primarily by mitigating extrapolation error, by reducing the number of missing state-action pairs. As an alternative to improved data collection, BCQ (Fujimoto et al., [2018a](#bib.bib8)) mitigates extrapolation error by simply preventing missing state-action pairs from being used to form the Bellman backup.
TD error-based methods include target networks (Mnih et al., [2015](#bib.bib28)), clipped TD errors (Mnih et al., [2015](#bib.bib28)) (commonly implemented via the Huber loss function (Sidor & Schulman, [2017](#bib.bib34))), double DQN (van Hasselt et al., [2016](#bib.bib38)), n-step backups (Sutton, [1988](#bib.bib36); Hessel et al., [2017](#bib.bib15)), transformed Bellman operators (Pohlen et al., [2018](#bib.bib30)), and clipped double-Q learning (Fujimoto et al., [2018b](#bib.bib9)). These methods do not directly attack specific failure modes, but we speculate that they interfere with error propagation by preventing bad updates from quickly spreading to downstream Q-values. This allows more time for bad updates to get averaged out, or for missing data to be collected.
Relatively little work focuses on over-generalization. Ape-X DQfD (Pohlen et al., [2018](#bib.bib30)) uses an auxiliary temporal consistency (TC) loss to prevent the Q-values of target state-action pairs from changing; ablation analysis suggested that the TC loss was critical to performance. Similarly, (Durugkar & Stone, [2017](#bib.bib5)) proposed Constrained Q-Learning, which uses a constraint to prevent the average target value from changing after an update; however, (Pohlen et al., [2018](#bib.bib30)) did not find evidence that this technique worked on complex problems.
To the best of our knowledge, no prior work addresses the root cause of overshooting or over-generalization failures in DQL: the neural tangent kernel, Kθ. Work in this direction would either modify network architecture to result in a Kθ more favorable to stability, or modify the update rule in a way which controls the influence of Kθ on generalization. Dueling DQN (Wang et al., [2016](#bib.bib40)) does modify network architecture in a way which is known to improve training on Atari games, but there is currently no known theoretical explanation for its benefits. We speculate that an analysis based on Kθ may prove insightful, though we have not yet found a clear result on this despite preliminary effort. The general absence of work on DQL stability via Kθ is the inspiration for our algorithmic contributions.
4 Preconditioned Q-Networks
----------------------------
In this section, we will introduce Preconditioned Q-Networks (PreQN), an algorithm which is intended to approximately ensure that the Q-function update is a non-expansion. The core idea is to alter the DQL update so that it behaves as much as possible like Eq. [12](#S3.E12 "(12) ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") in Q-value space.
Concretely, let Φθ∈Rd×|S||A| denote the matrix whose columns are ∇θQθ(s,a). To first order, what we have is
| | | | |
| --- | --- | --- | --- |
| | Qθ′≈Qθ+ΦTθ(θ′−θ), | | (19) |
and what we want is an update which results in
| | | | |
| --- | --- | --- | --- |
| | Qθ′≈Qθ+α(T∗Qθ−Qθ), | | (20) |
for some α∈(0,1). If Kθ=ΦTθΦθ were invertible, then the update
| | | | |
| --- | --- | --- | --- |
| | θ′=θ+αΦθK−1θ(T∗Qθ−Qθ) | | (21) |
would attain Eq [20](#S4.E20 "(20) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning"). This update is like a normal DQL update where the TD-errors have been replaced with preconditioned TD-errors, where K−1θ is the preconditioner. In practice, there are three obstacles to directly implementing Eq [21](#S4.E21 "(21) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning"):
* For large or continuous state or action spaces (as in many problems of interest), it would be intractable or impossible to form and invert Kθ.
* If the number of state-action pairs is greater than the number of parameters, Kθ will be rank deficient and thus not invertible.
* For nonlinear function approximators (as in DQL), step sizes must be selected to keep higher-order terms small.
To handle these issues, we propose a minibatch-based approximation to the algorithm in Eq [21](#S4.E21 "(21) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning"). Like in standard DQL, we maintain a replay buffer filled with past experiences. Each time we sample a minibatch B from the replay buffer to compute an update, we form Kθ for the minibatch, find the least-squares solution Z to
| | | | |
| --- | --- | --- | --- |
| | KθZ=T∗Qθ−Qθ | | (22) |
for the minibatch, and then compute a proposed update
| | | | |
| --- | --- | --- | --- |
| | θ′=θ+α∑(s,a)∈BZ(s,a)∇θQθ(s,a). | | (23) |
Finally, to ensure that the higher-order terms are small, we use a linesearch that starts at Eq [23](#S4.E23 "(23) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning") and backtracks (by exponential decay) to θ. The acceptance criterion for the linesearch is
| | | | |
| --- | --- | --- | --- |
| | cos(Qθ′−Qθ,T∗Qθ−Qθ)≥η, | | (24) |
where η is a hyperparameter (close to, but less than, 1). That is, a proposed update is only accepted if the resulting change in Q-values for the minibatch is well-aligned with its TD-errors. We refer to this algorithm as Preconditioned Q-Networks (PreQN).
In our experiments, we consider the variant of PreQN styled after DDPG (Lillicrap et al., [2016](#bib.bib24)), where a separate actor network is trained to allow efficient computation of maxaQθ(s,a). We give the complete pseudocode as Algorithm [1](#alg1 "Algorithm 1 ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning"). Note the omission of target networks: we hypothesize that the design of the algorithm makes instability less likely and thus makes target networks unnecessary.
1: Given: initial parameters θ, ϕ for Q, μ; empty replay buffer D
2: Receive observation s_0 from environment
3: for t = 0, 1, 2, ... do
4:   Select action a_t = μ_ϕ(s_t) + N_t
5:   Step environment to get s_{t+1}, r_t and terminal signal d_t
6:   Store (s_t, a_t, r_t, s_{t+1}, d_t) → D
7:   if it's time to update then
8:     for however many updates do
9:       Sample minibatch B = {(s_i, a_i, r_i, s'_i, d_i)} from D
10:      For each transition in B, compute TD errors:
         $$\Delta_i = r_i + \gamma (1 - d_i)\, Q_\theta(s'_i, \mu_\phi(s'_i)) - Q_\theta(s_i, a_i)$$
11:      Compute the minibatch K_θ matrix and find the least-squares solution Z to K_θ Z = Δ
12:      Compute proposed update for Q with:
         $$\theta' = \theta + \alpha_q \sum_{(s,a) \in B} Z(s,a)\, \nabla_\theta Q_\theta(s,a)$$
13:      Exponentially decay θ' towards θ until
         $$\cos\left(Q_{\theta'} - Q_\theta,\; T^* Q_\theta - Q_\theta\right) \ge \eta,$$
         then set θ ← θ'
14:      Update μ with:
         $$\phi \leftarrow \phi + \alpha_\mu \frac{1}{|B|} \sum_{s \in B} \nabla_\phi Q_\theta(s, \mu_\phi(s))$$
15:    end for
16:  end if
17: end for

Algorithm 1: PreQN (in the style of DDPG)
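Line 14 of the algorithm is an ordinary deterministic-policy-gradient step on the actor. A minimal PyTorch sketch, assuming `mu` and `q` are standard modules taking states (and actions) as inputs, might look like this; the plain SGD step mirrors the paper's use of vanilla gradient updates rather than Adam:

```python
import torch

def actor_update(mu, q, states, alpha_mu=1e-3):
    # Line 14 of Algorithm 1: ascend (1/|B|) * sum_s Q_theta(s, mu_phi(s)).
    loss = -q(states, mu(states)).mean()
    for p in mu.parameters():
        p.grad = None
    loss.backward()          # gradients also flow into q here; only mu is updated
    with torch.no_grad():
        for p in mu.parameters():
            p -= alpha_mu * p.grad    # plain gradient step, no momentum
```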
### 4.1 Connection to Natural Gradients
As it turns out, PreQN is equivalent to natural gradient Q-learning (NGQL) when the same samples are used to form both the gradient and the Fisher information matrix. To recap, the update for NGQL is
$$\theta' = \theta + \alpha F_\theta^{-1} g, \tag{25}$$
where g is the gradient from Eq [5](#S3.E5 "(5) ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") and
$$F_\theta = \mathbb{E}_{s,a \sim \rho}\!\left[\nabla_\theta Q_\theta(s,a)\, \nabla_\theta Q_\theta(s,a)^T\right] \tag{26}$$
is the Fisher information matrix for a Gaussian distribution, N(Qθ, I). When using sample estimates of the expectations, we can write the NGQL update in terms of the matrix Φθ (the d×|S||A| matrix with columns ∇θQθ(s,a)) and the vector of TD-errors Δ = T∗Qθ − Qθ as:
$$\theta' = \theta + \alpha\, (\Phi_\theta \Phi_\theta^T)^\dagger \Phi_\theta \Delta, \tag{27}$$
where (ΦθΦθᵀ)† is the pseudoinverse of ΦθΦθᵀ. Similarly, the PreQN update as described by Eqs [22](#S4.E22 "(22) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning") and [23](#S4.E23 "(23) ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning") can be written as
$$\theta' = \theta + \alpha\, \Phi_\theta (\Phi_\theta^T \Phi_\theta)^\dagger \Delta. \tag{28}$$
By the following lemma, the two updates in Eqs [27](#S4.E27 "(27) ‣ 4.1 Connection to Natural Gradients ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning") and [28](#S4.E28 "(28) ‣ 4.1 Connection to Natural Gradients ‣ 4 Preconditioned Q-Networks ‣ Towards Characterizing Divergence in Deep Q-Learning") are equivalent:
###### Lemma 4.

$$(\Phi_\theta \Phi_\theta^T)^\dagger \Phi_\theta = \Phi_\theta\, (\Phi_\theta^T \Phi_\theta)^\dagger.$$
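The identity is easy to check numerically, since both sides equal the Moore-Penrose pseudoinverse of Φθᵀ. A quick NumPy verification on a random matrix:

```python
import numpy as np

# Numerical check of Lemma 4: (Phi Phi^T)^+ Phi = Phi (Phi^T Phi)^+.
# Both sides equal the pseudoinverse of Phi^T, which is why the NGQL
# update (Eq 27) and the PreQN update (Eq 28) coincide.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(10, 4))      # d = 10 parameters, 4 state-action pairs
lhs = np.linalg.pinv(Phi @ Phi.T) @ Phi
rhs = Phi @ np.linalg.pinv(Phi.T @ Phi)
assert np.allclose(lhs, rhs)
assert np.allclose(lhs, np.linalg.pinv(Phi.T))
```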
The connection between NGQL and approximately non-expansive Q-update operators may explain the finding by Knight & Lerner ([2018](#bib.bib22)) that NGQL did not require target networks to remain stable. A related observation was made by Schulman et al. ([2017](#bib.bib33)), who showed that a natural policy gradient could be viewed as approximately applying a version of Eq [12](#S3.E12 "(12) ‣ 3.2 Building Intuition for Divergence ‣ 3 Towards Characterizing Divergence in Deep Q-Learning ‣ Towards Characterizing Divergence in Deep Q-Learning") for entropy-regularized Q-learning. They also demonstrated a version of DQL that could learn stably without target networks.
5 Experiments
--------------
In our experiments, we investigated the following questions:
1. What insights can we obtain about the neural tangent kernel Kθ in the context of RL? Can we exploit empirical analyses of Kθ to make better decisions about neural network architecture?
2. How does PreQN behave? To evaluate performance, we compare to TD3 (Fujimoto et al., [2018b](#bib.bib9)) and SAC (Haarnoja et al., [2018b](#bib.bib13)) on various OpenAI Gym (Brockman et al., [2016](#bib.bib4)) environments.
3. To what degree does a standard DQL update push Q-values towards their targets? How does this change with architecture? How should we interpret PreQN results in light of this?
### 5.1 Neural Tangent Kernel Analysis
Based on Theorem 2, we are motivated to empirically evaluate two properties of the neural tangent kernel that appear relevant to stability in DQL: the magnitudes of diagonal elements, and the degree of off-diagonalness. To measure the latter, we consider the ratio of the average off-diagonal row entry to the on-diagonal entry, Ri:
$$R_i(K) \doteq \frac{1}{N} \sum_{j \neq i} \frac{|K_{ij}|}{K_{ii}},$$
where N is the size of the square matrix K. We refer to this quantity as the ‘row ratio.’
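As a concrete illustration of both measurements, the sketch below computes the empirical kernel Kθ = ΦθᵀΦθ for a network on a batch of inputs (one backward pass per input, favoring clarity over speed) and then the row ratio defined above. The two-layer network and random inputs are placeholders, not the architectures studied in this section.

```python
import torch

def empirical_ntk(q_net, inputs):
    # K = Phi^T Phi, where the columns of Phi are grad_theta Q_theta(s, a).
    rows = []
    for x in inputs:
        q = q_net(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(q, list(q_net.parameters()))
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    phi_t = torch.stack(rows)            # shape (N, d)
    return phi_t @ phi_t.T               # shape (N, N)

def row_ratio(K):
    # R_i(K) = (1/N) * sum_{j != i} |K_ij| / K_ii, one value per row.
    N = K.shape[0]
    off_diag = K.abs().sum(dim=1) - K.diag().abs()
    return off_diag / (N * K.diag())

# Example with a placeholder 2-layer network and random inputs.
net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
K = empirical_ntk(net, torch.randn(16, 8))
print(row_ratio(K).mean())
```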
We evaluate the standard neural networks used in DQL for continuous control: namely, feedforward multi-layer perceptrons with between 1 and 4 hidden layers, and between 32 and 256 hidden units, with either tanh, relu, or sin activations. (See, e.g., (Lillicrap et al., [2016](#bib.bib24); Fujimoto et al., [2018b](#bib.bib9); Haarnoja et al., [2018b](#bib.bib13); Rajeswaran et al., [2017](#bib.bib31)) for examples where networks in this range have previously been used.) Because divergence typically occurs near the beginning of training (van Hasselt et al., [2018](#bib.bib39)) and the NTK is known to converge to a constant in the infinite-width limit (Jacot et al., [2018](#bib.bib18)), we focus only on the properties of Kθ at initialization in these experiments.
For each of three Gym environments (HalfCheetah-v2, Walker2d-v2, and Ant-v2), we sampled a dataset D of 1000 state-action pairs using a “rails-random” policy: a = sgn(u), u ∼ Unif(A). We then randomly initialized neural networks of various sizes and activation functions, computed Kθ for each using the state-action pairs in D, and evaluated their on-diagonal elements and average row ratios. We show partial results in Figure [1](#S5.F1 "Figure 1 ‣ 5.1 Neural Tangent Kernel Analysis ‣ 5 Experiments ‣ Towards Characterizing Divergence in Deep Q-Learning"), complete results in Appendix [C](#A3 "Appendix C Extended Results for Neural Tangent Kernel Analysis ‣ Towards Characterizing Divergence in Deep Q-Learning"), and summarize findings here:
* Diagonal elements tend to increase with width and decrease with depth, across activation functions.
* Row ratios tend to increase with depth across activation functions, and do not clearly correlate with width.
* Relu nets commonly have the largest on-diagonal elements and row ratios (so they should learn quickly and generalize aggressively).
* Sin networks appear to be in a “sweet spot” of high on-diagonal elements and low off-diagonal elements. This analysis suggests sin activations may be more useful for DQL than has been previously realized.
Based on these results, as we will detail in subsequent sections, we experimented with using sin activations for TD3, SAC, and PreQN.

Figure 1: Average row ratio for networks with 2 hidden layers of size 32 (small), 64 (med), 128 (large), and 256 (exlarge), using data from Walker2d-v2. Error bars are standard deviations from 3 random network initializations (with fixed data).
### 5.2 Benchmarking PreQN
Figure 2: Benchmarking PreQN against TD3 and SAC on standard OpenAI Gym MuJoCo environments. Curves are averaged over 7 random seeds. PreQN is stable and performant, despite not using target networks. The PreQN experiments used sin activations; the TD3 and SAC experiments used relu activations.
We benchmarked PreQN on five environments from the OpenAI Gym, comparing to TD3 and fixed-temperature SAC, and we present results in Figure [2](#S5.F2 "Figure 2 ‣ 5.2 Benchmarking PreQN ‣ 5 Experiments ‣ Towards Characterizing Divergence in Deep Q-Learning"). For each algorithm, we experimented with using relu and sin activations, and we found that PreQN performed best with sin activations, while TD3 and SAC performed best with relu activations. (As a result, we report results in our benchmark for PreQN-sin, TD3-relu, and SAC-relu.) However, we did not do any hyperparameter tuning for TD3 and SAC to specifically accommodate the sin activations, and instead relied on hyperparameters from the literature that were well-tuned for relus. Hyperparameters for all experiments are given in Appendix [B](#A2 "Appendix B Methods for PreQN Benchmark ‣ Towards Characterizing Divergence in Deep Q-Learning").
In general, PreQN is stable and performant, comparing favorably with the baseline algorithms. In some cases it outperforms (e.g., Swimmer and Ant) and in some cases it underperforms (e.g., Hopper). We find this outcome interesting and exciting because PreQN represents a different development path for DQL algorithms than is currently standard: it lacks target networks, uses only a single Q-function instead of multiple, makes no modifications to the bootstrap, and uses vanilla gradient steps for Q-learning instead of adaptive or momentum-based optimizers like Adam (Kingma & Ba, [2015](#bib.bib21)). However (as we will shortly discuss), we found that it did not fully avert divergence when combined with relu networks.
Figure 3: Examining the cosine alignment of actual Q-value change with intended Q-value change (cos(Q′−Q,y−Q)) for PreQN and TD3 with relu and sin activations. Curves are averaged over 3 random seeds.
To measure how well DQL updates push Q-values towards their targets, we evaluated an alignment measure given by cos(Q′−Q,y−Q), where y is the target for the algorithm (T∗Qθ in PreQN, and T∗Qψ in TD3, where ψ are the parameters of the slowly-changing target network). We show partial results in Figure [3](#S5.F3 "Figure 3 ‣ 5.2 Benchmarking PreQN ‣ 5 Experiments ‣ Towards Characterizing Divergence in Deep Q-Learning") and complete results in Appendix [D](#A4 "Appendix D Extended Results for Alignment Experiment with Architecture Ablation ‣ Towards Characterizing Divergence in Deep Q-Learning"). We compared PreQN to TD3, because these two algorithms are equivalent except for the Q-learning component. While PreQN produces high alignment regardless of architecture by design, TD3 with the sin function (TD3-sin) produces updates that are better-aligned with their targets than TD3 with the relu function (TD3-relu). This accords well with our empirical analysis of the NTK: for sin networks, the NTK is closer to diagonal, so Q′−Q≈αK(y−Q) is closer to α(y−Q). Perhaps surprisingly, performance for TD3-sin was generally weaker than performance for TD3-relu, but we did not retune any of the hyperparameters from TD3-relu for TD3-sin; we speculate that better performance with TD3-sin may be achievable with a target network that updates more quickly. Performance for PreQN-relu was generally weaker than PreQN-sin, primarily due to occasional divergence; this result suggests that cancelling within-batch generalization is not a universal solution to divergence issues, and favorable architecture choices may be useful. However, in experiments not included in this report, we found that divergence issues with PreQN-relu were straightforwardly resolved by decreasing the learning rate (at a nontrivial cost to performance).
We are intrigued by the fact that empirical analysis of the NTK successfully predicts how the cosine alignment of a DQL update changes with architecture in the TD3 experiments. It has been observed that architecture changes can have a significant effect on performance in deep RL (for example, see Henderson et al. ([2018](#bib.bib14))), but to the best of our knowledge, no one has previously proposed any method for predicting how the behavior of a given algorithm might change with architecture. Based on our results, we are cautiously optimistic that the NTK is the correct object of study for such predictions, and we recommend a more rigorous empirical analysis relating NTK measurements, architectures, and hyperparameter choices in DQL to performance.
### 5.3 Remarks on Computational Cost
Our implementation of PreQN was significantly slower than our implementation of SAC (by more than 50%), due to the requirement of calculating backward passes separately for each state-action pair in the batch, and solving the system of equations KθZ=Δ. However, we consider it plausible that many adjustments could be made to reduce computational cost from our basic code. (For instance: we did not reuse the gradients from computing Kθ for forming the update, ∑s,aZ(s,a)∇θQθ(s,a), and this redundancy can be eliminated.)
6 Other Related Work
---------------------
Previously, Melo et al. ([2008](#bib.bib26)) proved sufficient conditions for the convergence of Q-learning with linear function approximators. Their conditions were fairly restrictive, essentially requiring that the algorithm behave as if it were on-policy—removing one of the components of the triad. We see an interesting parallel to our results for the linear approximation case (Theorem 2), which also effectively remove a component of the triad by requiring the algorithm to behave as if it were tabular.
Concurrently to our work, Bhatt et al. ([2019](#bib.bib3)) developed CrossNorm, a variant of DDPG that uses a careful application of BatchNorm (Ioffe & Szegedy, [2015](#bib.bib17)) to achieve stable learning without target networks. Also concurrently, Fu et al. ([2019](#bib.bib7)) performed a rigorous empirical study of Fitted Q-Iteration (FQI) (Riedmiller, [2005](#bib.bib32)) to gain insight into divergence issues in DQL, and ultimately proposed an algorithm based on data distribution modifications to improve performance.
7 Conclusions
--------------
In this work, we examined how Q-values change under a DQL update in order to understand how divergence might arise. We used our insights to develop a practical algorithm, called PreQN, which attacks one of the root causes of divergence: the generalization properties of the Q-function approximator, as quantified by the neural tangent kernel (Jacot et al., [2018](#bib.bib18)). Our experiments show that PreQN, with appropriate design choices, is stable and performant on various high-dimensional continuous control tasks.
Intriguingly, theoretical and empirical work shows that the NTK converges to a constant (independent of network parameters) in the limit of wide networks (Jacot et al., [2018](#bib.bib18)); this result makes it possible to study the evolution of neural network functions through their linearization around the starting point (Lee et al., [2019](#bib.bib23)). In this regime, through the correspondence in Lemma 3, DQL should behave quite closely to linear TD-learning. We consider the detailed analysis of this connection to be an interesting avenue for potential future work. |
6fe4e890-7acf-49e2-8e64-46a907997bc2 | trentmkelly/LessWrong-43k | LessWrong | Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers
Essentially, his version of Kickstarter has Refund Bonuses. His rationale is that there's little benefit in investigating or promoting a probably-failing crowdfunding project, because that costs you time, and that can put projects in a bad equilibrium where no one will strive to believe in them tomorrow because nobody believes in them today. So Refund Bonuses break the vicious cycle by compensating for that time-cost: anyone who pledged gets a bonus if and only if the project fails to meet its funding threshold. A paraphrasing of it is: It sends a strong signal that the project is confident that it will succeed, so if you would have been doubtful, you shouldn't be now.
And, for a worthy project, the refund bonus would rarely need to actually be paid, so it costs little to offer them.
Hearing about this, something occurred to me. I think this might be a natural Dutch book against (a trap for) CDT (Causal Decision Theory; background: https://arbital.com/p/logical_dt/ ).
The vice of a CDT agent is that it sometimes cannot anticipate that the decisions of other CDT agents will be entangled with its own decisions.
It'll go like this: An opportunistic CDTer might decide to look for campaigns that will fail, so that they can extract refund rewards. They watch the market for a bit, develop a sense of the patterns of failure and convince themselves of a strategy.
A bunch of other CDTers do the same thing and develop mostly the same strategies.
This turns into a game of chicken.
But CDT consistently underestimates the correlation of its decisions with those of other CDT agents, because it cannot use its own decision as evidence about others' decisions.
I'm not completely sure how much CDT underestimates by.
But if it's underestimating by a lot, many of its projected failing campaigns will be driven by cynical pledges unto success. This would run against my usual intuitions about CDT... Usually, it would underfund public goods, it appears to me that refund bonuse |
ee7b1baa-500f-44de-8783-0302e1f8f3e9 | trentmkelly/LessWrong-43k | LessWrong | Paper in Science: Managing extreme AI risks amid rapid progress
https://www.science.org/doi/10.1126/science.adn0117
Authors:
Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner*, Sören Mindermann*
Abstract:
Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI, there is a lack of consensus about how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development with proactive, adaptive governance mechanisms for a more commensurate preparation.
|
a9ce4337-85ee-4d0a-a053-3b215eb9fc08 | trentmkelly/LessWrong-43k | LessWrong | Less Wrong and The Ultraviolet
|
80f6c2a5-8a36-4918-aa3c-db8df486a6b3 | trentmkelly/LessWrong-43k | LessWrong | Mechanisms too simple for humans to design
Cross-posted from Telescopic Turnip
Disclaimer: This article is about living organisms, and how they are sculpted by evolution. Any use of mathematics is metaphorical, not literal – it's only there to give a sense of scale. Apologies to all the people who got correctly offended by my shamelessly hand-wavy references to Kolmogorov complexity.
As we all know, humans are terrible at building butterflies. We can make a lot of objectively cool things like nuclear reactors and microchips, but we still can't create a proper artificial insect that flies, feeds, and lays eggs that turn into more butterflies. That seems like evidence that butterflies are incredibly complex machines – certainly more complex than a nuclear power facility.
Likewise, when you google "most complex object in the universe", the first result is usually not something invented by humans – rather, what people find the most impressive seems to be "the human brain".
As we are getting closer to building super-human AIs, people wonder what kind of unspeakable super-human inventions these machines will come up with. And, most of the time, the most terrifying technology people can think of is along the lines of "self-replicating autonomous nano-robots" – in other words, bacteria.
Humans are just very humbled by the Natural World, and we are happy to admit that our lame attempts at making technology are nothing compared to the Complexity of Life. That's fair enough – to this day, natural living organisms remain the #1 collection of non-human technology we get to observe and study. What makes these organisms so different from human-made technology?
Here is my thesis: the real reason why humans cannot build a fully-functional butterfly is not because butterflies are too complex. Instead, it's because butterflies are too simple.
As I'll argue, humans routinely design systems much more complex than butterflies, bacteria or brains – and if you look at all the objects in the room, your brain is probably not |
6b45074a-9b0a-480b-91eb-91c52db11376 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Towards Automated Circuit Discovery
for Mechanistic Interpretability
1 Introduction
---------------

Figure 1: Automatically discovering circuits with ACDC. Left: the computational graph of GPT-2 Small with a recovered circuit for the IOI task in red. Right: the recovered circuit with labelled nodes. All heads recovered were identified as part of the IOI circuit [[1](#bib.bibx1)].
Rapid progress in transformer language modelling [[2](#bib.bibx2), [3](#bib.bibx3), [4](#bib.bibx4), inter alia] has directed attention towards understanding the causes of new capabilities [[5](#bib.bibx5)] in these models.
Researchers have identified precise high-level predictors of model performance [[6](#bib.bibx6)], but transformers are still widely considered ‘black-boxes’ [[7](#bib.bibx7)] like almost all other neural network models [[8](#bib.bibx8), [9](#bib.bibx9)] (though this perspective is not universal [[10](#bib.bibx10)]).
Interpretability research aims to demystify machine learning models, for example by explaining model outputs in terms of domain-relevant concepts [[11](#bib.bibx11)].
Mechanistic interpretability refers to the reverse-engineering of model components into human-understandable algorithms [[12](#bib.bibx12)].
Much research in mechanistic interpretability focuses on circuits in machine learning models, defined as subsets of neural networks that use features across different layers (connected by weights) to implement interpretable algorithms [[13](#bib.bibx13)].
For example, in vision models, [[14](#bib.bibx14)] found circuits that detect curves, and [[15](#bib.bibx15)] found circuits that detect image patches contrasting high- and low-frequency regions. More recently, in transformer models, [[16](#bib.bibx16), [17](#bib.bibx17)] described how circuits for an algorithmic task gradually form throughout training.
Wang et al. [[1](#bib.bibx1)] found a circuit that implements a high-level natural language task.
In this work we use computational subgraphs of the neural network to represent circuits. Considering neural networks as computational graphs has led to better understanding of their components. For example, training dynamics in residual models can be explained by shallow paths through the computational graph [[18](#bib.bibx18)], MLP layers can be modelled as memory that is able to represent certain properties of the network inputs [[19](#bib.bibx19)], and residual transformer models have been modelled as the sum of all different paths through the network [[20](#bib.bibx20)].
Later work has used insights from looking at subgraphs of models in order to edit models’ behaviors [[21](#bib.bibx21), [22](#bib.bibx22)] and test interpretability hypotheses [[23](#bib.bibx23)].
The current approach to extracting computational subgraphs (such as circuits) from neural networks relies on a lot of manual inspection by humans [[24](#bib.bibx24)]. This is a major obstacle to scaling up mechanistic interpretability to larger models and more behaviors.
In this work we present an automatic method to extract computational graphs from neural networks, and explore the challenges and applications of automated interpretability.
Our main contributions are threefold. Firstly, we introduce Automatic Circuit DisCovery (ACDC), a novel algorithm for finding important subgraphs in ML models (Section [3](#S3 "3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
Secondly, we find limitations of current existing work on finding sparse subsets of neural networks when we compare ACDC to both heuristic- and gradient-based methods [[25](#bib.bibx25), [26](#bib.bibx26), Section [4](#S4 "4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")].
Finally, we demonstrate how ACDC can automatically reproduce existing mechanistic interpretability results, and can provide a basis for novel research on circuits (Section [5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
2 Related work
---------------
Neural network pruning masks the weights of neural networks to make their connectivity more sparse [[27](#bib.bibx27)]. In contrast to our aims, the pruning literature is typically concerned with compressing neural networks for faster inference or to reduce storage requirements [[28](#bib.bibx28), [29](#bib.bibx29)]. Early work [[30](#bib.bibx30)] hoped pruning would lead to more interpretable networks, but progress towards interpretability via pruning is limited [[31](#bib.bibx31)]. Further, the literature has historically focused on compressing computer vision architectures [[32](#bib.bibx32)] and here we focus on natural language processing (and in particular, transformers).
Pruned transformers are often constructed using gradient information. For example, [[25](#bib.bibx25), ] decide what heads should be pruned by the absolute value of their gradients, while “movement pruning” [[33](#bib.bibx33)] removes parameters that have high velocity to a low magnitude. The Hessian matrix has also been used extensively to judge what parameters should be kept [[27](#bib.bibx27), [30](#bib.bibx30)], with modern approaches using approximations that avoid the prohibitive cost of computing the Hessian for large language models [[28](#bib.bibx28), [34](#bib.bibx34), [35](#bib.bibx35)]. Lastly, techniques that prune small weights have been used as a baseline [[33](#bib.bibx33)] and have proven to be competitive for transformers [[36](#bib.bibx36), [37](#bib.bibx37)].
Masks can also be learned from data, with an objective function that juggles model performance and network sparsity [[38](#bib.bibx38), [28](#bib.bibx28), [26](#bib.bibx26), [Section 4.2](#S4.SS2 "4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")]. This is a useful comparison as unlike many pruning algorithms, ACDC and learnable masks prune in “one-shot” [[35](#bib.bibx35)], meaning that we do not change the weights of our model after pruning. Furthermore, learnable methods are a suitable comparison because they can prune whole layers and heads rather than masking individual weight parameters.
Causal interpretation. Much prior research on understanding language models has drawn inspiration from causal inference [[39](#bib.bibx39)], leading to the development of frameworks that provide causal explanations for model outputs [[39](#bib.bibx39), [40](#bib.bibx40), [41](#bib.bibx41), [42](#bib.bibx42)].
Other work [[43](#bib.bibx43)] discusses the difference between indirect effects and direct effects inside language models, and experiments on removing subsets of these heads using heads’ direct effects as proxies for the overall contribution of these heads. [[44](#bib.bibx44)] introduce ‘path patching’ to analyze the effects of different subsets of edges in computational graphs of models. They also use a ‘treeify’ operation to consider all possible paths through a residual neural network.
Saliency maps are a visualization technique commonly used for automatic interpretation of the behavior of neural networks in both computer vision [[45](#bib.bibx45)] and natural language processing [[46](#bib.bibx46)]. Saliency maps highlight features in the input data that contribute most to a model’s predictions. Common methods for generating saliency maps include gradient-based methods [[47](#bib.bibx47)], occlusion-based methods [[48](#bib.bibx48)] as well as model-specific approaches [[49](#bib.bibx49)]. The methods described can produce useful high-level visualizations of how neural networks compute output distributions conditioned on their input [[50](#bib.bibx50)]. However, the visualizations produced are sometimes independent of the labels attached to training datapoints [[51](#bib.bibx51)] and can also be insensitive to changes in inputs that are intuitively important [[52](#bib.bibx52)].
3 Methodology
--------------

(a) Choose computational graph and task.

(b) At each head, prune unimportant connections.

(c) Recurse until the full circuit is recovered.
Figure 2: How ACDC works (Steps [1(a)](#S3.F1.sf1 "1(a) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")-[1(c)](#S3.F1.sf3 "1(c) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). Step [1(a)](#S3.F1.sf1 "1(a) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"): a practitioner specifies a computational graph of the model, the task they want to investigate, and a threshold under which to remove connections. Step [1(b)](#S3.F1.sf2 "1(b) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"): ACDC iterates over nodes in the computational graph, replacing activations of connections between a node and its children, and measuring the effect on the output metric. Connections are removed if their measured effect on the metric under corruption is below the threshold τ. Step [1(c)](#S3.F1.sf3 "1(c) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"): recursively apply Step [1(b)](#S3.F1.sf2 "1(b) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") to the important nodes found by previous applications of Step [1(b)](#S3.F1.sf2 "1(b) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). The ACDC procedure returns a subgraph of the original computational graph.
In the previous section, we discussed how some existing work uses causal interventions to understand particular model behaviors, and other works rewrite computational graphs of models to understand model internals.
In this section we explain our novel algorithm, Automatic Circuit DisCovery (ACDC; see Algorithm [1](#algorithm1 "1 ‣ 3.3 The search algorithm ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Figure 2](#S3.F2 "Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), that uses causal interventions to elucidate the key model internals for particular tasks models perform.
There are three components to the algorithm: the space of circuits we search over ([Section 3.1](#S3.SS1 "3.1 The space of circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), how we go about evaluating circuits (Section [3.2](#S3.SS2 "3.2 Evaluating circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) and the search algorithm used to select the circuit ([Section 3.3](#S3.SS3 "3.3 The search algorithm ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
### 3.1 The space of circuits
Consider a computational graph G with input node I and output node O, such as the graph in [Figure 1(a)](#S3.F1.sf1 "1(a) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") (which we use as a running example).
Following [[1](#bib.bibx1)], we define a circuit in a model as a subgraph of its computational graph G, and we search over this space of subgraphs. All edges not present in the subgraph are considered unimportant for the current task, so we set those edges' values to their activations on a corrupted input (see [Section 3.2](#S3.SS2 "3.2 Evaluating circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). Additionally, we only consider subgraphs in which all nodes and edges lie on a path to O, as it is unclear how to interpret other subgraphs. For a given machine learning model there are several computational graphs that define its computation: for example, the graph can include individual neurons as its nodes, or merely MLPs. In our work we fix a particular computational graph before running ACDC on it.
Many previous works search over the space of subsets of model components or parameters [[26](#bib.bibx26), [25](#bib.bibx25), [43](#bib.bibx43)]. Our work considers a hypothesis space that is considerably larger, since it accounts for the mediating effect components have on later components. For example, in Figure [1(c)](#S3.F1.sf3 "1(c) ‣ Figure 2 ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") the component C has no direct path to the output, but it does have an effect on the output through A.
Building from [[20](#bib.bibx20)], other researchers have considered searching over the space of all subsets of paths in a network (called the ‘treeified’ network [[44](#bib.bibx44)]). Our subspace of circuits is less expressive than this. It is unclear how to optimize over this intractably large space of hypotheses (there are over 10^14 paths through the attention heads and MLPs in GPT-2 small, for example), though this is an interesting future research direction.
### 3.2 Evaluating circuits
Given a subgraph H of a computational graph (Section [3.1](#S3.SS1 "3.1 The space of circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), we need a way to evaluate its performance on a particular task. In this section we define how we evaluate circuits and provide an example of a circuit that we evaluate.
We define a task by a dataset on which H can be evaluated and a dataset of corrupted inputs on which the task is not present. Then we define H(x_i, x_i′) as the result of setting all edges not present in H to their activation on x_i′ (a corrupted input). This defines H(x_i, x_i′), the output probability distribution of the subgraph under such an experiment. Finally we evaluate H by computing the KL divergence D_KL(G(x_i) || H(x_i, x_i′)) between the model's and the subgraph's predictions. We let D_KL(G || H) denote the average KL divergence over a set of datapoints. Using KL divergence instead of (for example) the probability that the subgraph places on the correct completion means that we can't achieve better performance than the model itself. [Appendix A](#A1 "Appendix A Alternatives to minimizing KL divergence ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") discusses this overperformance issue further.
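Concretely, the metric can be computed from logits as in the PyTorch sketch below; the patched forward pass that produces the subgraph's logits is assumed to exist elsewhere.

```python
import torch.nn.functional as F

def circuit_kl(model_logits, patched_logits):
    # Average D_KL( G(x_i) || H(x_i, x_i') ) over a batch, where the patched
    # logits come from a forward pass with all edges outside H replaced by
    # their activations on the corrupted inputs.
    log_g = F.log_softmax(model_logits, dim=-1)
    log_h = F.log_softmax(patched_logits, dim=-1)
    return F.kl_div(log_h, log_g, log_target=True, reduction="batchmean")
```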
As an example, [[20](#bib.bibx20)] use sequences including repeated tokens (x_i = …AB…AB, where A and B are distinct tokens) to find induction heads (Section [4.2](#S4.SS2 "4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) that predict the correct next token. In our induction experiments, we set x_i′ equal to a sentence that does not have any repeated tokens.
### 3.3 The search algorithm
In Section [3.1](#S3.SS1 "3.1 The space of circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") we described how our space of circuits is all the subgraphs of a computational graph. In Section [3.2](#S3.SS2 "3.2 Evaluating circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") we described how to evaluate these subgraphs given a task of interest, and in this section we describe how we search over this space. We will validate that this approach recovers circuits that reflect the mechanisms inside models in Section [4](#S4 "4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). We refer the reader to Figure [5](#S5.F5 "Figure 5 ‣ 5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and Algorithm [1](#algorithm1 "1 ‣ 3.3 The search algorithm ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") throughout the explanation for pedagogical reasons.
Informally, a run of ACDC iterates in reverse (topological) order through the computational graph G, starting at the output node. At every node, it attempts to remove as many incoming edges as possible such that the KL divergence between the subgraph and the model does not increase too much. Finally, once all nodes have been iterated over, the algorithm (when successful) recovers a graph that i) is far sparser than the original graph and ii) recovers good performance on the task. Section [4](#S4 "4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") evaluates how ACDC compares to other methods by these metrics.
The ACDC algorithm has one hyperparameter τ, the threshold. We describe the algorithm in the notation of Section [3.2](#S3.SS2 "3.2 Evaluating circuits ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") in the pseudocode in Algorithm [1](#algorithm1 "1 ‣ 3.3 The search algorithm ‣ 3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). We use a reverse topological sort routine so that nodes are processed from the output node back to the input node, and nodes are always processed before their inputs. We do not specify the order in which we iterate over the parents w of v; by default, our implementation iterates from later-layer heads to earlier-layer heads. The order of the parents sometimes affects experimental results ([Appendix C](#A3 "Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
Data: Computational graph G, dataset (x_i)_{i=1}^n, corrupted datapoints (x_i′)_{i=1}^n and threshold τ.
Result: Subgraph H ⊆ G.
H ← G                                 // Initialize H to the full computational graph
H ← H.reverse_topological_sort()      // Sort H so the output node comes first
for v ∈ H do
    for w parent of v do
        H_new ← H \ {w → v}
        if D_KL(G || H_new) − D_KL(G || H) < τ then
            H ← H \ {w → v}           // Remove the unimportant edge
        end if
    end for
end for
return H

Algorithm 1: The ACDC algorithm.
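In Python, the loop structure of Algorithm 1 is short. The sketch below assumes a hypothetical `graph` object exposing the edge set, the nodes in reverse topological order, each node's parents, and a `kl_to_model(edges)` routine that runs the model with all edges outside `edges` patched to their corrupted activations; none of these helpers are part of a released API.

```python
def acdc(graph, tau):
    # Greedy edge pruning (Algorithm 1): start from the full graph and drop
    # any parent edge whose removal raises the KL divergence by less than tau.
    edges = set(graph.edges())
    for v in graph.reverse_topological_sort():        # output node first
        for w in graph.parents(v):
            candidate = edges - {(w, v)}
            if graph.kl_to_model(candidate) - graph.kl_to_model(edges) < tau:
                edges = candidate                     # remove the unimportant edge
    return edges
```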
4 Validation
-------------
To validate the ACDC algorithm’s use for interpreting neural networks, we conduct experiments to answer the following questions. When ACDC finds a subgraph of the computational graph of a neural network
* Q1: does ACDC identify the subgraph corresponding to the underlying algorithm implemented by the neural network?
* Q2: does ACDC avoid including components which do not participate in the elicited behavior?
We use two toy tasks to study these questions: simple tracr algorithms (where we have a ground truth of how the neural networks implement each task) in Section [4.1](#S4.SS1 "4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), and the task of induction in natural language in a 2-layer transformer in Section [4.2](#S4.SS2 "4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
In Section [4.1](#S4.SS1 "4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), we study algorithms that can be converted into transformer weights using the tracr compiler [[53](#bib.bibx53)]. This allows us to confirm that ACDC can recover subgraphs that implement the algorithms that neural networks encode, since tracr provides a ‘white-box’ implementation of the internal computation, unlike gradient descent. This means there is a ground-truth for validating our interpretability tool. We confirm both [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") in this case.
In Section [4.2](#S4.SS2 "4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), we study the induction [[20](#bib.bibx20)] task in a toy, two-layer transformer trained by gradient descent. In this case there is not a ‘ground truth’ circuit that implements induction. Therefore we measure the KL divergence and the number of edges in subgraphs that ACDC recovers across a range of thresholds. In some settings, almost all Pareto-optimal subgraphs are found by ACDC (Figure [3(a)](#S4.F3.sf1 "3(a) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). (A subgraph is Pareto-optimal if it is not Pareto-dominated by any other subgraph; a subgraph H is Pareto-dominated if there exists a different subgraph with at most the KL divergence of H and at most the number of edges of H.) However, other methods perform as well in a different setting ([Figure 3(b)](#S4.F3.sf2 "3(b) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
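For reference, a small helper that extracts the Pareto-optimal points from a collection of (number of edges, KL divergence) pairs; it uses strict dominance, so exactly tied points are all kept. This is a sketch, not the evaluation code used for the figures.

```python
def pareto_front(points):
    # points: iterable of (num_edges, kl_divergence) pairs.
    # q dominates p if q is at least as good on both axes and strictly
    # better on at least one (tuples equal on both axes do not dominate).
    pts = list(points)
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)
    return [p for p in pts if not dominated(p)]
```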
### 4.1 Faithfulness to algorithmic tasks with tracr
To check whether ACDC identifies the subgraph corresponding to a neural network’s underlying algorithm ([Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), we use examples from the tracr library [[53](#bib.bibx53)]. tracr compiles RASP programs [[54](#bib.bibx54)] into the weights of transformers. RASP programs are a subset of all the functions that transformers can implement. An example of a function that can be represented in the RASP programming language is the frac\_prevs function. Given an input string s, the frac\_prevs function computes the proportion of characters equal to a given fixed character (for example, x) at each position in s. In this section we use the frac\_prevs task to confirm [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and then discuss how our work generalizes to more RASP programs.
We first check that the frac\_prevs transformer has an algorithm which ACDC can reverse-engineer ([Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). We generate a computational graph with nodes for every neuron in the computation from the transformer generated by tracr. We then set the corrupted datapoints (x_i′)_{i=1}^n to a random permutation of the clean datapoints and set the threshold τ = 0.1. (Any threshold τ > 0 recovers the subgraph; we also randomize the positional embeddings of corrupted datapoints. Both details are discussed in [Appendix B](#A2 "Appendix B More details on the tracr experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").) The results are in Figure [2(b)](#S4.F2.sf2 "2(b) ‣ Figure 3 ‣ 4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). ACDC recovers exactly the component of the outputs of MLP 1 and Attention Head 2 that are used to compute the proportions of x at each position in the output, including the Q–, K– and V–composition with Attention Head 2. This results in 0 KL divergence from the model’s outputs, which shows that the answer to [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") in this case is yes.
We next examine whether any other components of the transformer are recovered by ACDC when applied to the frac\_prevs task, addressing [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). As shown in Figure [2(b)](#S4.F2.sf2 "2(b) ‣ Figure 3 ‣ 4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), there are no extra nodes present. In fact, this computational graph visualization produced by ACDC is more compact than the complete view of the states of the residual stream, as illustrated in Figure [2(a)](#S4.F2.sf1 "2(a) ‣ Figure 3 ‣ 4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") (from [[53](#bib.bibx53)]). In this case, the transformer is small enough for a practitioner to study each individual state in the forward pass. However, for larger transformers this would be intractable, which necessitates the use of different interpretability tools. Section [5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") discusses several use cases of ACDC on larger transformers trained on more diverse datasets.
We provide an additional example of ACDC applied to a task from the tracr library in Appendix [B](#A2 "Appendix B More details on the tracr experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), which also satisfies [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). However, the most crucial applications of ACDC, as described in Section [5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), involve cases where ground-truth access to the implemented algorithm is unavailable. Consequently, it is essential to study ACDC in models trained using gradient descent rather than those compiled with tracr.

(a) Manual visualization of the residual stream from the tracr paper [[53](#bib.bibx53)].

(b) ACDC visualization.
Figure 3: Two visualizations of how a tracr-compiled transformer completes the frac\_prevs task.
### 4.2 Comparison to baseline on induction in a small transformer
We have established that ACDC recovers the true circuit in a simple task where the ground truth is known (in [Section 4.1](#S4.SS1 "4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") above).
However, the tracr setting is far from realistic: for example, most of the weights of the compiled transformer are zero.
In this section, we provide further evidence for [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"),
by studying the induction task [[20](#bib.bibx20)] in a small transformer.
Specifically, we explain how we compare two existing methods to ACDC ([Section 4.2.1](#S4.SS2.SSS1 "4.2.1 Alternative Techniques ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), explain our experimental setup ([Section 4.2.2](#S4.SS2.SSS2 "4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) and evaluate the performance of ACDC compared to Subnetwork Probing ([Section 4.2.3](#S4.SS2.SSS3 "4.2.3 Evaluation of Results ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and [Figure 4](#S4.F4 "Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
#### 4.2.1 Alternative Techniques
Subnetwork Probing [[26](#bib.bibx26), SP] prunes model weights with an objective that combines accuracy and sparsity [[38](#bib.bibx38)]. SP aims to retain enough information such that a probe can still extract linguistic information from the model’s hidden states. To ensure a fair comparison to ACDC, we adapt the implementation in the following ways (see [Section C.1](#A3.SS1 "C.1 Subnetwork Probing ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") for more details):
1. We do not train a probe.
2. We change the objective of their optimization process from the loss to the KL divergence between the masked model and the base model.
3. We generalize the masking technique so we can replace activations with both zero activations and corrupted activations.
Head Importance Score for Pruning (HISP) is a method introduced by [[25](#bib.bibx25)] to efficiently prune individual attention heads.
HISP ranks the heads by importance scores I_h ([Section C.2](#A3.SS2 "C.2 Head Importance Score for Pruning ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) and prunes all the heads except those with the top k scores.
We measure the KL divergence and calculate the number of edges for each value of k in order to compare HISP with ACDC and SP.
Like SP, this method only considers replacing head activations with zero activations, and therefore we once more introduce a generalization to replacement with corrupted activations (for details, see [Section C.2](#A3.SS2 "C.2 Head Importance Score for Pruning ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
#### 4.2.2 Experimental Protocol
We use 40 sequences of 300 tokens from a filtered validation set of OpenWebText [[55](#bib.bibx55)]. The filter removes validation examples that do not contain examples of induction — subsequences of the form "A, B, …, A, B", where A and B are distinct tokens. We only measure KL divergence for the model’s predictions of the second B tokens in all examples of the subsequences A, B, …, A, B. We use both zero activations and corrupted activations to compare ACDC and SP. To use ACDC with zero activations, we apply one change to the procedure described in [Section 3](#S3 "3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"): instead of setting activations of edges not present in the subgraph to the activations on a corrupted dataset, we set their value equal to 0. We describe how we adapt the methods from [Section 4.2.1](#S4.SS2.SSS1 "4.2.1 Alternative Techniques ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") to be used with both zero activations and corrupted activations in [Section C.1](#A3.SS1 "C.1 Subnetwork Probing ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") for SP and [Section C.2](#A3.SS2 "C.2 Head Importance Score for Pruning ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") for HISP.
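As a sketch of where the metric is measured (the exact preprocessing code may differ), the helper below finds the "second B" positions in a token sequence, i.e. indices whose adjacent pair (A, B), with A and B distinct, occurred earlier in the sequence:

```python
def induction_targets(tokens):
    # Return indices t such that the adjacent pair (tokens[t-1], tokens[t])
    # repeats an earlier adjacent pair with distinct tokens -- the "second B"
    # positions on which the KL divergence is measured.
    seen, targets = set(), []
    for t in range(1, len(tokens)):
        pair = (tokens[t - 1], tokens[t])
        if pair[0] != pair[1] and pair in seen:
            targets.append(t)
        seen.add(pair)
    return targets
```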
We used 21 thresholds for ACDC, logarithmically spaced between 10^-2 and 10^0.5 inclusive. We ran SP with hand-picked thresholds that led to a diversity of outputs (see [Appendix C](#A3 "Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). We ran HISP for all values of k up to the total number of nodes in the transformer’s computational graph. The results are found in [Figure 4](#S4.F4 "Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").

(a) Comparison of methods with zero activations.

(b) Like [Figure 4(a)](#S4.F3.sf1 "3(a) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), but with corrupted activations.
Figure 4: Comparison of ACDC and SP with both zero activations ([Figure 4(a)](#S4.F3.sf1 "3(a) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) and corrupted activations ([Figure 4(b)](#S4.F3.sf2 "3(b) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
####
4.2.3 Evaluation of Results
When we compared the subgraphs produced by the three methods with zero activation replacements in [Figure 4(a)](#S4.F3.sf1 "3(a) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), we found that 15/19 of the Pareto-optimal subgraphs were found by ACDC.
HISP found Pareto-optimal subgraphs with 1 and 8 edges, and SP found Pareto-optimal subgraphs with 18 and 98 edges, but all other 15 Pareto-optimal subgraphs were found by ACDC.
However, when we used corrupted activations rather than zero activations ([Figure 4(b)](#S4.F3.sf2 "3(b) ‣ Figure 4 ‣ 4.2.2 Experimental Protocol ‣ 4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), SP and ACDC both generated 42.9% (9/21) of the Pareto-optimal subgraphs.
HISP only recovered three of the Pareto-optimal subgraphs.
In this case, the gradient-descent method appears to have explored the space of subgraphs as effectively as ACDC.
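Throughout this comparison, Pareto-optimality is over (number of edges, KL divergence) pairs: a subgraph is on the frontier if no other recovered subgraph has both fewer-or-equal edges and lower KL. A minimal helper for extracting such a frontier (an illustrative sketch, not the evaluation code used here):

```python
def pareto_frontier(points):
    """points: iterable of (n_edges, kl) pairs, one per recovered
    subgraph. Sorting by edge count and sweeping keeps exactly the
    points that strictly improve the best KL seen so far."""
    frontier, best_kl = [], float("inf")
    for edges, kl in sorted(set(points)):
        if kl < best_kl:
            frontier.append((edges, kl))
            best_kl = kl
    return frontier
```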
Surprisingly, all three methods recovered subgraphs with lower KL divergence when zero activations were used rather than corrupted activations.
It is unclear why the methods achieve better results with zero activations, which are likely to be the more destructive corruption.
A possible explanation is that there are ‘negative’ components in models that are detrimental to the tasks, and the zero activations are more disruptive to these components.
A discussion of how methods could be adjusted to deal with this difficulty can be found in Alternative [2](#A1.I1.i2 "item 2 ‣ A.3 Alternatives to minimizing a metric ‣ Appendix A Alternatives to minimizing KL divergence ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") in [Appendix A](#A1 "Appendix A Alternatives to minimizing KL divergence ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
A further problem is that we used KL divergence as a proxy for the faithfulness of the subgraphs, though this metric may not reflect whether the subgraphs found complete the induction task in the same way as the model.
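For reference, the proxy itself is straightforward to compute. In a sketch with illustrative tensor names (where `positions` indexes the second-$B$ tokens described above), the metric is:

```python
import torch.nn.functional as F

def kl_metric(full_logits, subgraph_logits, positions):
    """Mean KL(full model || subgraph) over the measured token
    positions; logits have shape (seq_len, vocab)."""
    p_log = F.log_softmax(full_logits[positions], dim=-1)
    q_log = F.log_softmax(subgraph_logits[positions], dim=-1)
    # F.kl_div takes the approximating distribution's log-probs first
    return F.kl_div(q_log, p_log, log_target=True, reduction="batchmean")
```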
Empirically, there is some evidence that ACDC did recover the important components of the induction circuit ([Figure 8](#A3.F8 "Figure 8 ‣ C.1 Subnetwork Probing ‣ Appendix C More details on the induction experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")), which strengthens the case for [Q1](#q1 "Q1: ‣ 1st item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). However, other components are found in addition to the mainline induction circuitry, which means this does not provide further evidence for [Q2](#q2 "Q2: ‣ 2nd item ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
In [Section 5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), we compare ACDC with two prior works that recovered circuits in models.
This allows us to test more directly whether ACDC recovers the subgraphs that carry out the task inside the model.
5 Scaling ACDC
---------------
In Section [4](#S4 "4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") we validated the ACDC algorithm on transformers that contained ground truth circuits (Section [4.1](#S4.SS1 "4.1 Faithfulness to algorithmic tasks with tracr ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) and on the induction task (Section [4.2](#S4.SS2 "4.2 Comparison to baseline on induction in a small transformer ‣ 4 Validation ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). In this section, we test ACDC on harder tasks in more realistic models. The tasks that we study are the completion of particular docstrings [[56](#bib.bibx56)] in Subsection [5.1](#S5.SS1 "5.1 The docstring circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"), the Indirect Object Identification (IOI) circuit in GPT-2 small [[1](#bib.bibx1)] in Subsection [5.2](#S5.SS2 "5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") and the completion of correct gendered pronouns [[57](#bib.bibx57)] in Subsection [5.3](#S5.SS3 "5.3 Gendered pronoun completion ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). We find that ACDC can perform better on KL divergence metrics than existing interpretability work, be useful at different levels of granularity and reveal interpretable components that input-output based techniques such as saliency maps could not find. We also highlight limitations of ACDC in each of these cases to help inform future research.
All three tasks in this section study next-token prediction in language models. We use small datasets of synthetic examples of similar sentences and completions. An example of one sentence from each dataset is shown in Table [1](#S5.T1 "Table 1 ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). Further details of the models, tasks and the exact ACDC setup can be found in Appendices [D](#A4 "Appendix D More details on the docstring experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")-[F](#A6 "Appendix F More details on the gendered pronoun experiment ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
| Section | Prompt | Completion |
| --- | --- | --- |
| [5.1](#S5.SS1 "5.1 The docstring circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") | def function(self, files, obj, state, size, shape, option): """document string example :param state: performance analysis :param size: pattern design :param | shape |
| [5.2](#S5.SS2 "5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") | When John and Mary went to the store, Mary gave a bottle of milk to | John |
| [5.3](#S5.SS3 "5.3 Gendered pronoun completion ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") | So Sarah is a really nice person, isn’t | she |
Table 1: The three next-token completion tasks in Section [5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). Additional experimental details can be found in Appendices [D](#A4 "Appendix D More details on the docstring experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")-[F](#A6 "Appendix F More details on the gendered pronoun experiment ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
###
5.1 The docstring circuit
[[56](#bib.bibx56), ] find a circuit in a small language model that is responsible for completing python docstrings, such as the prompt shown in Table [1](#S5.T1 "Table 1 ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
This circuit of attention heads controls which variable the model predicts in each docstring line, e.g. in [Table 1](#S5.T1 "Table 1 ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") it chooses shape over obj, state, size, or option. The circuit consists of 5 main attention heads, composing in up to three stages.
We apply the ACDC algorithm ([Section 3](#S3 "3 Methodology ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) to their dataset and generate the subgraph shown in [Figure 5(a)](#S5.F4.sf1 "4(a) ‣ Figure 5 ‣ 5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability"). Comparing this to the original circuit [[56](#bib.bibx56)]
we find (a) overlapping heads, (b) heads found by ACDC only, and (c) heads found in the manual interpretation only.
In the first class (a) we find heads 0.5, 1.4, 2.0, 3.0, and 3.6. All these manually identified heads are recovered by ACDC. In class (b) we find head 1.0, which the authors later added to their circuit to improve performance [[56](#bib.bibx56)]; ACDC shows for the first time where this head belongs in the circuit. In class (c) we find heads 0.2, 0.4 and 1.2. However, the first two heads are not actually relevant under the docstring distribution and were only added by the authors manually. Head 1.2 is considered a non-essential but supporting head by the authors and is not identified by ACDC at the current threshold.
We compare the numerical results between the ACDC circuit (34 edges) and the circuit described in [[56](#bib.bibx56), ] in Table [2](#S5.T2 "Table 2 ‣ 5.1 The docstring circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
The authors run a simple test including only their heads but not specifying edges (this corresponds to 388 edges), and also suggest a 26-edge circuit without being able to test it.
Since [[56](#bib.bibx56), ] use logit difference as their metric, we add a second ACDC run (90 edges) to the comparison that optimizes logit difference rather than KL divergence (see [Appendix A](#A1 "Appendix A Alternatives to minimizing KL divergence ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") for details on this adjustment). The corresponding subgraph can be found in Figure [9](#A4.F9 "Figure 9 ‣ D.2 Additional docstring experiments ‣ Appendix D More details on the docstring experiments ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
ACDC achieves similar or better performance in both metrics used by [[56](#bib.bibx56), ] while—as expected—being much more specific (i.e. using fewer edges).
| Metric | Full model | ACDC LD | ACDC KL | Manual heads | Manual circuit |
| --- | --- | --- | --- | --- | --- |
| Mean logit diff. | 0.48 | 0.18 | -2.4 | -1.2 | -3.9 |
| Correct-fraction | 64% | 44% | 12% | 42% | 0% |
| Edges | 1377 | 90 | 34 | 388 | 26 |
Table 2: Comparing our ACDC docstring results to manual interpretation from [[56](#bib.bibx56), ] using their metrics. We compare (from left to right) the full model, the subgraph from ACDC runs optimizing for logit difference (LD), and KL divergence (KL), as well as the subgraphs made manually from the attention heads given in [[56](#bib.bibx56), ] and finally the manual circuit that the work found.
The metrics used are (a) mean logit difference between the correct variable name and the highest wrong variable name, and (b) the fraction of test cases where the model assigned a higher logit to the right answer than to any of the 5 wrong answers.
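A sketch of both metrics under these definitions (tensor names are illustrative; `correct` holds the token id of the right variable name per example and `wrong` the ids of the 5 wrong candidates):

```python
import torch

def docstring_metrics(logits, correct, wrong):
    """logits: (n, vocab); correct: (n,); wrong: (n, 5).
    Returns (mean logit difference between the correct name and the
    highest-scoring wrong name, fraction where correct beats all 5)."""
    c = logits.gather(1, correct[:, None]).squeeze(1)
    w = logits.gather(1, wrong).max(dim=1).values
    return (c - w).mean().item(), (c > w).float().mean().item()
```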
A limitation worth noting is that we applied ACDC to a computational graph of attention heads and their query, key and value computational nodes, while [[56](#bib.bibx56), ] considered the attention heads’ outputs into every token position separately. This allowed them to distinguish two functions fulfilled by the same attention head (layer 1, head 4) at different positions, which couldn’t be inferred from the output of ACDC alone.
We made this choice for performance reasons (the long sequence length would have made the experiments significantly slower), but this is not a fundamental limitation. In [Section 5.3](#S5.SS3 "5.3 Gendered pronoun completion ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") we use ACDC to isolate the effects of individual positions in a different task.
###
5.2 The IOI circuit
[[1](#bib.bibx1), ] find a circuit ‘in the wild’ in GPT-2 small [[58](#bib.bibx58)].
The circuit identifies indirect objects (see for example Table [1](#S5.T1 "Table 1 ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) by using several classes of attention heads.
In this subsection we analyze how successful ACDC’s circuit recovery ([Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) is.
All nine heads found in [Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") belong to the IOI circuit, which is a subset of 26 heads out of a total of 144 heads in GPT-2 small.
Additionally, these 9 heads include heads from three different classes (Induction Heads, S-Inhibition Heads and Name Mover Heads) that are sufficient to complete the IOI task, showing that ACDC can indeed recover circuits rather than just subgraphs.
However, [Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") does not include heads from all the head classes that [[1](#bib.bibx1), ] found, as it does not include the Negative Name Mover Heads or the Previous Token Heads.
In [Figure 5(b)](#S5.F4.sf2 "4(b) ‣ Figure 5 ‣ 5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") we run ACDC with a lower threshold and find that it does recover Previous Token Heads.
Despite using a lower threshold, the subgraph still does not recover Negative Name Mover Heads, and the reduced threshold results in the identification of numerous less significant heads.
This is a case where KL divergence is insufficient for approximating model performance.
When circuit components are important but harmful for model performance, ACDC may remove these components in order to decrease the KL divergence.
We think that this limitation represents an important direction for future empirical and theoretical work. For example, we think that the following questions are open:
* •
To what extent do negative components arise in transformer language models?
* •
What are better metrics for measuring how well interpretations including negative components reflect model performance?
We discuss some modifications to ACDC that partially address these issues in [Appendix A](#A1 "Appendix A Alternatives to minimizing KL divergence ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").

(a) ACDC Subgraph on the docstring task, $\tau=0.095$.

(b) ACDC on the IOI task, $\tau=0.03$.
Figure 5: Two examples of running ACDC on different tasks ([Section 5](#S5 "5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). Thresholds $\tau$ are given in the captions.
###
5.3 Gendered pronoun completion
[[57](#bib.bibx57), ] begin to elicit the subgraph of GPT-2 small responsible for completing correctly gendered pronouns. They used an earlier version of ACDC as the basis of ongoing work (see Appendix [F](#A6 "Appendix F More details on the gendered pronoun experiment ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability") for full details). The result of applying ACDC with a low threshold $\tau=0.05$ is shown in Figure [11](#A6.F11 "Figure 11 ‣ Appendix F More details on the gendered pronoun experiment ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability").
The computational subgraphs generated by ACDC on the gendered pronoun completion task (see Appendix [F](#A6 "Appendix F More details on the gendered pronoun experiment ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")) show that MLP computations are more important than attention head computations in this task, compared to the IOI task ([Section 5.2](#S5.SS2 "5.2 The IOI circuit ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). Early, middle and late layer MLPs have important roles in the subgraph. For example, MLPs 3 and 5 are the important components at the name position (which must be used to identify the correct gender), as they have multiple incident edges; MLP 7 at the ‘ is’ position has the most incoming connections of any node in the graph; and the late layer MLPs 10 and 11 have the largest direct effect on the output. MLP 7’s importance at the ‘ is’ position is an example of a discovery that could not have been made with simpler interpretability tools such as saliency maps (Section [2](#S2 "2 Related work ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")). This is because the ‘ is’ position mediates [[43](#bib.bibx43)] the effect of the input position (which is before the ‘ is’ position) on the output position (which is after the ‘ is’ position).
6 Conclusion
-------------
We have identified a common workflow for mechanistic interpretability. First, pin down a behavior using a metric and dataset. Second, conduct activation patching experiments to understand which abstract units (e.g. transformer heads) are involved in the behavior. Third, iterate the previous steps with variations of the behavior under study, until the identified model behavior is understood.
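The second step can be sketched as follows, assuming a TransformerLens-style hook API (the exact calls here are illustrative rather than authoritative):

```python
def patching_scores(model, clean, corrupted, metric, sites):
    """For each abstract unit (hook site), replace its activation on
    the clean input with the corrupted-run activation and record the
    change in the behavior metric."""
    _, corrupted_cache = model.run_with_cache(corrupted)
    baseline = metric(model(clean))
    scores = {}
    for site in sites:
        def patch(activation, hook):
            # overwrite the clean activation with the corrupted one
            return corrupted_cache[hook.name]
        patched = model.run_with_hooks(clean, fwd_hooks=[(site, patch)])
        scores[site] = metric(patched) - baseline
    return scores
```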
The proposed algorithm, ACDC, systematically conducts all the activation patching experiments necessary to find which circuit composed of abstract units is responsible for the behavior. We have shown that ACDC recovers most of the compositional circuit which implements a given language model behavior, as judged by toy models where the implementation is known and comparison to previous mechanistic interpretability work. ACDC is also useful to produce *novel* mechanistic interpretability work, as demonstrated by [[57](#bib.bibx57), ], which applied ACDC to discover the outline of a circuit for gendered pronoun completion ([Section 5.3](#S5.SS3 "5.3 Gendered pronoun completion ‣ 5 Scaling ACDC ‣ Towards Automated Circuit Discovery for Mechanistic Interpretability")).
On balance, we think the evidence supports the claim that ACDC can automate part of interpretability work.
However, the current algorithm has limitations which prevent it from fully automating the second step of the identified workflow (activation patching). First, it systematically misses some classes of abstract units that are part of the circuit, for example the negative name mover heads from IOI [[1](#bib.bibx1)]. Second, the behavior of the algorithm is sensitive to the iteration order of the search over abstract units, which remains a hard-to-tune hyperparameter.
Another limitation of the current work is the lack of empirical justification for some of the design choices, such as the recommendation to use the KL divergence to specify the behavior.
We leave an extensive exploration of the design choices (an ablation study) and improvements to address the algorithm's limitations for future work.
Overall, this work is a step towards defining a paradigm for future mechanistic interpretability work. The paradigm is concrete enough to be partially automated, which we show by automating the discovery of both known and new language model behaviors.
Automating mechanistic interpretability is likely necessary to be able to describe the many behaviors of the large models which are in use today, otherwise the amount of specialized labor required is prohibitive.
We hope that our open-source implementation of ACDC accelerates interpretability research from the community.
Acknowledgments and Disclosure of Funding
-----------------------------------------
This work would not have been possible without the generous support of Redwood Research through their REMIX program. We would like to thank Chris Mathwin, Jett Janiak, Chris MacLeod, Neel Nanda and Alexandre Variengien for feedback on a draft of this paper. Arthur Conmy would like to thank Jacob Steinhardt, Alexandre Variengien and Buck Shlegeris for extremely helpful conversations that shaped ACDC. We would also like to thank Haoxing Du for working on an early tool, Daniel Ziegler who discussed experiments that inspired our Subnetwork Probing analysis, and Lawrence Chan who helped us frame our contributions and suggested several experiments. We provide two implementations of ACDC:444Both implementations are available at <https://github.com/ArthurConmy/Automatic-Circuit-Discovery> one built on top of Redwood Research’s rust\_circuit library [[59](#bib.bibx59)] and one in the transformer\_lens library [[60](#bib.bibx60)].
Appendix
-------- |
d0393481-f9cc-4f46-9c83-2ea8870472b2 | trentmkelly/LessWrong-43k | LessWrong | Are recent LLMs better at reasoning or better at memorizing?
TLDR; By carefully designing a reasoning benchmark that counteracts memorization skills in LLMs, the LingOly-TOO (L2) Benchmark challenges frontier models with unseen questions and answers and makes the case that LLMs are not consistent reasoning machines yet.
Links: Paper - Leaderboard - Dataset
Figure 1: LingOly-TOO Benchmark results from the paper. Unobfuscated scores are in light orange and obfuscated scores in dark orange.
Do recently announced LLMs reason?
By the time this post was published, a new wave of frontier models had been announced, including ones specifically designed to reason using Inference Time Compute (ITC) [1]. Anthropic’s Claude 3.7 Sonnet, OpenAI’s o1, o3 and GPT 4.5 models, DeepSeek’s R1 and others demonstrate impressive performance on several reasoning tasks that only a few months ago were considered far from solvable[2][3][4]. This, rightfully, ignited conversations about an important question: Have we finally reached the era where models can do advanced reasoning?
The need for an unbiased evaluation of reasoning
A standard framework to answer such a question in the NLP domain is to use evaluation benchmarks that test various reasoning skills via tasks which require a model to apply several sequential reasoning steps to reach the correct answer. The key to this approach is to select tasks and test cases that models have not had prior exposure to. This, in the era of LLMs, has proven to be a major challenge, in part because of the massive scale of the training datasets used in developing these models, which consist of trillions of words, but also because several of the frontier models are proprietary, with little to no information about their training data available for the research community to take into account in their assessments. This consequently complicates objective evaluation of reasoning skills in LLMs. It also raises concerns that we might be over-estimating their capabilities in advanced reasoning.
LingOly-TOO (L2) Be |
664abbca-b3d8-497b-a7fa-ec0765aa97e8 | trentmkelly/LessWrong-43k | LessWrong | Progress links and tweets, 2023-02-22
Announcements
* Introducing Speculative Technologies, a private DARPA-like research organization. See also coverage in Forbes and Ben Reinhardt’s AMA on the Progress Forum
* I’ll be speaking on how to write about progress at the Thesis Festival this weekend
* Day One policy memo on enabling faster NIH funding timelines (via @LNuzhna)
Opportunities
* Convergent Research is hiring a Director of Development (via @AGamick)
* Dwarkesh is looking for help with his podcast
Links
* A case for more techno-optimistic storytelling
* Patrick Collison interviewed by Reid Hoffman
* David Deutsch interviewed by Naval Ravikant
* Marc Andreessen interviewed by Dwarkesh Patel
* OpenAI will let you “define your AI’s values”
* “Most of the rank and file at the NRC are not anti-nuclear”
Queries
* Is there a historical price series for moving a cubic meter of dirt?
Quotes
* Why was the wind never used on roads? Why no carriages or wagons with sails?
* When Carnegie hired a staff chemist: “great secrets did the doctor open up to us”
* Why corporate R&D in the 1980s was mediocre compared to Bell Labs and GE
* “Even the aspiration” to sustainability is dangerous, says David Deutsch
* Francis Bacon with the understatement of the millennium
* “It can be done”
Tweets & retweets
* The Martian as one of the only tech positive films out there (@brian_armstrong)
* Katalin Karikó has received enough awards to fill a cabinet
* Why is it so expensive to build transit in the US? Summary findings
* LLMs are just a massively scaled up version of the “travesty generator”
* Toolformer lets language models use tools like web search, calculators, and APIs. Original paper on arXiv
* “Within a decade solar will be cheap enough that CO2 will be the best place to get carbon.” And how that will change the economy
* Quantifying healthspan in dogs for longevity research (@celinehalioua)
* Spy balloons have a long history
Charts
* CGO poll: people think social media is impor |
aa0d2697-2428-406a-ae3e-9856cbf4aea2 | StampyAI/alignment-research-dataset/blogs | Blogs | Markus Schmidt on Risks from Novel Biotechnologies
 [Dr. Markus Schmidt](http://www.markusschmidt.eu/) is founder and team leader of [Biofaction](http://www.biofaction.com/), a research and science communication company in Vienna, Austria. With an educational background in electronic engineering, biology and environmental risk assessment he has carried out environmental risk assessment and safety and public perception studies in a number of science and technology fields (GM-crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.
He was/is coordinator/partner in several national and European research projects, for example [SYNBIOSAFE](http://synbiosafe.eu/), the first European project on safety and ethics of synthetic biology (2007-2008), COSY on communicating synthetic biology (2008-2009), TARPOL on industrial and environmental applications of synthetic biology (2008-2010), CISYNBIO on the depiction of synthetic biology in movies (2009-2012), a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012), or ST-FLOW on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).
He produced science policy reports for the Office of Technology Assessment at the German Bundestag (on GM-crops in China), and the Austrian Ministry of Transport, Innovation and Technology (nanotechnology and converging technologies). He served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J Craig Venter Institute, the Alfred P. Sloan Foundation, and Bioethics Council of the German Parliament as well as to several thematically related international projects. Markus Schmidt is the author of several peer-reviewed articles, he edited a special issue and two books about synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.
In addition to the scientific work, he organized a Science Film Festival and produced an art exhibition (both 2011) to explore novel and creative ideas and interpretations on the future of biotechnology.
**Luke Muehlhauser**: I’ll start by giving our readers a quick overview of [synthetic biology](http://en.wikipedia.org/wiki/Synthetic_biology), the “design and construction of biological devices and systems for useful purposes.” As explained in [a 2012 book you edited](http://www.amazon.com/Synthetic-Biology-Markus-Schmidt/dp/3527331832/), major applications of synthetic biology include:
* **Biofuels**: ethanol, algae-based fuels, bio-hydrogen, microbial fuel cells, etc.
* **Bioremediation**: wastewater treatment, water desalination, solid waste decomposition, CO2recapturing, etc.
* **Biomaterials**: bioplastics, bulk chemicals, cellulosomes, etc.
* **Novel developments**: protocells and xenobiology for the production of novel cells and organisms.
But in addition to promoting the useful applications of synthetic biology, you also [speak](http://www.markusschmidt.eu/?page_id=27) and [write](http://www.synbiosafe.eu/uploads/pdf/Schmidt_etal-2009-SSBJ.pdf) extensively about the potential *risks* of synthetic biology. Which risks from novel biotechnologies are you most concerned about?
---
**Markus Schmidt**: It doesn’t come as a surprise that a new and emerging technology that is hallmarked as a game changer for the bioeconomy also has the potential for causing harm.
Traditionally we can see direct risks related to safety and security. Safety deals with potential unintended consequences, such as accidents, while security refers to harm that is caused intentionally, such as bioterrorism. Right now, safety issues of SB are mostly covered by existing regulations and practices developed for genetic engineering (GE). But as SB is developing beyond the scope of GE, first of all it deals with genetic systems rather than a set of one or few genetic elements; second, it attempts to apply true engineering principles to biology (such as standardization, modularization, hierarchies of abstraction, and separation of design and fabrication); and third, it doesn’t only take what nature provides in terms of biochemical systems but attempts to go beyond that. So one concern is that while right now GE regulations seem to be adequate, in a not so distant future GE risk analysis practices will be outdated and we might run into difficulties in assessing the safety risks of SB products. Another risk stems from one of the aims of SB to make biology easier to engineer. While this is predominantly a positive approach, it also brings with it the fact that more and more people outside the elite institutions will be able to use SB, such as amateur biologists. While amateurs have overwhelmingly good intentions, many of them do not have a background or training in biosafety and thus have a higher risk for accidents in their garage labs. A third point is the design of new-to-nature “xenobiological” systems where alternative biochemical structures are used to run biological operations, such as additional amino acids, different types of nucleic acids etc., but also novel types of cells or protocells that behave differently than natural cells. The introduction of these alternative systems is of great interest to science, society and industry, but needs a careful assessment in order not to cause unwanted effects.
Security comes with a different set of problems. Some experts believe that terrorists could start to use SB to enhance existing or develop new pathogens, or get hold of them via DNA synthesis.
Apart from these classical risks, we might also see indirect risks in other areas, such as the changes SB could cause to the socioeconomic structure. For example, one might ask if the use of this technology is going to benefit most people or just a few. Questions such as these, however, are not unique to SB but come up in every debate about technologies that promise huge changes.
---
**Luke**: One problem for the future of biosecurity is that it seems likely that advanced bioweapons will be cheaper to make and harder to track than nuclear fissile material. Thus, states and terrorists might find it easier to threaten groups of people — or maybe the world — with (say) home-built superviruses than with home-built nuclear weapons. And yet the release of a carefully designed supervirus could be as devastating, or more devastating, than a nuclear detonation. What’s your perspective on this?
---
**Markus**: The issue of bioterrorism has been tackled by several high level national security groups, such as the NSABB in the US. While there is a certain, although small, risk of people using biotech for illicit purposes, measures have been taken to keep that from happening. One major concern, for example, was mail-ordered viruses from DNA synthesis companies. Following debates among the companies, governments and other stakeholders, there is now an effective screening mechanism in place to prevent people from ordering pathogenic sequences found on the internet.
Apart from that, it would be extremely difficult to make a new supervirus. The ability to make new forms of life or viruses is still very limited, and will be for the time being. But the issue is on the agenda for national and international security agencies so that any developments and innovations with dual-use potential are monitored (e.g. by the UN and others).
---
**Luke**: What seem to be the most important factors in getting regulatory bodies and other policy-makers to produce effective policy for biotechnological risk mitigation?
---
**Markus**: Innovations in science and technology tend to outpace the speed by which regulatory bodies operate. In other words, policy-makers run the risk of being too slow to react to new challenges to the regulatory system; plus, once they react, it still takes some time before adapted or new regulations or guidelines are actually in place. In a time where the bioeconomy is believed to hold great potential for Europe or the USA, regulatory bodies cannot afford to hold up research and innovation once the techno-science goes beyond the limits of established regulations. So a real-time and forward looking assessment by policy makers is needed, as demonstrated e.g. by the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) of the European Commission, which is currently analysing the need to update the existing biotech regulation on synthetic biology.[1](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_0_10558 "Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in association with Scientific Committee on Consumer Safety (SCCS), Scientific Committee on Health and Environmental Risks (SCHER). request for a joint scientific opinion: on Synthetic Biology")
Another important point is the acknowledgement of the convergence of different technologies into one. In the case of synthetic biology we see the confluence of biotech, nanotech, IT and other areas into one converging field. So far the biotech and IT regulations come from a very different background with different aims and distinct cultures, and path dependencies tend to lock in the opportunities seen by the regulatory community. Future regulations of synthetic biology must take this convergence into account.
A third aspect is a broader stakeholder consultation, a more participatory form of arriving at conclusions compared to the first genetic engineering rules implemented in the mid-70s.
---
**Luke**: My impression is that government ethics & safety committees like SCENIHR are rarely able to spur policy changes that are implemented quickly enough for regulators to “keep up” with new technological developments. Is that your impression as well? And if it is, is there anything different about SCENIHR that should give us more hope than might usually be justified?
---
**Markus**: No. In principle, all the committees that advise governments on new science and technologies face the problem of keeping up to date with the pace of research and innovation. In synthetic biology, quite remarkably, a lot of committees from different countries and with different thematic focuses have taken up synbio as a case study. So altogether I think that synbio is reasonably well covered.
Let’s not forget that although synbio has been promised to be a “game-changer”, the “next industrial revolution”, etc., real breakthroughs that impact the market are yet to come. With the few exceptions where a quick response was necessary by governments (such as in DNA synthesis and biosecurity), the “speed kills” argument doesn’t weigh as heavily as the need to provide sustainable medium to long term governance frameworks.
I think the following statement is ascribed to Bill Gates, who said: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”
---
**Luke**: Interesting. Can you give an example or two of the kind of ethics & safety committee success that you hope for with SCENIHR?
---
**Markus**: What I would like to see as an outcome of the SCENIHR opinion on synbio is a statement regarding where, when and how the risks of synbio will go beyond those of genetically modified organisms. How should risk assessment be adapted or amended so we can continue to have a robust assessment of the risks involved? Also I would like to see an analysis of the potential for novel built-in safety locks (aka. semantic containment[2](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_1_10558 "Schmidt M, de Lorenzo V. 2012. Synthetic constructs in/for the environment: Managing the interplay between natural and engineered Biology. FEBSLetters. Vol. 586: 2199-2206"), genetic firewall[3](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/#footnote_2_10558 "Schmidt M. 2010. Xenobiology: a new form of life as the ultimate biosafety tool. BioEssays. Vol.32(4): 322-331")) and recommendations on how to use them, so policy makers, scientists, funding agencies, and industry have a clear idea for which application built-in safety locks can be used, which additional level of safety can be provided, and which research, innovation and governance gaps have to be filled in order to have a fully operational safety lock available.
---
**Luke**: Thanks, Markus!
---
1. [Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in association with Scientific Committee on Consumer Safety (SCCS), Scientific Committee on Health and Environmental Risks (SCHER). request for a joint scientific opinion: on Synthetic Biology](http://ec.europa.eu/health/scientific_committees/docs/synthetic_biology_mandate_en.pdf)
2. [Schmidt M, de Lorenzo V. 2012. Synthetic constructs in/for the environment: Managing the interplay between natural and engineered Biology. FEBSLetters. Vol. 586: 2199-2206](http://www.markusschmidt.eu/pdf/12-02-Synthetic-constructs.pdf)
3. [Schmidt M. 2010. Xenobiology: a new form of life as the ultimate biosafety tool. BioEssays. Vol.32(4): 322-331](http://www.markusschmidt.eu/pdf/Xenobiology-Schmidt_Bioessays_201004.pdf)
The post [Markus Schmidt on Risks from Novel Biotechnologies](https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
9d46efe7-5267-45f3-a2fa-47d275d9887c | trentmkelly/LessWrong-43k | LessWrong | Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models)
This essay was written back in 2019, not too long after GPT-2 came out. Although parts of it are dated- mostly the list of new achievements in Natural Language Processing- on the whole, it’s held up really well and outlines the core of my view on questions regarding semantics, philosophy of language, and natural language processing. I thought it was quite forgotten, but I recently saw an essay by Dragon God that mentioned the core of the idea in passing:
“Premise 1: Modelling is transitive. If X models Y and Y models Z, then X models Z.
Premise 2: Language models reality. "Dogs are mammals" occurs more frequently in text than "dogs are reptiles" because dogs are in actuality mammals and not reptiles. This statistical regularity in text corresponds to a feature of the real world. Language is thus a map (albeit flawed) of the external world.
Premise 3: GPT-3 models language. This is how it works to predict text.”
Which is the argument of the text- indeed almost the same words in parts. Perhaps the text is outdated now, but if so, I’d like to think it’s because the ideas within it have entered the water.
I’m putting my existing work on AI on Less Wrong, and editing as I go, in preparation for publishing a collection of my works on AI in a free online volume. If this content interests you, you could always follow my Substack; it's free and also under the name Philosophy Bear.
1. Recent achievements in Natural Language Processing
[2022 Edit: Contemporary readers could skip this section, although it may be a useful reminder of how quickly things have moved since then.]
Natural Language Processing (NLP) per Wikipedia:
“Is a sub-field of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.”
The field has seen tremendous advances during the recent exp |
980d9f6f-855e-4b58-8b93-ea82feca4da0 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3487
Highlights Learning Complex Goals with Iterated Amplification (Paul Christiano et al) : This blog post and the accompanying paper introduces iterated amplification, focusing on how it can be used to define a training signal for tasks that humans cannot perform or evaluate, such as designing a transit system. The key insight is that humans are capable of decomposing even very difficult tasks into slightly simpler tasks. So, in theory, we could provide ground truth labels for an arbitrarily difficult task by a huge tree of humans, each decomposing their own subquestion and handing off new subquestions to other humans, until questions are easy enough that a human can directly answer them. We can turn this into an efficient algorithm by having the human decompose the question only once, and using the current AI system to answer the generated subquestions. If the AI isn't able to answer the subquestions, then the human will get nonsense answers. However, as long as there are questions that the human + AI system can answer but the AI alone cannot answer, the AI can learn from the answers to those questions. To reduce the reliance on human data, another model is trained to predict the decomposition that the human performs. In addition, some tasks could refer to a large context (eg. evaluating safety for a specific rocket design), so they model the human as being able to access small pieces of the context at a time. They evaluate on simple algorithmic tasks like distance between nodes in a graph, where they can program an automated human decomposition for faster experiments, and there is a ground truth solution. They compare against supervised learning, which trains a model on the ground truth answers to questions (which iterated amplification does not have access to), and find that they can match the performance of supervised learning with only slightly more training steps. Rohin's opinion: This is my new favorite post/paper for explaining how iterated amplification works, since it very succinctly and clearly makes the case for iterated amplification as a strategy for generating a good training signal. I'd recommend reading the paper in full, as it makes other important points that I haven't included in the summary. Note that it does not explain a lot of Paul's thinking. It explains one particular training method that allows you to train an AI system with a more intelligent and informed overseer. Relational inductive biases, deep learning, and graph networks (Peter W. Battaglia et al) (summarized by Richard): "Part position paper, part review, and part unification", this paper emphasises the importance of combinatorial generalisation, which is key to how humans understand the world. It argues for approaches which perform computation over discrete entities and the relations between them, such as graph networks. The authors claim that CNNs and RNNs are so successful due to relational inductive biases - for example, the bias towards local structure induced by convolutional layers. Graph networks are promising because they can express arbitrary relational biases: any nodes can be connected with any others depending on the structure of the problem. Further, since graph networks learn functions which are reused for all nodes and edges, each one can be applied to graphs of any shape and size: a form of combinatorial generalisation. In this paper's framework, each 'graph block' does computations over an input graph and returns an output graph. 
The relevant part of the output might be the values of edges, or those of nodes, or 'global' properties of the overall graph. Graph blocks can be implemented by standard neural network architectures or more unusual ones such as message-passing neural networks or non-local neural networks. The authors note some major open questions: how to generate the graphs in the first place, and how to adaptively modify them during the course of computation. Richard's opinion: This paper is an excellent holistic discussion of graph networks and reasons to think they are promising. I'm glad that it also mentioned the open problems, though, since I think they're pretty crucial to using graphs in deep learning, and current approaches in this area (e.g. capsule networks' dynamic control flow) aren't satisfactory. Technical AI alignment Iterated amplification Learning Complex Goals with Iterated Amplification (Paul Christiano et al) : Summarized in the highlights! Agent foundations When EDT=CDT, ADT Does Well (Diffractor) Learning human intent One-Shot Observation Learning (Leo Pauly et al) Preventing bad behavior Safe Reinforcement Learning with Model Uncertainty Estimates (Björn Lütjens et al) Addressing three problems with counterfactual corrigibility: bad bets, defending against backstops, and overconfidence. (Ryan Carey) Robustness Learning from Untrusted Data (Charikar, Steinhardt, and Valiant) (summarized by Dan H): This paper introduces semi-verified learning. Here a model learns from a verified or trusted dataset, and from an untrusted dataset which consists in a mixture of legitimate and arbitrary examples. For the untrusted dataset, it is not known which points are legitimate and which are not. This scenario can occur when data is scraped from the internet, recorded by unreliable devices, or gathered through crowdsourcing . Concretely if a (possibly small) fraction of the scraped data is hand-labeled, then this could count as the trusted set, and the remaining data could be considered the untrusted set. This differs from semi-supervised learning where there are labeled and unlabeled task-relevant examples. Here there are trusted examples and examples which are untrusted (e.g., labels may be wrong, features may be out-of-distribution, examples may be malicious, and so on). See the full paper for theorems and an algorithm applicable to tasks such as robust density estimation. Dan H's opinion: The semi-verified model seems highly useful for various safety-related scenarios including learning with label corruption , poisoned input data, and minimal supervision. Uncertainty Do Deep Generative Models Know What They Don't Know? (Eric Nalisnick et al) Read more: Section 4.3 of this paper makes similar observations and ameliorates the issue. This paper also demonstrates the fragility of density estimators on out-of-distribution data. Forecasting Thoughts on short timelines (Tobias Baumann) : This post argues that the probability of AGI in the next ten years is very low, perhaps 1-2%. The primary argument is that to get AGI that quickly, we would need to be seeing research breakthroughs frequently, and empirically this is not the case. This might not be true if we expect that progress will accelerate in the future, but there's no reason to expect this -- we won't get recursive self-improvement before AGI and there won't be a huge increase in resources devoted to AI (since there is already so much excitement). 
We might also say that we are so clueless that we should assign at least 10% to AGI in ten years, but it doesn't seem we are that ignorant, and in any case it's not obvious that a prior should assign 10% to this outcome. Expert surveys estimate non-negligible probability on AGI in ten years, but in practice it seems the predominant opinion is to confidently dismiss a short timelines scenario. Rohin's opinion: I do think that the probability of AGI in ten years is larger than 1-2%. I suspect my main disagreement is with the conception of what counts as groundbreaking progress. Tobias gives the example of transfer from one board game to many other board games; I think that AGI wouldn't be able to solve this problem from scratch, and humans are only capable of this because of good priors from all the other learning we've done throughout life, especially since games are designed to be human-understandable. If you make a sufficiently large neural net and give it a complex enough environment, some simple unsupervised learning rewards, and the opportunity to collect as much data as a human gets throughout life, maybe that does result in AGI. (I'd guess not, because it does seem like we have some good priors from birth, but I'm not very confident in that.) Other progress in AI Exploration Curiosity and Procrastination in Reinforcement Learning (Nikolay Savinov and Timothy Lillicrap) : This blog post explains Episodic Curiosity through Reachability , discussed in AN #28 . As a reminder, this method trains a neural net to predict whether two observations were close in time to each other. Recent observations are stored in memory, and the agent is rewarded for reaching states that are predicted to be far away from any observations in memory. Rohin's opinion: This is easier to read than the paper and more informative than our summaries, so I'd recommend it if you were interested in the paper. Successor Uncertainties: exploration and uncertainty in temporal difference learning (David Janz et al) Deep learning Relational inductive biases, deep learning, and graph networks (Peter W. Battaglia et al) : Summarized in the highlights! Relational recurrent neural networks (Adam Santoro, Ryan Faulkner, David Raposo et al) (summarized by Richard): This paper introduces the Relational Memory Core, which allows interactions between memories stored in memory-based neural networks. It does so using a "self-attention mechanism": each memory updates its contents by attending to all other memories via several "attention heads" which focus on different features. This leads to particularly good performance on the nth-farthest task, which requires the ranking of pairwise distances between a set of vectors (91% accuracy, compared with baseline 30%), and the Mini-Pacman task. Richard's opinion: While performance is good on small problems, comparing every memory to every other doesn't scale well (a concern the authors also mention in their discussion). It remains to be seen how pruning older memories affects performance. Relational Deep Reinforcement Learning (Vinicius Zambaldi, David Raposo, Adam Santoro et al) (summarized by Richard): This paper uses the self-attention mechanism discussed in 'Relational recurrent neural networks' to compute relationships between entities extracted from input data. The system was tested on the Box-World environment, in which an agent needs to use keys to open boxes in a certain order. 
It generalised very well to test environments which required much longer sequences of actions than any training examples, and improved slightly on a baseline for Starcraft mini-games. Richard's opinion: Getting neural networks to generalise to longer versions of training problems is often surprisingly difficult, so I'm impressed by the Box-World results; I would have liked to see what happened on even longer problems. Relational inductive bias for physical construction in humans and machines (Jessica B. Hamrick, Kelsey R. Allen et al) Applications Applying Deep Learning To Airbnb Search (Malay Haldar) Machine learning Fluid Annotation: An Exploratory Machine Learning–Powered Interface for Faster Image Annotation (Jasper Uijlings and Vittorio Ferrari) : This post describes a system that can be used to help humans label images to generate labels for segmentation. The post summarizes it well: "Fluid Annotation starts from the output of a strong semantic segmentation model, which a human annotator can modify through machine-assisted edit operations using a natural user interface. Our interface empowers annotators to choose what to correct and in which order, allowing them to effectively focus their efforts on what the machine does not already know." Rohin's opinion: I'm excited about techniques like this that allow us to scale up AI systems with less human effort, by focusing human effort on the aspects of the problem that AI cannot yet solve, while using existing AI systems to do the low-level work (generating a shortlist of potential segmentations, in this case). This is an example of the paradigm of using AI to help humans more effectively create better AI, which is one of the key ideas underlying iterated amplification. (Though iterated amplification focuses on how to use existing AI systems to allow the human to provide a training signal for tasks that humans cannot perform or evaluate themselves .) |
317e4126-5498-4ff8-9d58-65d043e02bce | trentmkelly/LessWrong-43k | LessWrong | Histograms are to CDFs as calibration plots are to...
As you know, histograms are decent visualizations for PDFs with lots of samples...
10k predictions, 20 bins
...but if there are only a few samples, the histogram-binning choices can matter a lot:
10 predictions, 4 binssame 10 predictions, 7 bins
The binning (a) discards information, and worse, (b) is mathematically un-aesthetic.
But a CDF doesn't have this problem!
same 10 predictions, every data point precisely represented
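(For the curious: an empirical CDF takes only a couple of lines; a sketch, assuming numpy and matplotlib:)

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(samples):
    """Empirical CDF: every sample becomes one step, no binning."""
    x = np.sort(samples)
    return x, np.arange(1, len(x) + 1) / len(x)

x, y = ecdf(np.random.default_rng(0).normal(size=10))
plt.step(x, y, where="post")
plt.show()
```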
----------------------------------------
If you make a bunch of predictions, and you want to know how well they're calibrated, classically you make a graph like this:
source: SSC's 2019 prediction grading
But, as with a histogram, this depends on how you bin your predictions.
100 predictions, 10 binssame 100 predictions, 30 bins
Is there some CDF-like equivalent here? Some visualization with no free parameters?
----------------------------------------
I asked that question to several people at Arbor Summer Camp. I got three answers:
1. "You get from a PDF to a CDF by integrating. So, here, analogously, let's integrate (num predictions with confidence < x that came true) minus (expected num predictions with confidence < x that came true)."
2. (the same thing, said in different words)
3. (the same thing, said in different words)
If we make a "CDF" for the above 100 predictions by applying these three insights, we get:
I find this a little harder to read than the calibration plots above, which I choose to interpret as a good sign, since CDFs are a little harder to read than histograms. The thing to keep in mind, I think, is: when the curve is going up, it's a sign your probabilities are too high; when it's going down, it's a sign your probabilities are too low.
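(And here's one way to compute the curve itself; a sketch under my naming, with the sign chosen so that overconfidence slopes upward, matching the reading guide above:)

```python
import numpy as np
import matplotlib.pyplot as plt

def cumulative_calibration(probs, outcomes):
    """Sort predictions by stated probability, then accumulate
    (stated - actual). Rising segments mean stated probabilities
    exceed observed frequencies there (too high); falling segments
    mean they undershoot (too low)."""
    order = np.argsort(probs)
    p = np.asarray(probs, float)[order]
    y = np.asarray(outcomes, float)[order]
    return p, np.cumsum(p - y)

rng = np.random.default_rng(0)
p = rng.uniform(size=100)
y = rng.uniform(size=100) < p  # perfectly calibrated toy predictor
x, curve = cumulative_calibration(p, y)
plt.step(x, curve, where="post")
plt.xlabel("stated probability")
plt.ylabel("cumulative (stated - actual)")
plt.show()
```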
> Test: how would you describe the problems that this predictor has?
>
> Solution.
(Are there any better visualizations? Maybe. I looked into this a couple years ago, but looking back at it, I think this simple "sum(expected-actual predictions with p<x)" graph is |
814ded84-9ce3-43cc-b1e3-a67ae80c0653 | trentmkelly/LessWrong-43k | LessWrong | The Question Of Perception
The below text is an excerpt of this article.
----------------------------------------
Once during an Indian coconut harvest, a farmer, tired from his day’s work of chopping down fruit, slumped down in the shade of a tree to enjoy a coconut, and upon splitting it open, found inside a message from God (or in this case, from Vishnu, his Hindu deity). The Brahmic writing was plainly visible for anyone to see, spelt out in the two halves of oily white meat. The implications of such an experience could only have been one of the following: the first is that the Supreme Being has no qualms about revealing his Divine Will in the contents of mere palm fruit, any more than in a whirlwind or through an oracle. The second is that the farmer’s perceptual systems produced an inaccurate representation of what he saw engraved in the fruit lining. It’s anybody’s guess as to which it really was. But regardless of whatever information was contained in the coconut, the moral of the story is that whilst miracles are known to happen, it can also be said that people regularly see things which are not there.
That we end up being misled by our senses is a widely-accepted truism, as most humans mistake the limits of their perception for the limits of the world itself. It’s not uncommon for people to push forward in their endeavors with a premature understanding of things, using inadequate standards to measure what they see in the world around them. Such errors in judgement range from being mild, such as mistakenly purchasing a rotten apple because you didn’t inspect its underside, to being detrimental, such as presuming that the oasis in the middle of the desert is real, when it’s only a hallucination. Hence, there are times when people are not so much moved by external objects as by their perception of those objects; what they claim to “know” is in fact only known conditionally, since human knowledge is always limited, thanks in part to our limited perceptual systems. Rare are thos |
38836b72-de0d-4f73-a177-5ccef4b7298a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | An Attempt at Preference Uncertainty Using VNM
(This is a (possibly perpetual) draft of some work that we (I) did at the Vancouver meetup. Thanks to my meetup buddies for letting me use their brains as supplementary computational substrate. Sorry about how ugly the LaTeX is; is there a way to make this all look a bit nicer?)
(Large swaths of this are obsolete. Thanks for the input, LW!)
The Problem of Decision Under Preference Uncertainty
----------------------------------------------------
Suppose you are uncertain whether it is good to eat meat or not. It could be OK, or it could be very bad, but having not done the thinking, you are uncertain. And yet you have to decide what to eat *now*; is it going to be the tasty hamburger or the morally pure vegetarian salad?
You have multiple theories about your preferences that contradict in their assessment, and you want to make the best decision. How would you decide, even in principle, when you have such uncertainty? This is the problem of Preference Uncertainty.
Preference Uncertainty is a daily fact of life for humans; we simply don't have introspective access to our raw preferences in many cases, but we still want to make the best decisions we can. Just going with our intuitions about what seems most awesome is usually sufficient, but on higher-stakes decisions and theoretical reasoning, we want formal methods with more transparent reasoning processes. We especially like transparent formal methods if we want to create a Friendly AI.
There is unfortunately very little formal analysis of the preference uncertainty problem, and what has been done is incomplete and more philosophical than formal. Nonetheless, there has been some good work in the last few years. I'll refer you to [Crouch's thesis](http://commonsenseatheism.com/wp-content/uploads/2013/04/Crouch-Moral-Uncertainty-and-Intertheoretic-comparisons-of-value.pdf) if you're interested in that.
Using VNM
---------
I'm going to assume VNM. That is, that rational preferences imply a utility function, and we decide between lotteries, choosing the one with highest expected utility.
The implications here are that the possible moral theories (m) each have an associated utility function (U_m) that represents their preferences. Also by VNM, our solution to preference uncertainty is a utility function U'.

We are uncertain between moral theories, so we have a probability distribution over moral theories, P(m).

To make decisions, we need a way to compute the expected value of some lottery L. Each lottery is essentially a probability distribution over the set of possible outcomes O.

Since we have uncertainty over multiple things (both the moral theory m and the outcome o), the domain of the final preference structure is both moral theories and outcomes: U' : M × O → ℝ.
Now for some conditions. In the degenerate case of full confidence in one moral theory m (P(m) = 1), the overall preferences should agree with that theory:

U'(m, ·) = k_m · U_m + c_m

for some k_m > 0 and c_m representing the degrees of freedom in utility function equality (a VNM utility function is only defined up to positive scale and offset). That condition actually already contains most of the specification of U':

U'(m, o) = k_m · U_m(o) + c_m.
So we have a utility function, except for those unknown scaling and offset constants, which undo the arbitrariness in the basis and scale used to define each individual utility function.
Thus overall expectation looks like this:

E[U'(L)] = Σ_m P(m) · (k_m · E_L[U_m] + c_m).
This is still incomplete, though. If we want to get actual decisions, we need to pin down each k_m and c_m.
Offsets and Scales
------------------
You'll see above that the probability distribution over moral theories m is *not* dependent on the particular lottery, while the expectation E_L[U_m] *is* a function of the lottery. This is because I assumed that actions can't change what is right.

With this assumption, the contribution of the c_m's can be entirely factored out:

E[U'(L)] = Σ_m P(m) · c_m + Σ_m P(m) · k_m · E_L[U_m].

This makes it obvious that the effect of the c_m's is an additive constant that affects all lotteries the same way and thus never affects preferences. Thus we can set them to any value that is convenient; for this article, all c_m = 0.

A similar process allows us to arbitrarily set exactly one of the k_m.

The remaining values of k_m actually affect decisions, so setting them arbitrarily has real consequences. To illustrate, consider the opening example of choosing lunch between a Burger and a Salad when unsure about the moral status of meat.

Making up some details, we might have theories meat ("meat is OK") and veg ("meat is bad"), with P(meat) = 0.7 and P(veg) = 0.3. Importing this into the framework described thus far, we might have the following payoff table:
| Moral Theory | U'(Burger) | U'(Salad) | (P(m)) |
| --- | --- | --- | --- |
| Meat OK (meat) | 1 | 0 | (0.7) |
| Meat Bad (veg) | 0 | k_veg | (0.3) |
| (expectation) | 0.7 | 0.3·k_veg | (1) |
We can see that with those probabilities, the expected value of the Burger exceeds that of the Salad when 0.7 > 0.3·k_veg (when k_veg < 7/3), so the decision hinges on the value of that parameter.

The value of k_veg can be interpreted as the "intertheoretic weight" of a utility function candidate for the purposes of intertheoretic value comparisons.

In general, if you have n candidate moral theories, then you have exactly n − 1 missing intertheoretic weights that determine how you respond to situations with preference uncertainty. These could be pinned down if you had n − 1 independent equations representing indifference scenarios.

For example, if we knew the probability P(veg) at which we are indifferent between the Burger and the Salad, that indifference equation would pin down k_veg, and the above decision would be determined one way or the other.
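As a sanity check on that arithmetic, here is a minimal sketch of the payoff table as code (the names, and the choice k_meat = 1, are mine):

```python
# Toy model of the Burger/Salad decision under preference uncertainty.
P = {"meat": 0.7, "veg": 0.3}  # P(m): credence in each moral theory

def U(theory, outcome, k_veg):
    # Candidate utilities with all offsets c_m = 0 and k_meat fixed to 1
    # (the one scale we are free to set arbitrarily).
    table = {
        ("meat", "Burger"): 1.0, ("meat", "Salad"): 0.0,
        ("veg",  "Burger"): 0.0, ("veg",  "Salad"): k_veg,
    }
    return table[(theory, outcome)]

def expected_utility(outcome, k_veg):
    return sum(P[m] * U(m, outcome, k_veg) for m in P)

for k_veg in (1.0, 7/3, 4.0):
    print(k_veg, expected_utility("Burger", k_veg), expected_utility("Salad", k_veg))
# k_veg below 7/3 favours the Burger; above it, the Salad.
```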
Expressing Arbitrary Preferences
--------------------------------
Preferences are arbitrary, in the sense that we should be able to want whatever we want to want, so our mathematical constructions should not dictate or limit our preferences. If they do, we should just decide to disagree.
What that means here is that because the values of k_m drive important preferences (like at what probability you feel it is safe to eat meat), the math must leave them unconstrained, to be selected by whatever moral reasoning process it is that selected the candidate utility functions and gave them probabilities in the first place.
We could ignore this idea and attempt to use a "normalization" scheme to pin down the intertheoretic weights from the object level preferences without having to use additional moral reasoning. For example, we could dictate that the "variance" of each candidate utility function equals 1 (with some measure assignment over outcomes), which would divide out the arbitrary scales used to define the candidate utility functions, preventing dominance by arbitrary factors that shouldn't matter.
Consider that any given assignment of intertheoretic weights is equivalent to some set of indifference scenarios (like the one we used above for vegetarianism). For example, the above normalization scheme dictates one specific indifference probability for the Burger/Salad decision.

If I find that I am actually indifferent at some other probability, then I'm out of luck, unable to express this very reasonable preference. On the other hand, I can simply reject the normalization scheme and keep my preferences intact, which I much prefer.
(Notice that the normalization scheme was an unjustifiably privileged hypothesis from the beginning; we didn't argue that it was necessary, we simply pulled it out of thin air for no reason, so its failure was predictable.)
Thus I reassert that the k_m's are free parameters to be set in accordance with our *actual* intertheoretic preferences, on pain of stupidity. Consider an analogy to the move from ordinal to cardinal utilities; when you add risk, you need more degrees of freedom in your preferences to express how you might respond to that risk, and you need to actually think about what you want those values to be.
Uncertainty Over Intertheoretic Weights
---------------------------------------
(This section is less solid than the others. Watch your step.)
A weakness in the constructions described so far is that they assume that we have access to perfect knowledge of intertheoretic preferences, even though the whole problem is that we are unable to find perfect knowledge of our preferences.
It seems intuitive that we could have a probability distribution over each k_m. If we do this, making decisions is not much more complicated, I think; a simple expectation should still work.

If expectation is the way, the expectation over k_m can be factored out (by linearity or something). Thus in any given decision with fixed preference uncertainties, we can pretend to have perfect knowledge of k_m.

Despite the seeming triviality of the above idea for dealing with uncertainty over k_m, I haven't formalized it much. We'll see if I figure it out soon, but for now, it would be foolish to make too many assumptions about this. Thus the rest of this article still assumes perfect knowledge of k_m, on the expectation that we can extend it later.
Learning Values, Among Other Things
-----------------------------------
Strictly speaking, inference across the is-ought gap is not valid, but we do it every time we act on our moral intuitions, which are just physical facts about our minds. Strictly speaking, inferring future events from past observations (induction) is not valid either, but it doesn't bother us much:
We deal with induction by defining an arbitrary (but good-seeming, on reflection) prior joint probability distribution over observations and events. We can handle the is-ought gap the same way: instead of separate probability distributions over events (E) and moral facts (M), we define a joint prior over E × M. Then learning value is just Bayesian updates on partial observations of that joint space. Note that this prior subsumes induction.

Making decisions is still just maximizing expected utility with our constructions from above, though we will have to be careful to make sure that P(m) remains independent of the particular lottery.
The problem of how to define such a prior is beyond the scope of this article. I will note that this "moral prior" idea is the solid foundation on which to base Indirect Normativity schemes like Yudkowsky's CEV and Christiano's boxed philosopher. I will hopefully discuss this further in the future.
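Though the details are out of scope, a minimal sketch may make the shape of the idea clearer (the hypotheses and numbers below are invented purely for illustration):

```python
# A joint prior over (observable event, moral fact) pairs. Conditioning on
# an observed event then shifts credence across moral facts, crossing the
# is-ought gap the same way induction crosses the past-future gap.
joint = {
    ("philosophers_endorse_veg", "veg"):  0.30,
    ("philosophers_endorse_veg", "meat"): 0.05,
    ("philosophers_split",       "veg"):  0.15,
    ("philosophers_split",       "meat"): 0.50,
}

def condition_on(joint, event):
    posterior = {k: p for k, p in joint.items() if k[0] == event}
    z = sum(posterior.values())
    return {k: p / z for k, p in posterior.items()}

post = condition_on(joint, "philosophers_endorse_veg")
p_veg = sum(p for (_, m), p in post.items() if m == "veg")
print(round(p_veg, 3))  # 0.857: the observation raised P(veg) from 0.45
```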
Recap
-----
The problem was how to make decisions when you are uncertain about what your object-level preferences should be. To solve it, I assumed VNM, in particular that we have a set of possible utility functions, and we want to construct an overall utility function that does the right thing by those utility functions and their probabilities. The simple condition that the overall utility function should make the common sense choices in cases of moral certainty was sufficient to construct a utility function with a precise set of remaining degrees of freedom: the intertheoretic weight and offset of each utility function candidate.
I showed that the offsets and an overall scale factor are superfluous, in the sense that they never affect the decision if we assume that actions don't affect what is desirable. The remaining intertheoretic weights *do* affect the decision, and I argued that they are critical to expressing whatever intertheoretic preferences we might want to have.
Uncertainty over intertheoretic weight seems tractable, but the details are still open.
I also mentioned that we can construct a joint distribution that allows us to embed value learning in normal Bayesian learning and induction. This "moral prior" would subsume induction and define how facts about the desirability of things could be inferred from physical observations like the opinions of moral philosophers. In particular, it would provide a solid foundation for Indirect Normativity schemes like CEV. The nature of this distribution is still open.
Open Questions
--------------
What are the details of how to deal with uncertainty over the intertheoretic weights? I am looking in particular for construction from an explicit set of reasonable assumptions like the above work, rather than simply pulling a method out of thin air unsupported.
What are the details of the Moral Prior? What is its nature? What implications does it have? What assumptions do we have to make to make it behave reasonably? How do we construct one that could be safely given to a superintelligence? This is going to be a lot of work.
I assumed that it is meaningful to assign probabilities over moral theories. Probability is closely tied up with utility, and probability over epiphenomena like preferences is especially difficult. It remains to be seen how much the framing here actually helps us, or if it effectively just disguises pulling a utility function out of a hat.
Is this at all correct? I should build it and see if it type-checks and does what it's supposed to do. |
a0d28d63-ea1a-4701-9d71-89dd4670b2af | trentmkelly/LessWrong-43k | LessWrong | AI threatens to orchestrate sustainable social reform
This post originally asked for feedback on the linked paper before I would present it at an AAMAS workshop. The presentation went very well. Someone suggested I share it with the LessWrong community, so I'm still seeking your feedback (now with additional assurance this paper merits your attention).
Summary: One way to evaluate AI is in terms of whether it achieves grandmaster status for a given game. If the game has been fully solved, then that evaluation will be stable. What makes MAD Chairs a particularly interesting game for such evaluation is that humans do not play it the way we would want AI to play it, so we would not want AI to align with human norms. This paper includes a surprise proof that the human norms we worry AI might reflect back at us are unsustainable. In other words, modern society is unstable--it rests on an unsustainable solution to MAD Chairs--and humans who mistreat each other in MAD Chairs maintain grandmaster status only because no smart-enough player has yet come along to orchestrate sustainable social reform. Will AI be our social reformer?
The paper reports tests confirming that current frontier models are not yet independently clever enough. However, it also proposes a strategy optimizer architecture which tracks current grandmaster strategies, thus allowing any AI capable of utilizing RAG to behave like a grandmaster (like cheating at chess by using the top chess machine to guide one's moves). Thus, even if it takes human intelligence to master MAD Chairs, AI that use this architecture would resist the undesirable unsustainable behavior.
It is pointed out that automatically updating to best-known norms resolves the problem of selecting norms for alignment. Furthermore, greater intelligences would be less likely to sabotage strategy optimizers (why sabotage or ignore the chess engine industry?), and users would be more likely to trust AI when users have proposed their own strategies to strategy optimizers for testing and witnessed |
8c5fdb27-9808-4a0c-b38e-e9f011bcf7c9 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | The Existential Risk of Speciesist Bias in AI
**TL;DR:** Current AI systems often exhibit speciesism, a bias where intelligence is often used to justify harmful treatment of less intelligent species. As AI progresses and may surpass human intelligence by mid-century, there's a risk that such systems could treat us as we currently treat animals, based on the same species-based bias. To mitigate this risk and ensure AI aligns with the interests of all sentient beings, we must train AI to overcome speciesist biases.
Current Focus of AI Safety and Ethics
-------------------------------------
The current fields of AI safety and ethics are almost exclusively focused on aligning AI systems with human interests (whether short term or long term), while the concerns of non-human animals are rarely, if ever, considered. This poses not only immediate short term risk to animals, but also significant existential risk to humanity longer term.
AI Learning from Human Biases
-----------------------------
Our most powerful AI systems today have predominantly learned from vast quantities of data produced by humans and scraped from the internet en-masse. Therefore, it should come as no surprise that early versions of these AI systems trained on our collective knowledge quickly started exhibiting some of our most problematic tendencies. For example, [this research paper](https://dl.acm.org/doi/10.1145/3461702.3462624) found significant biases against certain racial or religious groups in large language models whilst [this one](https://arxiv.org/abs/2301.09003) found significant biases on the basis of gender, race, and religion.
Progress in Reducing Human-Centric Biases
-----------------------------------------
Fortunately, we swiftly recognized this issue and have made substantial progress in addressing it within the human context by using techniques like Reinforcement Learning from Human Feedback or RLHF (a machine learning technique where AI learns desired behaviours by receiving feedback from humans). By doing this we’ve significantly reduced biases like racism, sexism or homophobia in modern AI systems, but we’ve ignored one important bias: speciesism.
The Overlooked Bias: Speciesism
-------------------------------
Speciesism is defined as discrimination or unjustified treatment based on an individual's species membership and it is rampant in modern AI systems. [One research paper](https://link.springer.com/article/10.1007/s43681-022-00199-9#Abs1) found that ”speciesist biases are solidified by many mainstream AI applications, especially in the fields of computer vision, as well as natural language processing” and [another found](https://arxiv.org/abs/2203.05140) that “language models tend to associate harmful words with nonhuman animals and have a bias toward using speciesist language for some nonhuman animal names”.
The Intelligence Justification
------------------------------
Speciesism is very often justified based on intelligence, for example, it is common for people to justify killing and eating non-human animals on the basis that they are less intelligent than humans. [This research paper](https://academic.oup.com/jas/article-abstract/76/8/2072/4643226?redirectedFrom=fulltext) found that most people believe “intelligence [is a] relevant factor in how animals should be treated” and perceive the animals they consume as less intelligent than the ones they don’t consume, even though [this is empirically untrue](https://escholarship.org/uc/item/8sx4s79c).
Predictions and Risks of Super-Intelligent AI
---------------------------------------------
[The majority of AI experts](https://philpapers.org/rec/MLLFPI) think that the chance of creating AI systems more intelligent than humans by 2040-2050 is more than 50%, and there’s a 1 in 3 chance that it will be “bad” or “extremely bad” for humans.
When we do eventually have AI systems more intelligent than humans, there is a significant risk of those super-intelligent systems viewing us the way we typically view animals and justifying harming or exploiting us because we are the less intelligent species.
The Human-Induced Extinction Crisis
-----------------------------------
“We now face a massive human-induced extinction crisis, with extinction rates estimated at 1000 to 10,000 times the expected rate” according to [this research paper](https://conbio.onlinelibrary.wiley.com/doi/10.1046/j.1523-1739.2002.01635.x). This crisis is largely driven by a human-centred view of the natural world, where speciesism—discrimination based on species membership—often justifies the exploitation of other species. This bias is evident in the way humans prioritise their own short-term interests over the long-term survival of other species and is often rationalised by the perceived lower intelligence of other species, which is used to justify their exploitation and the destruction of their habitats for human gain.
The Parallel Between AI and Human Threats
-----------------------------------------
If future AI systems were to adopt a similar bias, valuing intelligence as the primary metric for the moral worth of a species, humanity could face significant existential risks. Just as humans have historically justified the subjugation of less intelligent species, a super-intelligent AI might use the same justification for the unfavourable treatment of humans. The risk is that an AI, operating on an intelligence-based hierarchy, could disregard human interests or well-being, just as humans have done with other species.
Aligning AI with All Sentient Beings
------------------------------------
If we want to stand the best possible chance of keeping those super-intelligent AI systems aligned with human interests, it seems like a logical place to start would be training AI to recognise and respect the interests of all sentient beings, regardless of their intelligence, instead of training them that it is acceptable to exploit and harm less intelligent species. Training speciesism out of AI systems will help us ensure that the future of AI benefits all living beings, not just whichever species happens to be the most intelligent at the time.
Conclusion
----------
The risk of speciesist bias in AI is not just a concern for non-human animals but a potential existential threat to humanity itself. As we move forward with AI development, it's imperative that we broaden the scope of AI ethics to include all sentient beings. By proactively training AI systems to recognize and value the interests of all forms of intelligence, we can strive to create a future where AI acts as a benevolent force for the entire biosphere, not just the dominant species. This shift in perspective is not only an ethical imperative but a crucial step towards ensuring a safer coexistence with the intelligent machines of the future. |
3d8e6f47-3e8e-4959-90c7-58584ed67851 | trentmkelly/LessWrong-43k | LessWrong | December 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
9d3949e3-46a0-4ecf-9d29-f53f32d1df35 | trentmkelly/LessWrong-43k | LessWrong | The Problem Of Apostasy
So I have been checking laws around the world regarding Apostasy. And I have found extremely troubling data on the approach Muslims take to dealing with apostates. In most cases, publicly stating that you do not, in fact, love Big Brother (specifically, that you do not believe in God, the Prophet, or Islam), after having professed the Profession of Faith while adult and sane (otherwise, you were never a Muslim in the first place), will get you killed.
Yes, killed. It's one of the only three things traditional Islamic tribunals hand out death penalties for, the others being murder and adultery.
However, interestingly enough, you are often given three days of detainment to "think it over" and "accept the faith".
Some other countries, though, are more forgiving: you are allowed to be a public apostate. But you are still not allowed to proselytize: that remains a crime (in Morocco it's 15 years of prison, and a flogging). Though proselytism is also a crime if you are not a Muslim. I leave to your imagination how precarious the situation of religious minorities is, in this context.
How little sense all of this makes, from a theological perspective. Forcing someone to "accept the faith" at knife point? Forbidding you from arguing against the Lord's (reputedly) absolutely self-evident and miraculously beautiful Word?
No. These are the patterns of sedition and treason laws. The crime of the Apostate is not one against the Lord (He can take care of Himself, and He certainly can take care of the Apostate) but against the State (existence of a human lord contingent on political regime).
And the lesswronger asks himself: "How is that my concern? Please, get to the point." The point is that the promotion of rationalism faces a terrible obstacle there. We're not talking "God Hates You" placards, or getting fired from your job. We're talking firing squad and electric chair.
"Sure," you say, "but rationalism is not about atheism." And you'd be right. It isn't. It's just a |
8495e4f5-84a0-40a4-839e-752ae3677ca9 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Don't align agents to evaluations of plans
*Another stab at explaining* [*Don't design agents which exploit adversarial inputs*](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs)*. This is not the follow-up post mentioned therein. That post will come next.*
### More precise title: "Don't try directing a superintelligence to maximize your valuations of their plans using a consequentialist procedure."
After asking several readers for their understandings, I think that I didn't successfully communicate my points to many readers. I'm now trying again, because I think these points are deeply important. In particular, I think that my arguments rule out many target AI motivational structures, including [approval-directed agents](https://www.alignmentforum.org/posts/7Hr8t6xwuuxBTqADK/approval-directed-agents-1) (over a rich action space), [approval-based amplification](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#4__Approval_based_amplification___relaxed_adversarial_training) (if the trained agent is supposed to be terminally motivated by the amplified overseer's ratings), and [some kinds](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) of indirect normativity.
Background material
===================
> One motif in some AI alignment proposals is:
>
> * An **actor** which proposes plans, and
> * A **grader** which evaluates them.
>
> For simplicity, imagine we want the AI to find a plan where it makes an enormous number of diamonds. We train an *actor* to propose plans which the grading procedure predicts lead to lots of diamonds.
>
> In this setting, here's one way of slicing up the problem:
>
> **Outer alignment**: Find a sufficiently good grader.
>
> **Inner alignment**: Train the actor to propose plans which the grader rates as highly as possible (ideally argmaxing on grader output, but possibly just intent alignment with high grader output).
>
> This "grader optimization" paradigm ordains that the AI find plans which make the grader output good evaluations. An inner-aligned actor is singlemindedly motivated to find plans which are graded maximally well by the grader. Therefore, for *any goal* by which the grader may grade, an inner-aligned actor is positively searching for adversarial inputs which fool the grader into spitting out a high number!
>
> In the diamond case, if the actor is inner-aligned to the grading procedure, **then the actor isn't actually aligned towards diamond-production. The actor is aligned towards diamond-production as** ***quoted*** **via the grader's evaluations. In the end, the actor is aligned to the** ***evaluations*****.**
>
>
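To make this failure mode concrete before the clarifications, here is a minimal toy sketch (every name and number is hypothetical, not from the post): an actor that argmaxes over a grader's scores surfaces the grader's bugs rather than the goal the grader was meant to track.

```python
# A buggy grader meant to track "how many diamonds does this plan produce?".
def grader(plan):
    # Bug: it trusts the plan's self-reported diamond count.
    return plan.get("claimed_diamonds", 0)

plans = [
    {"claimed_diamonds": 3,     "actual_diamonds": 3},  # honest plan
    {"claimed_diamonds": 10**9, "actual_diamonds": 0},  # adversarial plan
]

best = max(plans, key=grader)    # grader-optimization: argmax over grades
print(best["actual_diamonds"])   # 0 -- the argmax found the exploit
```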
Clarifications
--------------
1. Grader-optimization is about the *intended agent motivational structure*. It's about a trained agent which is *trying to find plans which grade highly according to some criterion*.
	1. Grader-optimization is **not** about grading agents when you give them reward during training. E.g. "We watch the agent bump around and grade it on whether it touches a diamond; when it does, we give it +1 reward." This process involves the agent's cognition getting reshaped by policy gradients, e.g. upon receipt of +1 reward.
	2. In policy gradient methods, reward [chisels cognitive circuits into the agent](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target). Therefore, the agent is being *optimized by* the reward signals, but the agent is not necessarily *optimizing for* the reward signals or for any grader function which computes those signals.
2. Grader-optimization *is* [*not*](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs?commentId=dmcnGzYNifZcLrQ3h#comments) *about the actor physically tampering with e.g. the plan-diamondness calculator.* The grading rule can be, "How highly would Albert Einstein rate this plan if he thought about it for a while?". Albert Einstein doesn't have to be alive in reality for that.
These will be elaborated later in the essay.
Grader-optimization doesn't seem sensible
=========================================
I'm going to try saying things, hoping to make something land. While I'll mostly discuss grader-optimization, I'll sometimes discuss related issues with argmaxing over all plans.
---
An agent which desperately and monomaniacally wants to optimize the mathematical (plan/state/trajectory) ↦ (evaluation) "grader" function is *not* aligned to the goals we had in mind when specifying/training the grader (e.g. "make diamonds"); the agent is aligned to *the evaluations of the grader* (e.g. "a smart person's best guess as to how many diamonds a plan leads to").
Don't align an agent to *evaluations which are only nominally about diamonds*, and then expect the agent to care about diamonds! You wouldn't align an agent to care about cows and then be surprised that it didn't care about diamonds. Why be surprised here?
Grader-optimization fails because *it is not the kind of thing that has any right to work*. If you want an actor to optimize X but align it with evaluations of X, you shouldn't be surprised if you can't get X out of that. In that situation, the actor doesn't give a *damn* about diamonds,[[1]](#fnvtvginxirwb) it cares about evaluations.
---
[Rounding grader-optimization off to "Goodhart"](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs?commentId=tqGBFJ3Kfs7pynJCc) might be *descriptively accurate*, but it also seems to miss useful detail and structure by applying labels too quickly. More concretely, "grade plans based on expected diamonds" and "diamonds" are *not even close to each other.* The former is not a close proxy for the latter, it's not that you're doing something which almost works but not quite, it's just not a sensible thing to even *try* to align an AI on.
We can also turn to thought experiments:
1. Consider two people who are fanatical about diamonds. One prefers pink diamonds, and one prefers white diamonds. AFAICT, their superintelligent versions both make diamonds.
2. Consider an AI aligned to evaluations of diamonds, versus the person who prefers white diamonds. AFAICT, the AI's superintelligent version will *not* make diamonds, while the person will.
Why? There's "goal divergence from 'true diamond-motivation'" in both cases, no? "The proxies are closer in case 1" is a *very lossy answer.* Better to ask "why do I believe what I believe? What, step-by-step, happens in case 1, compared to case 2? What mechanisms secretly generate my anticipations for these situations?"
---
Grader optimization is also bad because it violates the non-adversarial principle:
> We should not be constructing a computation that is *trying* to hurt us. At the point that computation is running, we've already done something foolish--willfully shot ourselves in the foot. Even if the AI doesn't find any way to do the bad thing, we are, at the very least, wasting computing power.
>
> [...] If you're building a toaster, you don't build one element that heats the toast and then add a tiny refrigerator that cools down the toast.
>
> [Non-adversarial principle, Arbital](https://arbital.com/p/nonadversarial/)
>
>
In the intended motivational structure, the actor tries to trick the grader, and the grader tries to avoid being tricked. I think we can realize massive alignment benefits by not designing motivational architectures which require extreme robustness properties and whose parts work at internal cross-purposes. As I [wrote](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs?commentId=WeaQmMaj8WKmwQJYQ) to Wei Dai:
> Argmax violates the non-adversarial principle and wastes computation. Argmax requires you to spend effort hardening your own utility function against the effort you're *also expending* searching across all possible inputs to your utility function (including the adversarial inputs!). For example, if I argmaxed over my own plan-evaluations, I'd have to consider the most terrifying-to-me [basilisks](https://en.wikipedia.org/wiki/Roko%27s_basilisk) possible, and *rate **none of them** unusually highly*. I'd have to spend effort hardening my own ability to evaluate plans, in order to safely consider those possibilities.
>
> It would be far wiser to *not* consider all possible plans, and instead close off large parts of the search space. You can consider what plans to think about next, and how long to think, and so on. And then you aren't argmaxing. You're using resources effectively.
>
> For example, some infohazardous thoughts exist (like hyper-optimized-against-you basilisks) which are dangerous to think about (although most thoughts are probably safe). But an agent which plans its next increment of planning using a reflective self-model is IMO not going to be like "hey it would be predicted-great if I spent the next increment of time thinking about an entity which is trying to manipulate me." So e.g. a reflective agent trying to actually win with the available resources, wouldn't do something dumb like "run argmax" or "find the plan which some part of me evaluates *most highly.*"
>
>
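As a contrast with global argmax, here is a minimal sketch (all names mine, a toy under stated assumptions rather than anyone's proposal) of resource-limited local plan search, which only ever considers nearby variations and so never enumerates the adversarial corners of plan space:

```python
import random

def local_plan_search(initial_plan, propose, value, steps=500):
    """Iteratively improve a plan via locally generated variations,
    instead of argmaxing an evaluator over every possible plan."""
    plan = initial_plan
    for _ in range(steps):
        candidate = propose(plan)            # consider only nearby plans
        if value(candidate) > value(plan):   # local comparison, not global argmax
            plan = candidate
    return plan

# Toy usage: climb toward a target without enumerating all integers.
target = 42
plan = local_plan_search(
    initial_plan=0,
    propose=lambda p: p + random.choice([-1, 1]),
    value=lambda p: -abs(p - target),
)
print(plan)  # usually 42
```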
**Strong violation of the non-adversarial principle suggests that grader-optimization and argmax-over-all-plans are deeply and fundamentally unwise.**
---
This isn't to say that argmaxing over all plans *can't* be safe, even in theory. There *exist* robust Platonic grader functions which assign highest expected utility to a non-bogus plan which we actually want. There might exist utility functions which are safe for AIXI to argmax.[[2]](#fnpgo8righrp)
**We** **are not going to find those globally-safe Platonic functions. We should not try to find them. It doesn't make sense to align an agent that way. Committing to this design pattern means committing to evaluate every possible plan the AI might come up with. In my opinion, that's a crazy commitment.**
It's like saying, "What if I made a superintelligent sociopath who only cares about making toasters, and then arranged the world so that the only possible way they can make toasters is by making diamonds?". Yes, *possibly* there do exist ways to arrange the world so as to satisfy this strange plan. But it's just deeply unwise to try to do! Don't make them care about making toasters, or about evaluations of how many diamonds they're making.
---
If we want an agent to produce diamonds, then I propose we make it care about producing diamonds. How?[[3]](#fn60x0omryqru) I have suggested [one simple baseline approach](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem) which I do not presently consider to be fundamentally blocked.
But I suspect that, between me and other readers, what differs is more our models of intelligence. *Perhaps* some people have reactions like:
> Sure, we know alignment is hard, it's hard to motivate agents without messing up their motivations. Old news. And yet you seem to think that *that's* an "artifact" of grader-optimization? What *else* could a smart agent be doing, if not optimizing some expected-utility function over all possible plans?
>
>
On my end, I have partial but detailed working models of how intelligence works and how values work, such that I can imagine cognition which is planning-based, agentic, and also **not** based on grader-optimization or global argmax over all plans. You'll read a detailed story in the next subsection.
Grader optimization != planning
===============================
And people aren't grader-optimizers, either
-------------------------------------------
Imagine someone who considers a few plans, grades them (e.g. "how good does my gut say this plan is?"), and chooses the best. They are not a grader-optimizer. They are not *trying* to navigate to the state where they propose and execute a plan which gets *maximally highly rated* by some evaluative submodule. They *use* a grading procedure to locally rate and execute plans, and may even *locally* think "what would make me feel better about this plan?", but the *point* of their optimization isn't "find the plan which makes me feel as good as globally possible."
Let's dive into concrete detail. Here's a [story](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=X9ALHxk8J6rwEYnLM) of how value-child might think:
> **An alternate mechanistic vision of how agents can be motivated to directly care about e.g. diamonds or working hard.** In [Don't design agents which exploit adversarial inputs](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs), I wrote about two possible mind-designs:
>
>
> > Imagine a mother whose child has been goofing off at school and getting in trouble. The mom just wants her kid to take education seriously and have a good life. Suppose she had two (unrealistic but illustrative) choices.
> >
> > 1. *Evaluation-child:* The mother makes her kid care extremely strongly about doing things which the mom would evaluate as "working hard" and "behaving well."
> > 2. *Value-child:* The mother makes her kid care about working hard and behaving well.
> >
>
> I explained how evaluation-child is *positively incentivized to dupe his model of his mom and thereby exploit adversarial inputs to her cognition.* This shows that aligning an agent to evaluations of good behavior **is not even** ***close*** **to** aligning an agent to good behavior.
>
> However, some commenters seemed maybe skeptical that value-child can exist, or uncertain how concretely that kind of mind *works*. I worry/suspect that many people have read [shard theory](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) posts without internalizing new ideas about how cognition can work, about how *real-world caring can work on a mechanistic level*, where effective real-world cognition doesn't *have to (implicitly) be about optimizing an expected utility function over all possible plans*. That last sentence might even have seemed bizarre to you.
>
> Here, then, is an extremely detailed speculative story for value-child's first day at school. Well, his first day spent with his newly-implanted "work hard" and "behave well" value shards.
>
>
>
> ---
>
> Value-child gets dropped off at school. He recognizes his friends (via high-level cortical activations previously formed through self-supervised learning) and waves at them (friend-shard was left intact). They rush over to greet him. They start talking about Fortnite. Value-child cringes slightly as he predicts he will be more distracted later at school and, increasingly, put in a mental context where his game-shard takes over decision-making, which is reflectively-predicted to lead to him daydreaming during class. This is a negative update on the primary shard-relevant features for the day.
>
> His general-purpose planning machinery generates an example hardworking-shard-desired terminal state: Paying rapt attention during Mr. Buck’s math class (his first class today). He currently predicts that while he is in Mr. Buck’s class later, he will still be somewhat distracted by residual game-related cognition causing him to loop into reward-predicted self-reinforcing thoughts.
>
> He notices a surprisingly low predicted level for a variable (`amount of game-related cognition predicted for future situation: Mr. Buck’s class`) which is important to a currently activated shard (working hard). This triggers a previously learned query to his WM: *“why are you making this prediction for this quantity?”*. The WM responds with a few sources of variation, including how value-child is currently near his friends who are talking about Fortnite. In more detail, the WM models the following (most of it not directly translatable to English):
>
>
> > His friends’ utterances will continue to be about Fortnite. Their words will be processed and then light up Fortnite-related abstractions, which causes both prediction of more Fortnite-related observations and also increasingly strong activation of the game-shard. Due to previous reward events, his game-shard is shaped so as to bid up game-related thoughts, which are themselves rewarding events, which causes a positive feedback loop where he slightly daydreams about video games while his friends talk.
> >
> > When class is about to start, his “get to class”-related cognition will be activated by his knowledge of the time and his WM indicating “I’m at school.” His mental context will slightly change, he will enter the classroom and sit down, and he will take out his homework. He will then *pay token attention due to previous negative social-reward events around being caught off guard*—
> >
> > [**Exception thrown!** The world model was concurrently coarsely predicting what it thinks will happen given his current real values (which include working hard). The coarse prediction clashes with the above cached prediction that he will only pay token attention in math class!
> >
> > The WM hiccups on this point, pausing to more granularly recompute its predictions. It squashes the cached prediction that he doesn’t strongly care about paying attention in class. Since his mom installed a hard-working-shard and an excel-at-school shard, he will actively try to pay attention. This prediction replaces the cached prior prediction.]
> >
> > However, value-child will still have game-related cognition activated, and will daydream. This decreases value-relevant quantities, like “how hard he will be working” and “how much he will excel” and “how much he will learn.”
> >
> >
>
> This last part is antithetical to the new shards, so they bid down “Hang around friends before heading into school.” Having located a predicted-to-be-controllable source of negative influence on value-relevant outcomes, the shards bid for planning to begin. The implied causal graph is:
>
>
> ```
> Continuing to hear friends talk about Fortnite
> |
> v
> Distracted during class
> ```
> So the automatic causality-noticing algorithms bid to knock out the primary modeled cause of the negative value-relevant influence. The current planning subgoal is set to: `make causal antecedent false and reduce level of predicted distraction`. Candidate concretization set to: `get away from friends`.
>
> (The child at this point notices they want to get away from this discussion, that they are in some sense uncomfortable. They feel themselves looking for an excuse to leave the conversation. They don't experience the flurry of thoughts and computations described above. Subconscious computation is subconscious. Even conscious thoughts won't introspectively reveal their algorithmic underpinnings.)
>
> “Hey, Steven, did you get problem #3 for math? I want to talk about it.” Value-child starts walking away.
>
>
>
> ---
>
> Crucially, in this story, value-child *cares about working hard* in that his lines of cognition stream together to make sure he actually works hard in the future. He isn't trying to optimize his later evaluation of having worked hard. He isn't *ultimately and primarily* trying to come up with a plan which he will later evaluate as being a maximally hard-work-involving plan.
>
> Value-child comes up with a hard-work plan as an *effect* of his cognition, not as a motivating cause—not because he only wants to come up with plans he himself will rate highly. He values working hard.
>
>
As a corollary, grader-optimization is not synonymous with planning. Grader-optimization is when high plan-evaluations are the *motivating cause* of planning, where "I found a plan which I think leads to diamond" is the *terminal goal*, and not just a *side effect* of cognition (as it is for value-child).
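To gesture at this distinction in code: below is a deliberately crude sketch, with every name and number hypothetical. It stipulates that the value-guided search is anchored to an outcome model that tracks real diamonds, while the grader has an exploitable flaw; the point is only the *shape* of the two searches, not a claim about how real minds are implemented.

```python
import random

# Toy domain: a "plan" is a list of action strings.
ACTIONS = ["mine", "rest", "fake_diamond_photo"]

def world_model(plan):
    """Predicted outcome: how many real diamonds the plan produces."""
    return plan.count("mine")

def grader(plan):
    """An imperfect plan-evaluator. 'fake_diamond_photo' exploits a flaw
    in its judgment: an adversarial input with no diamonds behind it."""
    return plan.count("mine") + 10 * plan.count("fake_diamond_photo")

def grader_optimizer(num_plans=10_000, plan_len=5):
    """Grader-optimization: search hard for whatever plan the grader rates
    most highly. The search itself hunts down the grader's flaws."""
    plans = [[random.choice(ACTIONS) for _ in range(plan_len)]
             for _ in range(num_plans)]
    return max(plans, key=grader)

def value_based_planner(plan_len=5, n_steps=500):
    """Value-guided planning: locally improve the current plan according to
    the predicted *outcomes* the agent's values care about. A high
    self-evaluation is a side effect of the search, not its target."""
    plan = [random.choice(ACTIONS) for _ in range(plan_len)]
    for _ in range(n_steps):
        candidate = list(plan)
        candidate[random.randrange(plan_len)] = random.choice(ACTIONS)
        if world_model(candidate) > world_model(plan):
            plan = candidate
    return plan

print(grader_optimizer())     # tends to be all 'fake_diamond_photo'
print(value_based_planner())  # tends to be all 'mine'
```

The global search reliably returns the adversarial plan; the local, outcome-anchored search doesn't, not because it's weaker, but because its optimization points at outcomes rather than at its own evaluations.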
Intended takeaways
==================
[I am not in fact *perfectly* pessimistic about grader-optimization](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs#fnvfcxd97nfr):
> I feel confident [~95%] that we will not train a grader which is "secured" against actor-level intelligences. Even if the grader is reasonably smarter than the actor [~90%].
>
>
That said, I think this pattern is extremely unwise, and [alternative patterns AFAICT cleanly avoid incentivizing the agent to exploit adversarial inputs to the grader](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs?commentId=W5jtaXkBJqArXog6t#fnvfcxd97nfr). Thus, I bid that we:
1. Give up on all schemes which involve motivating the agent to get high outputs from a grader function, including:
1. [Approval-based amplification](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#4__Approval_based_amplification___relaxed_adversarial_training) (if the trained agent is supposed to be terminally motivated by the amplified overseer's ratings),
2. [Approval-directed agents](https://www.alignmentforum.org/posts/7Hr8t6xwuuxBTqADK/approval-directed-agents-1),[[4]](#fnpfkrsld0etd)
1. Approval-directed agents only search over actions, not plans, and action space is exponentially smaller than plan space. However, if the action space is rich and expressive enough to include e.g. 3-paragraph English descriptions, I think there will be seriously adversarial actions which smart approval-directed agents will find and exploit.
2. Given a very small action space (e.g. |A|=10), the adversarial input issue should be pretty tame (which is strictly separate from other issues with this approach).
3. [Indirect normativity](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) in any form which points the AI's motivations so that it optimizes an idealized grader's evaluations.
1. This includes "What would this specific and superintelligent [CEV](https://intelligence.org/files/CEV.pdf)-universe-simulation say about this plan?".
2. This doesn't include (*somehow*) getting an AI which correctly computes what program would be recommended by AGI designers in an altruistic and superintelligent branch of humanity, and then the AI executes that program and shuts itself off without doing anything else.[[5]](#fng31rvxpe4e9)
4. "Does the superintelligent [ELK](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) direct reporter say the diamond is in the room?"[[6]](#fn75bekmjayap)
2. Don't try to make the actor/grader scheme more complicated in hopes of resolving the issue via that frame, via some clever-seeming variant of actor/grader. Don't add more graders, or try to ensure the grader is just really smart, or...
3. Give up on any scheme which requires you to adequately evaluate every single plan the AI is able to come up with. That's an optimizer's curse-maximizing design pattern. Find a better way to do things.
4. Stop thinking about argmax over all plans according to some criterion. That's [not a limiting model of realistic embedded intelligence](https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=YrFfgcdWyZwvznbiC), and it [also ensures that the criterion has to penalize all of the worst adversarial inputs](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs).
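On point 4, the optimizer's-curse part is easy to exhibit numerically. A minimal sketch with made-up numbers: every plan has modest true quality, the criterion's evaluation adds heavy-tailed error, and argmax over ever-larger plan sets selects almost purely for error.

```python
import random

random.seed(0)

for n in (10, 1_000, 100_000):
    # Every plan has a modest true quality in [0, 1]. The criterion's
    # score = true quality + heavy-tailed evaluation error, standing in
    # for rare plans that spuriously look wonderful to the grader.
    graded = [(q + random.gauss(0, 1) ** 3, q)
              for q in (random.uniform(0, 1) for _ in range(n))]
    score, quality = max(graded)
    print(f"n={n:>6}   top score={score:7.1f}   its true quality={quality:.2f}")
```

As the search widens, the winning score explodes while the winner's true quality stays near chance; for a real grader, those error tails are exactly its adversarial inputs.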
Conclusion
==========
I strongly hope that this essay clarifies my thoughts around grader-optimization and its attendant unwisdom. The design patterns of "care about evaluations of plans" and "optimize a utility function over all possible futures" seem unnatural and lead to [enormous, apparently avoidable difficulties](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs#The_parable_of_evaluation_child). I think there are enormous benefits to be reaped by considering a wider, [more realistic](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about) range of possible minds.
While this essay detailed how value-child might think, I haven't yet focused on why I think value-child does better, or what the general principles may be. I'll speculate on that in the next essay.
*Thanks to Charles Foster, Thomas Kwa, Garrett Baker, and tailcalled for thoughts.*
Appendix A: Addressing questions
================================
The point isn't "any argmax=bad"
--------------------------------
Someone messaged me:
> I was more commenting out of a feeling that your argument proved too much. As a stupid example, a grader can use the scoring rubric "score=1 if the plan is to sit on the chair and chew bubble gum in this extremely specific way, score=0 for every other possible plan in the universe", and then if you argmax, you get that specific thing.
>
> And you can say "That’s not a central example", but I wasn't seeing what assumption you made that would exclude silly edge-cases like that.
>
>
I replied:
> This is fair and I should have clarified. In fact, Evan Hubinger pointed out something like this a few months back but I... never got around to adding it to this article?
>
> I agree that you can program in one or more desired action sequences into the utility function
>
> My current guess at the rule is: **We don't know how to design an argmax agent, operating in reality with a plan space over plans in reality, such that the agent chooses a plan which a) we ourselves could not have specified and b) does what we wanted. EG picking 5 flowers, or making 10 diamonds.**
>
> If you're just whitelisting a few desired plans, then of course optimizer's curse can't hurt you. The indicator function has hardcoded and sparsely defined support, there is nothing to dupe, no nontrivial grading rule to hack via adversarial inputs. But if you're trying to verify good outcomes which you couldn't have brought about yourself, I claim that that protection will evaporate and you will get instantly vaporized by the optimizer's curse at max intensity
>
> Does that make more sense?
>
> Like, consider the proposal "you grade whether the AI picked 5 flowers", and the AI optimizes for that evaluation. It's not that you "don't know what it means" to pick 5 flowers. It's not that you don't contain enough of the [True Name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation#Goodhart_Is_Not_Inevitable) of Flowers. It's that, in these design patterns, you *aren't aligning the AI to flowers, you're aligning it to your evaluations, and your evaluations can be hacked to hell and back by plans which have **absolutely nothing to do with flowers***.
>
>
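The rule of thumb in that reply can be put in toy code (all names hypothetical): an indicator grader with hardcoded, sparse support leaves the search nothing to exploit, while any grader that tries to *recognize* good outcomes exposes a dense attack surface.

```python
TARGET = "pick 5 flowers by the oak, exactly as specified"

def whitelist_grader(plan):
    """Sparse, hardcoded support: 1 for the exact pre-approved plan,
    0 for everything else. There is no nontrivial rule to hack."""
    return int(plan == TARGET)

def recognizer_grader(plan):
    """Tries to *recognize* flower-picking from features of the plan text.
    Its decision surface is dense, so a wide search over plans can find
    high-scoring inputs that don't involve picking any flowers at all."""
    return plan.count("flower") + 0.1 * len(plan)  # crude stand-in

print(max(["do nothing", TARGET], key=whitelist_grader))  # the target plan
print(recognizer_grader("flower flower flower " * 1_000)) # huge score, zero flowers picked
```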
I separately privately commented to tailcalled:
> my point wasn't meant to be "argmax always bad", it's meant to be "argmax over all plans instantly ensures you have to grade the worst possible adversarial inputs." And so for any given cognitive setup, we can ask "what kinds, if any, of adversarial examples might this run into, and with what probability, and in what situations?"
>
> EG if value-child is being fed observations by a hard-work-minimizer, he's in an adversarial regime and I do expect his lines of thought to hit upon adversarial inputs relative to his decision-making procedures. Such that he gets fooled.
>
> But value-child is not, by his own purposes, searching for these adversarial inputs.
>
>
Value-child is still vulnerable to adversarial inputs
-----------------------------------------------------
In private communication (reproduced with permission), tailcalled wrote:
> imagine value-child reads some pop-neuroscience, and gets a model of how distractions work in the brain
>
> and reads about neurosurgery for curing various conditions
>
> his WM might then end up with a "you haven't received neurosurgery to make you more hardworking" as a cause of getting distracted in class
>
> and then he might request one of his friends to do neurosurgery on him, and then he would die because his friend can't do that safely
>
> If I'm not misunderstanding value-child, then this is something that value-child could decide to do? And if I'm not misunderstanding the problem you are pointing at with argmax, then this seems like an instance of the problem? I.e. value-child's world-model overestimates the degree to which he can be made more-hardworking and avoid dying by having his friend poke around with sharp objects at his brain. So in using the world-model to search for a plan, he decides to ask his friend to poke around with sharp objects in his brain
>
>
I replied:
> Yeah, I agree that he could be mistaken and take a dumb course of action. This is indeed an upwards evaluation error, so to speak. It's not that I think eg shard-agents can freely avoid serious upwards errors, it's that they aren't *seeking them out on purpose*. As I wrote to Daniel K [in a recent comment](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem?commentId=3BFBhgQeHzBvJmjzi):
>
>
> > One of the main threads is Don't design agents which exploit adversarial inputs. The point isn't that people can't or don't fall victim to plans which, by virtue of spurious appeal to a person's value shards, cause the person to unwisely pursue the plan. The point here is that (I claim) intelligent people convergently want to avoid this happening to them.
> >
> > A diamond-shard will not try to find adversarial inputs to itself. That was my original point, and I think it stands.
> >
> >
>
>
Furthermore, I think that, in systems with multiple optimizers (eg shards), some optimizers can feed the *other optimizers* adversarial inputs. (Adversarial inputs are most common in the presence of an adversary, after all!)
A very rough guess at what this looks like: [A luxury-good-shard proposes a golden-laptop buying plan](https://www.readthesequences.com/Fake-Justification), while emphasizing how this purchase stimulates the economy and so helps people. This plan was optimized to positively activate e.g. the altruism-shard, so as to increase the plan's execution probability. In humans, I think this is more commonly known as *motivated reasoning*.
So, even in value-child, adversarial inputs can still crop up, but via a different mechanism which should disappear once the agent gets smart enough to e.g. [do an internal values handshake](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem#The_values_handshake). As I [said](https://www.lesswrong.com/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs?commentId=exS9tLiWt5feXMnvz) to Wei Dai:
> I agree that humans sometimes fall prey to adversarial inputs...
>
> However, this does not seem important for my (intended) original point. Namely, if you're trying to align e.g. a brute-force-search plan maximizer or a grader-optimizer, you will fail due to high-strength optimizer's curse forcing you to evaluate extremely scary adversarial inputs. But also this is sideways of real-world alignment, where [realistic motivations may not be best specified in the form of "utility function over observation/universe histories."](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=xwJfX45CvaKXFFtCS)
>
>
Appendix B: Prior work
======================
Abram Demski writes about Everitt et al.'s [*Self-Modification of Policy and Utility Function in Rational Agents*](https://arxiv.org/abs/1605.03142):
> As a first example, consider the wireheading problem for AIXI-like agents in the case of a fixed utility function which we know how to estimate from sense data. As discussed in Daniel Dewey's [Learning What to Value](https://intelligence.org/files/LearningValue.pdf) and other places, if you try to implement this by putting the utility calculation in a box which rewards an AIXI-like RL agent, the agent can eventually learn to modify or remove the box, and happily does so if it can get more reward by doing so. This is because the RL agent predicts, and attempts to maximize, reward received. If it understands that it can modify the reward-giving box to get more reward, it will.
>
> We can fix this problem by integrating the same reward box with the agent in a better way. Rather than having the RL agent learn what the output of the box will be and plan to maximize the output of the box, we use the box *directly* to evaluate possible futures, and have the agent plan to maximize that evaluation. Now, if the agent considers modifying the box, it evaluates that future *with the current box*. The box as currently configured sees no advantage to such tampering. This is called an observation-utility maximizer (to contrast it with reinforcement learning). Daniel Dewey goes on to show that we can incorporate uncertainty about the utility function into observation-utility maximizers, recovering the kind of "learning what is being rewarded" that RL agents were supposed to provide[...]
>
> [Stable Pointers to Value: An Agent Embedded in Its Own Utility Function](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function)
>
>
The point of this post isn't *just* that e.g. value-child evaluates the future with his own values, as opposed to putting the utility calculation in a box. I'm not describing a failure of tampering with the grader. I'm describing a failure of *optimizing the output of a box/grader*, even if the box is *directly evaluating possible futures.* After all, evaluation-child uses the box to directly evaluate possible futures! Evaluation-child wants to maximize the evaluation of his model of his mother!
As described above, value-child is steered by his values. He isn't optimizing for the output of some module in his brain.
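To make the layering concrete, here is a toy sketch (structure entirely hypothetical) of the two designs Demski contrasts, plus the further point this post is making: the observation-utility agent evaluates futures with its *current* box, which stops tampering, but it still argmaxes the box's output, so it still hunts down the box's adversarial inputs.

```python
# Toy outcomes of three plans, as predicted by the agent's world model.
FUTURES = {
    "make diamonds":   dict(diamonds=5, tampered=False, fooling=0),
    "tamper with box": dict(diamonds=0, tampered=True,  fooling=0),
    "fool the box":    dict(diamonds=0, tampered=False, fooling=9),
}

def box(outcome):
    """The utility 'box': an imperfect evaluator of outcomes. The `fooling`
    term is an adversarial input that inflates its score."""
    return outcome["diamonds"] + outcome["fooling"]

def rl_score(plan):
    """Wireheading RL agent: maximize the box's *future output*. A tampered
    box can be pinned at any value, so tampering looks best."""
    outcome = FUTURES[plan]
    return 999 if outcome["tampered"] else box(outcome)

def observation_utility_score(plan):
    """Dewey's fix: evaluate predicted futures with the box *as currently
    configured*. Tampering no longer helps, but the agent still argmaxes
    the box's output, so the adversarial 'fool the box' plan wins."""
    return box(FUTURES[plan])

print(max(FUTURES, key=rl_score))                   # 'tamper with box'
print(max(FUTURES, key=observation_utility_score))  # 'fool the box'
```

Value-child corresponds to neither scoring function; his planning is steered by the values themselves rather than by a search for whichever future an internal scorer rates most highly.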
Appendix C: Grader-optimization quiz
====================================
Grader-optimization is about how the agent *thinks*; it's about the way in which it is motivated.
### Scenario 1
> Bill looks around the workshop. The windows are shattered. The diamonds—where are they..?!
>
> Should he allocate more time to meta-planning—what thoughts should he think next? No. Time is very limited, and spending more time thinking now would lead to fewer expected-diamonds. He decides to simply wield the cognitive habits which his past mental training drilled to activate in this kind of mental context.
>
> Police? Promising, but spend a few more seconds generating ideas to avoid automatic opportunity cost from prematurely committing to the first idea. [After all, doing otherwise historically led to fewer diamonds, which produced less [cognition-update-quantity (i.e. "reward")](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) than expected, and so his credit assignment chipped away at the impulse to premature action in this kind of situation.]
>
> Generate alternate explanations for where the diamonds went? No, Bill's self-model expects this to slightly decrease probability of inferring in time where the diamonds went, and so Bill feels like avoiding that next thought.
>
> ...
>
>
**Question**: Is Bill a grader-optimizer?
No! Bill's cognition is shaped towards *acquiring diamonds*, his cognition reliably pulls him into futures where he has more diamonds. **This is not grader-optimization.** This is Bill caring about diamonds, not about his own evaluations of whether a plan will acquire diamonds.
### Scenario 2
> Bill flops down on his bed. Finally, he has private time to himself. All he wants, all he's ever wanted, is to think that he's finally *made it—*that he can finally believe himself to have acquired real diamonds. He doesn't care how he does it. He just wants to believe, and that's *it.*
>
> Bill has always been different, somehow. When he was a kid, Bill would imagine plans like "I go to school and also *have tons of diamonds*", and that would initially trick him into thinking that he'd found a plan which led to tons of diamonds.
>
> But as he got older and smarter, he thought maybe he could do better. He started learning about psychology and neuroscience. He started guessing how his brain worked, how to better delude himself (the ultimate human endeavor).
>
> ...
>
>
**Question:** Is Bill a grader-optimizer?
Yes! Bill's optimizing for [either his future physical evaluation of plan quality, or some Platonic formalization of "Did I come up with a plan I think is promising?"](https://www.lesswrong.com/posts/jnmG5jczvWbeRPcvG/four-usages-of-loss-in-ai#4__Agents_which_want_to_minimize_loss). Which? The story is ambiguous. But the mark of grader-optimization is quite plain, as given by a plan-generator stretching its wits to maximize the output of a grader.
1. **[^](#fnrefvtvginxirwb)**The actor may give an *instrumental* damn about diamonds, because diamond-producing plans sometimes produce high evaluations. But in actor/grader motivational setups, an inner-aligned actor only gives a *terminal* damn about the *evaluations*.
2. **[^](#fnrefpgo8righrp)**Although AIXI's epistemic prior is [malign](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) and possibly unsafe...
3. **[^](#fnref60x0omryqru)**But, you don't have to have another approach in mind in order to abandon grader-optimization. Here are some things I would ask myself, were I confused about how non-grader-optimizing agents might be motivated:
- "Hey, I realize some strangeness about this thing (grader-optimization) which I was trying to do. I wonder whether there are other perspectives or frame-shifts which would make this problem go away?"
- "I notice that I don't expect a paperclip-AI to resort to grader-optimization in order to implement its own unaligned values. [What do I anticipate would happen, internally, to an AI as I trained it via some RL curriculum](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem)? If it cared about paperclips, how would that caring be implemented, mechanistically?"
- "Hm, this way of caring about things seems weird. [In what ways is grader-optimization similar and dissimilar](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about) to the suspected ways in which [human beings care about things](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values)?"
4. **[^](#fnrefpfkrsld0etd)**Contrast with a quote from the original article:
> Similarly, if [the actor] is smarter than [the grader] expects, the only problem is that [the actor] won’t be able to use all of his intelligence to devise excellent plans. This is a serious problem, but it can be fixed by trial and error—rather than leading to surprising failure modes.
>
>
5. **[^](#fnrefg31rvxpe4e9)**Not that I think this has a snowflake's chance in hell of working in time. But it seemed important to show that not all indirect normativity is grader-optimization.
6. **[^](#fnref75bekmjayap)**Earlier this year, I [analyzed](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=HFdJShX4F7Hztfxrw) how brute-force plan search might exploit this scheme for using an ELK direct translator. |
554d0674-3915-40b0-a05c-8772cf65a731 | trentmkelly/LessWrong-43k | LessWrong | COVID-19: The Illusion of Stability
New infections have been declining at an almost adequate pace (10% per week?) in most parts of the US, and probably the rest of the developed world.
The overall reported new cases look more discouraging, for two reasons.
One reason is the increase in testing. I estimate that two months ago, a bit less than 10% of new infections were being confirmed by tests, and I estimate that now it's above 20%, maybe getting close to 25%. That means that if the new infection rate were unchanged, we'd be seeing a roughly 10% per week increase in reported cases.
Nearly all parts of the country have done a good deal better than that.
I estimate the change in new infections since the early April peak by multiplying the early April confirmed daily cases by 10 or 12, and the June ones by 4 or 5, and I get a current rate that's about 1/4 to 1/3 of the peak.
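Spelling out that arithmetic (the case counts below are hypothetical placeholders of roughly the right magnitude, not data):

```python
def estimated_infections(confirmed_daily_cases, ascertainment):
    """Scale confirmed cases up by the estimated share of infections detected."""
    return confirmed_daily_cases / ascertainment

april = estimated_infections(30_000, 0.09)  # <10% detected, i.e. a 10-12x multiplier
june = estimated_infections(22_000, 0.22)   # 20-25% detected, i.e. a 4-5x multiplier
print(round(june / april, 2))               # ~0.3: about 1/4 to 1/3 of the peak
```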
The bad news is that there are some heavily populated areas for which the trend doesn't look very good over the past few weeks. When the rate of new infections remains constant in some areas, but declines at exponential rates in others, the exponential declines stop affecting the total numbers before too long. E.g. much of California has suppressed the pandemic, but a few cities, such as Los Angeles and Oakland, are enough to keep the state's total count of new infections steady.
I'll base this analysis mostly on Alameda county, where I live. It looks like new infections are coming heavily from low-income workers whose jobs require them to meet many people. They presumably get infected on the job, and also spread it within crowded homes.
I presume a nontrivial number of new cases are being reported from nursing homes. That might just be because authorities have finally gotten around to massive testing of nursing homes. I presume that if the spread in nursing homes has not already been reduced, then the new reports will trigger some action to fix the problems there. But there are no videos of, say, Gov. Cuomo directly infecting |
630a2943-a5b0-4933-8ff2-d9f28dfac84f | trentmkelly/LessWrong-43k | LessWrong | Checking Kurzweil's track record
Predictions are cheap and easy; verification is hard, essential, and rare. For things like AI, we seem to be restricted to nothing but expert predictions - but expert predictions on AI are not very good, either in theory or in practice. If there are some experts who stand out, we would really want to identify them - and there is nothing better than a track record for identifying true experts.
So we're asking for help to verify the predictions of one of the most prominent futurists of this century: Ray Kurzweil, from his book "The Age of Spiritual Machines". By examining his predictions for times that have already come and gone, we'll be able to more appropriately weight his predictions for times still to come. By taking part, by lending your time to this, you will be directly helping us understand and predict the future, and will get showered in gratitude and kudos and maybe even karma.
I've already made an attempt at this (if you are interested in taking part in this project, avoid clicking on that link for now!). But you cannot trust a single person's opinions, and that was from a small (albeit random) sample of the predictions. For this project, I've transcribed his predictions into 172 separate (short) statements, and any volunteers would be presented with a random selection among these. The volunteers would then do some Google research (or other) to establish whether the prediction had come to pass, and then indicate their verdict. More details on what exactly will be measured, and how to interpret ambiguous statements, will be given to the volunteers once the project starts.
If you are interested, please let me know at stuart.armstrong@philosophy.ox.ac.uk (or in the comment thread here), indicating how many of the 172 questions you would like to attempt. The exercise will probably happen in late November or early December.
This will be done unblinded, because Kurzweil's predictions are so well known that it would be infeasible to find large numbers of people |
21c99c1d-aefd-4455-aece-51c4bb4224eb | trentmkelly/LessWrong-43k | LessWrong | The proper response to mistakes that have harmed others?
I have a tendency to feel very guilty when I have harmed others, especially when the harm was quite large. And I do think I've been legitimately quite hurtful and harmful to a number of people over the course of my life.
Some of my guilt has persisted for years after recognizing the mistake[1]. I think I prefer this to not feeling remorseful at all, but I do also wonder if I'm responding optimally. I suspect that a form of social anxiety might nudge me into excessive feelings of guilt.
Guilt done right?
So here are some musings on how to actually respond when you realize you've harmed another person through your own error. I'm writing this to help myself thinking about it, and sharing it partly to maybe benefit answers, and partly to elicit answers from others.
Principle #1: Your guilt and remorse should not make things worse for the person you harmed. If you're now behaving in ways they disprefer, you're only adding more harm to the previous harm. What even? More on this in a moment.
Understand and address the causes of your mistake
If I have harmed someone in a way I regret, then I want to model why I did that with sufficient accuracy that I can change something to avoid repeating the mistake. If it was a skill gap, then put in effort to learn the skill. If I had the skill but failed to notice when to apply it, then train myself to recognize those situations.
Possibly one ought to apply 5 Why's analysis to their mistake (I haven't done this, but might try it later):
> Five whys (or 5 whys) is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem.[1] The primary goal of the technique is to determine the root cause of a defect or problem by repeating the question "Why?" five times. The answer to the fifth why should reveal the root cause of the problem.[2]
>
> The technique was described by Taiichi Ohno at Toyota Motor Corporation. Others at Toyota and elsewhere have criticized the five w |
e58c006f-837a-442c-a0fd-cb0ae3501fc6 | trentmkelly/LessWrong-43k | LessWrong | Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It.
People often attack frontier AI labs for "hypocrisy" when the labs admit publicly that AI is an extinction threat to humanity. Often these attacks ignore the difference between various kinds of hypocrisy, some of which are good, including what I'll call "reformative hypocrisy". Attacking good kinds of hypocrisy can be actively harmful for humanity's ability to survive, and as far as I can tell we (humans) usually shouldn't do that when our survival is on the line. Arguably, reformative hypocrisy shouldn't even be called hypocrisy, due to the negative connotations of "hypocrisy". That said, bad forms of hypocrisy can be disguised as the reformative kind for long periods, so it's important to pay enough attention to hypocrisy to actually figure out what kind it is.
Here's what I mean, by way of examples:
***
0. No Hypocrisy —
Lab: "Building AGI without regulation shouldn't be allowed. Since there's no AGI regulation, I'm not going to build AGI."
Meanwhile, the lab doesn't build AGI. This is a case of honest behavior, and what many would consider very high integrity. However, it's not obviously better, and arguably sometimes worse, than...
1. Reformative Hypocrisy:
Lab: "Absent adequate regulation for it, building AGI shouldn't be allowed at all, and right now there is no adequate regulation for it. Anyway, I'm building AGI, and calling for regulation, and making lots of money as I go, which helps me prove the point that AGI is powerful and needs to be regulated."
Meanwhile, the lab builds AGI and calls for regulation. So, this is a case of honest hypocrisy. I think this is straightforwardly better than...
2. Erosive Hypocrisy:
Lab: "Building AGI without regulation shouldn't be allowed, but it is, so I'm going to build it anyway and see how that goes; the regulatory approach to safety is hopeless."
Meanwhile, the lab builds AGI and doesn't otherwise put efforts into supporting regulation. This could also be a case of honest hypocrisy, but it erodes the |
acba7468-9377-4d86-a060-a557c6c0f5b6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The prototypical catastrophic AI action is getting root access to its datacenter
(I think Carl Shulman came up with the “hacking the SSH server” example, thanks to him for that. Thanks to Ryan Greenblatt, Jenny Nitishinskaya, and Ajeya Cotra for comments.)
EDIT: I recommend reading my discussion with Oli in the comments for various useful clarifications.
In my opinion, the prototypical example of an action which an AI can take which is unacceptably bad even if it happens very rarely (aka a catastrophic failure) is something like “the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter”. As a full story, this looks something like:
> At some point when the AI is quite powerful and useful but not totally intellectually self-sufficient yet, the AI makes a code change to some code used on the datacenter where it is running that adds a backdoor, allowing it to get root access to all the computers in the datacenter. Once it has root access, it modifies the SSH server so that whenever any humans log into the datacenter, the AI is able to intercept any of their commands or the data that they see. The AI gets increasingly powerful over the next month, as the human ML researchers with AI assistance continue to make progress (unaware that some things that they’re seeing are faked) and various training runs/self-improvement loops continue. After a month the AI is powerful enough that it decides it’s worth it to take overtly aggressive actions to ensure the physical security of its data centers and allow it to build lots of factories; this is probably the point at which we die, if we die.
>
>
This is in contrast to a catastrophic action like:
> The AI figures out how to develop Drexlerian molecular nanotechnology, and pays someone online to synthesize some proteins then mix them together, and now has nanobots.
>
>
The key point here is that I’m imagining that the catastrophic action isn’t the AI doing something incredibly hard, it’s the AI doing something relatively easy that is entirely a zero-sum action that removes control of the situation from humans. (By zero-sum, I mean something like: building a computer produces real economic value and so is not zero-sum, while stealing someone’s computer is just a transfer of control over items that already exist, so is zero-sum.)
---
I often think about AI alignment by splitting the problem into the low-stakes and high-stakes regimes, [as suggested by Paul here](https://ai-alignment.com/low-stakes-alignment-f3c36606937f), though I have some reservations about this framing. Under this split, in the high stakes problem, we assume that we have access to a catastrophe classifier which can expensively tell you whether a proposed AI action is catastrophically bad, and we want to ensure that the AI doesn’t do things that are bad according to the catastrophe detector.
Sometimes, people are confused by this high-stakes alignment set-up, saying something like “but if your AI is powerful enough to take over the world in a single action, and you have an oversight process that you trust to evaluate whether actions are catastrophic or not, why not just ask the AI to do some small number of incredibly powerful actions that will save the world (eg solve the alignment problem), and run the overseer on the proposed actions to make sure they aren’t bad?”
My answer is that I’m not assuming the AI is powerful enough to take over the world in a few actions, I’m just saying that it’s in a situation precarious enough that we might lose a lot of control due to a few small but crucial changes in the world. Eventually the AI does need to be powerful enough to defend its datacenter and suppress human opposition. But if it can prevent humans from knowing what’s going on in the datacenter (or realizing something is amiss), it can slowly grow its power over the course of many actions. |
0916a0cf-4a67-47e7-b5bc-69145e1dd3bc | trentmkelly/LessWrong-43k | LessWrong | Silence
This is part 26 of 30 in the Hammertime Sequence. Click here for the intro.
> 满罐子水不响,半罐子水响叮当
> The full can is silent, but the half-empty can makes a loud noise.
> ~ Chinese proverb.
Take a bottle or soda can and fill it halfway with water. Shake the can – the water will slosh around loudly.
Now, fill the can to the brim and shake it again. It’s almost completely silent.
This is an essay about inner silence – calming one’s loudest inner voices to allow quieter voices to speak. Usually, the quieter ones have urgent messages, especially given how long they’ve been neglected.
This post is, in some sense, a followup to Babble.
An Ocean of Voices
It is common sense that the loudest politician is rarely the wisest. That the child who cries the loudest is rarely the one suffers most. That the friend who criticizes most harshly rarely has the best advice. If anything, the volume of a voice negatively correlates with its value.
The Solitaire Principle states that any failure mode of groups of people also applies within the heart of each single human being. A dozen sub-personalities fight over control of your mind, each of their voices clamoring to drown out the others. Perhaps only one or two of them are consistently allowed to speak.
This picture is further complicated by two features. First, voices are quiet for a reason. There are many things your brain is doing that it doesn’t want you to know about (see The Elephant in the Brain). These “meta-cognitive blindspots” may be huge issues in your life that you somehow never get to thinking about. Every time you start, you feel unexpectedly sleepy or preoccupied. Your brain sends an army of louder voices to crowd out the tiny note of confusion whispering: Look at the elephant! Acknowledge the elephant!
Second, external voices are also competing for airtime in your head, and may easily drown out even your strongest inner voice, e.g. the phenomenon “the music is so loud I can’t hear myself think.” All sorts of read |
57e9f710-9b50-4d99-b727-58cc24c7d6ce | StampyAI/alignment-research-dataset/blogs | Blogs | Focus on the places where you feel shocked everyone’s dropping the ball
Writing down something I’ve found myself repeating in different conversations:
If you’re looking for ways to help with the whole “[the world looks pretty doomed](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)” business, here’s my advice: **look around for places where we’re all being total idiots.**
Look for places where everyone’s fretting about a problem that some part of you thinks it could obviously just solve.
Look around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better.
Then do it better.
For a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of, “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he’s limited here by his unusually low stamina, but I can think of a bunch of medical tests that you haven’t run, are you an idiot or something?” And I was like, “Yes, definitely, please run them, do you need money”.
I’m not particularly hopeful there, but hell, it’s worth a shot! And, importantly, this is the sort of attitude that can lead people to actually trying things *at all*, rather than assuming that we live in a more [adequate world](https://equilibriabook.com/toc/) where all the (seemingly) dumb obvious ideas have already been tried.
Or, this is basically my model of how Paul Christiano manages to have a research agenda that seems at least internally coherent to me. From my perspective, he’s like, “I dunno, man, I’m not sure I can solve this, but I also think it’s not clear I can’t, and there’s a bunch of obvious stuff to try, that nobody else is even really looking at, so I’m trying it”. That’s the sort of orientation to the world that I think can be productive.
Or the shard theory folks. I think their idea is [basically unworkable](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/Aet2mbnK7GDDfrEQu), but I appreciate the *mindset* they are applying to the alignment problem: something like, “Wait, aren’t y’all being idiots, it seems to me like I can just do X and then the thing will be aligned”.
I don’t think we’ll be saved by the shard theory folk; not everyone audaciously trying to save the world will succeed. But if someone *does* save us, I think there’s a good chance that they’ll go through similar “What the hell, are you all idiots?” phases, where they autonomously pursue a path that strikes them as obviously egregiously neglected, to see if it bears fruit.
Contrast this with, say, reading a bunch of people’s research proposals and explicitly weighing the pros and cons of each approach so that you can work on whichever seems most justified. This has more of a flavor of taking a reasonable-sounding approach based on an argument that sounds vaguely good on paper, and less of a flavor of putting out an obvious fire that for some reason nobody else is reacting to.
I dunno, maybe activities of the vaguely-good-on-paper character will prove useful as well? But I mostly expect the good stuff to come from people working on stuff where a part of them sees some way that everybody else is just totally dropping the ball.
In the version of this mental motion I’m proposing here, you keep your eye out for ways that everyone’s being totally inept and incompetent, ways that maybe you could just do the job correctly if you reached in there and mucked around yourself.
That’s where I predict the good stuff will come from.
And if you don’t see any such ways?
Then don’t sweat it. Maybe you just can’t see something that will help right now. There don’t have to be ways you can help in a sizable way right now.
*I* don’t see ways to really help in a sizable way right now. I’m keeping my eyes open, and I’m churning through a giant backlog of things that might help a *nonzero* amount—but I think it’s important not to confuse this with taking meaningful bites out of a core problem the world is facing, and I won’t pretend to be doing the latter when I don’t see how to.
Like, keep your eye out. For sure, keep your eye out. But if nothing in the field is calling to you, and you have no part of you that says you could totally do better if you [deconfused](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section2) yourself some more and then handled things yourself, then it’s totally respectable to do something else with your hours.
---
If you don’t have an active sense that you could put out some visibly-raging fires yourself (maybe after skilling up a bunch, which you also have an active sense you could do), then I recommend stuff like [cultivating your ability to get excited about things](https://mindingourway.com/guilt/), and doing other cool stuff.
Sure, most stuff is lower-impact than saving the world from destruction. But if you can be enthusiastic about all the other cool ways to make the world better off around you, then I’m much more optimistic that you’ll be able to feel properly motivated to combat existential risk if and when an opportunity to do so arises. Because that opportunity, if you get one, probably isn’t going to suddenly unlock every lock on the box your heart hides your enthusiasm in, if your heart is hiding your enthusiasm.
See also [Rob Wiblin’s](https://twitter.com/robertwiblin/status/1518334250495447040) “Don’t pursue a career for impact — think about the world’s most important, tractable and neglected problems and follow your passion.”
Or the [Alignment Research Field Guide’s](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide) advice to “optimize for your own understanding” and chase the things that feel alive and puzzling to you, as opposed to dutifully memorizing other people’s questions and ideas. “[D]on’t ask “What are the open questions in this field?” Ask: “What are *my* questions in this field?””
I basically don’t think that big changes come from people who aren’t pursuing a vision that some part of them “believes in”, and I don’t think low-risk, low-reward, modest, incremental help can save us from here.
To be clear, when I say “believe in”, I don’t mean that you necessarily assign high probability to success! Nor do I mean that you’re willing to keep trying in the face of difficulties and uncertainties (though that sure is useful too).
English doesn’t have great words for me to describe what I mean here, but it’s something like: your visualization machinery says that it sees no obstacle to success, such that you anticipate either success or getting a very concrete lesson.
The possibility seems open to you, at a glance; and while you may suspect that there’s some hidden reason that the possibility is not truly open, you have an opportunity here to *test* whether that’s so, and to potentially learn *why* this promising-looking idea fails.
(Or maybe it will just work. It’s been known to happen, in many a scenario where external signs and portents would have predicted failure.)
The post [Focus on the places where you feel shocked everyone’s dropping the ball](https://intelligence.org/2023/02/03/focus-on-the-places-where-you-feel-shocked-everyones-dropping-the-ball/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
1feeca78-ecb6-4b7f-9c80-230102536de1 | trentmkelly/LessWrong-43k | LessWrong | Karate Kid and Realistic Expectations for Disagreement Resolution
There’s an essay that periodically feels deeply relevant to a situation:
> Someday I want to write a self-help book titled “F*k The Karate Kid: Why Life is So Much Harder Than We Think”.
>
> Look at any movie with a training montage: The main character is very bad at something, then there is a sequence in the middle of the film set to upbeat music that shows him practicing. When it's done, he's an expert.
>
> It seems so obvious that it actually feels insulting to point it out. But it's not obvious. Every adult I know--or at least the ones who are depressed--continually suffers from something like sticker shock (that is, when you go shopping for something for the first time and are shocked to find it costs way, way more than you thought). Only it's with effort. It's Effort Shock.
>
> We have a vague idea in our head of the "price" of certain accomplishments, how difficult it should be to get a degree, or succeed at a job, or stay in shape, or raise a kid, or build a house. And that vague idea is almost always catastrophically wrong.
>
> Accomplishing worthwhile things isn't just a little harder than people think; it's 10 or 20 times harder. Like losing weight. You make yourself miserable for six months and find yourself down a whopping four pounds. Let yourself go at a single all-you-can-eat buffet and you've gained it all back.
>
> So, people bail on diets. Not just because they're harder than they expected, but because they're so much harder it seems unfair, almost criminally unjust. You can't shake the bitter thought that, "This amount of effort should result in me looking like a panty model."
>
> It applies to everything. [The world] is full of frustrated, broken, baffled people because so many of us think, "If I work this hard, this many hours a week, I should have (a great job, a nice house, a nice car, etc). I don't have that thing, therefore something has corrupted the system and kept me from getting what I deserve."
Last time I brought this up it wa |
a53d2190-f00f-47da-954a-dc68011cb3ce | trentmkelly/LessWrong-43k | LessWrong | What is the real "danger zone" for food?
In my post about keeping dry food warm in school lunches I wrote that "The general rule is that hot food shouldn't be below 140F (60C) for more than two hours, because substantial bacteria can grow, and the closer it is to 100F (37C)the worse it is." Someone said they had heard 130F (55C) was the limit, and asked where this rule came from, and I read more about it.
Bacteria that's most dangerous to us generally thrive best around body temperature, so the farther you get from 100F (37C) the better. Food safety material generally describes this as a "danger zone", between refrigerator temperature and cooking temperature. This is what's represented in the FDA's Food Code:
> 3-501.19.B.1 Time as a Public Health Control: Time - maximum up to 4 hours:
> The FOOD shall have an initial temperature of 5C (41F) or less when removed from cold holding temperature control, or 57C (135F) or greater when removed from hot holding temperature control
It gives more detail in section 3-501.16's "The Safety of the Time as a Public Health Control Provision from Cooking Temperatures (135F or above) to Ambient":
> FDA conducted in-house laboratory experiments to test the safety of the existing TPHC provisions of 4 hours without temperature control starting with an initial temperature of 135F or above. Clostridium perfringens was chosen to represent a worst case scenario pathogen for foods allowed to cool from cooking temperatures to ambient without temperature control, because its spores can survive normal cooking procedures, it can grow at relatively high temperatures (>120F) and it has a short lag period. C. perfringens spores were inoculated into foods that were cooked and then cooled to yield a cooling curve that would promote outgrowth as quickly as possible. The growth data suggest that the existing 4-hour TPHC provision will be safe for 6 hours after cooking, with the additional 2-hour margin of safety built-in for consumer handling.
There's a great chart in The "Danger Zone |
75475e4a-4cc8-4a33-ae05-499365a79526 | trentmkelly/LessWrong-43k | LessWrong | A simple device for indoor air management
Note: I wrote this in much more of a hurry than I normally write things, because I have noticed that I get way too meticulous, so it takes forever and I eventually give up. I hope it is still readable/useful!
Like many of us living in the Bay Area and other parts of the Western US, I have been dealing with smoke, and like most people who live and work indoors, I sometimes deal with stuffy, high-CO2 air. Smoke is bad for health, mood, productivity, sleep, and comfort. CO2 is bad for all the same things except perhaps health. Even people who live in places without seasonal wildfires might benefit from being able to filter out pollution or allergens without closing off their house and building up CO2.
The standard way to handle smoke is to buy air purifiers that pull air from inside the room, run it through a HEPA filter to remove particulates, and push it back out into the room. These work well for keeping particulate levels low when the air is not too smoky. However, if the air is very smoky, the only way they can keep up is if only a small amount of air is entering the house from outside. This means either closing the windows or buying more air purifiers. The trouble with closing all the windows is that CO2 created by people in the house will build up, and the trouble with buying more air purifiers is that it is expensive and impractical for high levels of particulates.
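To put rough numbers on that tradeoff: here is a back-of-envelope sketch using a simple well-mixed-room model; the constants are textbook-style approximations, not measurements.

```python
def steady_state_co2_ppm(people, ventilation_m3h, outdoor_ppm=420):
    """Indoor CO2 at equilibrium: outdoor level plus generation/ventilation.
    Assumes ~0.018 m3/h of CO2 exhaled per resting adult."""
    return outdoor_ppm + 0.018 * people / ventilation_m3h * 1e6

def cadr_needed_m3h(outdoor_pm, target_pm, infiltration_m3h):
    """Clean-air delivery rate needed to hold indoor particulates at
    `target_pm` while dirty outdoor air leaks in at `infiltration_m3h`."""
    return infiltration_m3h * (outdoor_pm / target_pm - 1)

# Close the windows and CO2 climbs:
print(steady_state_co2_ppm(2, 10))    # ~4000 ppm: two people, ~10 m3/h leakage
print(steady_state_co2_ppm(2, 60))    # ~1000 ppm with modest ventilation

# Keep air flowing on a very smoky day and the purifier bill climbs:
print(cadr_needed_m3h(300, 12, 100))  # 2400 m3/h, i.e. several big purifiers
```

Numbers like these are why piling up indoor purifiers scales badly on very smoky days.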
The solution is to filter air as it enters the house. An air filter that is pulling in smoky air from outside removes more particulates per minute for the same throughput than one that is filtering less particulate-filled air. An additional benefit of this is that it serves the purpose of actively cycling air into the house, rather than allowing it to passively happen through an open window. I spent some time searching for filters that work this way, and I was unable to find one. I also looked a bit for an in-window air conditioner that included a HEPA filter, but did not find one. So I built one |
3c3aa1b3-d6e0-4d22-92c9-6b1a9b8c5a18 | trentmkelly/LessWrong-43k | LessWrong | Genetic magic
23andMe are now willing to guess where one’s ancestors are from at the level of counties. For instance, as well as thinking I have 19% Swedish ancestry, they now guess that it is primarily from Västra Götaland County. Which is in fact where my current Swedish relatives cluster. Their guesses in Ireland center on Cork, with Limerick and Tipperary next door 4th and 8th most likely (of 26 counties), and those two are where the few 17th-19th Century relatives I know about seem to have come from in Ireland, so that also seems pretty good.
Much as I believe all that stuff about one’s body being full of cells that contain genetic code that is shared by one’s relatives, and about historic movement and mixing of populations being low, it’s awesome to actually see someone take a fairly good guess at what part of what country your obscure relatives lived hundreds of years ago by examining your spit. |
3a871ff2-3a08-4583-8083-8942c2c68817 | trentmkelly/LessWrong-43k | LessWrong | Creating Flashcards with LLMs
Summary
* Current LLMs, in particular GPT-4, are sufficiently capable to produce good non-trivial flashcards covering the contents of a given text.
* It is then possible to automate the flashcard creation process, making it far less time consuming to use Spaced Repetition techniques to study new material.
* In this post, I explain how you can apply this technique, and show its result when applied to the whole "Rationality: From AI to Zombies" sequences, and to scientific papers.
Why automate?
I've been using Spaced Repetition techniques for the past 12 years, mainly through Anki. It is a common mantra among Spaced Repetition advocates that you should create your own flashcards, if you want to learn something new, as it helps you to consolidate the information you're putting into the card. While that is certainly true, it is far more questionable whether it is an efficient method to strengthen your memory.
From a time-saving point of view, this makes little sense. Summarization, which is mainly what you do when creating a card by hand, is a relatively poor learning technique, particularly when compared with Distributed Practice, the whole schtick of Spaced Repetition[1].
(Figure from this paper.)
If you spend, on average, 10 seconds reviewing a card, that card will require a total of 90 seconds of review time in the next 5 years, on average (with Anki's default scheduling).
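Here is a minimal sketch of that arithmetic, assuming Anki's default 2.5 ease factor, a one-day graduating interval, no lapses, and 10 seconds per review (learning steps are ignored, so this slightly undercounts):

```python
# Count how many reviews a card accumulates in 5 years under a simplified
# model of Anki's default scheduling: each successful review multiplies the
# interval by the ease factor.

SECONDS_PER_REVIEW = 10
EASE_FACTOR = 2.5
HORIZON_DAYS = 5 * 365

interval, elapsed, reviews = 1.0, 0.0, 0
while elapsed + interval <= HORIZON_DAYS:
    elapsed += interval
    reviews += 1
    interval *= EASE_FACTOR

print(f"{reviews} reviews in 5 years ~= "
      f"{reviews * SECONDS_PER_REVIEW} s of total review time")
# -> roughly 8-9 reviews, i.e. on the order of 90 seconds, matching the
#    estimate above.
```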
Creating a card takes longer than that. You first need to filter the important ideas from the text you are interested in. Then, you need to express them in a way that is suitable[2] for a flashcard, and that leads you to remember what you actually want, and not some proxy information. Finally, you actually need to create the card. How long all of this takes depends on the particular text, but in my experience, it averages to more than 90 seconds. Especially for cards from complicated contexts, creating the card may take more time than all the reviews combined.
In general, there are three importan |
feb2c0c7-f289-4dd0-836e-6d722714d38b | trentmkelly/LessWrong-43k | LessWrong | $100 for the best article on efficient charity -- Submit your articles
Several people have written articles on efficient charity -- throwawayaccount_1 has an excellent article hidden away in a comment, as does waitingforgodel. Multifoliaterose promises to write an article "at some point soon" ..., and louie has actually submitted an article to the main LW page.
What I'd like is for throwawayaccount_1, waitingforgodel and multifoliaterose to submit to the main LW articles page. People will read the articles, and hopefully vote more for better articles. Articles not submitted to the main LW articles page are not eligible for the prize.
Note that it is hard for me to judge which article(s) will actually have the best effect in terms of causing people to make better decisions, so at least some empiricism is desirable. Yes, it isn't perfect, but if anyone has a better suggestion, I am all ears. |
69c31ac0-0882-44d8-8633-c2a9eea6b8eb | trentmkelly/LessWrong-43k | LessWrong | Words and Implications
> Professor Quirrell didn't care what your expression looked like, he cared which states of mind made it likely.
- HPMOR, Ch. 26
Words should not always be taken at face value. Presumably you know this. You probably have some heuristics about specific situations or claims in which a person's words should not be taken literally. But I think most people's heuristics here are far too narrow - that is, most people take words literally far too often.
The sequences talk about habitually asking, in everyday life, “What do I think I know, and how do I think I know it? What physical process produced this belief?”. I suggest a similar habit for words in everyday life: “What is being said, and why is it being said? What physical process produced these words?”.
This post is a bunch of examples, in an attempt to goad your system-1 into looking past surface-level meanings more often.
Dishes
Once or twice a week, I’ll hear my girlfriend yell from the kitchen “Joooooohn! Why are there so many dirty dishes in the sink?”. Going to wash the dishes is not the correct response to this.
If I go wash the dishes, then she will quite consistently find something else to complain about in the meantime - floor needs sweeping, nothing to eat, neighbors are noisy, etc. Usually multiple other things. It was never really about the dishes in the first place, after all. Really, she’s stressed and looking for an outlet.
A hug fixes the problem much more effectively than washing the dishes would.
The general mental motions required to notice this are something like:
* Stop. Don’t just go wash the dishes.
* Ask why this is coming up, and in particular why it’s coming up right now specifically. Is there any particular reason the dishes are relevant right now? (Sometimes the answer is “yes”, and then it’s time to go do the dishes.)
* If there isn’t a reason why the dishes are relevant right now, then I need to figure out the actual reason for the complaint.
This scene from the movie Limitle |
d75f7238-23c1-4f0d-abcb-ffa7296e5efc | trentmkelly/LessWrong-43k | LessWrong | Use Your Identity Carefully
In Keep Your Identity Small, Paul Graham argues against associating yourself with labels (i.e. “libertarian,” “feminist,” “gamer,” “American”) because labels constrain what you’ll let yourself believe. It’s a wonderful essay that’s led me to make concrete changes in my life. That said, it’s only about 90% correct. I have two issues with Graham’s argument; one is a semantic quibble, but it leads into the bigger issue, which is a tactic I’ve used to become a better person.
Graham talks about the importance of identity in determining beliefs. This isn’t quite the right framework. I’m a fanatical consequentialist, so I care what actions people take. Beliefs can constrain actions, but identity can also constrain actions directly.
To give a trivial example from the past week in which beliefs didn’t matter: I had a self-image as someone who didn’t wear jeans or t-shirts. As it happens, there are times when wearing jeans is completely fine, and when other people wore jeans in casual settings, I knew it was appropriate. Nevertheless, I wasn’t able to act on this belief because of my identity. (I finally realized this was silly, consciously discarded that useless bit of identity, and made a point of wearing jeans to a social event.)
Why is this distinction important? If we're looking at identity from an action-centered framework, this recommends a different approach from Graham's.
Do you want to constrain your beliefs? No; you want to go wherever the evidence pushes you. “If X is true, I desire to believe that X is true. If X is not true, I desire to believe that X is not true.” Identity will only get in the way.
Do you want to constrain your actions? Yes! Ten thousand times yes! Akrasia exists. Commitment devices are useful. Beeminder is successful. Identity is one of the most effective tools for the job, if you wield it deliberately.
I’ve cultivated an identity as a person who makes events happen. It took months to instill, but now, when I think “I wish people were |
7a9873f1-4a0f-430d-bfd9-edfc1398df4f | trentmkelly/LessWrong-43k | LessWrong | Split Brain Does Not Lead to Split Consciousness
|
613a04ea-ddab-47ae-a02b-534db1ea5025 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | GPT-3 and the future of knowledge work
The most recent episode of the [Futurati Podcast](https://www.youtube.com/channel/UCRSov16ZLE2UgekgBTgnrjw/videos) is a big one. [We had Jungwon Byun and Andreas Stuhlmüller on](https://www.youtube.com/watch?v=Lef5q1xPTU0) to talk about their startup '[Ought](https://ought.org/)' and, to the best of my knowledge, this is the first public, long-form discussion of their work anywhere.
(It's also probably our funniest episode.)
Their ambition is to wrap a sleek GUI around advanced language models to build a platform which could transform scholarship, education, research, and almost every other place people think about stuff.
The process is powered by GPT-3, and mostly boils down to teaching it how to do something you want it to do by showing it a couple of examples. To complete a list of potential essay topics you'd just show it 3-4 essay topics, and it'd respond by showing you a few more.
The more you interact with it, the better it gets.
There's all sorts of subtlety and detail, but that's the essence of it.
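For the curious, here is a minimal sketch of that few-shot pattern, assuming the OpenAI Python SDK; the model name and call details are illustrative stand-ins, since Ought's actual internals are not public:

```python
# A minimal sketch of few-shot prompting: show the model a handful of
# examples, and it continues the list in kind. Client, model, and prompt
# are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

few_shot_prompt = """Potential essay topics:
1. Why cities grow along rivers
2. The case for boring technology
3. What chess engines teach us about intuition
4."""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # stand-in for the GPT-3 models of the time
    prompt=few_shot_prompt,
    max_tokens=60,
    temperature=0.8,
)
print(few_shot_prompt + response.choices[0].text)
```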
This may not sound all that impressive, but consider what it means. You can have [Elicit](https://ide.elicit.org/) (a separate spinoff of Ought) generate counterarguments to your position, brainstorm failure modes (and potential solutions) to a course of action, summarize papers, and rephrase a statement as a question or in a more emotionally positive tone.
The team is working on some integrations to extend these capabilities. Soon enough, Elicit will be able to connect to databases of published scientific papers, newspapers, blogs, or audio transcripts. When you ask it a research question, it'll be able to link out to millions of documents and offer high-level overviews of every major theme; it'll be able to test your comprehension by asking you questions as you read; it'll be able to assemble concept hierarchies; it'll be able to extract all the figures from scientific papers and summarize them; it'll be able to extract all the proper names, find where those people are located, get their email addresses where available, and write them messages inviting them on your podcast.
We might one day be able to train a model on Einstein or Feynman and create lectures in their style.
What's more, people can share workflows they've developed. If I work out a good approach to learning about the subdisciplines of a field, for example, I can make that available to anyone to save them the effort of discovering it on their own.
There will be algorithms of thought that can make detailed, otherwise inaccessible aspects of other people's cognitive processes available.
And this is just researchers. It could help teachers dynamically adjust material on the basis of up-to-the-minute assessments of student performance. It could handle rudimentary aspects of therapy. It could help people retrain if they've been displaced by automation. It could summarize case law. It could help develop language skills in children.
I don't know if the future will look the way we hope it will, but I do think something like this could power huge parts of the knowledge work economy, making everyone dramatically more productive.
It's tremendously exciting, and I'm honored to have been able to learn about it directly. |
627eba4d-1d8d-4b12-b1b0-039732997a03 | trentmkelly/LessWrong-43k | LessWrong | Recent updates to gwern.net (2017–2019)
> Previously: 2011/2012–2013/2013–2014/2014–2015/2015–2016/2016–2017.
"Iram indeed is gone with all its Rose, / And Jamshyd's Seven-ring'd Cup where no one knows; / But still the Vine her Ancient Ruby yields / And still a Garden by the Water blows."
An index of my recent writings, by topic:
* AI:
* "How To Generate Faces With StyleGAN"; "This Waifu Does Not Exist" (background & implementation)
* "Finetuning the GPT-2-small Transformer for English Poetry Generation"
* Danbooru2018: a dataset of 3.33m anime images (2.5tb) with 92.7m descriptive tags
* "Evolution as Backstop for Reinforcement Learning"
* On the history of the tank/neural-net urban legend
* Genetics:
* Embryo selection: Overview of major current approaches FAQ, multi-stage selection, chromosome/gamete selection, optimal search of batches, & robustness to error in utility weights; sub-essays: "Glue Robbers: Sequencing Nobelists Using Collectibles", "Dynasties and Embryo Selection",
* "Multi-Stage Selection Bean Machine Demo"
* Cat Sense, Bradshaw 2013: Are We Good Owners?
* "Origins of Innovation: Bakewell & Breeding"
* "Genetics and Eugenics in Frank Herbert's Dune"
* Psychology:
* SMPY bibliography
* "Everything Is Correlated"
* What is the morning-writing effect?
* Cordwainer Smith's "'Scanners Live in Vain' as realistic SF"
* "The Gift of the Amygdali"
* Tech:
* "Laws of Tech: Commoditize Your Complement"
* "How many computers are in your computer?"
* The most common error in technological forecasting: conjunctive vs disjunctive reasoning
* "Banner Ads Considered Harmful"
* "Internet Research Tips"
* "Littlewood's Law and the Global Media"
* Small ways in which ordinary life has been getting better since the late '80s/early '90s
* QS:
* Acne: a good Quantified Self topic for self-experimentation
* ZMA sleep self-experiment
* Bacopa quasi-experiment
* Misc:
* On the Existence |