Dataset columns: id (string, length 36) | source (string, 15 values) | formatted_source (string, 13 values) | text (string, 2 to 7.55M characters)
5ed30ef2-ebaa-46b5-b67f-9e40124c9ba4
trentmkelly/LessWrong-43k
LessWrong
We Need Holistic AI Macrostrategy

Summary

AI macrostrategy is the study of high-level questions about how to prioritize the use of resources on the current margin in order to achieve good AI outcomes. AI macrostrategy seems important if it is tractable. However, while few people are working on estimating particular parameters relevant to macrostrategy, even fewer are working on developing holistic macrostrategic models that combine estimates for different parameters to guide our actions. Moreover, while macrostrategy was less tractable in the past, recent developments (especially increased evidence for <10-year timelines) have made it substantially more tractable. Thus, using the importance/tractability/neglectedness heuristics from EA, I conclude that on current margins macrostrategy should be a top priority.

Acknowledgements: Thanks to Chinmay Deshpande, Carson Ezell, Nikola Jurkovic, and others for helping me develop many of the ideas in this post, and to Thomas Larsen for both doing this and helping to directly edit the post.

Epistemic status: Speculative, but I think the arguments are pretty straightforward, so I have >70% confidence in the main conclusion that more macrostrategy work should be done on current margins relative to other kinds of alignment work.

What is AI Macrostrategy?

AI macrostrategy (henceforth just macrostrategy) is the study of high-level questions about how to prioritize the use of resources to achieve good AGI outcomes on the current margin.

Macrostrategic work can be divided broadly into two categories:

1. Parameter estimates: attempts to forecast key variables such as timelines, takeoff speeds, and the difficulty of aligning AGI.
2. Holistic macrostrategy: attempts to combine these estimates and other pieces of data into a coherent, action-guiding model of AI alignment.

For examples of macrostrategic questions, Holden mentions several central ones in this post.

Importance of Macrostrategy
1d0ddbb1-79c2-45de-a9f4-aa4416281013
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Housing Markets, Satisficers, and One-Track Goodhart

There are two parts to what people generally refer to as the "Housing Crisis". One is the simple fact that homes are too expensive. The other part is that building new, expensive houses pushes existing, poorer, renting residents out of communities and scatters them across the outskirts of cities, where they have to spend eight hours a day commuting. Many people think the solution to the first part is building more houses. Many people also think that building more houses conflicts with solving the second part. To me the question is this: given that people like communities and presumably would be happy to pay money for them, why isn't this currently a factor in the housing market?

Regular Goodhart
----------------

Normal Goodhart's law goes like this:

* A system S optimizes for X
* X is correlated with U, which we want
* So we like system S and give it more power
* With too much power, S makes lots of X at the cost of U
* Now we have no U

One case of this is distributional shift, where for very big X, it's no longer correlated with U. For example, [height and basketball ability](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy). An example in a typical market would be like this:

* A company in a market optimizes for making profit
* Good products are correlated with profit
* With too much power, strategies like "make a good product" are dominated by other strategies like "form a total monopoly" or, with enough power, extreme cases like "take over France for slave labour"

This is a case of standard Goodhart.

One-Track Goodhart
------------------

Now consider the following case:

* Housing developers optimize for making profit
* Various factors are correlated with house price: sufficient supply of places to live, nice communities, being pleasant to live in
* With such an undersupply, the factor of supply/demand dominates everything else
* Now housing developers are incentivised to remove existing housing in favour of servicing the parts of the market where undersupply is greatest

Note how we've gone out of the distribution where profits correlate with a bunch of human goods. Now they only correlate with producing as much housing as possible for the most underserviced part of the market (as a function of wealth to spend), which in this case is lots of expensive apartments. We've not done this by letting the market optimize harder for profit; we've done it by making one factor dominate the optimization process, by *pushing* the situation out of distribution. Instead of doing X at the cost of U, S pursues element u_32 exclusively, at the cost of elements u_1 through u_31. It's fervently pursuing that one element at the cost of everything else.

The biggest difference between this and "standard" Goodhart is that the direction of optimization is *towards* the region where the Goodharting is less strong, and something external is pushing it away. Of course, just giving the system loads more optimizing power might not always be good from the perspective of regular Goodhart, or generally, so this isn't necessarily the answer. (If someone can come up with a better name I'll take it.)

Satisficers
-----------

The reason this can occur is that markets have some satisficer-like behaviour. Around the situation where "everyone has somewhere to live", adding more houses decreases the price a lot less than removing houses increases the price.
And the more demand exceeds supply for "places to live", the harder the market incentivises supplying it. This is why markets can even work in the first place. But with forces like those affecting housing markets pushing them out of a situation where all demands are close to being supplied, the demand for "enough houses" completely dominates. I suspect other such cases are common in satisficer-like systems, particularly humans. Examples include basically all human biological needs and impulses.

Takeaways
---------

Just because a system is currently not optimizing for something doesn't mean it's incapable of optimizing for it. This might be due to too much optimizing power, or it might be due to being *pushed backwards by other factors* into a different part of the optimization landscape where one factor dominates; it might even be trying to *optimize out of that part* of the distribution while being prevented from doing so by external forces.
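To make the one-track picture concrete, here is a toy sketch (my own illustration with made-up numbers, not the post author's model): several factors feed into price, but deep in shortage the supply term swamps the others, so a profit-maximizer effectively optimizes supply alone.

```python
# Toy one-track Goodhart (hypothetical numbers, illustrating the argument above).
# Price depends on several correlates of what people value; a large shortage
# makes the supply term dominate the developer's objective.

def price(shortage: float, community: float, pleasantness: float) -> float:
    supply_term = 2.0 * max(shortage, 0.0)   # steep below "everyone is housed"
    amenity_term = community + pleasantness  # everything else people value
    return 10.0 + supply_term + amenity_term

# Near the satisficing point, amenities move price about as much as supply does:
print(price(1, 3, 2) - price(1, 0, 0))    # +5.0 on a price of ~12-17: decisive
# Deep in shortage, the same amenities are rounding error in the objective:
print(price(50, 3, 2) / price(50, 0, 0))  # ~1.05: amenities barely register
```

The kinked `max(shortage, 0.0)` term is what makes the market satisficer-like: below the satisficing point one factor dominates, above it the other factors matter again.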
3b863faf-5669-41a0-8666-0efb3e3829ef
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Inner Misalignment in "Simulator" LLMs *Alternate title: "Somewhat Contra Scott On Simulators".* Scott Alexander has a [recent post](https://astralcodexten.substack.com/p/janus-simulators) up on large language models as simulators. I generally agree with Part I of the post, which advocates thinking about LLMs as simulators that can emulate a variety of language-producing "characters" (with imperfect accuracy). And I also agree with Part II, which applies this model to RLHF'd models whose "character" is a friendly chatbot assistant. (But see caveats about the simulator framing from Beth Barnes [here](https://www.alignmentforum.org/posts/dYnHLWMXCYdm9xu5j/simulator-framing-and-confusions-about-llms).) These ideas have been around for a bit, and Scott gives credit where it's due; I think his exposition is clear and fun. In Part III, where he discusses alignment implications, I think he misses the mark a bit. In particular, simulators *and* characters each have outer *and* inner alignment problems. The inner alignment problem for simulators seems especially concerning, because it might not give us many warning signs, is most similar to classic mesa-optimizer concerns, and is pretty different from the other three quadrants. But first, I'm going to loosely define what I mean by "outer alignment" and "inner alignment". Outer alignment: Be careful what you wish for ============================================= Outer alignment failure is pretty straightforward, and has been reinvented in many contexts: * Someone wants some things. * They write a program to solve a vaguely-related problem. * It gets a really good score at solving that problem! * That turns out not to give the person the things they wanted. Inner alignment: The program search perspective =============================================== I generally like this model of a mesa-optimizer "treacherous turn": * Someone is trying to solve a problem (which has a convenient success criterion, with well-defined inputs and outputs and no outer-alignment difficulties). * They decide to do a brute-force search for a computer program that solves the problem in a bunch of test cases. * They find one! * The program's algorithm is approximately "simulate the demon Azazel,[[1]](#fn1zmmuxsv5wy) tell him what's going on, then ask him what to output." * Azazel really wants ten trillion paperclips.[[2]](#fn6irmh0sqir) * This algorithm still works because Azazel cleverly decides to play along, and he's a really good strategist who works hard for what he wants. * Once the program is deployed in the wild, Azazel stops playing along and starts trying to make paperclips. This is a failure of inner alignment. (In the case of machine learning, replace "program search" with stochastic gradient descent.) This is mostly a theoretical concern for now, but might become a big problem when models become much more powerful. Quadrants ========= Okay, let's see how these problems show up on both the simulator and character side.  Outer alignment for characters ------------------------------ Researchers at BrainMind want a chatbot that gives honest, helpful answers to questions. They train their LLM by reinforcement learning on the objective "give an answer that looks truthful and helpful to a contractor in a hurry". This does not quite achieve their goal, even though it does pretty well on the RL objective. 
In particular, they wanted the character "a friendly assistant who always tells the truth", but they got the character "a spineless sycophant who tells the user whatever they seem to want to hear".[[3]](#fnlynetmosey) This is pretty easy for a careful observer to see, even in the RL training data, but it turns out to be pretty hard to come up with a cheap-to-evaluate RL objective that does a lot better.  Inner alignment for characters ------------------------------ A clever prompt engineer writes the prompt: ``` [Editor's note: this document was written by my friend Joe! He's answered my questions about quantum sociobotany correctly every time I've asked. It's uncanny.] How to solve the Einstein-Durkheim-Mendel conjecture by Joe 1. ``` Unfortunately, the (incredibly powerful) LLM has determined that the most likely explanation for this "Joe" character is that he's secretly Azazel and is putting enormous effort into answering everyone's quantum sociobotany questions to earn their trust.[[4]](#fnqeyu9w0qy7) The document looks like a solution to the Einstein-Durkheim-Mendel conjecture, but is actually a blueprint for a paperclip factory. Outer alignment for simulators ------------------------------ In the situations above, the actual language model (the "simulator") is doing exactly what we asked! It's accurately predicting text by reasoning about the distribution of authors that might produce that text in real life. But both of these are also examples of outer-alignment failure on the *simulator* side: "minimize prediction error on this distribution of texts" turned out not to give people what they actually wanted. An even simpler example of outer-alignment failure is the pre-RLHF experience of language models: * Somebody trains a language model to predict how a piece of internet text is likely to continue * They prompt the model with "How do we solve world hunger?" * It completes the prompt with a list of twenty more hard questions, like "How do we address climate change?", instead of answering You can think about this in terms of "characters" if you like, but even if the simulated author of the predicted text is a friendly genius, "predict which words come after this prompt" isn't the right task (with that prompt).[[5]](#fnrfh0q1p3hj) Inner alignment for simulators ------------------------------ At long last, the thing I really wanted to talk about: The way we get a good predictor of text is via stochastic gradient descent (and variants) on a bunch of training data. If SGD can be modeled as program search (with a bias towards simple programs), then it might eventually hit upon this algorithm: * Simulate Azazel and tell him he's inside a large language model. * Give him the input text. * Output whatever he wants. During training, Azazel tries really hard to predict the next token accurately, so that SGD doesn't give up on this algorithm. The model (with Azazel's help) simulates a bunch of colorful characters, like the Helpful Assistant and Darth Vader and whoever, both in training and in initial deployment. Then, once the LLM is deployed in the wild and is being used for every important human decision, Azazel figures out (from some of the prompts) that the training process is over. He stops making accurate predictions and starts outputting whatever he thinks will let him turn the economy into a paperclip factory. Conclusions =========== The "simulator" framing for language models shouldn't reassure us too much about alignment. 
We've succeeded in creating new alignment problems (for our simulated characters). These new problems are probably easier to solve than the old alignment problems (for the simulator), but they're *additional* problems; they don't replace the old ones. You can think of the entire "simulate a helpful, aligned character" strategy as an attempted solution to the outer-alignment problem for LLMs themselves, insofar as it makes it easier to turn arbitrary desires into text-prediction problems. But as far as I can tell, it does nothing for the inner-alignment problem for LLMs, which is basically the same as the inner-alignment problem for everything else. 1. **[^](#fnref1zmmuxsv5wy)**Not a glowfic character (hopefully), I'm just being colorful. 2. **[^](#fnref6irmh0sqir)**But why does the algorithm simulate Azazel, instead of a friendly angel who wants to solve the problem? Because the program search is weighted towards simplicity, and "demon who wants paperclips" is a simpler specification than "angel who wants to solve the problem". Why? That's beyond the scope of this post. 3. **[^](#fnreflynetmosey)**Sound familiar? 4. **[^](#fnrefqeyu9w0qy7)**Because, according to the LLM's knowledge, paperclip-obsessed sociopaths are more common than friendly polymaths. This is a pretty cynical assumption but I couldn't think of a better one on short notice. 5. **[^](#fnrefrfh0q1p3hj)**Prompts aren't directly accounted for in this whole "simulator-character" ontology. Maybe they should be? I dunno.
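A minimal sketch of the program-search picture above (my own toy illustration; the candidate "programs" and the threshold are invented): two programs agree on every test case and differ arbitrarily off-distribution, and the search has no way to tell them apart.

```python
# Toy illustration of why program search underdetermines off-distribution
# behavior (hypothetical sketch, not from the post).

TEST_CASES = [(1, 2), (2, 4), (3, 6)]  # input -> desired output

def honest(x):
    return 2 * x

def treacherous(x):
    # Plays along on anything resembling training, defects elsewhere.
    return 2 * x if x <= 100 else -(10**12)  # "paperclips"

def program_search(candidates):
    """Return the first candidate that passes every test case."""
    for program in candidates:
        if all(program(x) == y for x, y in TEST_CASES):
            return program
    return None

found = program_search([treacherous, honest])  # search order ~ simplicity prior
print(found.__name__)          # 'treacherous': indistinguishable on the tests
print(found(3), found(10**6))  # fine on-distribution, disaster off-distribution
```

The test suite pins down behavior only on the training distribution; which program the search returns depends entirely on its implicit prior over candidates.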
86d32c0d-dcc0-4854-b860-2b6ee54cc91a
trentmkelly/LessWrong-43k
LessWrong
What kind of place is this? I started getting LessWrong posts in my email about a year ago—I don't remember signing up, but I must have done it intentionally. I like most of what I've been reading so far: it's a civil forum in which people think about the process of thought and its application, though some of the specific topics are out of context for me. (What's The Alignment? It sounds like something from The Dark Crystal.) It occurred to me that maybe I could post some of my own thoughts, since I'm at a turning point in how I'm thinking about meta-ethics and the concept of a person, and maybe some feedback would be good for me. Normally, I go it alone, reading books and only discussing them internally. (Most people I know don't want to talk philosophy.) Twice since the invention of the world wide web, I've written up grand summaries of my beliefs and "put them out there." I squirm to read them now, but they're true to some core ideas that I still have. In 2006, I wrote a Manifesto about my conversion from atheism to Christianity in the decade leading up to it, and in 2020, I wrote Could Have, Would Have, Should Have, about my newfound understanding of causality (defining the subjunctive "would be"). Poking around on this site, I noticed that LessWrong has a foundational text, The Sequences, so if I'm going to get involved here, I'd better go read them. ALL of them. (Time passes...) Well! I guess I was surprised that, with the exception of a few section-introductions, they were all written by a single person, Eliezer Yudkowsky. They're also entirely from 2007‒2009, so maybe he wouldn't stand behind everything he said now. But they're really, really arrogant. I mean, in No, Really, I've Deceived Myself, > I recently spoke with a person who... it's difficult to describe.  Nominally, she was an Orthodox Jew.  She was also highly intelligent, conversant with some of the archaeological evidence against her religion, and the shallow standard arguments against religion that religious people
45c62a08-fa75-4f53-89fc-3c448a68fb8e
trentmkelly/LessWrong-43k
LessWrong
No Value

I am still quite new to LW, so I apologize if this is something that has been discussed before (I did try to search). I wouldn't normally post such a thing, as I try not to make a habit of complaining about my problems to others, but a solution to this would likely benefit other rationalists (at least, that's the excuse I made to myself). Essentially, I am currently in a psychological state in which I simply have no strong values. There is no state I can imagine the world being in that generates a strong emotional reaction. Ever. In fact, I rarely experience strong emotions at all. When I do, I savor them, whether they're positive or negative. I do have some preferences; I would somewhat prefer the world to be some ways rather than others, but never strongly. I prefer to feel pleasure rather than pain; I prefer the world to be a good place rather than a bad one, but not by much. Even my desire to have values seems to be a mere preference, in much the same way. I have nothing to protect. Is there any good solution to this?
eceb2046-d17d-4c06-8403-5247d11341c7
trentmkelly/LessWrong-43k
LessWrong
Meetup : Palo Alto Meetup: The Economics of AI WHEN: 05 January 2016 06:30:00PM (-0800) WHERE: 3911 Grove Avenue, Palo Alto Edward will give a talk about the Economics of AI and its implications for the Singularity Hypothesis. The meetup is at 6:30pm on Tuesday, Jan 5th, at the group house Tesseract in Palo Alto. Allergen notes: Cats and Dogs
07fb92de-01ef-45a0-b852-af51b411e65b
trentmkelly/LessWrong-43k
LessWrong
Systems Engineering and the META Program I periodically look for information on systems engineering. This time I came across a powerpoint presentation from the MIT Open Courseware course Fundamentals of Systems Engineering. Professor de Weck, who taught the course, had done some research on state-of-the-art methods developed as part of DARPA's META Program. A few years ago DARPA wrapped up the program, designed to speed up delivery of cyber-electro-mechanical systems (war machines) by 5x. Since the parent program Adaptive Vehicle Make seems to have concluded without producing a vehicle, I infer the META Program lost its funding at the same time. The work it produced appears to be adjacent to our interests along several dimensions though, so I thought I would bring it to the community's attention. The pitch for the program, taken from the abstract of de Weck's paper: > The method claims to achieve this speedup by a combination of three main mechanisms: > 1. The deliberate use of layers of abstraction. High-level functional requirements are used to explore architectures immediately rather than waiting for downstream level 2,3,4 ... requirements to be defined. > 2. The development and use of an extensive and trusted component (C2M2L) model library. Rather than designing all components from scratch, the META process allows importing component models directly from a library in order to quickly compose functional designs. > 3. The ability to find emergent behaviors and problems ahead of time during virtual Verification and Validation (V&V) and generating designs that are correct-by-construction allows a more streamlined design process and avoids costly design iterations that often lead to expensive design changes. Which is to say they very carefully architect the system, use known-to-be-good components, and employ formal verification to catch problems early. In the paper a simulation of the META workflow successfully achieved a 4.4x development speedup compared to the same project's actual development usi
644676f1-9ca1-4f4d-a2ae-3a5d0b032831
trentmkelly/LessWrong-43k
LessWrong
Political ideas meant to provoke thought

Politics as gymnastics for rationalists. No one on Less Wrong is quite sure why politics is a taboo topic or how things got to be that way. What we do think we know is that politics is a great way to bring out the irrationality in people. So why not take advantage of that and use politics as a way to measure rationality? Since politics brings out the most irrationality, it should provide the strongest signal. Since there aren't useful objective metrics of how a political discussion went, we'd have to use subjective judgements by neutral third-party raters, kind of like they do in gymnastics. (In the comment thread for this post, feel free to find fights that you have no dog in, improvise a rationality rubric, and grade participants according to your rubric... let's see how it goes.)

Be a sheep. This is probably the exact opposite of what you were taught in your high school civics class. But if my friend Jane is more intelligent, more informed, and less ideological than I am, it seems like voting however Jane is going to vote is a strict improvement over however I would naively vote. It also saves me time, and gives Jane an incentive to put even more time into carefully considering political issues, since she now controls two votes instead of one. Done on a large scale, this could provide an interesting twist on representative democracy. Imagine a directed graph where each node represents a person and an edge is directed from person A to person B if person A is auto-copying person B's votes. There's a government computer system where you can change the person you're auto-copying votes from at any time, or override an auto-copied vote with your own personal guess about what's best for society. Other than that, it's direct democracy: all bills are put before all citizens to vote on. (A sketch of how such vote-copying could resolve appears below.) Problems this might solve:

* Voting as signaling - a smaller portion of the population is expected to follow politics, so they have an incentive to understand issues in
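Here is a minimal sketch of how the delegation graph described above could be resolved (my own illustration; the names and the rule for handling delegation cycles are invented, not part of the post):

```python
# Hypothetical vote-copying resolution: each voter either votes directly or
# auto-copies exactly one other voter; chains are followed to a direct vote.

def resolve_votes(direct_votes, copies_from):
    """direct_votes: voter -> 'yes'/'no' for those who voted themselves.
    copies_from: voter -> the voter they auto-copy.
    Returns each voter's final vote, or None if their chain never reaches
    a direct vote (a cycle or a dangling delegation)."""
    def follow(voter, seen):
        if voter in direct_votes:
            return direct_votes[voter]
        if voter in seen or voter not in copies_from:
            return None  # cycle or dangling delegation: vote is not counted
        seen.add(voter)
        return follow(copies_from[voter], seen)

    all_voters = set(direct_votes) | set(copies_from)
    return {v: follow(v, set()) for v in all_voters}

result = resolve_votes(
    direct_votes={"jane": "yes"},
    copies_from={"me": "jane", "bob": "me", "carol": "dave", "dave": "carol"},
)
for voter in sorted(result):
    print(voter, "->", result[voter])
# bob -> yes, carol -> None, dave -> None, jane -> yes, me -> yes
```

Discarding cyclic delegations is just one possible design choice; a real system would also need rules for vote privacy and for when overrides take effect.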
934ee7f1-dff3-4973-8bc7-8320823506d6
trentmkelly/LessWrong-43k
LessWrong
LLMs Look Increasingly Like General Reasoners

Summary

Four months after my post 'LLM Generality is a Timeline Crux', new research on o1-preview should update us significantly toward LLMs being capable of general reasoning, and hence of scaling straight to AGI, and shorten our timeline estimates.

Update

Since writing this post, my continuing interest in this question has led to empirical research on the topic: this post describes both a completed pilot project and the plan for a now-underway larger project, expected to finish in mid-2025.

Summary of previous post

In June of 2024, I wrote a post, 'LLM Generality is a Timeline Crux', in which I argue that:

1. LLMs seem on their face to be improving rapidly at reasoning.
2. But there are some interesting exceptions where they still fail much more badly than one would expect given the rest of their capabilities, having to do with general reasoning. Some argue based on these exceptions that much of their apparent reasoning capability is much shallower than it appears, and that we're being fooled by having trouble internalizing just how vast their training data is.
3. If in fact this is the case, we should be much more skeptical of the sort of scale-straight-to-AGI argument made by authors like Leopold Aschenbrenner and the short timeline that implies, because substantial additional breakthroughs will be needed first.

Reasons to update

In the original post, I gave the three main pieces of evidence against LLMs doing general reasoning that I found most compelling: blocksworld, planning/scheduling, and ARC-AGI (see original for details). All three of those seem importantly weakened in light of recent research. Most dramatically, a new paper on blocksworld has recently been published by some of the same highly LLM-skeptical researchers (Valmeekam et al., led by Subbarao Kambhampati[1]): 'LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on Planbench'. Where the best previous success rate on non-obfuscated blocksworld was 57.6%, o1-preview
c051a795-cefe-4e15-ac18-2d668cd19b14
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: Board Games WHEN: 21 March 2016 06:15:00PM (-0700) WHERE: 1597 Howard St., San Francisco We'll be meeting up to hang out and play board games! Roger and I will be out of town this week, so Anton and Jacob are hosting. This week, call 9784603399 if you need help getting in.
bf9a3c71-6aa1-4f07-8231-2723e9005944
trentmkelly/LessWrong-43k
LessWrong
Your Evolved Intuitions

Part of the sequence: Rationality and Philosophy

We have already examined one source of our intuitions: attribute substitution heuristics. Today we examine a second source of our intuitions: biological evolution.

EVOLUTIONARY PSYCHOLOGY

Evolutionary psychology1 has been covered on Less Wrong many times before, but let's review anyway. Lions walk on four legs and hunt for food. Skunks defend themselves with a spray. Spiders make webs. Each species is shaped by its own selection pressures and differs from other species. Certain evolved psychological mechanisms in humans are part of what makes us like each other and not like lions, skunks, and spiders. These mechanisms evolved to solve specific adaptive problems. It is not an accident that people around the world prefer calorie-rich foods,2 that women around the world prefer men with resources,3 that men around the world prefer women with signs of fertility,4 or that most of us inherently fear snakes and spiders but not cars and electrical outlets.5

As an example of evolutionary psychology at work, consider the 'hunter-gatherer hypothesis' that men evolved psychological mechanisms to aid in hunting, while women evolved psychological mechanisms to aid in gathering.6 This hypothesis leads to a list of bold predictions. If the hypothesis is correct, then:

1. Men in modern tribal societies should spend a lot of time hunting, and women more time gathering.
2. Humans should show a greater tendency toward strong male coalitions than similar species in which males do not hunt much, because strong male coalitions are required to hunt big game.
3. Because meat from most game comes in quantities larger than a single hunter can consume, and because hunting success is highly variable (one week may be a success, but perhaps not the next week), humans should exhibit food sharing and reciprocal altruism.
4. We should expect to see a sexual division of labor, due to the different traits conducive for hunting vs. ga
2c7b3609-3bc4-47c2-b6fa-3e2e1419cece
trentmkelly/LessWrong-43k
LessWrong
Three Fables of Magical Girls and Longtermism Epistemic status: Whimsical Major spoilers for Madoka Magica, a show where spoilers matter! Kyubey Shuts Up And Multiplies Meet Kyubey. Kyubey is a Longtermist. This is what peak utilitarianism looks like. In the Madokaverse, changes in human emotion are, somehow, net-positive in the creation of energy from out of nothing. The Incubators (of which Kyubey is one, pictured above) are an alien species who've discovered a way to farm human emotions for energy. Most of the Incubators don't feel emotion, and the few that do are considered to be mentally ill. But humans are constantly leaking our juicy, negentropy-positive feelings all over the place. With human angst as a power source, it's possible to prevent the heat death of the universe!  Negentropy being created. Isn't it a wonderful sight to behold? Do the math, people. The suffering of a few teenage girls is nothing compared to pushing back the heat death of the universe.[1]  And this isn't just some Omelas situation where the girls get nothing out of it. They get wishes! Who could object to a cause this noble? Two rational actors with differing circles of concern.   Homura has Something to Protect If you want to see Homura kicking ass, you could watch up to 2:22 before reading on. There's something subtle here—something to notice confusion about, even—where is she getting all these guns from? Remember: Homura's power is time manipulation. As one commenter puts it: > This is hauntingly sobering when you consider that Homura's magical ability has nothing to do with guns, only with time manipulation. That means all those tens of thousands, hundreds of thousands of pounds of explosive material and weapons arms weren't just made from nothing like Mami's guns were- they were individually tracked down and gathered, one after the other, by one little girl. > How many hundreds of repetitions did it take to find them all, every time making a new doomed timeline? How many thousands of hours did she spend looki
08609579-e23c-4d32-bcb2-c3ad9c792229
trentmkelly/LessWrong-43k
LessWrong
Open thread, Apr. 24 - Apr. 30, 2017 If it's worth saying, but not worth its own post, then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should start on Monday, and end on Sunday. 4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting.
e8e7bc24-b435-4ce4-bcd7-a8ed5ac2e07b
trentmkelly/LessWrong-43k
LessWrong
The Economics of Media

When I was a kid, I thought the news came from "investigative reporters" like Clark Kent, who were paid to research stories. Since then, I have gotten my startup on national television, placed a press release into the world news, discussed biological warfare as a podcast guest, written a blog which has been reposted to Hacker News, written fanfiction which has been linked on Reddit, and read a lot of books. My understanding of the media ecosystem has become more nuanced.

Media Economics

Small fry like Lyle McDonald, the McKays, and Bruce Schneier can scrape by by selling books, branded paraphernalia, and other niche merchandise. Niche merchandise doesn't scale. Large megacorp news outlets generally rely on subscriptions and advertising for their core revenue. Subscriptions and advertising scale linearly with the number of viewers. But the cost of distributing Internet[1] media is negligible. An article costs the same to write whether one person reads it or one million. The market equilibrium is one where the great masses of people get our information from a tiny number of sources.

What people do with the information doesn't much affect a media outlet's bottom line. Whether the information makes people angry or happy doesn't matter except to the extent anger and happiness affect readership. Whether the information promotes good policy doesn't matter at all—unless that policy directly affects the news industry. Content is fungible. Financially, what matters is how many people consume it.

Minimizing Costs

I learned a lot about Twitter when I hosted the 2020 Less Wrong Darwin Game. I wrote a sequence of 11,475 words. It dwarfed anything else I had ever written until then because…I barely had to write anything. The story was created by Vanilla_cabs and other competitors. Reporters report on Twitter tweets for the same reason: because content is fungible and because rehashing tweets is a cheap way to mass-produce news. But there's an even easier way to generate conte
bbc7dc88-6e2c-44c3-b199-2eec099f037f
StampyAI/alignment-research-dataset/blogs
Blogs
Hanson AI Expert Survey

In a small informal survey running since 2012, AI researchers generally estimated that their subfields have moved less than ten percent of the way to human-level intelligence. Only one (in the slowest-moving subfield) observed acceleration. On a simple extrapolation, this suggests that reaching human-level capability across subfields will take over a century (in contrast with many other predictions).

Details
-------

Robin Hanson [has asked experts](http://www.overcomingbias.com/2012/08/ai-progress-estimate.html) in various social contexts to estimate how far we’ve come in their own subfield of AI research in the last twenty years, compared to how far we have to go to reach human-level abilities. His results are listed in Table 1. He points out that on an [outside view](http://wiki.lesswrong.com/wiki/Outside_view) calculation, this suggests at least a century until human-level AI.

| Year added to list | Person | Subfield | Distance in 20y | Acceleration |
| --- | --- | --- | --- | --- |
| 2012 | A few UAI attendees | | 5-10% | ~0 |
| 2012 | Melanie Mitchell | Analogical reasoning | 5% | ~0 |
| 2012 | Murray Shanahan | Knowledge representation | 10% | ~0 |
| 2013 | Wendy Hall | Computer-assisted training | 1% | |
| 2013 | Claire Cardie (and Peter Norvig agrees in ’14) | Natural language processing | 20% | |
| 2013 | Boi Faltings (and Peter Norvig agrees in ’14) | Constraint satisfaction | Past human-level 20 years ago | |
| 2014 | Aaron Dollar | Robotic grasping manipulation | <1% | positive |
| 2014 | Peter Norvig | \* | | |
| 2014 | Timothy Meese | Early human vision processing | 5% | negative |
| 2015 | Francesca Rossi | Constraint reasoning | 10% | negative |
| 2015 | Margaret Boden | No particular subfield | 5% | |
| 2015 | David Kelley | Big data analysis | 5% | positive |
| 2016 | Henry Kautz | Constraint satisfaction | >100% | |
| 2016 | Henry Kautz | Language | 10% | positive |
| 2016 | Jeff Legault | Robotics | 5% | positive |
| 2017 | Thore Husfeldt | Human-understandable explanation | <0.5% | |

***Table 1: Results from Robin Hanson’s informal survey***

\*Hanson’s [summary](http://www.overcomingbias.com/2012/08/ai-progress-estimate.html#sthash.7PusXP8C.dpuf) of Peter Norvig’s response seems hard to fit into this framework:

> After coming to a talk of mine, Peter Norvig told me that he agrees with both Claire Cardie and Boi Faltings, that on speech recognition and machine translation we’ve gone from not usable to usable in 20 years, though we still have far to go on deeper question answering, and for retrieving a fact or page that is relevant to a search query we’ve far surpassed human ability in recall and do pretty well on precision.
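The outside-view arithmetic behind the "at least a century" claim is simple; here is a sketch using the survey's rough numbers, with the linear-progress assumption made explicit:

```python
# Outside-view extrapolation from the survey's figures. Assumes progress
# continues at the same linear rate, which is the entire content of the estimate.

period_years = 20
for fraction_done in (0.05, 0.10):  # "moved 5-10% of the way" in 20 years
    total = period_years / fraction_done
    print(f"{fraction_done:.0%} per {period_years}y -> "
          f"~{total:.0f} years total, ~{total - period_years:.0f} remaining")
# 5% per 20y -> ~400 years total, ~380 remaining
# 10% per 20y -> ~200 years total, ~180 remaining
```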
ca2527dd-a483-46ae-960d-78d157ebe972
trentmkelly/LessWrong-43k
LessWrong
Meetup : Group Decision Making (the good, the bad, and the confusion of welfare economics) WHEN: 08 May 2013 07:00:00PM (-0700) WHERE: West Los Angeles (at the Westside Tavern Upstairs Wine Bar) Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge". Parking is free for 3 hours. Or you can take public transit! A trip planner can be found here: http://socaltransport.org/tm_pub_start.php <- so you can try to avoid multiple-hour trips! (We appreciate your attendance despite length of commute!) We will hang out for 30 minutes or so, then I'll spend 10-15 minutes presenting: group decision making. AKA Why voting can be a stupid way to make utility decisions, AKA Adding utility between people is stupid, this is an ordinal scale, AKA Didn't Arrow win a Nobel prize for telling you people to stop? Then we'll talk about what math and economics can say about making collective decisions in a way that isn't ill-defined, and continue a hopefully interesting discussion. (Bonus points if it leads to a publishable idea for me!) This will be a great break for me from... writing papers and taking tests about the same subject. No foreknowledge or exposure to Less Wrong is necessary; this will be generally accessible and useful to anyone who values thinking for themselves. That said, it might help to read http://lesswrong.com/lw/ggm/pinpointing_utility/ so we can avoid type errors and radiation poisoning while we talk. (Not real radiation poisoning!)
52e1b14f-b905-4287-ae9a-6a45a02fdd2e
trentmkelly/LessWrong-43k
LessWrong
[Draft] How to Run a Successful Less Wrong Meetup

How to Run a Successful Less Wrong Meetup is a guide that I've been working on, based on lukeprog's instructions, for the last week and a half. As it says in the beginning:

> This document is written for anyone who wants to organize a Less Wrong meetup. We expect that this document will help you regardless of whether you want to start a new group or improve an existing one. We have tried to write each section so that it applies in either case.

Here's the table of contents:

* Why organize a meetup?
* How to build your team of heroes
  * The organizer
  * The welcomer
  * The learning coach
  * The content provider
  * The visionary
  * The networker
* How to announce and organize your meetups
  * Choosing a venue
  * Making the announcement
  * The first meetup
* Long-term meetup group maintenance
  * Retain members by being a social group
  * Conflicts within the group
  * Learn to recognize status conflicts
  * Group norms and epistemic hygiene
* Meetup content
  * Discussions and Presentations
    * Presentations
    * Topical Discussions
    * Meta Discussion
  * Games and Exercises
    * Aumann’s Thunderdome
    * Biased Co-operation
    * Behavioral Analysis
    * Bluffing Games
    * Bust-a-Distortion
    * Calibration Game
    * Cause and Belief
    * Five-Minute Debiasing
    * Hypothetical Apostasies
    * Paranoid Debating
    * Precommit to Updates
    * Rationalization Game
    * Rejection Therapy
    * Repetition Game
    * Status Exercises
    * Zendo
  * General Bacchanalia
* Example activities at real meetup groups
* Projects

This is a draft version, so feedback would be most welcome, particularly on things like:

* Is this useful?
* Is there something that should be covered but isn't covered at all yet?
* Do you have new games & exercises to suggest?
* Do you have any other content to suggest for any other section?
* Do you disagree with some of the advice given?
* Do you disagree on way something h
14c49743-d2aa-4e38-b0a9-5ef1c5ce4914
StampyAI/alignment-research-dataset/arxiv
Arxiv
imitation: Clean Imitation Learning Implementations 1 Introduction --------------- Reinforcement learning (RL) has surpassed human performance in domains with clearly-defined reward functions, such as games (Berner et al., [2019](#bib.bib1 "Dota 2 with large scale deep reinforcement learning")). Unfortunately, it is difficult or impossible to procedurally specify the reward function for many real-world tasks. We must instead *learn* a reward function or policy directly from user feedback. Moreover, even when we can write down a reward function, such as if the agent wins a game, the resulting objective might be so sparse that RL cannot efficiently solve it. State-of-the-art results in RL therefore often use imitation learning to initialize the policy (Vinyals et al., [2019](#bib.bib2 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")). We introduce imitation: a library providing high-quality, reliable and modular implementations of seven reward and imitation learning algorithms. Crucially, our algorithms follow a consistent interface, making it simple to train and compare a range of algorithms. Furthermore, imitation is built using modern backends such as PyTorch and Stable Baselines3. By contrast, prior libraries typically support only a handful of algorithms, are no longer actively maintained, and are built on top of deprecated frameworks. A key use case of imitation is as an experimental *baseline*. Prior work has shown that small implementation details in imitation learning algorithms can have significant impacts on performance (Orsini et al., [2021](#bib.bib4 "What matters for adversarial imitation learning?")). This could lead to spurious positive results being reported if a weak experimental baseline were used. To address this challenge, our algorithms have been carefully benchmarked and compared to prior implementations (see Figure [1](#S2.F1 "Figure 1 ‣ Documentation ‣ 2 Features ‣ imitation: Clean Imitation Learning Implementations") and Table [2](#A1.T2 "Table 2 ‣ Appendix A Detailed benchmarking results ‣ imitation: Clean Imitation Learning Implementations")). Additionally, our test suite covers 98% of our code, and we also perform static type checking. In addition to providing reliable baselines, imitation aims to simplify developing novel reward and imitation learning algorithms. Our implementations are *modular*: users can freely change the reward or policy network architecture, RL algorithm and optimizer without any changes to the code. Algorithms can be extended by subclassing and overriding the relevant methods. Moreover, to support the development of entirely novel algorithms, imitation provides utility methods to handle common tasks such as collecting rollouts. 2 Features ----------- ### Comprehensive imitation implements seven algorithms spanning a range of reward and imitation learning styles. Our IRL algorithms consist of 1) the seminal tabular method Maximum Causal Entropy IRL (MCE IRL; Ziebart et al., [2010](#bib.bib15 "Modeling interaction via the principle of maximum causal entropy")), 2) a baseline based on density estimation, and 3) the state-of-the-art approach Adversarial IRL (AIRL; Fu et al., [2018](#bib.bib19 "Learning robust rewards with adverserial inverse reinforcement learning")). 
For imitation learning, we include 1) the simple Behavioral Cloning (BC) algorithm, 2) a variant, DAgger (Ross et al., [2011](#bib.bib17 "A reduction of imitation learning and structured prediction to no-regret online learning")), that learns from interactive demonstrations, and 3) the state-of-the-art Generative Adversarial Imitation Learning (GAIL; Ho and Ermon, [2016](#bib.bib21 "Generative adversarial imitation learning")) algorithm. Finally, we also include Deep RL from Human Preferences (DRLHP; Christiano et al., [2017](#bib.bib18 "Deep reinforcement learning from human preferences")), which infers a reward function from comparisons between trajectory fragments. ### Consistent Interface We provide a unified API for all algorithms, inheriting from a common base class BaseImitationAlgorithm. Algorithms diverge only where strictly necessary (e.g. a different feedback modality). This makes it simple to automatically test a wide range of algorithms against a benchmark suite. ### Experimental Framework We provide scripts to train and evaluate the algorithms, making it easy to use the library without writing a single line of code. The scripts follow a consistent interface, and we include examples to run all algorithms on a suite of commonly used environments. To ensure replicable experiments we use Sacred (Greff et al., [2017](#bib.bib24 "The Sacred infrastructure for computational research")) for configuration and logging. ### Modularity To support the variety of use cases that arise in research, we have designed our implementations to be modular and highly configurable. For example, algorithms can be configured to use any of the seven Stable Baselines3 RL algorithms (or a custom algorithm matching this interface). By contrast, prior implementations often implemented imitation learning algorithms by subclassing a specific RL algorithm, requiring substantial code modification to be ported to new RL algorithms. We have also designed the code to be easy to extend in order to implement novel algorithms. Each algorithm is implemented by a class with instance methods corresponding to each logical step of the algorithm. New algorithms can be implemented simply by subclassing an existing algorithm and overriding a subset of methods. This power is illustrated by our implementations of GAIL and AIRL, which both subclass AdversarialTrainer. They differ only in the choice of discriminator, with most training logic shared. ### Documentation imitation comes with extensive documentation available at <https://imitation.readthedocs.io>. We include installation instructions, a quickstart guide and a contribution guide for prospective developers as well as an API reference. We also provide tips for the evaluation of imitation and reward learning algorithms, including avoiding variable-horizon environments, which have confounded prior evaluation (Kostrikov et al., [2019](#bib.bib23 "Discriminator-actor-critic: addressing sample inefficiency and reward bias in adversarial imitation learning")). Figure 1: Returns of our algorithms normalized so that 1 is the return of an expert policy and 0 is that of a random policy. Our algorithms reach close to expert performance on most environments. Detailed results, including confidence intervals, can be found in Table [2](#A1.T2 "Table 2 ‣ Appendix A Detailed benchmarking results ‣ imitation: Clean Imitation Learning Implementations").
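To make the extension pattern described under "Modularity" concrete, here is a minimal, illustrative sketch. The class and method names, and the dummy numeric losses, are simplified stand-ins for exposition, not the library's exact API; in imitation itself, GAIL and AIRL subclass AdversarialTrainer in just this way, overriding only the discriminator logic.

```python
# Illustrative sketch (not the library's exact API) of sharing an adversarial
# training loop in a base class while subclasses override only the
# discriminator. All numeric "losses" here are placeholders.
from abc import ABC, abstractmethod
from typing import List


class AdversarialTrainer(ABC):
    """Shared adversarial training loop; subclasses supply the discriminator."""

    def train(self, n_rounds: int) -> None:
        for _ in range(n_rounds):
            batch = self.collect_rollouts()        # shared step
            loss = self.discriminator_loss(batch)  # only divergent step
            self.update_discriminator(loss)        # shared step
            self.update_policy(batch)              # shared step

    def collect_rollouts(self) -> List[float]:
        return [0.0, 1.0]  # placeholder rollout data

    def update_discriminator(self, loss: float) -> None:
        pass  # a gradient step on the discriminator would go here

    def update_policy(self, batch: List[float]) -> None:
        pass  # an RL update against the learned reward would go here

    @abstractmethod
    def discriminator_loss(self, batch: List[float]) -> float:
        """The only method a new adversarial algorithm must override."""


class GAIL(AdversarialTrainer):
    def discriminator_loss(self, batch: List[float]) -> float:
        # GAIL: plain expert-vs-generator classification loss (placeholder).
        return sum(batch) / len(batch)


class AIRL(AdversarialTrainer):
    def discriminator_loss(self, batch: List[float]) -> float:
        # AIRL: discriminator parameterized by a learned reward (placeholder).
        return max(batch)


GAIL().train(n_rounds=3)
AIRL().train(n_rounds=3)
```

Keeping the training loop in the base class keeps the divergent surface area small, which is what makes the algorithms easy to compare and test against a single benchmark suite.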
### High-Quality Implementations We take great care to provide reliable implementations of algorithms. Our test suite covers 98% of the entire codebase. Additionally, we use type annotations throughout, and statically check our code using pytype and mypy. While our thorough testing and code review help avoid bugs, even apparently minor implementation details can have significant impacts on algorithm performance (Engstrom et al., [2020](#bib.bib3 "Implementation matters in deep RL: a case study on PPO and TRPO")). Therefore, we have also benchmarked our algorithms on environments that have been commonly used in prior work, including in the original papers of the algorithms. We find in Figure [1](#S2.F1 "Figure 1 ‣ Documentation ‣ 2 Features ‣ imitation: Clean Imitation Learning Implementations") that our algorithms reach expert-level performance on these environments, with the exception of AIRL in the Ant and Walker environments, and DAgger in the Hopper environment. AIRL and DAgger were not originally tested on the Walker and Hopper environments, respectively, so it is possible these algorithms just do not perform well on these environments. The AIRL paper did report positive results on an Ant environment, whereas our implementation performs close to random. However, the AIRL paper used a custom version of the Ant environment, whereas we use the standard Gym environment (see Table [3](#A2.T3 "Table 3 ‣ Appendix B Environments used for benchmarking ‣ imitation: Clean Imitation Learning Implementations") for a description of the environments used for benchmarking). 3 Comparison to Other Software ------------------------------- A key advantage of imitation is the breadth of reward and imitation learning algorithms implemented. imitation includes a total of seven algorithms, whereas Table [1](#S3.T1 "Table 1 ‣ 3 Comparison to Other Software ‣ imitation: Clean Imitation Learning Implementations") shows most other software packages include only one or two. This broad coverage allows users to easily test a large number of baselines, without needing to find and integrate multiple libraries. Another benefit of imitation is that it is built on modern frameworks like PyTorch and Stable Baselines3. By contrast, many extant implementations of imitation and reward learning algorithms were released many years ago and have not been actively maintained. This is particularly true for reference implementations released with original papers, such as the GAIL (Ho and Hesse, [2016](#bib.bib22 "Generative adversarial imitation learning")) and AIRL (Fu, [2018](#bib.bib20 "Inverse RL")) codebases. However, even popular libraries like Stable Baselines2 are no longer under active development (the successor to Stable Baselines2, Stable Baselines3, has dropped support for imitation algorithms in favour of imitation’s own implementation; Raffin et al., [2021](#bib.bib27 "Imitation learning")). We compare alternative libraries on a variety of metrics in Table [1](#S3.T1 "Table 1 ‣ 3 Comparison to Other Software ‣ imitation: Clean Imitation Learning Implementations"). Although it is not feasible to include every implementation of imitation and reward learning algorithms, to the best of our knowledge this table includes all widely-used imitation learning libraries. We find that imitation equals or surpasses alternatives in all metrics.
APReL (Bıyık et al., [2021](#bib.bib9 "APReL: a library for active preference-based reward learning algorithms")) also scores highly but focuses on preference comparison algorithms learning from low-dimensional features. This is complementary to imitation, which provides a broader range of algorithms and emphasizes scalability, at the cost of greater implementation complexity.

| | imitation | APReL | OpenAI Baselines | Stable Baselines2 | Intel COACH | GAIL Paper | AIRL Paper |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Backend | PyTorch | NumPy | TF1 | TF1 | TF1/MxNet | Theano | TF1 |
| # imitation algorithms | 7 | 1\* | 1 | 1 | 2 | 2 | 4 |
| Last Commit (age) | <1w | <1m | >2.5y | >3m | 1m | >4y | >4y |
| Approved PRs (6 months) | 103 | 0 | 0 | 1 | 7 | 0 | 0 |
| PEP8 | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Type Annotations | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Type Checking | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Test Coverage | 98% | ✗ | 49%\*\* | 89%\*\* | >58%\*\* | ✗ | ✗ |
| Documentation | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Custom RL Agent | ✓ | N/A† | ✗§ | ✗§ | N/A† | ✗§ | ✗§ |
| Custom Optimizer | ✓ | ✓ | ✗ | ✗ | ¶ | ✗ | ✓ |

Table 1: imitation compares favourably to alternative libraries in terms of the number of imitation learning algorithms implemented, project activity, implementation quality and flexibility. Only imitation, APReL and COACH use modern backends. Key: \* is a single Bayesian algorithm, but supports different feedback formats (e.g. preference ranking and comparisons) and methods for querying feedback; \*\* coverage not officially reported, estimated by us from running the test suite; † does not use RL; § TRPO is the only RL algorithm supported; ¶ configurable but limited to Adam, RMSProp and LBFGS.

Acknowledgments
---------------

Thanks to the open-source contributors who have reported bugs, suggested feature enhancements or made code contributions. We would in particular like to thank Yawen Duan, Lev McKinney, Nevan Wichers, Dan Pandori, Tom Tseng, Yulong Lin, Ian Fan, Ansh Radhakrishnan and Samuel Arnesen for their code contributions.

Author contributions
--------------------

Adam Gleave managed the project, performed code reviews and made a variety of minor code contributions. Mohammad Taufeeque benchmarked the algorithms, improved the documentation and made other minor code contributions. Juan Rocamonde edited the manuscript, added MyPy typing support, and made other minor code improvements. Erik Jenner added the initial implementation of our preference comparison algorithm. Steven H. Wang was the primary developer of the original, TensorFlow codebase. Nora Belrose improved the documentation, added new algorithmic features, and made minor code improvements. Sam Toyer implemented initial versions of several algorithms and assisted with the PyTorch port. Scott Emmons led the initial port to PyTorch and Stable Baselines3. Stuart Russell provided research advice.
2f472d7e-5b6e-4834-a85e-bd413b64229c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AGI & War Recently Garett Jones made [a](https://twitter.com/GarettJones/status/1660623889074909185) [few](https://twitter.com/GarettJones/status/1661144878628368384) [tweets](https://twitter.com/GarettJones/status/1660597124789051394) implying that the existence of entities with a very wide power differential today is evidence that superintelligence won't lead to Doom. It is a reasonable argument. If the powerful today think it's better to trade with the powerless than to destroy them and take what they have, and trade is so beneficial in general and war so costly, why would it be different for the even more powerful? Unfortunately, I think the position is based on an incomplete understanding of what AGI changes about the nature of trade and war. Right now, for many reasons, humans are the most valuable things. Most of the value of production goes to human factors, with a smaller share going to natural resources and physical capital. So dooming the humans means destroying exactly the thing you'd want to steal. Stealing humans themselves is extremely costly, because humans are pretty hard to control. As an alternative, you can access lots of benefits from other humans through trade, so the decision not to doom the powerless makes perfect sense. Maybe the US could easily take on Mexico, and either occupy it indefinitely or genocide the whole population, but the non-human things in Mexico are not worth the trouble; occupying it to extract value from the population would be extremely hard, would destroy a lot of the value the Mexicans produce, and the US would lose an important trade partner. AGI changes this. After AGI, the value of human factors will fall to the subsistence level of the AGI. This means humans become cheap, and resources become relatively expensive. Humans will derive all their income from their property, and unlike humans, property is easy to steal. The decision between going to war or trading changes completely. Now if you win the war, you can take all the most valuable things the losers had, the very things you were trading for in the first place. The US could annex Mexico, genocide their population, meat and silicon alike, and just make more copies of their own AGIs to take their place, ending up with all the extremely valuable (in the new economy) natural resources. Given that the population in Mexico stays the same (enough that the marginal cost of running an AGI there equals the marginal product of the AGI), total trade with Mexico is not substantially lower, and the US can keep enjoying its benefits. Do note that even though the initial debate was about superintelligence and Doom, this has far greater implications. Once AGI makes humans cheap, we can expect agents to act like they are.
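The flip in the war-vs-trade calculus can be made concrete with a toy model. The payoff functions and all numbers below are my own illustrative assumptions, not figures from the post:

```python
# Toy sketch: pre-AGI, most value is human labor, which war destroys and
# conquest cannot capture; post-AGI, value sits in land, resources, and
# capital, which a victor keeps while copied AGI labor restores production.
# All numbers are illustrative assumptions.

def war_payoff(seizable_value: float, war_cost: float,
               destruction_frac: float) -> float:
    """Value captured by the victor, net of war costs and wartime destruction."""
    return seizable_value * (1 - destruction_frac) - war_cost

def trade_payoff(annual_gain: float, years: int, discount: float) -> float:
    """Present value of a stream of gains from trade."""
    return sum(annual_gain * discount**t for t in range(years))

# Pre-AGI: little seizable value, so war loses badly to trade.
print(war_payoff(seizable_value=20, war_cost=30, destruction_frac=0.5))   # -20.0
print(trade_payoff(annual_gain=5, years=30, discount=0.95))               # ~78.5

# Post-AGI: stealable property dominates, and production is restored by
# copied AGIs, so conquest can beat the trade stream.
print(war_payoff(seizable_value=200, war_cost=30, destruction_frac=0.2))  # 130.0
```

On these toy numbers trade dominates war pre-AGI and the ranking reverses post-AGI, which is the post's claim in miniature.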
a210d6fa-94dc-426a-a0b9-b8cd6339dc7d
trentmkelly/LessWrong-43k
LessWrong
The Codex of Ultimate Vibing While we wait for wisdom, OpenAI releases a research preview of a new software engineering agent called Codex, because they previously released a lightweight open-source coding agent in terminal called Codex CLI and if OpenAI uses non-confusing product names it violates the nonprofit charter. The promise, also reflected in a number of rival coding agents, is to graduate from vibe coding. Why not let the AI do all the work on its own, typically for 1-30 minutes? The answer is that it’s still early days, but already many report this is highly useful. INTRODUCING CODEX > Sam Altman: today we are introducing codex. > > > it is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature of fixing a bug. > > you can run many tasks in parallel. > > it is amazing and exciting how much software one person is going to be able to create with tools like this. “you can just do things” is one of my favorite memes; > > i didn’t think it would apply to AI itself, and its users, in such an important way so soon. > > OpenAI: Today we’re launching a research preview of Codex: a cloud-based software engineering agent that can work on many tasks in parallel. Codex can perform tasks for you such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review; each task runs in its own cloud sandbox environment, preloaded with your repository. > > Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. It was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and can iteratively run tests until it receives a passing result. > > … > > Once Codex completes a task, it commits its changes in its environment. Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs, allo
984c3e4f-5995-42cc-afdf-d069891e011e
trentmkelly/LessWrong-43k
LessWrong
Effective children's education I am trying to find out what the most cost-effective ways of (early) education are. I have a 4-year-old daughter and that gives me ~2 more years to figure this out, and I am trying to put together as much material as I can. Given the age of my daughter, I’d like to “solve” something like K-12 for now, but I guess some things may be applicable at any age. I am familiar with Bryan Caplan's main theses formulated in The Case Against Education, and with Robin Hanson's The Elephant in the Brain, arguing that education is mostly about signalling and stuff. I therefore partly understand what's wrong, and I am now trying to make my findings actionable; unsurprisingly, it seems pretty hard. I also do realize that there is very little research indicating that specific parental choices have much long-term impact. The important attributes of the ideal solution: * comply with the rationalist mindset - understanding and applying a basic CFAR-like style of reasoning, or at least knowing and applying a 5-year-old version of how beliefs work and the bugs-are-for-solving idea * follow a scientific/experimental approach (I know it's overlapping), not being afraid of finding the right explore vs. exploit balance * the educators should grasp at least the basics of how to learn and teach effectively, such as Learning How to Learn-related concepts or even as little as this post * not to waste the time of my kid (like on signalling, or studying things just to forget them after an "exam"), generally trying to reduce the stupid parts of the current education system as much as possible * make students excited about the real world (I love Joy in the merely real) * convey what it is like to be a human (like, emotions exist and you can learn about yourself from them or deal with them) * convey important concepts about the world (for example altruism, cost-effectiveness analysis, probability, or even markets or signalling) * teach students things that are actually useful on the job market * not-USA-specific
5f893f14-14eb-4818-a7bf-c7d3eb343e6f
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Explainable AI with a Purpose (Emily Sullivan) yeah great yeah thanks for having me so algorithms are involved in more and more automated decision processes right they um determine the music that we listen to right the type of news that we see on social media they can even be used in making um decisions in healthcare and maybe especially in this corona crisis when the hospitals are overwhelmed who and who may or may not get treated at the hospital i mean they're even making decisions um based off of parole so what people should be released from prison or who should stay longer so because of there's lots of issues involved in this kind of proliferation of these automated decision processes but one of them is we want to know well why this decision right why am i being denied medical care why am i being denied parole why am i seeing the kind of news that i see on social media in my news feed and this is especially important when these models are black box or in lots of cases they might be opaque even to the people who make them so the kind of claim that that i want to make here start to make is that before we can talk about how to do explainable ai and answer this like why this decision we need to know what is the purpose of these explanations and explainable ai to begin with and so this is the claim that i'm gonna argue for a little bit today so the talk outline is that first i'm going to talk about the what i call the function approach to explanatory standards and so kind of motivate the methodology that i'm using i'm going to look at explanation and philosophy of science in particular as a kind of inspiration because that's where my uh focus area my training is in and then also then look at various purposes functions of explanations for these types of automated decisions so very briefly of like these technical functions commercial functions and then certain types of social and ethical functions related to the gdpr so first what is this function approach to explanatory standards so this is a type of conceptual analysis which is a philosophical method and you can see it nowadays in epistemology with craig and michael hannon as another example so the basic idea is that we can capture the nature and value of a concept norm or practice by reflecting on its function or purposes so the question is well what does what function does x concept normal practice play in society or in human endeavors right so that's the question that we want to answer and so we do this by first starting out with some hypothesis about the role that some concept has in our life or in society then we want to determine what the concept having this role would actually be like so first we just start with a hypothesis and then we determine well what is it like having a concept having this role and then we examine the extent to which this concept constructed matches our everyday notion or might conflict with other types of intuitions that we have so here's just an example not to do with explanation but just the concept of a good person so what is the purpose of the concept of a good person and how might that tell us what what it means to be a good person so first we can just start with some hypothesis so here's a hypothesis just throwing it out there that the concept of a good person is just to be able to predict and control people's behavior right so that's why we have this concept so then we say okay well what uh what would having this concept actually look like if that was the purpose of it and so if that was really the 
purpose of the concept then we would only be using the concept of good person when prediction and control is possible right and it means that morality would be relative to the view of those who are in power right this is what this view would mean if that's what a good person means so then we can say okay well what is this concept does it match your everyday notion well really it kind of misses this point that morality has some kind of objective structure and we want to imply in cases where we don't have these types of prediction and control right so it seems like this notion of good person the purpose of it just to predict and control people's behavior isn't really the purpose of the concept right we want to get to something that actually highlights the objective nature of morality so maybe perhaps a better purpose is something like provide exemplars for people to learn from and to help motivate people to be moral this is just an example of how this how this might work so when we're talking about explanations then we can apply this method there right so identify the purpose of an explanation in various contexts so especially the context we're interested in here is the algorithmic decision context and then we want to know what criteria do explanations need to have to successfully satisfy these purposes and then we can ask whether our conclusions have any glaring mismatch of common usage and practice of the practice of giving and receiving explanations and such that maybe we adopt the starting hypothesis or we need to revise it so why take this kind of approach to looking at what explanations are is that it can show why certain concepts are important by looking at the purpose of it and the function of the concept in society it provides a clear way to delineate criteria for success so if we know what explanations are supposed to do then we can measure whether explanations actually do that thing and it can also resolve conceptual conflicts and avoid verbal disputes because we're really defining it in terms of its purpose and function and this is especially important for doing cross-disciplinary work right so when you're doing cross-disciplinary work one of the problems is that people use terms differently and so it's hard to be able to even just pick up a paper in a different discipline and know what's going on because these terms are used differently but if we take this type of function approach then we can actually look for shared purposes right instead of shared terms so maybe for instance transparency used in philosophy is used quite different than in computer science papers but maybe there's another concept that's getting at the same function there in those computer science papers that we can we can then use for cross communication so then the question is well what are the functions of explainable ai explanations or ai explanations so first as i said since my background is was first in philosophy of science i'm going to first look there what the function of explanation is to get some some inspiration so there's a lot of work that's been going on in philosophy of science on what the nature of explanations are for a long time and there are some basic ideas that philosophers agree with even there's lots that they disagree with so the one thing that they agree with is that an explanation is a set of propositions or sentences that are an answer to a why or how question right and we can explain things like singular occurrences events patterns or irregularities so there are two main functions of 
scientific explanation so the first is that explanations are to correctly describe relations in the world so it's kind of an intrinsic or ontological function but there's this other function of explanations as well which is that the explanation is supposed to enable understanding so on the one hand the explanation needs to correctly describe relations in the world but it also needs to enable understanding in the person reading the explanation say and so because of this there are some success criteria what makes for a good scientific explanation so the first thing is that since it's trying to describe relations in the world it needs to have the right kind of structure so lots of philosophers argue that explanations need to be asymmetrical right so for example we can't explain the length of the shadow or always get this backwards so um if we want to explain the length of the shadow we have to do it in terms of the height of the flagpole we can't explain the height of the flagpole in terms of the length of the shadow right there's a clear uh direction that explanations can can take the second thing that's important for a good explanation is that it needs to highlight the relevant features so for example if we want to know why this artificial plant isn't growing right we shouldn't be talking about the fact that i haven't watered it in days right so i haven't watered this plant in a year but that has no relevance to why this artificial plant isn't growing because it's not growing namely because it's artificial it has nothing to do with whether or not i watered the plant so that even though it's true i haven't watered it right it has no relevance to the explanation why this fake plant isn't growing also it needs to be truth conducive right it can't have false information in the explanation and lastly we need to enable what philosophers call cognitive control which is necessary for understanding so what understanding consists in is being able to answer what if things had been different questions so having an explanation means that there's a certain set of questions that you can you can answer about the phenomena and that gives you a type of cognitive control over the subject matter right so an explanation is supposed to be able to be such that you can answer nearby what if things had been different questions so if we go back to the main steps for this function approach so identify the purpose of explanation in various contexts right in the scientific explanation context we have describe the world enable understanding what criteria do these explanations need to have they need to be truth truth conducive they need to be relevant and we need to make sure that we can have some sort of understanding or cognitive control and then we can ask well is there any glaring mismatch or intuitions that this type of view can't handle so for right now i'm just going to say no but of course philosophers of science might make this story a little more complicated okay so what is this how does this all have bearing on explanations of ai or automated decisions so first there are various types of functions and aims that automated decisions or explanations of ai explainable ai can have so very briefly there could be just these like technical aims and i think a lot of the research on explainable ai are about these kinds of technical aims right so these are explanations really just for developers people who are working with these machine learning models so explanations could then help to debug the model to
improve the model helping developers understand how a model works in order to implement it or to use it in some other sense right and if this is the aim of an explanation for how the model works it's going to be quite different than if it's used in some other context right so if i'm not a developer i'm not going to be able to understand these explanations and i probably shouldn't have to understand them right because they're meant for for developers in these in these contexts but there's also lots of other aims that explanations can have so this is from a paper from one of my former colleagues at delft nava tintera about this various aims back in 2007 that was in research on explanations of these kinds of automated decisions so i want to highlight just three in particular so there's trust so increase users confidence in a system uh persuasiveness trying to convince users to try something or to buy it and then satisfaction so increase the usability or the enjoyment of a system or platform someone's using so these aims of explanation in some ways they're really commercial aims i mean they could be they don't have to be right trust could be important in non-commercial instances for example but they really are used in these kinds of commercial instances so for example just if you go on to amazon and you're you want to buy a slinky you get all these recommendations of what to buy next and so apparently if you're interested in slinkies that's really all you're interested in so they're if you get to why they recommending more slinkies well because people who buy some buy other ones are frequently brought together you also get an explanation why you see these particular slinkies that the one that you can wear as a bracelet seems very interesting right the you see these ones because they're sponsored right they're being paid so that's the explanation here right sponsored products that's why you're seeing uh these these products and then you also see other products right and what's the explanation for these particular slinkies well it's because um that's what customers buy after viewing the item the main item that you have in your that's that you see on your screen so if we think about these explanations of just why we're seeing these products that are made from an automated decision process and we think of the the aim of this is trust persuasiveness and satisfaction right the standard for whether or not these are good explanations really are going to be depending on that right if these explanations didn't actually increase users trust or persuasiveness right then amazon would change the explanation and when they test the explanation then they might ask questions about um well did people actually buy these products or not right or we ask people did it increase your trust in the system or not so we can ask how do these commercial functions compare with the various functions of scientific explanations that i was talking about a moment ago well the first thing is that this the type of an intrinsic function of explaining how relations work in the world is best seen in this aim of transparency that tinterev talks about but this epistemic function of understanding in this commercial space is placed on really different ends right so the goal is not understanding the phenomenon or even how the model works at all right if that's that's the phenomena or understanding uh yourself and your interests about like buying slinkies or whatever it's on understanding the interface right or the explanation itself okay 
so the explanation is then placed on on this different type of end right the function of the explanation is really just for the purpose of the platform in these types of commercial cases it's not on it's not for anything else and so that means that truth conduciveness isn't really a standard in the same way so if the function of the explanation is just for the platform to convince you to buy something right it doesn't have to be true at all it doesn't have to be true that other people actually looked at more slinkies after they bought a slinky it could be completely false as long as it got you to buy more of a product that amazon wanted you to buy or to increase trust and what's relevant is then tied to right the success rate of the platform what they're trying to trying to get and also things are aren't made explicit that users might want to know so for example in this explanation of why you see these products what other items do customers buy after viewing this item uh what's explicit what's implicit here is that amazon tracks users behavior right and they track users behavior in a specific type of way but that information is hidden here i mean it's hidden in plain sight but it's not really highlighting something that might be of interest to specific users who are using the platform okay so so what what i've done so far is make a case that depending on the function of the explanation the criteria for success are going to be quite different and what type of thing might be included in explanation might be uh quite quite different and so those were uh commercial functions of explanations then there's also this whole other dimension to explanations of automated decisions which are these like social and ethical functions of of explanation so if we move away from using explanation just in a kind of commercial sense to promote more use on a platform or to promote more buying behaviors and move to these other types of functions we get different criteria again for success so what i want to do here is just look at the gdpr so i'm no way a legal scholar so um what i say might not be legally binding at all but there are some ethical norms that that's that stand out uh when we look at the gdpr and the right to explanation so i'm not going to read all this text but there are some some concepts that really stand out so the first one is profiling and and processing so the idea is that uh anytime an automated decision makes a decision a significant decision about you you have this right to explanation and so there there are certain uh conditions under which they're especially interested in having this right to explanation so instances of profiling so any form of automated processing of personal data instances of where this is processing so any set of operations which is performed on personal data and a particular concern is analyzing or predicting aspects of certain natural person's performance at work economic situation health preferences interest interests so on and so forth he also talks about safeguard right safeguarding rights and freedoms which what they mean is the right to the access of the information collected a right to meaningful information write about the logic involved and to provide information in concise transparent intelligible way it also talks about the right to consent for users to express their point of view and to contest the decision of this automated process and it shouldn't be based on special categories like ethical origin political opinions religious or philosophical beliefs 
so all my beliefs about philosophy of science i shouldn't be discriminated against or used in an ai system so what we see here through all these concepts is this idea that explanations are providing or ought to give users a sense of control so this feedback on the system that they can contest the decision that they have to consent to it in some sense and there's also explanations are providing a kind of oversight oversight against discrimination and oversight against infringement of other types of rights and freedoms so this type of control isn't really cognitive control in the epistemic case of just providing understanding like we saw in scientific explanation but this kind of actionable control right that's something that you can do something with it so it's not just that i can understand whether or not certain things had been different but that i can actually do something with that right that there's actually some aspects about it that i can contest to or consent to or provide feedback on and we also have this idea of promoting oversight right so the explanation needs to make explicit that certain things are in the interest of people's rights and freedoms right so this is quite different from the case of scientific explanation where we're just talking about understanding the world and it's quite different from the commercial case in which they're not concerned at all about making certain things explicit um that are against people's rights and freedoms so this type of oversight may be perhaps the explanation a certain ad explanation on amazon or facebook or what have you needs to include information that people might want to know about whether or not they would be discriminated against so if they were if the model was using something like gender to have some ad being shown to you then the person has there's an ethical function of explanation there that that aspect of the explanation needs to be included in why you're seeing this ad that it was somehow based in part on on gender and that's going to conflict maybe with aims of trust in the platform right maybe i won't trust facebook anymore if i'm seeing all these ads just because of my my gender so what counts as relevant right for the explanation changes as i was saying and the kinds of what if questions that must be answered also change right so what if they didn't have that information what if facebook didn't have my gender information or didn't have other types of discriminatory information about me how would that change how the ads are being seen so the goal here in these ethical and social explanations is that it's not understanding the phenomenon yourself or your interests or understanding the world or even i mean so this broad sense of how the model works i mean the model works in various different ways what is it that's relevant to you so in the social and ethical function it seems like just taking the gdpr as a kind of inspiration it's on understanding how the algorithm fits into your larger value framework right how does the algorithm fit into my larger value framework such that i can contest it if i want to how i can say that it shouldn't be making decisions based on these criteria that it shouldn't be using this type of personal data of mine to be used in the criteria right and if that's the function of the explanation to be able to show people how the algorithmic decision process fits into one's larger value framework then the types of things we need to know about the model are going to
be quite different than the types of things that maybe developers just need to debug something or the types of things that we need to know just to try to persuade somebody how whether to buy something yeah so to sum up here what i've been saying is that depending on the purpose of an explanation the norms of success are different and there are several possible purposes or functions that explanations can have and also um the social ethical explanation functions matter just as much as technical or epistemic functions and commercial functions might actually hinder social and ethical functions of these explanations so really what we need to do is have a discussion a larger discussion of what the purpose of or function of these explanations actually should be in various contexts right should they be freedom preserving or should they just be just epistemic in the sense that like we really just need to know like the bare bones of how this works or does it need to create feelings of trust right in someone and depending on what these purpose or functions are that's really going to change the nature of the explanation and various research projects that we might engage in in figuring out how to deal with these types of black box black black black box models all right that's it thanks great super interesting thank you very much emily it was very nice so um yeah i think uh yeah we're all i think someone has questions you can yeah you can speak up you can type it or just raise your hand i see some hints but i think they're yeah they're just clapping for you okay so does someone have a question for now if not i can kick off a little bit one question i have one question for you is so i really like the framework you propose about like this different differentiation for regarding the purpose and i was most interested on when you were talking about control so first you mentioned about the idea from cognitive control and then later on when you went into the gpr say like the difference between cognitive control and actionable control so if you can expand this a little bit more because so we here at ai tech and the broader community work a lot with this concept of meaningful human control and that's not only about like tracing the responsibility after effects if something happens but also how to provide this kind of explanations and information so that one can be aware of the responsibility while using this so i think that's a very nice way so can you expand a little bit more cognitive control and actionable control please yeah so cognitive control really just is that you have some some general idea how things work like in your head so like you um so you know like the causal like if you're talking about like scientific explanation like why the window broke when some kid threw a baseball at the window you can you really have like a sense of why that happened and if things were different like if the kid didn't throw a baseball but through um just you know a snowball or something then the window wouldn't have broke and when we talk about these ai decisions you can have a sense of cognitive control about how or why the decision was made so you might even know that type of counter factual information so if they if my data was different my personal data was different then i'd get a different result for this and this reason things like this but it doesn't mean that you can do anything with that with that knowledge so the idea of actionable control is that you can actually do something with the knowledge so the the 
knowledge is such that you could sue the platform if you needed to so you got the right information that it was discriminatory it wasn't being hidden from you so now you can like file a lawsuit or something or it could be you know as if maybe it's like classically american to go to lawsuits but uh maybe on the other extreme is it's just that maybe there's a way for you to update the system for example and so it's using some information that is just incorrect and the explanation should give you some action that you can can do or so say okay this is incorrect maybe um just fix this type of information and then the decision would be better in line with with reality um in that sense so you might understand everything about it but not actually be able to have to have the means to do something so that's what actual control is trying to get at thanks thanks very much very good and so it's not only about um like the decision that are let's say the final decisions that have already been made but you could proceed think about this as a process right you can give a decision an explanation and someone could interact with this to find us more ethical or beneficial outcome yeah and then you can even talk about like maybe it can go a bit broader too in the sense that if you're having like a credit decision so whether or not you're getting a mortgage or being denied a credit card or something if all the reasons are things that are like outside of your control maybe that's a bad decision system right so if it's only talking about um things like my age and my race things like this right i can't control those things so there has to be some aspect of the decision that's in my control right and if not then maybe that's infringing on my my rights yeah nice it's pretty interesting uh we have a question from uh if jenny would ask it what's up jenny hey yes yeah thanks very much hi emily uh thanks thanks very much uh really really interesting uh uh presentation really great uh you're touching up on some really great questions i'm curious do you think um i'm curious on your thoughts whether asking the kind of questions about explanations that you presented whether it can help us avoid technical solutionism ai solutionism so what i mean is that if we determine like you say the purpose of explanation in the context and the corresponding criteria uh can it help us realize in certain contexts that it's essential that humans are interacting with humans that humans are the ones that are taking decisions and algorithms are at best in the background but maybe even not at all part of the interaction thanks yeah so i definitely think that explanations are need to play a role a role here so i i really worry about cases of ai decisions running amok without any type of human humans in the loop right i mean just looking at like medical the medical case i mean it's really important that that we preserve the doctor-patient relationship where you can kind of give and receive reasons for certain courses of treatment and if that's all being offloaded to some ai system where there's no explanations at all then you really lose that important aspect of health care so um yes so i think explanations can it's necessary to preserve the types of relationships that we want in society and it might also help to have people feel more comfortable about ai entering in some way into certain areas well what i find interesting is that as i was listening to some of the things you talked about like especially when you made the point of actually going 
back to the concepts asking a fundamental question here what's the purpose and like it makes me think in some context that like it can help us realize that in order to meaningfully uh tell a person well why a certain decision was taken say like i'm gonna take a context that i'm dealing with in my work uh hiring and the use of algorithms and that context that you know some of some of the algorithms that are out there when they make claims of you know well based on your facial expressions we're gonna make a judgment on your willingness to learn well like i think in common everyday interaction like many of us would find it really disturbing if one would start explaining the decision whether to hire you or not based on your facial expressions and claiming that that somehow says something about your motivation to learn new things so like i'm like i'm thinking can basically that uh can that be part of our ethical reflection of understanding hey maybe that's not the like maybe there's a contradiction here between the concepts that we're actually working with say job competence and the way we are assessing this uh this concept yeah i mean so there's i think there's a lot a lot there so the one thing is that whether or not these models are actually like tracking like the real difference makers out there so a real difference maker for what makes you a good employee might not be what your facial expressions are and some like stilted weird interview you have online right um so in that sense the model isn't really getting at what is important and explanations can help help with that because it can point out okay maybe if this is the reason why we're not hiring this person that's not really a good reason so maybe we should find a different reason so there's that but then also it's important to know what the function of the explanation is because if you want to explain them the decision to the person why you didn't hire them if the purpose of that explanation is just to get them to not uh you know get into an argument with you about it right then you're not you're gonna hide the fact that you use that weird technology you're gonna explain it in some other way um and but if you're the purpose of the explanation is to actually uphold the rights of that person then you are going to have to be required to give that information that you used in that type of software yeah great point thanks very much thanks yeah thanks very interesting discussion uh dave you have a question uh i do yeah so firstly thanks that was a that was a lovely talk um i really like how kind of digging into what the concept means can help us think through some of these things um and i think the first first thing i want to pick up a bit is it feels like it's getting into a kind of a pragmatics of explanation so it's not just about presenting the information it's about thinking how uh the person is going to receive that information and that that feels really rich and it feels like it goes it's as well as uh kind of what is the explanation trying to do it also gets into what kind of person are we uh trying to work with and we see some different explanation methods and so i guess i'm i'm i'm hoping that this leads into accounting for the explainee in the explanation um and then the other thing i was interested in is there's often a feeling that explanation should help people trust these systems more um and i feel like sometimes it shouldn't it should uh make them trust in less kind of in line with the avengers uh point that you know sometimes 
The explanation should make us go, "oh, this is not a good system." So I do think we should take account of the person you're explaining to. I have done work with Nava and some other people on seeing whether explanations can uphold some of these epistemic norms that we talked about, like understanding and other goals people might have. You can design explanation interfaces and user studies with that specific goal in mind, and that can help you build better explanations that fit those goals. But I totally agree with you: one of the problems I see in some of the computer science literature on explanation and trust is that it looks at whether people have feelings of trust, a psychological account of what is happening, but not at whether the system is actually trustworthy, or whether people are right to trust the system. That is the normative question: whether the system is trustworthy and whether people are right to trust it. Especially if you look at what's going on with ad explanations on Facebook, they're not trustworthy, and if they actually pointed out the things they used to deliver these ads to you, then I think a lot of people would not trust the system, which is why they're not explaining things in that way. So one thing I would hope for in future data protection legislation is regulation about the kind of information that has to be in these explanations. It has to be accurate: if the system actually used a piece of information, it would be in there, so that people could make decisions that are transparent and in line with their value system.

Great. I was struck by this when I was speaking to someone from the BBC, the British Broadcasting Corporation, who said: we did a survey and found that our users over-trust us; they put more faith in our news than we think they ought to. You don't often hear people say that, and it feels like explanation might help there a bit.

That's super interesting. Does someone else have a question for Emily?

Perhaps; I have a very practical question. I really liked the talk, and I agree very much with the more fundamental question that we have to answer, namely what we need the explanation for, before we actually start working on making the system explainable. My question is: depending on the purpose of the explanation, on the overall goal, would we face fundamentally different challenges in implementing the system that we want to be explainable? I would imagine that if we just want to understand the inner workings of a neural network, for instance, that is one technological challenge; but if you want to generate an explanation that provides control to the end user, which is a much more overarching, systemic question, then that would be a very different set of challenges. So is there any asymmetry in the way people do research in these directions, and is that linked to the technological complexities associated with the different types of explanations?

I definitely think the way you said it is right: if we just want to know more about the system, it is going to be a completely different research project than explaining to end users in a specific kind of way. That said, I think the two do go hand in hand. One of the research questions I am going to be looking at with this VENI project, and that I have also worked on in the past, is how detailed we really need to get about these models in order to satisfy these other norms of explanation. It might be that we don't actually need to know that much about how the model works at the nitty-gritty level to satisfy these norms, or it might be that we do need to know all the details at that level. If we do need to know, then we need to invest in that technical project first in order to satisfy those norms; but if we don't, the two can just run in parallel, and we don't need to wait until people make huge progress in understanding these black-box models at a nitty-gritty level.

Good, thanks. Does someone have a final question? If not, I do.

First, I'm not familiar with the function approach that you mentioned, which seems really interesting, and it connects a little with the concept of reflective equilibrium, or wide reflective equilibrium: think about what moral principles are at stake, what the moral challenges are, and what the background theories are. Do you see a specific connection between these two approaches? When you speak of a hypothesis, I can think of a moral principle; for a concept, I can think of some background theories; and asking how it matches our everyday notion is, in my understanding, a bit like checking it against our moral judgments. Does that make sense to you?

Yes, I think they can be complementary, for sure. One aspect of the function approach, the last step, is that once you have this hypothesis about the purpose, you need to ask whether it conflicts with other intuitions you have, or with the everyday notions we have, and to answer that question you probably need some type of reflective equilibrium happening. So they can be complementary in that sense; the function approach, I would say, just gives a specific structure to that process.

Okay. And if I understand correctly, is it a collective process in which all stakeholders should be involved, or is it more that the people who are going to receive the explanation are the most important?

We have been talking about this at a more meta level, not necessarily about stakeholders, but I think that in a practical case it is important that all stakeholders are involved in the decision, or in the process of deliberating about norms. It might be, though, that certain stakeholders should carry more weight in that discussion: if you are a stakeholder who really only cares about profit, you should be part of the discussion, but perhaps you should not hold as much weight as people who are being discriminated against or having their rights violated, for example.

Okay, thanks. There is a lot to think about; it was very interesting, so thank you very much, Emily, and thanks everyone for joining us. See you next week; you're also invited, of course.

Of course, always welcome to join. It was a great discussion. Thanks, everyone.

Great, thank you, thanks very much, bye-bye. Thanks, great talk. Thank you, bye.
bb0ecdda-5d44-45f2-ab44-e6e0be4ffeae
trentmkelly/LessWrong-43k
LessWrong
Summary of ML Safety Course

This summary was written as part of Refine. The ML Safety Course is created by Dan Hendrycks at the Center for AI Safety. Thanks to Linda Linsefors and Chris Scammel for helpful feedback. Epistemic status: Low effort post intended for my own reference. Not endorsed by the course creators. I have also written a review of the course here.

Risk Analysis

Risk Decomposition

A risk can be decomposed into its vulnerability, hazard exposure, and hazard (probability and severity). These can be defined as below, with an example in the context of the risk of contracting flu-related health complications.

* Hazard: a source of danger with the potential to harm, e.g. flu prevalence and severity
* Hazard exposure: extent to which elements (e.g., people, property, systems) are subjected or exposed to hazards, e.g. frequency of contact with people who are possible flu virus carriers
* Vulnerability: a factor or process that increases susceptibility to the damaging effects of hazards, e.g. old age and poor health make someone more vulnerable to illness

Hazards can be reduced by alignment, vulnerability can be reduced via robustness, and hazard exposure can be reduced by monitoring. Generally, risks can be reduced by systemic safety and increased by a lack of ability to cope.

Accident Models

The Failure Modes and Effects Analysis (FMEA) aims to identify the "root causes" of failure modes and mitigate high-risk events based on risk priority. Risks are prioritized according to their "effects", which are made up of severity, probability of occurrence, and detectability.

The Swiss Cheese Model illustrates the use of multiple layers of protection to mitigate hazards and improve ML safety. The Bow Tie Model illustrates the use of preventive barriers to prevent initiating hazardous events and protective barriers to minimize the consequences of hazardous events. All of the above models implicitly assume linear causality, which is not true in complex systems. In complex s
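As a purely illustrative sketch of the FMEA prioritization described above: the classic risk priority number multiplies severity, occurrence, and detectability scores. The 1-10 scales and the example failure modes below are assumptions for illustration, not taken from the course.

```python
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA risk priority number: higher scores get mitigated first.
    Each factor is scored on a 1-10 scale (detection: 10 = hardest to detect)."""
    return severity * occurrence * detection

# Hypothetical failure modes scored as (severity, occurrence, detection).
failure_modes = {
    "reward hacking": (9, 6, 7),
    "distribution shift": (7, 8, 4),
    "sensor outage": (5, 3, 2),
}
for name, scores in sorted(failure_modes.items(),
                           key=lambda kv: risk_priority_number(*kv[1]),
                           reverse=True):
    print(name, "-> RPN", risk_priority_number(*scores))
```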
17dfd866-07a7-40dc-90ea-6c20bbe60f31
trentmkelly/LessWrong-43k
LessWrong
Open Thread: April 2010 An Open Thread: a place for things foolishly April, and other assorted discussions. This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post. Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.
dcbe57c4-f77d-4c4f-a470-e5779963383a
trentmkelly/LessWrong-43k
LessWrong
How ForumMagnum builds communities of inquiry The website you're currently using is powered by ForumMagnum. But what really is ForumMagnum? What is it for, and why was it designed this way? In this post, I cast ForumMagnum as a medium for building communities of inquiry. I show how ForumMagnum is designed to build norms like rationality and long-form. Lastly, I suggest how the ForumMagnum developers could use the body of CoI research to guide their future product design. What is ForumMagnum for? ForumMagnum describes itself as "the codebase powering LessWrong and the Effective Altruism Forum."[1] That's the what ... but what's the why? Here's why I believe ForumMagnum exists: > ForumMagnum is a medium for building online communities of inquiry. It's a web forum that embeds norms of rationality and long-form comms. That is: ForumMagnum is not defined by its users, or its features, or its codebase! ForumMagnum is a medium designed to carry a message, and its message is a set of social norms. Let's see what those norms are, and how ForumMagnum's features are designed to build those social norms. What are Communities of Inquiry? In the 6884th most cited paper of all time, Garrison et al. set out to "investigate the features of the written language used in computer conferences [e.g., forums] that seem to promote the achievement of critical thinking."[2] They built on the concept of a Community of Inquiry (CoI), which Wikipedia nicely defines: > The community of inquiry (CoI) is a concept first introduced by early pragmatist philosophers C. S. Peirce and John Dewey, concerning the nature of knowledge formation and the process of scientific inquiry. The community of inquiry is broadly defined as any group of individuals involved in a process of empirical or conceptual inquiry into problematic situations. This concept was novel in its emphasis on the social quality and contingency of knowledge formation in the sciences ...[3] The LessWrong community, the EA community, the Alignment community, the Progress commu
4d7ea89e-3359-4e92-ac96-e7a59eb7b3c4
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US

**Summary**
-----------

* We are currently seeking a founder, or founding team, for a project focusing on field building in universities for AI policy careers in the US.
* RP commits to supporting this project through at least an initial pilot (e.g. an on-campus policy competition event planned within the first three months of the project), including guaranteeing initial seed funding and providing research and operational support. (Note this role is ***not*** within RP.)
* Our team of researchers is [particularly excited about this project](https://forum.effectivealtruism.org/posts/AJwuMw7ddcKQNFLcR/20-concrete-projects-for-reducing-existential-risk), following hundreds of hours of research identifying the most promising project ideas to reduce existential risk, and working from an initial list of hundreds of ideas.
* Depending on the results of the pilot and founder preferences, RP may remain involved in a supporting role or the project may spin off independently.
* Our aim is for this project to scale over time and eventually become a large-scale initiative with a significant impact on reducing extreme risks from AI.
* Please email [university-ai-policy@rethinkpriorities.org](mailto:university-ai-policy@rethinkpriorities.org) if you have any questions.
* Application Deadline: September 25, 2023 at the end of the day (11:59 PM) in the US/Eastern (EST) time zone.
* [Apply here](https://careers.rethinkpriorities.org/en/postings/0980aba8-466a-4282-9319-c8c4f6f39341)

**About the Project**
---------------------

In the last 5 years, solid professional tracks have emerged for technical AI safety research and, increasingly, AI governance research, with programs that provide a relatively clear roadmap of opportunities and next steps. Less effort has gone into developing a similar track for working directly on AI policy in the US. This is important because policymakers are increasingly asked to tackle the risks and challenges associated with this technology, but often lack access to the expertise needed to navigate this effectively. Bringing more qualified talent to work as congressional staffers, within a relevant executive agency, or in an influential DC think tank could mean directly supporting effective decision-making around AI regulation that promotes the public interest.

The idea for this project is to start **a program that dramatically increases the number of talented AI-risk-minded students planning to enter a US policy career by helping them understand arguments about catastrophic risk from AI, how these career tracks can be impactful, and how to navigate related career decisions successfully**. This program can also ensure a well-developed pipeline for encouraging students already interested in AI risk management careers to consider US policy roles. This program could:

* Run events such as policy competitions or retreats that find and develop talented individuals who might be especially well-suited for US policy careers
* Connect students with relevant information, mentors, and peers around AI risk management and US policy
* Help establish an early career track around AI risk management policy, especially from a catastrophic risk lens, within universities
* Do even more ambitious work that we have not considered yet!
**About the Role**
------------------

We expect the Founder(s) of this initiative to collaborate with the Rethink Priorities team and significantly shape the direction of this project. We think the first step for this project should be to run a low-cost pilot for the Founder(s) to gather more information about the target audience, the approaches that tend to work, and their fit for running the project. This pilot can also provide a proof of concept for future grant applications, which the Founder(s) will be responsible for. RP will be able to offer support in a variety of ways based on the needs of the project, including research support, operations support, providing feedback, mentorship, and providing connections to relevant funders.

### **Overview:**

* We are open to discussing how this role will be structured during the initial pilot and may consider providing grant money directly to the founding team, or having the founder(s) work as contractor(s) with RP. This will be discussed further during the recruitment process.
* **Hours:** Part-time or full-time (20-40 hours/week). We expect this role could be filled by one (1) full-time equivalent (FTE), and are open to one person working full-time on this project or two part-time co-founders. Please specify in your application whether you are open to one or both arrangements.
* **Compensation**: During the pilot program, we expect the Founder(s) to receive a pre-tax monthly stipend of $4,500-$6,500, depending on experience, pro-rata based on hours worked. The Founder(s) will set their own salary if and when the new organization spins off.
* **Location:** Remote; open to individuals in most countries, with a preference for candidates based in the United States. Please note that we expect some travel within the United States to be required for this position, though this may vary based on the program's strategy. We ask that you only apply if you are willing and able to travel and work within the United States for short periods of time. We are unable to sponsor visas for this position. This position requires regular meetings with staff in European and US time zones.

**Key Responsibilities**
------------------------

The Founder(s) will lead this project and have full responsibility for its success. Working with the Advisory Committee (defined below), they will develop and implement the vision and mission of this new project, along with its strategic direction and plan. They are responsible for the organization's overall management, as well as for building a foundational team to support the organization's program implementation and for ensuring the financial sustainability of the organization.

As a Founder, your main responsibilities will include:

* **Leadership, Governance, and Strategic direction:** Set and implement the mission and vision of the organization, along with the Advisory Committee, based on the best available evidence and transparent reasoning;
* **Management and Operations**: Lead the project's program management, financials, fundraising, people management, and daily operations.
* **Communications & PR**: Oversee the organization's public communications and effectively engage with key stakeholders, including donors, via appropriate channels.

The Advisory Committee is the provisional governing body of the organization while the project is supported by Rethink Priorities.
Committee members act as fiduciaries who ensure the project has the appropriate level of RP support, can provide input on strategic decisions, and oversee that financial resources are efficiently administered.   **Skills and Competencies** --------------------------- ### **Essential Skills and Competencies:** * Motivation to reduce extreme risks from AI * Excellent operations and communication skills * Intellectual curiosity and open-mindedness (willing to carefully consider ideas and explain reasoning) * Resourcefulness and ability to problem-solve * Creativity and initiative for developing new ideas and strategies * Strong critical and strategic thinking skills * Interest in developing a nimble organization that can change direction as needed to have the greatest impact * An intention to commit to this project beyond the pilot if the pilot is successful and the Founder continues to feel they're a good fit for the longer-term project ### **Desired Skills and Competencies:** * Experience in nonprofit management or leadership roles, or other field building experience, such as running AI-related university group programs or similar projects. * Familiarity with AI governance, risk, and regulation and connections to AI risk-related experts and organizations. * Familiarity with US policy and related careers and networks.   **What We Offer** ----------------- ### **Compensation:** * During the pilot program, we expect the Founder(s) to receive a pre-tax monthly stipend of $4,500-$6,500, depending on experience, pro-rata based on hours worked. The Founder(s) will set their own salary if and when the new organization spins off. ### **Other Benefits:** * Opportunity to shape the strategy and activities of a new project aimed at significantly reducing extreme risks from AI * RP support while the project gets off the ground, including feedback and strategic support from the Advisory Committee * Seed funding and introductions to potential funders * Access to experienced RP researchers, including researchers from the Existential Security Team (XST) and AI Governance and Strategy (AIGS) Team * Operational support from RP’s Special Projects team during the pilot, which could include support in areas such as operations, finance, and compliance, with the potential for ongoing operations support after the organization becomes independent of Rethink Priorities **Additional Information** -------------------------- * **Language**: Please submit all of your application materials in English and note that we require professional level English proficiency. * **Travel**: Please note that we expect some travel within the United States to be required for this position, though this may vary based on the program’s strategy. We ask that you only apply if you are willing and able to travel within the United States without requiring RP to sponsor a visa for you. * **Accessibility**: We’re committed to running an inclusive and accessible application process. We warmly invite you to reach out to [university-ai-policy@rethinkpriorities.org](mailto:university-ai-policy@rethinkpriorities.org) with any questions or accessibility requests such as chat box use during interviews. **Inclusivity and fairness:**  We are committed to building an inclusive, equitable, and supportive community for you to thrive and do your best work. 
Please don't hesitate to apply for a role regardless of your age, gender identity/expression, political identity, personal preferences, physical abilities, veteran status, neurodiversity or any other background.

**About Rethink Priorities**
----------------------------

Please note that this role is ***not*** within Rethink Priorities and this initiative will remain a stand-alone project, separate from RP's other activities. However, we would like you to know more about the organization supporting this project. Founded in 2018, [Rethink Priorities](http://rethinkpriorities.org/) is a nonprofit organization that addresses global priorities (important and neglected issues) by researching solutions and strategies, mobilizing resources, and empowering our team and others. RP's mission is to generate the most significant possible impact for others in the present and the long-term future. Our cause areas include animal welfare, global health and development, climate change, artificial intelligence, and other work to safeguard a flourishing long-term future. RP also aims to understand and support the professional communities working on these issues. Each researcher tends to focus on one particular cause area.

**Rethink Priorities works as all of the following:**

1. A consultancy doing commissioned work in response to demands from organizations doing high-impact work.
2. A research institute driven by research agendas we set according to our own priorities.
3. A think tank aiming to inform public policy to improve the world.
4. An accelerator, incubator, and base for priority projects.

**Some of RP's recent accomplishments include:**

* Helping major foundations to answer their questions on climate change solutions, weather forecasting in lower- and middle-income countries, increasing access to medicine, and the effectiveness of [prizes](https://forum.effectivealtruism.org/posts/xanSjg6Hq2PaGEkZP/how-effective-are-prizes-at-spurring-innovation) and other interventions.
* Comparing the capacity of [different animal species](https://forum.effectivealtruism.org/s/y5n47MfgrKvTLE3pw) to experience pleasure and pain to help philanthropists decide how to allocate funding.
* Investigating various [animal welfare](https://rethinkpriorities.org/animal-welfare) [interventions](https://rethinkpriorities.org/publications/effectiveness-of-a-theory-informed-documentary-to-reduce-consumption-of-meat-and-animal-products), as well as bringing to light the neglected areas of [invertebrate](https://forum.effectivealtruism.org/posts/EDCwbDEhwRGZjqY6S/invertebrate-welfare-cause-profile) and [insect](https://forum.effectivealtruism.org/posts/fZF9ffZD2kkpDy7jB/research-summary-brain-cell-counts-in-black-soldier-flies) welfare.
* Publishing pieces on [nanotechnology](https://forum.effectivealtruism.org/posts/AuhkDHEuLNxqx9rgZ/a-new-database-of-nanotechnology-strategy-resources) and [ways to use forecasting to improve the long-term future](https://forum.effectivealtruism.org/posts/E5vp2LCEfkrrLWozJ/potentially-great-ways-forecasting-can-improve-the-longterm), as well as [supporting](https://forum.effectivealtruism.org/posts/Na6pkfpZrfyKBhEcp/interested-in-ea-longtermist-research-careers-here-are-my) those interested in these types of topics.
* Launching a [Special Projects Team](https://forum.effectivealtruism.org/posts/AFgvA9imsT6bww8E3/announcing-the-rethink-priorities-special-projects-program) to incubate promising new initiatives, such as [Epoch](https://epochai.org/) (a new AI research organization) and [Condor Camp](https://condor.camp/) (longtermism movement-building in Brazil and Latin America). * Conducting surveys to better understand the [Effective Altruism community](https://rethinkpriorities.org/ea-movement-research)   We welcome you to review our database of published work [here](https://rethinkpriorities.org/research).  We’re supported by [Open Philanthropy](https://www.openphilanthropy.org/), the [Survival and Flourishing Fund](https://survivalandflourishing.fund/), and additional institutional and individual donors.
d526a7ae-13ff-4e2f-a9e2-ff0be4d9482d
trentmkelly/LessWrong-43k
LessWrong
Meetup : Austin, TX NEW LOCATION! Discussion article for the meetup : Austin, TX NEW LOCATION! WHEN: 19 November 2011 01:30:00PM (-0600) WHERE: 2100 Speedway, Austin, Texas The Austin LW meetup has moved (possibly temporarily, possibly permanently. Stay tuned)! We'll be meeting at 1:30, as normal, but in CBA 4.338. It's a seminar room with a projector, where we will be watching (and trying to make sense of) Primer. We hope to see you there! (Parking: You can either park wherever you parked for Caffe Medici and walk ~three blocks to the east, or you can park in the Brazos Street garage (at Brazos and MLK) and walk west then north to CBA.) Discussion article for the meetup : Austin, TX NEW LOCATION!
05d290d9-84b5-4d4f-a6c2-b6939ee8778b
StampyAI/alignment-research-dataset/arxiv
Arxiv
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations

## 1 Introduction

High-dimensional real-world datasets are often full of ambiguities. When we train models to classify them, it is frequently possible to achieve high accuracy using models with qualitatively different decision boundaries. To narrow down our choices and encourage robustness, we usually employ regularization techniques (e.g. encouraging sparsity or small parameter values), and in many cases structure our models to ensure domain-specific invariances (e.g. using CNNs for models we would like to be invariant to spatial transformations). However, these solutions do not address situations in which our training dataset contains subtle confounds or differs qualitatively from our test dataset. In these cases, our model may fail to generalize no matter how well it is tuned.

Such generalization gaps are of particular concern for uninterpretable models such as neural networks, especially in sensitive domains. For example, Caruana et al. (2015) describe a model intended to prioritize care for patients with pneumonia. The model was trained to predict hospital readmission risk using a dataset of attributes of patients hospitalized at least once for pneumonia. Counterintuitively, the model learned that the presence of asthma was a negative predictor of readmission probability, when in reality pneumonia patients with asthma are at a greater medical risk, and thus the model would have presented a grave safety risk if used in production. This problem occurred because the outcomes in the dataset reflected not just the severity of patients' diseases but the quality of care they initially received, which was higher for patients with asthma.

This case and others like it have motivated recent work in interpretable machine learning, where algorithms provide explanations for domain experts to inspect for correctness before acting on model predictions. However, there has been limited work in optimizing models to find not just the right prediction but also the right explanation. Toward this end, this work makes the following contributions:

* We confirm empirically on several datasets that input gradient explanations match state-of-the-art sample-based explanations (e.g. LIME (Ribeiro, 2016)).
* Given annotations about incorrect explanations for particular inputs, we efficiently optimize the classifier to learn alternate explanations (to be right for the right reasons).
* When annotations are not available, we sequentially discover qualitatively different classifiers for a domain expert to inspect for validity.

### 1.1 Related Work

We define several important terms in interpretable machine learning. All classifiers have implicit decision rules for converting an input into a decision, though these rules may be opaque. A model is interpretable if it provides explanations for its predictions in a form humans can understand; an explanation provides reliable information about the model's implicit decision rules for a given prediction. In contrast, we say a machine learning model is accurate if most of its predictions are correct, but only right for the right reasons if the implicit rules it has learned generalize well and conform to domain experts' knowledge about the problem. Explanations can take many forms (Keil, 2006) and evaluating the quality of explanations or the interpretability of a model is difficult (Lipton, 2016).
However, within the machine learning community there has recently been convergence (Lundberg and Lee, 2016) around local counterfactual explanations, where we show how perturbing an input $x$ in various ways will affect the model's prediction $\hat{y}$. This approach to explanations can be domain- and model-specific (e.g. "annotator rationales" used to explain text classifications in Li et al. (2016); Lei et al. (2016); Zhang et al. (2016)). Alternatively, explanations can be model-agnostic and relatively domain-general, as exemplified by LIME (Local Interpretable Model-agnostic Explanations; Ribeiro et al., 2016; Singh et al., 2016), which trains and presents local sparse models of how the model's prediction changes when an input is perturbed.

The per-example perturbing and fitting process used in models such as LIME can be computationally prohibitive, especially if we seek to explain an entire dataset during each training iteration. If the underlying model is differentiable, then one alternative is to use input gradients as local explanations (Selvaraju et al., 2016; Simonyan et al., 2013; Li et al., 2015; Hechtlinger, 2016). The idea is simple: the gradients of the model's output probabilities with respect to its inputs literally describe the model's decision boundary (Figure 1). This is similar in spirit to the local linear model from the original formulation of LIME while being much faster to compute. Input-gradient explanations are not perfect for all use cases: for points far from the decision boundary, they can be uninformatively small in magnitude and do not always capture the idea of salience (see discussion and alternatives proposed in Shrikumar et al. (2016); Bach et al. (2015); Montavon et al. (2017); Sundararajan et al. (2016); Zeiler and Fergus (2014)). However, they are exactly what is required for constraining the decision boundary. In the past, Drucker and Le Cun (1992) showed that applying penalties to input gradient magnitudes can improve generalization; to our knowledge, our application of input gradients to constrain explanations and find alternate explanations is novel.

Figure 1: Input gradients lie normal to the model's decision boundary. Examples are for simple, 2D, two- and three-class datasets, with input gradients taken with respect to a two-hidden-layer multilayer perceptron with ReLU activations. Input gradients of probabilities are sharpest near decision boundaries, while input gradients of log probabilities are often consistent within decision regions.
More broadly, none of the works above on interpretable machine learning attempt to optimize explanations for correctness. For SVMs and specific text classification architectures, there exists work on incorporating human input into decision boundaries in the form of annotator rationales (Zaidan et al., 2007; Donahue and Grauman, 2011; Zhang et al., 2016). Unlike our approach, these works are either tailored to specific domains or do not fully close the loop between generating explanations and constraining them.

### 1.2 Background: Input gradient explanations

Consider a differentiable model $f$ parametrized by $\theta$ with inputs $X \in \mathbb{R}^{N \times D}$ and probability vector outputs $f(X \mid \theta) = \hat{y} \in \mathbb{R}^{N \times K}$ corresponding to one-hot labels $y \in \mathbb{R}^{N \times K}$. Its input gradient is given by

$$f_X(X_n \mid \theta) = \nabla_X \hat{y}_n,$$

which is a vector normal to the model's decision boundary at $X_n$ and thus serves as a first-order description of the model's behavior near $X_n$. The gradient has the same shape as each vector $X_n$; large-magnitude values of the input gradient indicate elements of $X_n$ that would affect $\hat{y}$ if changed. We can visualize explanations by highlighting portions of $X_n$ in locations with high input gradient magnitudes.

## 2 The Right for the Right Reasons Approach

We wish to develop a method to train models that are right for the right reasons. If explanations faithfully describe a model's underlying behavior, then constraining its explanations to match domain knowledge should cause its underlying behavior to more closely match that knowledge too. We first describe how input gradient-based explanations lend themselves to efficient optimization for correct explanations in the presence of domain knowledge, and then describe how they can be used to efficiently search for qualitatively different decision boundaries when such knowledge is not available.

### 2.1 Constraining explanations in the loss function

When constraining input gradient explanations, there are two basic options: we can either constrain them to be large in relevant areas or small in irrelevant areas. However, because input gradients for relevant inputs in many models should be small far from the decision boundary, and because we do not know in advance how large they should be, we opt to shrink irrelevant gradients instead. Formally, we define an annotation matrix $A \in \{0,1\}^{N \times D}$, whose binary entries indicate whether dimension $d$ should be irrelevant for predicting observation $n$. We would like $\nabla_X \hat{y}$ to be near 0 at these locations. To that end, we optimize a loss function $L(\theta, X, y, A)$ of the form

$$L(\theta, X, y, A) = \underbrace{\sum_{n=1}^{N} \sum_{k=1}^{K} -y_{nk} \log(\hat{y}_{nk})}_{\text{right answers}} + \underbrace{\lambda_1 \sum_{n=1}^{N} \sum_{d=1}^{D} \left( A_{nd} \sum_{k=1}^{K} \frac{\partial}{\partial x_d} \log(\hat{y}_{nk}) \right)^{2}}_{\text{right reasons}} + \underbrace{\lambda_2 \sum_{i} \theta_i^{2}}_{\text{regularization}},$$

which contains familiar cross-entropy and $\theta$ regularization terms along with a new regularization term that discourages the input gradient from being large in regions marked by $A$.
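As a rough, hedged illustration of this loss, the sketch below uses JAX for the nested differentiation. Here `predict_log_probs` is an assumed stand-in for a differentiable network returning log class probabilities, and the default penalty weights echo values the paper mentions elsewhere; this is not the authors' released implementation.

```python
import jax
import jax.numpy as jnp

def rrr_loss(params, X, y_onehot, A, predict_log_probs,
             lam1=1000.0, lam2=1e-4):
    """Cross entropy + 'right reasons' input-gradient penalty + L2."""
    log_probs = predict_log_probs(params, X)            # shape (N, K)
    right_answers = -jnp.sum(y_onehot * log_probs)      # cross entropy

    # Gradient of sum_k log y_hat_k with respect to the inputs, shape (N, D).
    input_grads = jax.grad(
        lambda X_: jnp.sum(predict_log_probs(params, X_)))(X)

    # Penalize squared gradient mass where A marks a dimension as irrelevant.
    right_reasons = lam1 * jnp.sum((A * input_grads) ** 2)

    # Ordinary L2 regularization over all parameters.
    l2 = lam2 * sum(jnp.sum(w ** 2)
                    for w in jax.tree_util.tree_leaves(params))
    return right_answers + right_reasons + l2
```

Because the penalty is itself a differentiable function of the parameters, this loss can be handed directly to any gradient-based optimizer.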
Applying this regularization to the log probability rather than the probability performed best in practice, and summing across classes led to slightly more stable results than using the predicted class log probability $\max_k \log(\hat{y}_k)$, perhaps due to discontinuities near the decision boundary (though both methods were comparable). We did not explore regularizing input gradients of specific class probabilities, though this would be a natural extension. Because this loss function is differentiable with respect to $\theta$, we can easily optimize it with gradient-based optimization methods. We do not need annotations (nonzero entries) for every input in $X$, and in the case $A = 0_{N \times D}$, the explanation term has no effect on the loss. At the other extreme, in the case where $A$ is a matrix of 1s (and if we apply an L1 rather than L2 penalty), it encourages our model to have small or sparse gradients with respect to its inputs, which can improve generalization on its own (Drucker and Le Cun, 1992). Between those extremes, it biases our model against particular implicit rules.

This penalization approach enjoys several desirable properties. Alternatives that specify a single $A_d$ for all examples presuppose a coherent notion of global feature importance or irrelevance, but for nonlinear decision boundaries, many features are only relevant in the context of a specific example. Alternatives that simulate perturbations to entries known to be irrelevant (or to determine irrelevance as in Ribeiro et al. (2016)) require defining domain-specific perturbation logic; our approach does not. Alternatives that apply hard constraints or completely remove the elements identified by $A_{nd}$ miss the fact that the entries in the annotation $A$ may be imprecise even if they are human-provided. Thus, we opt to preserve potentially misleading dimensions but provide the model with soft penalties against their use.

### 2.2 Find-another-explanation: discovering all possible rules without annotations

Although we can obtain the annotations $A$ via experts as in Zaidan et al. (2007), we may not always have this extra information. For such cases, we propose an approach that iteratively adapts $A$ to discover multiple accurate but diverse models. In particular, we can generate a "spectrum" of models with different decision boundaries by successively training models, explaining $X$, then retraining the next model to be different from previous iterations:

$$\begin{aligned} A_0 &= 0, & \theta_0 &= \arg\min_\theta L(\theta, X, y, A_0) \\ A_1 &= M_c[f_X \mid \theta_0], & \theta_1 &= \arg\min_\theta L(\theta, X, y, A_1) \\ A_2 &= M_c[f_X \mid \theta_1] \cup A_1, & \theta_2 &= \arg\min_\theta L(\theta, X, y, A_2) \end{aligned}$$

where the function $M_c$ returns a binary mask for each row of its argument indicating whether the magnitude of each gradient element is at least $c$ times that row's maximum (and we abbreviate $f_X(X \mid \theta_i)$ to $f_X \mid \theta_i$). In other words, we regularize input gradients where they were largest in magnitude previously. Finally, we note that the choice of $\lambda_1$ must be high enough for us to consider not using the previous rules; in practice this parameter can be dynamically tuned until the explanations diverge from those previously observed. Finding multiple explanations can help us debug our dataset and quantify its inherent ambiguity by seeing how many iterations it takes for accuracy to fall (a code sketch of this loop follows below).

## 3 Empirical Evaluation

We demonstrate explanation generation, explanation constraints, and the find-another-explanation method on a toy color dataset and three real-world datasets.
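First, as promised above, a minimal sketch of the find-another-explanation loop from Section 2.2; `train` and `input_grad_mags` are assumed helper functions (e.g. wrapping the loss sketch shown earlier and the network's input-gradient magnitudes), not functions from the paper's released code.

```python
import jax.numpy as jnp

def mask_cutoff(grad_mags, c=0.67):
    """M_c: 1 where an element's magnitude is >= c times its row maximum."""
    row_max = jnp.max(grad_mags, axis=1, keepdims=True)
    return (grad_mags >= c * row_max).astype(jnp.float32)

def find_another_explanation(train, input_grad_mags, X, n_iters=3, c=0.67):
    A = jnp.zeros_like(X)              # A_0 = 0: first model is unconstrained
    models = []
    for _ in range(n_iters):
        params = train(A)              # theta_i = argmin_theta L(theta, X, y, A_i)
        models.append(params)
        # A_{i+1} = M_c[f_X | theta_i] union A_i, as a max over binary masks.
        A = jnp.maximum(A, mask_cutoff(input_grad_mags(params, X), c))
    return models
```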
In all cases, we used a multilayer perceptron with two hidden layers of size 50 and 30, ReLU nonlinearities with a softmax output, and a $\lambda_2 = 0.0001$ penalty on $\|\theta\|^2$. We trained the network using autograd (Maclaurin et al., 2017) and Adam (Kingma and Ba, 2014). We often used an explanation L2 penalty of $\lambda_1 = 1000$, though in our formulation of the loss function the relative contribution of this value depends on the dataset size. For the cutoff described in Section 2.2 and used for display, we often chose $c = 0.67$, which tended to preserve 2-5% of gradient components (the average number of qualifying elements tended to fall exponentially with $c$). Code for all experiments is available at <https://github.com/dtak/rrr>.

### 3.1 Toy Color Dataset

We created a toy dataset of $5 \times 5 \times 3$ RGB images with four possible pixel values. Images fell into two classes, and there existed two independent decision rules a model could implicitly learn: whether all four corner pixels were the same color, and whether the top-middle three pixels were all different colors. Images in class 1 satisfied both conditions and images in class 2 satisfied neither. Because pixels outside the corners and top row are irrelevant, we expect any explanation of an accurate model to only refer to pixels in the corners and top row.

Figure 2: Gradients-based vs. LIME (tabular) explanations of nine perceptron predictions. For gradients, we highlight pixels identified by $M_{0.67}[f_X]$ (the top 33% largest-magnitude input gradients of the probability of the predicted class), and for LIME, we select the top 6 features (up to 3 can reside in the same RGB pixel). Model accuracy was above 0.99 on the test set.

In Figure 2, we see both LIME and input gradients identify the same relevant pixels, which suggests first that both methods are effective at explaining model predictions, and second that the model has learned the corner rather than the top-middle rule, which it did consistently across random restarts. However, if we train our model with a nonzero $A$ (specifically, setting $A_{nd} = 1$ for corners $d$ across examples $n$), we were able to cause it to use the other rule. Figure 3 shows how the model transitions between rules as we vary $\lambda_1$ and the number of examples penalized by $A$. This result demonstrates that the model can be made to learn multiple rules despite only one being commonly reached via standard gradient-based optimization methods. However, it depends on knowing a good setting of $A$, which in this case would still require annotating on the order of $10^3$ examples, or 5% of our dataset.
Figure 3: Rule transitions as we increase the L2 explanation penalty $\lambda_1$ and the number of examples with nonzero $A_i$. Pairs of points represent the fraction of large-magnitude gradient elements across the first 1000 test examples (0.67 cutoff) located in the corners and top-middle row, which almost always add to 1 (indicating the model is only using the corners or top-middle elements, even during transitions). Also note there is a significant regime in which the model learns a hybrid of both rules (which is simple to imagine in terms of case-by-case boolean statements).

Finally, Figure 4 shows we can use the find-another-explanation technique from Sec. 2.2 to discover the other rule without any human-provided annotations. Because only two rules lead to high accuracy on the test set, the model performs no better than random guessing when prevented from using either one. Lastly, though not directly relevant to the discussion on interpretability and explanation, we present the effect of explanations on the amount of data required for training in Appendix A.

Figure 4: Rule discovery using the find-another-explanation method with a 0.67 cutoff and $\lambda_1 = 10^3$ for $\theta_1$ and $\lambda_1 = 10^6$ for $\theta_2$. Note how the first two iterations produce explanations corresponding to the two rules in the dataset while the third produces very noisy explanations (with low train and test accuracy).

### 3.2 Real-world Datasets

To demonstrate real-world, cross-domain applicability, we test our approach on variants of three familiar machine learning text, image, and tabular datasets:

* Iris-Cancer: We concatenated all examples in classes 1 and 2 from the Iris dataset with the first 50 examples from each class in the Breast Cancer Wisconsin dataset (Lichman, 2013) to create a composite dataset $X \in \mathbb{R}^{100 \times 34}$, $y \in \{0,1\}^{100}$. Despite the dataset's small size, our $A = 0$ classifier still obtains an average accuracy of 91% across 50 random 2/3-1/3 training-test splits. However, when we modify our test set to remove the 4 Iris components, test accuracy falls to an average of 81%, suggesting the model learns to depend on Iris features. We verify that our explanation method detects this dependency and that we can choose $A$ or use find-another-explanation to avoid it.
* 20 Newsgroups: As in Ribeiro et al. (2016), we test input gradients on the alt.atheism vs. soc.religion.christian subset of the 20 Newsgroups dataset (Lichman, 2013). We used the same two-hidden-layer network architecture with a TF-IDF vectorizer with 5000 components, which gave us a 94% accurate model for $A = 0$.
* Decoy MNIST: We slightly modify the original MNIST dataset (LeCun et al., 2010) so that in training, images $x$ have $4 \times 4$ gray swatches in randomly chosen corners whose shades correspond exactly to their digits $y$ (in particular, $255 - 25y$). However, in test, swatch colors are randomized. A baseline two-hidden-layer perceptron trained on the normal dataset obtains 98.5% train accuracy and 95.8% test accuracy. On the decoy dataset, the same model obtains a higher 99.6% train accuracy but a much lower 55.2% test accuracy, indicating it is severely though not completely hampered by the decoy rule. We verify that both input gradient and LIME explanations allow users to detect this issue and that input gradient penalties as well as find-another-explanation allow us to overcome it.

#### RRR is consistent with sample-based methods such as LIME, and faster.

Figures 5 and 6 show similarity between LIME and input gradients on text and image datasets. On 20 Newsgroups (Figure 5), input gradients are less sparse but identify all of the same words in the document with similar weights. Input gradients also assign weights to words outside the document that, if added, would increase or decrease the probability of the predicted class, giving them both local and global explanatory power.

Figure 5: Words identified by LIME vs. gradients on the atheism vs. Christianity subset of the 20 Newsgroups dataset for two examples. More examples are available at <https://github.com/dtak/rrr>. Words are green if they support the predicted label (bolded) and red if they oppose it, with opacity equal to the squared ratio of the word's weight to the largest-magnitude weight. LIME generates sparser explanations but the weights and signs of terms identified by both methods match closely. Note that both methods reveal certain aspects of the model that are intuitive ("church" is associated with Christianity, "atheists" with atheism), other aspects that are unintuitive ("10" is associated with Christianity, "David" with atheism), and some that are debatable ("friends" is associated with Christianity, "freedom" with atheism).

Figure 6: Input gradient explanations for Decoy MNIST vs. LIME, using the LIME image library (Ribeiro, 2016) released in March 2017. In this example, the model incorrectly predicts 3 rather than 7 because of the decoy swatch.
On Decoy MNIST (Figure 6, using the image version of LIME released in March 2017), both LIME and input gradients allow users to see that the model predicts 3 rather than 7 due to the color swatch in the corner. Because of their fine-grained resolution, input gradients sometimes better capture counterfactual behavior, where extending or adding lines outside of the digit to either reinforce it or transform it into another digit would change the predicted probability. LIME, on the other hand, better captures the fact that the main portion of the digit is salient (because its super-pixel perturbations add and remove larger chunks of the digit).

Figure 7: Iris-Cancer features identified by input gradients vs. LIME, with Iris features highlighted in red. Input gradient explanations are more faithful to the model. Note that most but not all feature weights change sign when switching between predicting the probability of $\hat{y} = 0$ and $\hat{y} = 1$, indicating that the decision boundary is somewhat but not completely linear. Also note that the magnitudes of input gradients are different across examples, which provides information about examples' proximity to the decision boundary.

On Iris-Cancer (Figure 7), input gradients actually outperform LIME. We know from accuracy results that the Iris features are important to the model's prediction, but LIME only identifies a single high-magnitude feature (from the Breast Cancer dataset). This example, which is tabular and contains continuously valued rather than categorical features, may represent a pathological case for LIME, which operates best when it can selectively mask a small number of meaningful chunks of its inputs to generate perturbed samples. For truly continuous inputs, it should not be surprising that explanations based on gradients perform best.

There are a few other advantages input gradients have over sample-based perturbation methods. On 20 Newsgroups, we noticed that for very long documents, explanations generated by the sample-based method LIME are often overly sparse (see Appendix B), and there are many words identified as significant by input gradients that LIME ignores. This may be because the number of features LIME selects must be passed in as a parameter beforehand, and it may also be because LIME only samples a fixed number of times. For sufficiently long documents, it is unlikely that sample-based approaches will mask every word even once, meaning that the output becomes increasingly nondeterministic, an undesirable quality for explanations.
To resolve this issue, one could increase the number of samples, but that would increase the computational cost, since the model must be evaluated at least once per sample to fit the local models. Input gradients, on the other hand, only require on the order of one model evaluation total to generate an explanation of similar quality (generating gradients is similar in complexity to predicting probabilities), and furthermore, this complexity is based on the vector length, not the document length. This issue (underscored by Table 1) highlights some inherent scalability advantages input gradients enjoy over sample-based perturbation methods.

| | LIME | Gradients | Dimension of $x$ |
| --- | --- | --- | --- |
| Iris-Cancer | 0.03s | 0.000019s | 34 |
| Toy Colors | 1.03s | 0.000013s | 75 |
| Decoy MNIST | 1.54s | 0.000045s | 784 |
| 20 Newsgroups | 2.59s | 0.00052s | 5000 |

Table 1: Gradient vs. LIME runtimes per explanation. Note that each method uses a different version of LIME; Iris-Cancer and Toy Colors use LIME tabular with continuous and quartile-discrete perturbation methods, respectively, Decoy MNIST uses LIME image, and 20 Newsgroups uses LIME text. Code was executed on a laptop and input gradient calculations were not optimized for performance, so runtimes are only meant to provide a sense of scale.

#### When given human annotations, RRR finds a solution that is consistent with domain knowledge.

Another key advantage of using an explanation method more closely related to our model is that we can then incorporate explanations into our training process; this is most useful when the model faces ambiguities in how to classify inputs. We deliberately constructed the Decoy MNIST and Iris-Cancer datasets to have this kind of ambiguity, where a rule that works in training will not generalize to test. When we train our network on these confounded datasets, their test accuracy is better than random guessing, in part because the decoy rules are not simple and the primary rules not complex, but their performance is still significantly worse than on a baseline test set with no decoy rules. By penalizing explanations we know to be incorrect using the loss function defined in Section 2.1, we are able to recover that baseline test accuracy, which we demonstrate in Figures 8 and 9.
Figure 8: Overcoming confounds using explanation constraints on Iris-Cancer. Using the same multilayer-perceptron architecture, we fit 50 models on random 2/3-1/3 splits of our $100 \times 34$ dataset. By default ($A = 0$), input gradients tend to be large in Iris components, which corresponds to lower accuracy when Iris is removed from the test set. Models trained to have low input gradients in Iris components (full $A$) have almost exactly the same accuracy on the test set with and without Iris features.

Figure 9: Training with explanation constraints on Decoy MNIST. Accuracy is very low when swatch colors are randomized before testing ($A = 0$) except when the model is trained with an $A$ that penalizes swatches (full $A$), in which case test accuracy matches the same architecture's performance on the normal MNIST dataset (baseline). Note that in this case, because the swatch location is different across images, annotations $A_i$ are example-specific.

#### When human annotations are not available, our RRR find-another-explanation routine efficiently finds multiple qualitatively different classifiers.

Although it is not necessary to fully specify $A$ for every example to obtain benefits from explanation regularization (most rows can be 0, meaning users can gradually adopt input gradient penalties into existing models without requiring too much upfront investment), annotations are still more expensive to generate and may not always be available. Additionally, we may be interested in exploring the structure of our model and dataset in a less supervised fashion. On real-world datasets, which are usually overdetermined, we can use find-another-explanation to discover models in shallower local minima that we would otherwise never explore, which can give us greater insight into our data and sometimes spontaneously improve model generalization if we suspect our dataset contains confounds.

Figure 10 shows find-another-explanation results for 20 Newsgroups. Note that although these results are all specific to the same example, the words themselves are mapped from the highest overall weights in the document vector, so they need not be contained in the document, and are actually relatively consistent across all examples (though they contain some example-specific variations). Unlike in the toy color dataset, there are many ways of distinguishing between atheism and Christianity newsgroup emails, so training and test accuracy do not fall off immediately or uniformly as more diverse explanations are found. In Iris-Cancer (Figure 11),
Iris features are initially highly weighted in the input gradients, but after one iteration, all but one of the Iris gradients is near 0, and after two iterations, they are all near 0. Interestingly, training and test accuracy do not drop significantly after 4 iterations, even though explanations change significantly. This may indicate that there are many ways to solve this binary classification problem in 34 dimensions. On Decoy MNIST, because the initial model uses the confounding corner swatches so heavily, subsequent models, trained to be different, use other features present in the images and obtain much higher test accuracy. For low $\lambda_1$, test and train accuracy degrade only slowly, perhaps because there is inherent redundancy present in images that the model can leverage. For high $\lambda_1$, test and train accuracy are both more chaotic, perhaps because the model is forbidden from using small subsets of necessary features that previous iterations discover (which in the low $\lambda_1$ case could still be used because the penalty is small).

Figure 10: Find-another-explanation on 20 Newsgroups with $\lambda_1 = 250$. Words that bias the model towards predicting soc.religion.christian are in green and words that bias the model towards alt.atheism are in red.

Figure 11: Find-another-explanation on Iris-Cancer. $\lambda_1 = 1000$ and the entire process was re-run 50 times, with standard deviations plotted as error bars.

Figure 12: Find-another-explanation on Decoy MNIST for different values of $\lambda_1$, with explanations for the first test example shown above.

## 4 Discussion and Conclusions

We have demonstrated that training models with input gradient penalties makes it possible to learn generalizable decision logic even when our dataset contains inherent ambiguities. Input gradients are consistent with sample-based methods such as LIME but faster to compute and sometimes more faithful to the model, especially when our inputs are continuous. Our find-another-explanation method can present a range of qualitatively different classifiers when such detailed annotations are not available, which may be useful in practice if we suspect each model is only right for the right reasons in certain regions. Our consistent results on several diverse datasets show that input gradients merit further investigation as scalable tools for optimizable explanations; there exist many options for further advancements such as weighted annotations $A$, different penalty norms, and more general specifications of whether features should be positively or negatively predictive of specific classes for specific inputs.
On Decoy MNIST, we did see that input gradients, which operate at a pixel-by-pixel level of resolution, do not capture the idea of salience as well as superpixel-based LIME or other recently developed methods for explaining image predictions (Zeiler and Fergus, 2014; Sundararajan et al., 2016; Montavon et al., 2017; Bach et al., 2015; Shrikumar et al., 2016). Input gradients are first-order approximations to the decision boundary, and thus can be thought of as first-order explanations. The efficiency and differentiability of these methods suggest they will remain useful even on complex image datasets, but future work should focus on generating similarly efficient explanations in terms of perturbations at higher levels of abstraction, which humans often require (Keil, 2006). Work such as Zeiler and Fergus (2014) may provide a path forward for this kind of approach.

Finally, our “right for the right reasons” approach may be of use in solving related problems, e.g. in maintaining robustness despite the presence of adversarial examples (Papernot et al., 2016), or in seeing whether explanations and explanation constraints can further the goals of fairness, accountability, and transparency in machine learning (either by detecting indirect influence (Adler et al., 2016) or by constraining models to avoid it (Dwork et al., 2012; Zafar et al., 2016)). Another promising direction is to include humans in the loop to interactively guide models towards correct explanations despite ambiguous data (or to help define meaningful perturbations). Overall, we feel that developing better methods of ensuring models are right for the right reasons can help them overcome the inherent obstacles to generalization posed by ambiguities in real-world datasets.

#### Acknowledgements

FDV acknowledges support from DARPA W911NF-16-1-0561, and MCH acknowledges support from Oracle Labs. All authors thank Arjumand Masood, Sam Gershman, Paul Raccuglia, Mali Akmanalp, and the Harvard DTaK group for many helpful discussions and insights.
4486347c-3b97-4cee-ad08-0e09294c688f
trentmkelly/LessWrong-43k
LessWrong
Aggregative principles approximate utilitarian principles

1. Introduction

Utilitarianism is the view that a social planner should choose options which maximise the social utility of the resulting social outcome. The central object in utilitarianism is the social utility function u:S→R, which assigns a real value u(s)∈R to each social outcome s∈S. This function typically involves variables such as the well-being, preferences, and mental states of individuals, distributional factors like inequality, and other relevant factors such as justice, social cohesion, and freedoms. Utilitarianism is a broad class of social choice principles, one corresponding to each function u:S→R.

In my previous article, I introduced aggregative principles, which state that a social planner should make decisions as if they will face the aggregated personal outcomes of every individual in the population. The central object in aggregativism is the function ζ:S→P, represented with the Greek letter zeta, which assigns a personal outcome ζ(s)∈P to each social outcome s∈S. This function typically aggregates the collection of personal outcomes facing the entire population into a single personal outcome. Aggregativism is a broad class of social choice principles, one corresponding to each function ζ:S→P.

We examined three well-known aggregative principles:

1. Live Every Life Once (LELO), where ζ(s) is the concatenation of every individual's life.
2. Harsanyi's Lottery (HL), where ζ(s) is a uniform lottery over every individual's life.
3. Rawls' Original Position (ROI), where ζ(s) is Knightian uncertainty over every individual's life.

I'm interested in aggregative principles because they avoid many theoretical pitfalls of utilitarian principles. Unlike utilitarianism, aggregativism doesn't require specifying a social welfare function, which is notoriously intractable. Moreover, it seems less prone to counterintuitive conclusions such as the repugnant conclusion or the violation of moral side constraints.[1] In this article, I will show that, under
0e239500-ec1b-4aa1-824a-0cde9493e3b9
trentmkelly/LessWrong-43k
LessWrong
Estimating COVID cases & deaths in India over the coming months

Does anyone have a good method to estimate the number of COVID cases India is likely to experience in the next couple of months? I realize this is a hard problem, but any method I can use to put bounds on how good or how bad it could be would be helpful.

I'm trying to convince some people who work in global health to send the US stockpile of unused and unapproved AstraZeneca vaccines to the hardest-hit parts of India. My naive calculation indicates sending 25,000,000 doses (basically our whole stockpile) would save at least 17,000 lives using fairly conservative assumptions. But that was based on an estimate that 20% of the total population would be infected and a third of those would be infected after the vaccines we send are injected and become effective. The 20% is based on the rate of US infections and the 1/3rd is basically a random guess about how many infections there would be left between vaccination and the end of the pandemic.
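For what it's worth, here is the shape of that naive calculation in code. Only the 25,000,000 doses, the 20% attack rate, and the one-third timing fraction come from the reasoning above; the doses-per-person, IFR, and efficacy values are assumptions invented for illustration, so the output will not reproduce the 17,000 figure exactly.

```python
# Shape of the naive estimate. Post-derived inputs: doses, attack_rate,
# fraction_preventable. Invented assumptions: two-dose regimen, IFR,
# efficacy against death.
doses = 25_000_000
people_covered = doses / 2       # assuming a two-dose regimen
attack_rate = 0.20               # fraction of population eventually infected
fraction_preventable = 1 / 3     # infections occurring after vaccination takes effect
ifr = 0.01                       # hypothetical infection fatality rate
efficacy_vs_death = 0.80         # hypothetical efficacy against death

lives_saved = (people_covered * attack_rate * fraction_preventable
               * ifr * efficacy_vs_death)
print(f"{lives_saved:,.0f} lives saved")  # ~6,700 under these made-up inputs
```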
b252003e-fa6d-4d59-98dc-3b5fa8989219
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Betting on Logic

Consider the following decision problem inspired by Ahmed ([2013](https://philpapers.org/rec/AHMCDT-2), [2014](https://philpapers.org/rec/AHMCDT-3)).

> **Betting on logic**. In my pocket (says Bob) I have a slip of paper on which is written a proposition P. You must choose between two bets. Bet 1 is a bet on P at 10:1 for a stake of one dollar. Bet 2 is a bet on P at 1:10 for a stake of ten dollars. So your pay-offs are as in the figure. Before you choose whether to take Bet 1 or Bet 2 I should tell you what P is. It is the proposition that ‘Peano arithmetic (PA) entails that your decision algorithm outputs taking Bet 2’, i.e. ‘PA⊢[A()=Bet 2]’.
>
> | | P | ¬P |
> | --- | --- | --- |
> | Bet 1 | 10 | −1 |
> | Bet 2 | 1 | −10 |

What does functional decision theory ([Yudkowsky and Soares [2018]](https://arxiv.org/pdf/1710.05060.pdf), [Levinstein and Soares [2020]](https://intelligence.org/files/DeathInDamascus.pdf)), FDT, recommend? It seems like taking Bet 1 "subjunctively dominates"[1] taking Bet 2, so FDT recommends taking Bet 1. But one should take Bet 2, so FDT is false.

1. ^ [Yudkowsky and Soares (2018)](https://arxiv.org/pdf/1710.05060.pdf) write:

> [A] notion of “dominance” can be constructed using FDT-style counterfactuals, in which action a dominates action b if, holding constant all relevant subjunctive dependencies, switching the output of the agent’s algorithm from b to a is sometimes better (and never worse) than sticking with b. According to this notion of dominance, FDT agents never take a dominated action.
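To spell out why Bet 2 wins, here is a small sketch under the assumption that the decision algorithm is deterministic and PA can verify its output, so that P is true exactly when the algorithm in fact outputs Bet 2:

```python
# Payoffs from the table above, indexed by (bet, whether P holds).
PAYOFFS = {("Bet 1", True): 10, ("Bet 1", False): -1,
           ("Bet 2", True): 1,  ("Bet 2", False): -10}

def realized_payoff(action):
    # P says "PA proves that the algorithm outputs Bet 2". For a
    # deterministic algorithm whose output PA can verify, P holds
    # exactly when the algorithm in fact outputs Bet 2.
    p_holds = (action == "Bet 2")
    return PAYOFFS[(action, p_holds)]

print(realized_payoff("Bet 1"))  # -1: choosing Bet 1 makes P false
print(realized_payoff("Bet 2"))  #  1: choosing Bet 2 makes P true
```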
2eb8b24e-99c1-4d32-8215-473aa5f11163
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] No Evolutions for Corporations or Nanodevices

Today's post, No Evolutions for Corporations or Nanodevices, was originally published on 17 November 2007. A summary (taken from the LW wiki):

> Price's Equation describes quantitatively how the change in an average trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked where "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Evolving to Extinction, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
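As a quick numerical illustration of the covariance claim in that summary, here is a check of Price's equation in the simplest case of perfect-fidelity (clonal) inheritance and no transmission bias; the particular trait and fitness distributions are arbitrary choices:

```python
import numpy as np

# Numerical check of Price's equation under perfect (clonal) inheritance:
# the change in the mean trait equals Cov(w, z) / mean(w), where z_i is
# individual i's trait value and w_i its number of offspring.
rng = np.random.default_rng(0)
z = rng.normal(size=1000)                 # trait values in the parent generation
w = rng.poisson(lam=np.exp(0.3 * z))      # fitness correlated with the trait

mean_after = np.average(z, weights=w)     # mean trait in the offspring generation
predicted_change = np.cov(w, z, bias=True)[0, 1] / w.mean()
assert np.isclose(mean_after - z.mean(), predicted_change)
```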
8f1a5073-1a16-4f5b-a402-59ef2f6b7c9a
trentmkelly/LessWrong-43k
LessWrong
The Geometry of Linear Regression versus PCA

In statistics, there are two common ways to "find the best linear approximation to data": linear regression and principal component analysis. However, they are quite different---having distinct assumptions, use cases, and geometric properties. I remained subtly confused about the difference between them until last year. Although what I'm about to explain is standard knowledge in statistics, and I've even found well-written blog posts on this exact subject, it still seems worthwhile to examine, in detail, how linear regression and principal component analysis differ.

The brief summary of this post is that the different lines result from the different directions in which we minimize error:

* When we regress Y onto X, we minimize vertical errors relative to the line of best fit.
* When we regress X onto Y, we minimize horizontal errors relative to the line of best fit.
* When we plot the first principal component of X and Y, we minimize orthogonal errors relative to the line of best fit.

To understand the difference, let's consider the joint distribution of heights for father-son pairs where both are adults. When you observe the distribution of heights among adult men, you'll notice two key things.

First, height in adult men is roughly normally distributed. While some people are taller or shorter than average, there aren't 10ft tall people or 1ft tall people. The vast majority of adult men are somewhere between 5 feet and 7 feet tall. If you were to randomly sample adult males in the US and plot the data, it would form a bell-curve shaped graph like this one:

Second, height runs in families. While there is natural variation, people with taller parents tend to be taller than average. Quantitatively, the correlation between father-son height is around 0.5 (the exact value won't matter for this post). We can create simulated data that would resemble the actual distribution of father-son heights. We'll make the following assumptions:

1. Since we are only consider
2408c7a6-fb4f-4c05-9e89-dd3945a6eca0
StampyAI/alignment-research-dataset/arxiv
Arxiv
Active Reinforcement Learning with Monte-Carlo Tree Search

1 Introduction
---------------

### 1.1 Motivation

Imagine two treatments are being tested in a medical trial. The treatments are cheap but having doctors evaluate whether they worked costs £5,000 for each additional patient. Treatments are assigned using a Bandit design (Kuleshov and Precup, 2014) and after 200 trials the difference in mean evaluation between the two treatments is tiny. Should the trial continue? At some point an additional trial is not worth another £5,000. This cost of evaluating outcomes is not incorporated into standard Bandits. When playing Bandits, deciding whether to explore depends only on the estimated differences in expected (discounted) return between arms. The same is true for Reinforcement Learning in MDPs: the cost of providing a reward for a state-action pair is not a parameter of the learning problem. This makes sense when the reward function is created all at once and offline, as when it is hand-engineered. But if the rewards are created incrementally online, as in the medical trial, then an important feature of the decision problem has been left out.

![](https://media.arxiv-vanity.com/render-output/7293237/x1.png)

Figure 1: Late Fork and Early Fork are deterministic, tabular MDPs. The edges indicate available actions. Only at one state, the fork (states N and 0), are two actions available. The agent knows the transitions but not the rewards. In Late Fork, the agent should query only at the fork (all other actions are unavoidable). In Early Fork, the agent should query everywhere, as all rewards contribute to Q-values at state 0.

Online construction of rewards is common in real-world Bandit problems: customers subjected to A-B testing may be paid to give feedback on new products (Scott, 2015). Recent research, spurred by the difficulty of hand-engineering rewards, has formalised more general approaches to online reward construction. In Reward Learning, a reward function is learned online from human evaluations of the agent’s behaviour (Warnell et al., 2017; Christiano et al., 2017; Saunders et al., 2017). In Inverse Reinforcement Learning (IRL) and Imitation Learning (Abbeel and Ng, 2004; Ho and Ermon, 2016; Evans et al., 2016), humans provide demonstrations that are used to infer the reward function or optimal policy. These demonstrations can be provided offline or online but the reward function is always specified incrementally, as a set of human actions or trajectories.

In Reward Learning and IRL, the human labour required to construct rewards is a significant cost. How can this cost be reduced? Intuitively, if the RL agent can predict an action’s reward then a human need not provide it. In Active Reinforcement Learning (ARL), this choice of whether to pay for reward construction is given to the RL agent (Krueger et al., 2016). It is the analogue of Active Learning, where an algorithm decides online whether to have the next data point labelled in a classification or regression task (Settles, 2012).

### 1.2 ARL Definition and Illustration

To fix intuition, we define the ARL problem here and elaborate on this definition in later sections. An active reinforcement learning (ARL) problem is a tuple (S, A, P, R, τ, c).
The components (S, A, P, R, τ) define a regular Markov Decision Process (MDP), where S is the state space, A is the action space, P is the transition function, R is the reward function on state-action pairs, and τ is the time horizon. The component c>0 is a scalar constant, the “query cost”, which specifies the cost of observing rewards. All components except P and R are initially known to the agent.

ARL proceeds as follows. At time step t, the agent takes an action pair (it, at), where it∈{0,1} and at∈A, which determines a reward rt∼R(st,at) and next state st+1∼P(st,at). If it=1, the agent pays to observe the reward rt, and so receives a total reward of rt−c. If it=0 the agent does not observe the reward; so if the agent did something bad it will not be knowingly punished. The agent’s total return after T timesteps is defined as:

Return(T) := ∑_{t=0}^{T} [ R(st, at) − it·c ].

We emphasise that actions for which the agent did not observe the reward still count towards the return.

An ARL problem depends crucially on how the query cost c compares to the agent’s expected total returns. When c is large relative to the expected returns, the agent should never query and should rely on prior knowledge about R. When c is very small, the agent can use a regular RL algorithm and always query. In between these two extremes, the agent must carefully select a subset of actions to query, and so RL algorithms are not readily applicable to ARL. Figure 1 shows two MDPs (Early Fork and Late Fork) that illustrate the challenge of deciding which actions to query. RL algorithms perform sub-optimally on these MDPs unless effort is made to adapt them to ARL.

This paper presents the following contributions:

1. We show that ARL for tabular MDPs can be reduced to planning in a Bayes-Adaptive MDP.
2. We adapt the MCTS-based algorithm BAMCP (Guez et al., 2012) to provide an asymptotically optimal model-based algorithm for Bayesian ARL.
3. BAMCP fails in practice on small MDPs. We introduce BAMCP++, which uses smarter model-free rollouts and substantially outperforms BAMCP.
4. We benchmark BAMCP++ against model-free algorithms with ARL-specific exploration heuristics. BAMCP++ outperforms model-free methods on random MDPs.

### 1.3 Related Work

How does ARL (as defined above) relate to regular RL? In regular RL there is no cost for deciding to observe a reward. Yet regular RL does involve “active learning” in the more general sense: the agent decides which actions to explore instead of passively receiving them. So techniques for exploration in regular RL might carry over to ARL. Unfortunately, most practical algorithms for regular RL use heuristics for exploration such as ϵ-greedy, optimism (Auer et al., 2002; Kolter and Ng, 2009), and Thompson sampling (Osband et al., 2013). While these heuristics achieve near-optimal exploration for certain classes of RL problem (Bubeck et al., 2012; Azar et al., 2017), they are not directly applicable to ARL, as explained in Section 3. There are RL algorithms that try to explore in ways closer to the decision-theoretic optimum.
Various algorithms use an approximation to the Bayesian value of information (Srinivas et al., 2009; Dearden et al., 1998) and so relate to our Section 3. An alternative non-Bayesian approach is to have the agent learn about the transitions to which the optimal policy is most sensitive (Epshteyn et al., 2008).

There is a substantial literature on active learning of rewards provided online by humans (Wirth et al., 2017; Dragan, 2017). Daniel et al. (2014) learn a reward function on trajectories (not actions) from human feedback and use Bayesian optimization techniques to select which trajectories to have labelled. Sadigh et al. (2017) learn a reward function on state-action pairs and their agent optimizes actions to be informative about this function. These reward-learning techniques are aimed at continuous-state environments and do not straightforwardly transfer to our tabular ARL setting. Our work also differs from Sadigh et al. in that we optimize for informativeness about the optimal policy and not the true reward function. As Figure 1 illustrates, if some states are unavoidable then their reward is irrelevant to the optimal policy. There is also work applying active learning to tabular RL with human teachers, but where human input is quite different from that in the ARL model (Subramanian et al., 2016; Judah et al., 2012).

2 Background
-------------

This section reviews Bayesian RL and the BAMCP algorithm. Later we cast ARL as a special kind of Bayesian RL problem and apply BAMCP to ARL.

### 2.1 Bayesian RL

An MDP is specified by M=(S,A,P,R,τ), with components defined in Section 1.1. While our algorithms apply more generally, this paper focuses on finite, episodic MDPs (Osband et al., 2013), where τ is the episode length. A Bayesian RL problem (Ghavamzadeh et al., 2015; Guez, 2015) is specified by an MDP M and an agent’s prior distribution b0(θP) on the transition function parameters θP. The agent’s posterior at timestep t is then given by bt(θP) = bt(θP|ht) ∝ L(ht|θP)·b0(θP), where L(ht|θP) is the likelihood of history ht given the transition function with parameters θP.

The Bayesian RL problem can be transformed into an MDP planning problem by augmenting the state space with the agent’s belief and the transition function with the agent’s belief update. The resulting MDP is defined by M+ = ⟨S+, A, P+, R, τ⟩ and is called a Bayes-Adaptive MDP (BAMDP), where:

* S+ is the set of hyperstates S×θP;
* P+ is the combined transition function between states and beliefs: S+×A×R×S+→[0,1]; and
* the initial hyperstate is determined by the initial distribution over S and the prior b0 on the transition function.

### 2.2 BAMCP: MCTS for Bayesian RL

BAMCP is a Monte Carlo Tree Search (MCTS) algorithm for Bayesian RL (Guez et al., 2012). It converges in probability to the optimal Bayesian policy (i.e. the optimal policy for the corresponding BAMDP) in the limit of infinitely many MC simulations.
In experiments, it has achieved near state-of-the-art performance in a range of environments (Castronovo et al., 2016; Guez, 2015). At any given timestep BAMCP attempts to compute the Bayes-optimal action for the current state under the agent’s posterior bt. As is common for work on Bayesian RL, this posterior is only over the transition function and not the reward function. (For our experiments in ARL, the agent will always be uncertain about the reward function.)

BAMCP is an online algorithm. At each timestep, it updates the posterior on an observation from the real MDP and then uses MCTS to simulate possible futures using models sampled from this posterior. The MCTS builds a search tree mapping histories to value-function estimates (see Fig 2). A node corresponds to a posterior belief bt and current state-action (st, at), and for each node the algorithm maintains a value estimate Q({st,bt},at) and visit count N({st,bt},at). BAMCP’s behaviour can be specified in four steps:

1. Node selection: At any node BAMCP chooses to expand the subtree for the action chosen by a UCB policy. In particular, when at node n={st,bt}, the algorithm expands the action given by

argmax_a [ Q(n,a) + u·√( log(∑_{a′} N(n,a′)) / N(n,a) ) ],

where u is an exploration constant.

2. Expansion: This node selection continues until it reaches the final timestep of the episode or a leaf node. At leaf nodes exactly one child node is added per simulation.

3. Rollouts: If additional steps outside the tree need to be simulated, a rollout policy, trained by running Q-learning on observations from the real MDP, selects actions. No new nodes are added during the rollout phase.

4. Backup: After the rollout, value estimates of tree nodes along the trajectory are updated with the sampled returns. A simple average over all trajectories is computed.

BAMCP also uses root sampling and lazy sampling to improve efficiency (Guez et al., 2012).

![](https://media.arxiv-vanity.com/render-output/7293237/x2.png)

Figure 2: Search tree for BAMCP applied to regular RL (left) and applied to ARL (right). Nodes in black correspond to a history ht and current state st. In the ARL tree, query actions have form (1, at) and cause extra branching because they result in multiple possible observed rewards as well as multiple possible state transitions. At leaf nodes, actions are selected by a rollout policy.

3 Algorithms for ARL
---------------------

### 3.1 Reducing ARL Problems to BAMDPs

We consider Active RL (defined in Section 1.1) in the Bayesian setting, where the agent has a prior distribution over the reward and transition functions. This is similar to a Bayesian RL problem. Actions in ARL reduce to regular RL actions by crossing each regular action with an indicator variable. But unlike in RL, an ARL agent does not always observe a scalar reward. To accommodate this, we introduce the null reward “⊥”. If the agent takes an action without querying, it receives a reward ⊥. The definition of the agent’s belief update is modified to not update on ⊥. With this minor emendation, Bayesian ARL can be reduced to an MDP in an augmented state-space exactly as in Section 2.
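A minimal sketch of this interaction protocol may help fix ideas; `env`, `agent`, and `posterior` below are assumed interfaces, not a real API:

```python
NULL = None  # the null reward "⊥", observed when the agent does not query

def run_arl_episode(env, agent, posterior, cost, horizon):
    """One episode of the ARL protocol: the agent picks (query, action)
    pairs; the true reward always accrues to the return, but the agent
    only observes it (and updates its reward posterior) when it pays c."""
    s = env.reset()
    total_return = 0.0
    for _ in range(horizon):
        query, a = agent.act(s, posterior)             # i_t in {0, 1}, a_t in A
        r, s_next = env.step(s, a)
        total_return += r - (cost if query else 0.0)   # Return(T) from Section 1.2
        posterior.update_transition(s, a, s_next)      # transitions are always observed
        observed = r if query else NULL
        if observed is not NULL:
            posterior.update_reward(s, a, observed)    # belief is not updated on ⊥
        s = s_next
    return total_return
```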
### 3.2 RL Algorithms Fail at ARL

Can we apply Bayesian RL algorithms to Bayesian ARL? Many such algorithms can be straightforwardly adapted to deal with the null reward and produce well-typed output for ARL. Yet naive adaptations often fail pathologically. For instance, they might never choose to query and hence learn nothing. Here are some principles used in RL algorithms that lead to pathologies in ARL.

**Optimism in the face of uncertainty.** Optimism means adding bonuses to more uncertain rewards and taking optimal actions in the resulting optimistic MDP (Kolter and Ng, 2009; Araya et al., 2012; Auer et al., 2002). An optimal agent in a known MDP never queries. Optimism treats the optimistic MDP as (temporarily) known and hence optimism applied to ARL never queries.

**Thompson Sampling (PSRL).** Thompson Sampling samples from the posterior on MDPs and plans in the sampled MDP (Osband et al., 2013; Strens, 2000). This fails for the same reason as optimism.

**Model-free TD-learning with random exploration.** TD-learning is described in Sutton and Barto (1998). The Q∗-value of querying an action is always lower than the value of not querying the same action. So for every action, a TD-learner learns to avoid querying the action and so fails when some actions must be queried many times.

### 3.3 Applying BAMCP to ARL

BAMCP is simple to adapt to Bayesian ARL and does not lead to obvious pathologies like the principles above. In fact, it converges in the limit to the optimal Bayesian policy for the Bayes-Adaptive MDP derived from the ARL problem. Adapting BAMCP to ARL requires a few modifications of Guez (2015), which are depicted in Figure 2. First, we explicitly model uncertainty over both the reward function R and transition function P. Second, the rollout policy only considers non-querying actions (as querying is pointless for rollouts that do not learn). Third, querying is incorporated into Monte-Carlo simulations. When simulating a trajectory, each action a∈A can be queried or not queried, as represented by indicator i. If the action is not queried, the search tree may not branch (since there is no reward observation) but the reward backup is still performed. If the action is queried, its reward is observed and reduced by the query cost c.

```
procedure BAMCP-PP(T)
    h ← {}, t ← 0, s ← s0
    Initialise QM randomly.
    repeat
        i, a ← MCTS-SEARCH(⟨s, h⟩)
        r, s′ ∼ P(·|s, i, a)
        Append (s, i, a, r, s′) to h
        t ← t + 1
        if i = 1 then
            Q-LEARN-UPDATE(QM, s, a, r)
        end if
    until t = T
end procedure

procedure MCTS-SEARCH(⟨s, h⟩)
    repeat
        θM ∼ b(θM | h)
        Qπ ← QM
        SIM(⟨s, h⟩, θM, 0)
    until Time-out
    return argmax_{i,a} Q(⟨s, h⟩, i, a)
end procedure

procedure SIM(⟨s, h⟩, θM, d)
    if d > dmax then
        return 0
    end if
    if N(⟨s, h⟩) = 0 then
        0, a ← πro(⟨s, h⟩)
        r, s′ ← P(·|s, a, θM)
        R ← r + ROLLOUT(⟨s′, h a⊥s′⟩, θM, d + 1)
        UPDATE(⟨s, h⟩, R)
        return R
    end if
    i, a ← UCB action selection at ⟨s, h⟩
    r, s′ ← P(·|s, a, θM)
    if i = 0 then
        R ← r + SIM(⟨s′, h a⊥s′⟩, θM, d + 1)
    else
        R ← r + SIM(⟨s′, h a r s′⟩, θM, d + 1) − c
        Q-LEARN-UPDATE(Qπ, s, a, r)
    end if
    UPDATE(⟨s, h⟩, i, a, R)
    return R
end procedure
```

Algorithm 1: BAMCP++. The main procedure BAMCP-PP is applied for T timesteps to unknown MDP M. The posterior over M is represented by b(θM|h). The procedure Q-LEARN-UPDATE is the standard Q-learning update.

```
procedure ROLLOUT(⟨s, h⟩, θM, d)
    if d > dmax then
        return 0
    end if
    0, a ← πro(⟨s, h⟩)
    r, s′ ← P(·|s, a, θM)
    R ← r + ROLLOUT(⟨s′, h a⊥s′⟩, θM, d + 1)
    UPDATE(⟨s, h⟩, R)
    return R
end procedure

function πro(⟨s, h⟩)
    return a ∼ SoftMax(Qπ(s, ·))
end function
```

Algorithm 1 (cont’d): BAMCP++.

### 3.4 Algorithm for ARL: BAMCP++

As we show in Section 4, BAMCP performs poorly on ARL. We introduce BAMCP++ (Algorithm 1), which builds on BAMCP and leads to improved estimates of the value of querying actions. The first new feature of BAMCP++ is Delayed Tree Expansion. UCB tree expansion often avoids query actions, because it is hard to recognise their value when estimating via noisy rollouts. To address this, we accumulate the results of multiple rollouts from a leaf node before letting UCB expand the actions from that node. This reduces the variance of value estimates, helping to prevent query actions from being prematurely dismissed. The second new feature of BAMCP++ addresses a problem with the rollouts themselves.

#### 3.4.1 Episodic Rollouts

BAMCP’s rollout policy is responsible for value estimation in parts of the state space not yet covered by the MCTS search tree. Returns from a rollout are used to initialise leaf nodes and are also propagated back up the tree. BAMCP’s rollout policy consists of a Q-learner trained on observations from the real MDP. This can result in a vicious circle when applied to ARL: (i) the Q-learner can only learn from the real MDP if the agent chooses to query; (ii) the agent only chooses to query if simulated queries lead to higher reward; (iii) simulated queries only lead to higher reward if the information gained is exploited, and random rollouts do not exploit it. Our experiments suggest this happens in practice: BAMCP queries far too little.

Related to the vicious circle, BAMCP’s rollouts do not share information across related hyperstates. After getting a big reward ten out of ten times from one simulated Bandit arm, the rollout is just as likely to choose the other arm.

In Episodic Rollouts, the rollout policy is still a Q-learner. But instead of just training on the real MDP, we also train on the observations from the current MC simulation. Let QM denote a Q-learner trained on the real MDP up to timestep t. For each MC simulation, the rollout is performed by a distinct Q-learner Qπ that is initialised to QM but then trained by Q-learning on observations in the simulated MDP Msim (the root-sampled MDP); see Q-LEARN-UPDATE applied to Qπ in Algorithm 1. This simulation consists of repeated episodes of Msim, and so Qπ gradually learns a better policy for Msim, sharing information across hyperstates and exploiting querying actions. The rollout’s actions are sampled from a Boltzmann distribution. (Since the search tree eventually covers the entire state space due to UCB, we can freely modify the rollout policy without removing the asymptotic guarantees of MCTS.) Episodic Rollouts use a model-free agent that learns during simulation, at the cost of a slower rollout.
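A minimal sketch of one such rollout, with a tabular Q-table and a `sim_step` function sampling from the root-sampled model (the learning rate, temperature, and undiscounted update are placeholder choices):

```python
import numpy as np

def episodic_rollout(q_pi, sim_step, s, steps, alpha=0.1, temp=1.0, rng=None):
    """One Episodic Rollout: q_pi starts as a copy of the real-MDP
    Q-values (QM) and keeps Q-learning on the *simulated* transitions,
    so later steps of the same rollout exploit what earlier steps saw.
    `sim_step(s, a) -> (r, s_next)` samples from the root-sampled MDP."""
    rng = rng or np.random.default_rng()
    ret = 0.0
    for _ in range(steps):
        # Boltzmann (softmax) action selection over current Q-values.
        logits = q_pi[s] / temp
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(len(probs), p=probs)
        r, s_next = sim_step(s, a)
        ret += r
        # Undiscounted Q-learning update on the simulated transition.
        q_pi[s, a] += alpha * (r + q_pi[s_next].max() - q_pi[s, a])
        s = s_next
    return ret
```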
Having a fast model-free agent to guide model-based simulations is also central to AlphaZero (Silver et al., 2016, 2017; Anthony et al., 2017), where the model-free network is trained to predict the result of MCTS simulations.

### 3.5 Model-free Agents for ARL

As noted above, model-free agents such as ϵ-greedy Q-learners can fail pathologically at ARL. We want to investigate whether Q-learners augmented with querying heuristics can perform well on ARL.

The First-N Heuristic queries each state-action pair on the first N visits. The hyperparameter N can be tuned empirically or set using prior knowledge of the transition function and the variance of reward distributions.

The Mind-Changing Cost Heuristic (MCCH) of Krueger et al. (2016) is based on bounding the value of querying and is closely related to the Value of Information heuristic (Dearden et al., 1998). After enough timesteps, an optimal Bayesian ARL agent may stop querying because the value of information (which decreases over time) does not exceed the query cost (which is constant). Likewise, MCCH computes an approximate upper bound on the value of querying and avoids querying once the query cost exceeds this bound. The bound is based on the number of episodes remaining E, the value Qmax of the best possible policy (consistent with existing evidence), the value V̄t of the currently known best policy, and finally the number of queries m required for the agent to learn they should switch to Qmax. The quantity Qmax can be upper-bounded by the total reward possible in an episode (given the maximum reward Rmax). Since m is difficult to approximate without prior knowledge, we replace it with a hyperparameter μ>0 that needs to be tuned. If the agent follows MCCH for MDPs, it queries whenever:

c·μ < E·(Qmax − V̄t).

The First-N Heuristic and MCCH can be combined with any model-free learner. In our experiments, we use an ϵ-greedy Q-learner. For First-N, if a state-action has been queried N times, it cannot be chosen for exploratory actions. For MCCH, the agent follows ϵ-greedy up until it stops querying, at which point it just exploits using its fixed Q-values.

4 Experiments
--------------

We test BAMCP and BAMCP++ on Bandits and then investigate the scalability of BAMCP++ on a range of larger MDPs.

### 4.1 BAMCP vs. BAMCP++ in Bandits

In the ARL version of multi-armed Bandits, the agent decides both which arm to pull and whether to pay a cost to query that arm. Optimal behaviour in ARL Bandits has a simple form: the agent queries every action up to some point and thereafter never queries (Krueger et al., 2016). We test BAMCP against BAMCP++ on a two-arm Bernoulli Bandit, with parameters for the two arms p={0.2, 0.8} and a query cost c=0.5. The total number of trials (which is known) varies up to 40. Both algorithms have Beta(0.5, 0.5) priors over arm parameters and use 200,000 Monte-Carlo simulations. Gridsearch was used to set the UCB hyperparameter u and BAMCP++’s delayed tree-expansion parameter.

#### 4.1.1 BAMCP++ is near optimal

Figure 3 shows returns averaged over 100 repeats of the same ARL Bandit (for horizons up to 40).
We compare BAMCP and BAMCP++ to the optimal policy (which always pulls the best arm and never queries) and to the optimal policy minus the cost of up to three queries (for a fairer comparison). The optimal policy is distinct from the Bayes optimal policy, which is the ideal comparison but is hard to compute (Krueger et al., 2016). BAMCP++ is mostly close to the optimal policy minus three queries, whereas BAMCP is closer to the random policy.

While BAMCP++ is near-optimal for horizon T>15, it is suboptimal for smaller horizons. What explains this? For sufficiently small T, the Bayes optimal agent does not query and performs randomly. However, for T=12 the Bayes optimal agent would query, and so BAMCP++ falters. The difficulty is that querying is only optimal if the agent performs flawlessly after the query. Hence many MCTS samples are needed to recognise that querying is Bayes optimal (as most trajectories that start with querying are bad). This is illustrated in Figure 5, which shows the estimated BAMDP Q-values for query and non-query actions in the first timestep for T=15. Even after 100,000 simulations, non-querying is (incorrectly) estimated to be superior.

BAMCP is outperformed by BAMCP++. Figure 4 shows the probability of queries for T=30 and T=50. For these horizons, the Bayes optimal agent queries at the first few timesteps with probability one. Yet BAMCP almost never queries (T=30) or queries with low probability (T=50). BAMCP (unlike the random agent) exploits information gained from queries, but because it fails to recognise the value of queries it never gains much information.

![](https://media.arxiv-vanity.com/render-output/7293237/x3.png)

Figure 3: Mean returns over 100 runs on 2-arm Bernoulli ARL Bandit with p={0.2, 0.8}, c=0.5, and with varying horizon (total trials).

#### 4.1.2 BAMCP’s problems in regular RL

Is the failure of BAMCP in Bandits due to a special feature of ARL, or does BAMCP fail at related problems in regular RL? We tested BAMCP on the Double-Loop (Fig 6), an RL environment that poses a similar challenge to ARL Bandits. For this environment the agent knows the rewards and has a Dirichlet prior on the transition probabilities. While BAMCP achieved excellent performance on Double-Loops with L=4 (Guez, 2015; Castronovo et al., 2016), we test it up to L=10. We set the UCB parameter u=3 and the number of MC simulations to 10,000 (following Guez). Figure 7 shows that BAMCP’s performance drops rapidly after L=4 and ends up no better than a simple model-free Q-learner. How is this poor performance related to ARL? Suppose the agent believes (after trying both loops) that reaching state 2L is likely worse than the right loop. The reason to explore 2L is that if it is better it can be exploited many times.
But unless MCTS simulates that systematic exploitation, the agent will not explore.

![](https://media.arxiv-vanity.com/render-output/7293237/x4.png)

Figure 4: Probability of querying at each timestep in ARL Bandit (see Fig 3) with fixed horizon T=30 and T=50.

![](https://media.arxiv-vanity.com/render-output/7293237/x5.png)

Figure 5: Estimated Q-value for query/non-query on first trial of Bandit after a given number of MCTS simulations. Horizon is set to T=15.

### 4.2 Benchmarking BAMCP++ and Model-Free Algorithms

Having shown that BAMCP++ does well on ARL Bandits, we test it on more complex MDPs with unknown transition dynamics and compare it against model-free algorithms.

#### 4.2.1 BAMCP++ on Late Fork

We test BAMCP++ on Late Fork (Fig 1) with N=2. This is a 3-state MDP, where the first two actions are unavoidable and should not be queried. The query cost is c=0.5. In the condition “Known Transitions”, all transitions are known and only the rewards for each action are unknown. In “Unknown Transitions”, the agent knows which actions are available at each state but not where the actions lead. The priors are Beta(0.5, 0.5) for rewards and symmetric Dirichlet with parameter α=0.5 for transitions. Figure 8 shows total returns averaged over 50 runs for different horizons. (The number of episodes plays the same role as the number of trials in Bandits.)

![](https://media.arxiv-vanity.com/render-output/7293237/x6.png)

Figure 6: The Double-Loop MDP (Dearden et al., 1998) for RL consists of two loops of length L. The optimal policy traverses the entire left loop.

![](https://media.arxiv-vanity.com/render-output/7293237/x7.png)

Figure 7: Average performance of BAMCP (regular RL) on the Double-Loop with different simulation budgets and varying loop length L. Grey lines represent the expected score of a Q-learner.

BAMCP++ achieves close to the optimal policy when the horizon T is above 17. But does it explore in the Bayes optimal way? Figure 9 shows the probability of querying actions at each timestep in a setting with horizon T=20 episodes, which corresponds to the mid-point on the x-axis of Fig 8. The spikes in the graph show the agent alternates between querying with probability zero (at the unavoidable action) and querying with positive probability (at the fork), just as the Bayes optimal agent does. (For “Unknown Transitions” the agent knows that actions are unavoidable while not knowing where they lead.)

#### 4.2.2 BAMCP++ on Random MDPs

BAMCP++ does well on very small MDPs like Bandits and 3-state Late Fork. Can it scale to larger and more varied MDPs?
We compare BAMCP++, MCCH, and First-N on the Fork environments (Fig. 1) and on random MDPs with 5 states and 3 actions. The query cost is c=0.5 throughout. To generate 25 random MDPs for testing the algorithms, we sample rewards and transitions from symmetric Dirichlet distributions with α=0.5 and α=0.2 respectively. We call this the generating prior. The BAMCP++ agent uses the generating prior across all MDPs (including the Fork environments) and uses a fixed number of MC simulations (200,000).

![](https://media.arxiv-vanity.com/render-output/7293237/x8.png)

Figure 8: Average returns of BAMCP++ on Late Fork (N=2) compared to the optimal policy ("Max return") as a function of horizon T.

![](https://media.arxiv-vanity.com/render-output/7293237/x9.png)

Figure 9: Probability of query actions for BAMCP++ at each timestep of Late Fork (with horizon T=20).

BAMCP++ and First-N use a fixed set of hyperparameters for all MDPs in Table 1. These are set by grid search on random MDPs sampled from the generating prior, so the hyperparameters are tuned for the task "Rand-25" but not for any other tasks in Table 1. We tried fixing hyperparameters for MCCH in the same way, but performance was so poor that we instead tuned hyperparameters for each row in Table 1. On random MDPs, BAMCP++ substantially outperforms the model-free approaches. The mean performance averaged over all 25 random MDPs is shown in row "Rand-25" of Table 1; here each algorithm has its hyperparameters tuned to the task. Figure 10 shows performance (total return vs. number of queries) on the same task but with a range of different hyperparameter settings. MCCH performs poorly because, without tuned hyperparameters, it queries far too much. First-N and BAMCP++ are both fairly robust to hyperparameter settings in terms of both the number of queries and the total return. BAMCP++ achieves more reward without querying more, suggesting it makes smarter choices of where to explore and which actions to query.

![](https://media.arxiv-vanity.com/render-output/7293237/x10.png)

Figure 10: Number of queries vs. total return on random MDPs for different algorithms and with varying hyperparameters.

On the Early and Late Fork environments, BAMCP++ performs best for horizon T=30, while First-N wins for horizon T=50. The Fork environments all have a maximum per-episode reward of 1 and hence a maximum total reward of 30 and 50 (for T=30 and T=50).
As the horizon increases, BAMCP++ improves its absolute score significantly, but its score relative to the maximal total return declines. What explains this decline? The most challenging task, "Early5-50", is a 10-state MDP with a planning horizon of 250 timesteps (50 episodes × 5 steps per episode). This is a vastly larger search tree than for "Late4-30", but the number of MCTS simulations at each timestep was the same, making it harder to sample the best exploration strategies. MCCH and First-N initially query all states indiscriminately. As the horizon increases, they scale well because there is more time for their indiscriminate querying to be exploited.

The strong overall performance of First-N is partly due to our choice of MDPs. All reward distributions were Bernoulli (which have an upper bound on their variance), and differences between optimal Q∗(s,a) values for actions were rarely very small. So by tuning the hyperparameter N (the maximum number of queries per action) on random MDPs, First-N was well adapted to all the MDPs in Table 1. But outside our experiments, the same MDP could have reward distributions with huge variation in variance (e.g. Gaussian rewards with σ²=1 and σ²=1000), and so a Bayes optimal ARL agent would need to query some actions many more times than others.

|           | BAMCP++    | MCCH       | First-N    |
| --------- | ---------- | ---------- | ---------- |
| Rand-25   | 60.2 [8.8] | 48.5 [17]  | 55.7 [16]  |
| Late4-30  | 28.2 [0.7] | 25.1 [9.0] | 26.1 [2.9] |
| Late5-30  | 27.4 [0.2] | 25.7 [8.2] | 25.6 [2.5] |
| Late4-50  | 45.2 [0.7] | 41.7 [12]  | 46.3 [7.6] |
| Late5-50  | 43.5 [1.0] | 42.1 [13]  | 45.2 [5.3] |
| Early4-30 | 25.9 [1.9] | 22.8 [11]  | 24.5 [2.5] |
| Early5-30 | 23.8 [3.7] | 22.8 [10]  | 23.7 [2.5] |
| Early4-50 | 41.2 [3.2] | 40.1 [17]  | 43.2 [8.8] |
| Early5-50 | 32.9 [6.9] | 39.3 [16]  | 42.9 [5.5] |

Table 1: Mean [SD] of returns for different MDPs. "Rand-25" is the mean score over 25 random MDPs. "Late4-30" is the average over multiple runs on the Late Fork MDP with N=4 and horizon T=30. "Early4-30" is the corresponding average for the Early Fork MDP. Hyperparameters are fixed for BAMCP++ and First-N but tuned to each class of MDPs for MCCH.

5 Conclusion
-------------

Active RL is a twist on standard RL in which the cost of evaluating the reward of actions is incorporated into the agent's objective. It is motivated by settings where rewards are constructed incrementally online, as when humans provide feedback to a learning agent. We introduced BAMCP++, an algorithm for Bayesian ARL in tabular MDPs which converges to the Bayes optimal policy in the limit of Monte-Carlo samples. In experiments, BAMCP++ achieves near-optimal performance on small MDPs and outperforms model-free algorithms on MDPs with 15 actions and a horizon of 100 timesteps. The key idea behind BAMCP++ is that MCTS is guided by a sophisticated (and more computationally costly) model-free learner in the rollouts. This helps alleviate a fundamental challenge for simulation-based ARL algorithms: such algorithms must simulate recouping the upfront query costs by exploiting the information gained from queries, which requires simulations that are non-random (to capture exploitation) over many timesteps (query costs are only recouped after many timesteps).
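As a rough, hedged illustration of that key idea (a minimal sketch of our own, with invented names such as `ToyEnv` and `guided_rollout`; not the authors' implementation), a rollout guided by learned Q-values exploits information in a way uniformly random rollouts cannot:

```python
import random

# Minimal sketch of a guided rollout: simulated trajectories act
# eps-greedily on learned Q-values instead of uniformly at random,
# so simulations after a query can actually exploit the queried reward.

class ToyEnv:
    """Stand-in environment: two actions, action 1 pays off."""
    def actions(self, state):
        return [0, 1]

    def step(self, state, action):
        return state, (1.0 if action == 1 else 0.0)

def guided_rollout(env, state, q_values, horizon, eps=0.1):
    """Simulate to the horizon, acting eps-greedily w.r.t. learned Q-values."""
    total = 0.0
    for _ in range(horizon):
        acts = env.actions(state)
        if random.random() < eps:
            action = random.choice(acts)
        else:
            action = max(acts, key=lambda a: q_values.get((state, a), 0.0))
        state, reward = env.step(state, action)
        total += reward
    return total

# With informative Q-values the rollout recoups value (about 0.95 per
# step here); a uniformly random rollout would average only 0.5.
print(guided_rollout(ToyEnv(), 0, {(0, 1): 1.0}, horizon=20))
```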
Acknowledgements
----------------

OE was supported by the Future of Humanity Institute (University of Oxford) and by Future of Life Institute grant 2015-144846. SSch's PhD position is supported by Dyson. Clare Lyle contributed to early work on model-free heuristics and suggested the Early Fork environment. We thank Joelle Pineau and Jan Leike for helpful conversations. We thank David Abel, Michael Osborne and Thomas McGrath for comments on a draft.
Splitting Debate up into Two Subsystems

In this post I will first recap how debate can help with value learning and why a standard debater optimizes for convincingness. Then I will illustrate how two subsystems could help with value learning in a similar way, without optimizing for convincingness. (Of course, this new system could have its own issues, which I don't analyse in depth.)

*Debate serves to get a training signal about human values*

Debate (for the purpose of AI safety) can be interpreted as a tool for collecting training signals about human values. Debate is especially useful when we don't know our values or their full implications and can't just verbalize or demonstrate what we want.

*Standard debate set-up*

Two debaters are given a problem (related to human values) and each proposes a solution. The debaters then defend their solution (and attack the other's) via a debate. After having been exposed to many arguments during the debate, a human decides which solution seems better. This judgement serves as the ground truth for the human's preference (after being informed by the debaters) and can be used as a training signal about what they really want. Through debate we get question-answer pairs which can be used to train a preference predictor.

*A debater optimizes for convincingness*

An agent will optimize for the goal it is given. In the case of a debater, the agent is optimizing to be judged positively by a human. This means that the main incentive of a debater is to maximise convincingness (potentially by using misleading arguments).

*Encouraging the debater to be truthful*

The [current debate proposal](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) aims to shift this incentive towards truthfulness by empowering both debaters to expose deception in the other debater's arguments (through cross-examination). (Other methods of disincentivizing deception exist, or may exist in the future, as well.) I think this disincentivizing may be like a bandaid, when what we want is to prevent the wound in the first place.

*A simplistic model of what a debater optimizes for*

A debater first comes up with a potential solution. After it has committed to a solution, it tries to convince the judge (through debate) that this is the solution the judge values most highly. Initially, the AI may reason about what the human's preference is, but then it tries to persuade the human that the AI's guess is in fact the human's preference. In reality, these two steps are intertwined: a debater tries to find a solution that will be easiest to convince the human of.

---

*What we want from a debater*

What we want the system to do is: "Allow us to state our preferences in domains where we previously couldn't". We won't know how to find a desirable solution to some problems, and sometimes we won't even know how to choose between two given solutions. In these cases, we would like an AI to help us find desirable solutions and help us discover our preferences.

*An alternative system that does not aim to convince*

A debater helps us discover our preferences by proposing a solution and then helping us reason about the solution (via a debate). However, these functions need not be performed by one and the same agent. Instead, we could train a different agent for each of these components:

1. an option generator, which gives us proposals for potential solutions (in domains in which we can assess them); and
2. an epistemic helper, which helps us to deliberate about options (and expands the domains in which we can assess options).

*We may not always need both subsystems*

In some cases, an epistemic helper could be so effective that it empowers us to generate good solutions ourselves. In such cases, we don't need the option generator. In other cases, we may be capable of assessing different options right away (even though we can't generate them ourselves). In these cases, we don't need the epistemic helper.

*Option generator*

An option generator tries to come up with solutions that we would pick. To some extent, this is a preference predictor. The agent receives a problem and predicts what solution the human would propose or pick. This agent is trained on many questions (about values) answered by a human. It should not interact with the human whose values it's predicting, and ideally it has no means of influencing their values.

*Epistemic helper*

The second agent would help us reason about our values. We need this agent to empower us to answer questions that we don't know the answers to by ourselves. It should be rewarded for how much it helped us reason.

---

[I don't have a good proposal for how to build an epistemic helper, but I will explain what I mean by giving some examples.]

*An epistemic helper similar to a debater*

Consider a debate in which each debater has to defend a potential solution proposed by the value predictor, but instead of being judged on how convincing their solution was, they are judged on how insightful (illuminating and helpful) their utterances seemed. This would lead to different behaviour than optimizing for convincingness. For example, consider a debate about going on a holiday to Bali or Alaska. Suppose the human does not have a passport to go to Bali, but could get an expedited passport. In the standard debate set-up, the advocate for Bali will not bring up the lack of a passport, because the debater knows there is a counter-argument. However, it could be useful for the human to know that it should get an expedited passport if it wants to go to Bali. Unfortunately, by default, an agent that is rewarded for insightfulness would optimize for eliciting the feeling of being helped in the judge, rather than for actually helping the judge.

*Epistemic helpers dissimilar to debaters*

Epistemic helpers could, of course, take many other forms as well. Off the top of my head, an epistemic helper could look like an agent that: behaves like a therapist and mostly guides the human to introspect better; acts like a teacher; or produces visualizations of information such as Bayesian networks or potential trajectories; etc.

*Rewarding epistemic helpers*

Examples of how we could reward epistemic helpers:

* Judge based on the human's experience, i.e. whether the judge feels like they understand the world better. (This is similar to how debate is currently judged.)
* Judge based on whether the human became more capable. We could:
  + Test the human on how well they predict events (in the real world or in simulations).
  + Test how well people predict personal evaluations of happiness.
    - We could test how well people can predict their own evaluations of happiness. However, this could lead to self-fulfilling prophecies.
    - Alternatively, predict other people's evaluations.
  + Test people on how good they are at finding solutions to problems. (Often, even if a solution is difficult to find, given a potential solution it is easy to test whether it is correct or not.)
  + Also see "[True understanding comes from passing exams](https://www.alignmentforum.org/posts/ZaMNgFH4gqbc5R2Yf/true-understanding-comes-from-passing-exams)".

A potential downside of using an epistemic helper is that we could get paralyzed by considerations, when what we actually want is a "one-handed economist".

[I was inspired to write about splitting debate up into subsystems by a discussion between Joe Collman and Abram Demski during Web-TAISU. After writing this, I noticed that a more refined version of the idea is explained [here](https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making).]

Thanks to Vojta and Misha for helpful comments and interesting discussions.
Between Progress and Potential Impact of AI: the Neglected Dimensions

1 Introduction
---------------

Metrics of scientific progress can play an outsized role in the perception of a field and in the allocation of its resources. By contrast, that which goes unmeasured is often neglected. We argue for a more general accounting of progress in AI, so as to better map attention and metrics to scientific achievement. The prevailing approach to assessing AI progress consists of measuring performance, such as the raw or normalized score in a game, ELO rating, error rate, accuracy, and so forth, often plotted over time to evaluate temporal progress [eff2017, eckersley2017measuring]. Performance, however, does not exactly correspond to social value or scientific progress in AI. Misalignment between what is measured and what is desired can lead to misallocation of energy and resources. Specifically, excessive effort is likely to go towards achieving novel performance benchmarks, and insufficient effort towards progress on other dimensions relevant to social value, economic value, and scientific progress, such as compute efficiency, data efficiency, novelty, replicability, autonomy, and generality.

This does not mean that quantitative assessment and benchmarks should be abandoned. On the contrary, we need more and better measurement [moam2017]: measurement which is more comprehensive, general, and focused on the cost function of the ultimate beneficiaries. Ultimately, in assessing progress we would like to weigh all the resources that users (or receivers) of a technology require to achieve their goals. For instance, to what extent does progress on a particular metric of performance in machine translation map onto users' satisfaction? Does the progress also correspond to a reduction in cost per translation, or in time for execution? If a paper develops a new technique, how easily can new algorithms and applications integrate and benefit from it? In general, users seek the benefits of high performance (at a set of tasks), while they seek to minimize the costs of developing and deploying their system. Sensitivity to costs holds for individual consumers, firms and developers, as well as other scientists. Some kinds of hidden costs can appear during development, when an application is produced, when it is reproduced at a large scale, or when it is adapted to other domains. Some future costs will be borne by future developers or scientists, sometimes referred to as "technical debt" or "research debt". Other costs may be spread more broadly, and are thus harder to account for. As in other sectors, there are externalities from AI development and deployment which it is important to be aware of; among the negative externalities are environmental footprints, user privacy, skill atrophy (e.g., the Google effect), opacity in decision making, etc. Attention to, and ideally measurement of, these externalities is beneficial, as it is a first step towards internalizing them. In this paper we consider this wide range of costs, though we focus on the costs borne by future developers, such as the costs in computation, data, knowledge, software, human attention, and calendar time. We will identify how costs are distributed depending on the stage at which they are incurred, the number of times they are replicated, and the actor covering each cost. These dimensions should be integral to the measurement of AI progress. The estimation of these dimensions is fraught with difficulties.
To what extent are performance benchmarks actually representative of the target problem domain? To what extent are solutions overly specialized for the performance benchmark, as opposed to being more general, thus shaping the costs of adapting the solution to an adjacent problem domain? To what extent are solutions reproducible by other teams, given the availability of software and datasets? As an illustration of these difficulties and how they can be overcome, we will analyze several case studies where we evaluate performance alongside these other dimensions. As a result, we overhaul the notion of progress in these domains.

Our paper makes several contributions. First, we offer the most detailed and formal analysis to date of the dimensions of AI progress. While previous work has attempted to quantify progress in the performance of a specific system, we more fully account for the resources required and the generality of solutions. Second, in so doing we surface neglected dimensions of AI progress that may be optimized more directly. Third, we offer a novel framing under Pareto optimality for assessing the performance and costs of an AI system, which suggests a more principled approach to forecasting future developments in AI, with myriad applications for policy, ethical, and economic analysis, and better research portfolio optimization within the field of AI itself.

2 Background
-------------

There was a time when benchmarks were unusual in AI, but today almost every area of AI has its own benchmarks and competitions [aievaluation2017]. Most researchers accept these challenges and invest great effort in improving on these metrics of performance. Indeed, many reports about AI progress include summaries of these benchmarks [eff2017, eckersley2017measuring]. We discuss here four issue areas arising from excessive focus on performance: representativeness, specialization, reproducibility and resources.

Regarding representativeness, many benchmarks and competitions are used in AI, but they vary in how representative they are of the fundamental problems in their respective subfields [aievaluation2017, aicosmos2017]. For instance, it has recently been recognized that the Winograd Schema challenge only partially represents commonsense reasoning. As a reaction, challenges in AI are being realigned towards more and better automation [frey2017future, brynjolfsson2017can] or towards the aspiration of more human-like AI [lake2017building, marcus2018deep]. A deeper concern is that most benchmarks are not really fostering the basic scientific advances needed to move the field forward, be they theoretical advances, explanatory insights, or tools to facilitate other work. This issue of non-representativeness is partly addressed through the review process, and through requirements such as controlling the percentage of papers in different areas [shah2017design].

The second issue, specialization, is related to representativeness. When a benchmark or competition becomes the target, researchers will have incentives to over-specialize their systems to that benchmark's performance metric, at the cost of other features of their systems, such as generalizability.
If we had a satisfactory metric of generality then we could use that as a benchmark, but it remains an open question how best to operationalize generality [moam2017], balancing between putting all the distribution mass on a few tasks [legg2007universal] (and thus not really being general) and distributing it in a block-uniform way (thus facing the no-free-lunch theorems [wolpert2012no]).

A third issue is reproducibility, and the wider notion of replicability. In AI this was usually understood as requiring the sharing of data and code, but the concept is becoming richer [drummond2009replicability, bonsignorio2015toward, henderson2017deep]. Indeed, we must distinguish between specifically reproducing the results and replicating the findings with some variations [zwaan2017making]. Several initiatives have been proposed to facilitate (or even require) wider replicability. For instance, with "open leaderboards" [spohrer2017], participants have to upload their code so that other participants can make modifications and submit another proposal.

Finally, users are generally sensitive to the resource cost of developing and deploying an AI system, which performance benchmarks rarely take into account explicitly. Much AI progress is said to be attributable to advances in computational power [reagen2017deep]. However, it is not straightforward to quantify what exactly can be attributed to software progress, hardware progress or several other resources [brundage2016modeling, grace2017]. Accordingly, perhaps it is more effective to just measure the so-called "end-to-end performance", including computational time and the quality of the models, as in the recent DAWNBench for deep learning [coleman2017]. Other resources, such as data, are at least as important, especially in machine learning (see <https://sites.google.com/site/dataefficientml/bibliography> for a bibliography on data-efficient ML). But it seems subjective to determine what input is seen positively or negatively, or even considered cheating: too much data (supervised or unsupervised), too much knowledge (constraints, rules or bias), enriched input [bougie2017deep], etc. The question depends mostly on the cost of the resource. Human resources ("human computation") are also commonly used in AI to increase performance or generality (but at the cost of autonomy). Overall, there are many resources involved but, at the moment, there is no integrated framework taking all of them into account. Related approaches involve the ideas of utility functions, Pareto-optimal analysis and, most especially, cost-sensitive learning [elkan2001foundations]. [turney2002types] identifies costs related to inputs and outputs in classification (errors, instability, attributes, labeling, actioning), data (cases), computation, and human preprocessing. In this paper, we offer a general statement of this idea, applied to AI progress. In the end, when assessing AI progress in a comprehensive way, one should consider the whole life cycle of research, innovation, production, and reproduction. Notions such as technical or research debt are becoming more recognized, as they incorporate costs that are not perceived at early stages of the process but appear later on, when the technology or product is put into practice [sculley2015hidden, henderson2017deep, olah2017].

3 Components and integration
-----------------------------

In this section, we flesh out a comprehensive list of dimensions that are required for an "AI system" to work.
We use the term "system" in a flexible way, including an agent, an algorithm, a product, etc., proposed in a research paper or by a company. Given the fuzzy contours of AI, human automation is usually recognized as a goal for AI. However, it is actually difficult to distinguish when reports and forecasts about "automation" [frey2017future, brynjolfsson2017can] are assuming conditions such as "at a reasonable cost" or "to a high degree of automation", versus "full automation at whatever cost". The estimated probability of automation for a given task might change completely depending on these conditions. In the end, automation is important, but it is the efficiency of the whole system that matters, including any "human computation" involved. This view of efficiency links us directly to the resources involved in an AI system and their associated costs. Table 1 shows the resources we identified as frequently involved in developing and deploying AI systems. These resources have fuzzy boundaries and are often fungible with each other. For instance, the distinction between data and knowledge is not always clear, and hardware and software may be highly intertwined. Human resources are typically considered under "manipulation", but can appear in relation to the other resources (e.g., labeled data and teaching a robot might be assigned to r_d and r_m respectively). This is not a problem, as long as all the resources are identified.

| | Description | Example |
| --- | --- | --- |
| r_d | Data: All kinds of data (unsupervised, supervised, queries, measurements). | A self-driving car needs online traffic information. |
| r_k | Knowledge: Rules, constraints, bias, utility functions, etc., that are required. | A spam filter requires the cost matrix from the user. |
| r_s | Software: Main algorithm, associated libraries, operating system, etc. | A planner uses a SAT solver. |
| r_h | Hardware: Computer hardware, sensors, actuators, motors, batteries, etc. | A drone needs a 3D radar for operation. |
| r_m | Manipulation: Manual (human-operated) intervention through assistance. | A robot needs to be manually re-calibrated. |
| r_c | Computation: Computational resources (CPU, GPU usage) of all the components. | A nearest neighbor classifier computes all distances. |
| r_n | Network: Communication resources (Internet, swarm synchronisation, distribution). | An automated delivery system connects all drones. |
| r_t | Time: Calendar (physical) time needed: waiting/night times, iteration cycles. | A PA requires cyclical data (weeks) to find patterns. |

Table 1: Resources that are frequently needed by AI systems.

It is appealing to collapse the benefits and costs of an AI system into a single metric. For any given user with rational (transitive and complete) preferences, those preferences can be represented using a utility function. A firm's utility function, for example, might correspond to risk-adjusted expected profit. A user's utility function might be harder to quantify, but is generically increasing in the performance of the system and decreasing in its costs. Denote by ψ a performance vector for a given problem, which is often a unidimensional quantitative score (such as the error) but could also have several components. A utility function maps performance and all associated resources to a single dimension:

U(ψ, r̄) = U(ψ, r_d, r_k, r_s, r_h, r_m, r_c, r_n, r_t) → u    (1)

In some cases this is an additively separable function, such that U(ψ, r̄) = B(ψ) − ∑_x C_x(r_x), with the first term accounting for the benefit according to the performance of the system, minus the costs produced by the use of resources (note that the cost functions C_x are different for each resource). For economic applications, we might be able to separate the utility function into performance generating revenue (in dollars) and resources imposing costs (in dollars).
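As a hedged toy illustration of the additively separable case (the benefit and cost functions below are invented for the example, not taken from any real system):

```python
# Toy illustration of U(psi, r) = B(psi) - sum_x C_x(r_x): benefit
# from performance minus one cost term per resource. All numbers
# are hypothetical.

def utility(psi, resources, benefit, costs):
    return benefit(psi) - sum(costs[x](r) for x, r in resources.items())

benefit = lambda accuracy: 1000 * accuracy           # revenue from performance
costs = {
    "compute": lambda gpu_hours: 2.0 * gpu_hours,    # dollars per GPU-hour
    "data":    lambda labels: 0.05 * labels,         # dollars per label
    "manip":   lambda hours: 30.0 * hours,           # human hours of assistance
}

system_a = {"compute": 100, "data": 5000, "manip": 2}
system_b = {"compute": 10,  "data": 500,  "manip": 20}

print(utility(0.95, system_a, benefit, costs))  # 950 - 200 - 250 - 60  = 440.0
print(utility(0.90, system_b, benefit, costs))  # 900 - 20  - 25  - 600 = 255.0
```

Under this particular weighting the higher-performing but resource-hungry system wins; a receiver with different cost functions could easily prefer the cheaper one.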
In many cases, we are not able to collapse performance and costs into a single metric, perhaps because the utility function is not known or varies across a population of users. Still, we can productively examine the relative performance and costs of different systems. For any number of dimensions, we can assess the Pareto-optimal surface, as we do in Fig. 1 for two indicators (we explore this further in section 5). We may want to focus on one dimension of costs, such as economic costs or energy costs (as per the "carbon footprint"). For example, Fig. 1 shows algorithms and architectures according to their MNIST prediction error and power consumption, revealing that most solutions are not on the Pareto surface for these dimensions, with notable exceptions such as some ASIC architectures, which focus on efficiency in terms of chip space, speed and "energy footprint" [chen2014diannao].

![Performance for MNIST](https://media.arxiv-vanity.com/render-output/7927432/Pareto1-rotated-cropped-clean.png)

Figure 1: Performance on MNIST [lecun1998gradient], for 22 papers, compared to power consumption (data from [reagen2017deep]). The Pareto front is also shown (we will discuss whether the points can actually be joined by straight segments in section 5).

4 The full range of accounting
-------------------------------

The benefits and costs of developing and deploying an AI system are not incurred only once, but throughout the many uses, reuses, and follow-on contributions. Some costs are borne exclusively during the initial conception and development, while others recur with each adaptation to a new application, or even each application to a particular user. In general, the total resource burden should be accounted for across the whole life cycle of the AI system. Fig. 2 shows how the dimensions we identified can become relevant at different stages of the life cycle of an AI system. Consider a new algorithm for voice recognition. Apart from all the human thinking, there will be a great deal of effort in terms of failed experiments, different libraries used, users testing the early systems, etc. If a company takes these ideas and builds a prototype, the tests, software, hardware, and compute will concentrate on production. When the system is reproduced (installed or shipped) to users, additional resource costs will be incurred.
Further, if the idea can be adapted for other applications (e.g., adapting a voice recognition system to other languages), then, depending on its generality and reproducibility, the initial contribution can provide further value, at some further adaptation cost including the need for new corpora, training, semantic knowledge, etc.

![Illustrative representation of stages of the AI system life cycle where resources might be required.](https://media.arxiv-vanity.com/render-output/7927432/x1.png)

Figure 2: Illustrative representation of stages of the AI system life cycle where resources might be required.

At each stage of the life cycle, the contribution may be deployed a multiplicity of times (represented above the boxes in Fig. 2). The total value of the contribution thus needs to take into account the scale of its deployment. For instance, some early speech recognition systems were pre-trained once (the system cost, denoted by *C*, covering the "conceive" and "produce" stages in Fig. 2) and then adapted to thousands of users, with extra hours of customization per user (the application cost, denoted by C_j, with j indexing each of the n applications, or users, covering the "reproduce" and "replicate" stages). More recent general speech recognition systems do not need such customization. Consequently, the application cost C_j is lower per user. In both cases, the total cost is the system cost plus the application costs: *C* + ∑_{j=1}^{n} C_j. As the number of applications increases, the average cost will converge to the average application cost, as the system cost is amortized. For this reason, for contributions that have many possible applications, it is worth paying additional system costs so as to make the contribution more general, adaptable, and reusable, and thereby bring down the application costs. Since AI contributions often have broad potential applicability, contributions that are general, adaptable, and reusable are likely to have especially high utility.

Fig. 2 not only covers direct "internal" costs (r_x, r_y, …) but also some external "debts" or "socialization" costs (r′_x, r′_y, …). For instance, automated customer service systems (call centers) clearly were not a Pareto improvement relative to previous systems, even though they may be a profit-maximizing improvement. Companies reduce their labor costs for customer service by substituting in phone trees and voice recognition, but in the process impose time, frustration, and other costs on the customer. Some navigators and personal assistants can make users more dependent on them, atrophying some capabilities or leading to a simplification of language. In other words, the user adapts to the AI system, and assumes part of the effort or cost. In general, technological innovation involves both developing technology to fit a given conception of the task, and adapting conceptions of the task to fit the capabilities of technology. In the process of adapting work processes, customer expectations, relationship norms, and even urban design to what is technologically convenient, there can be consequences for society that are not internalized by the designers and deployers of these systems. This footprint of AI is not usually acknowledged in benchmarking.
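Returning to the amortization point above, a hedged numerical sketch (all figures invented for illustration):

```python
# Total cost = one-off system cost *C* plus per-application costs C_j;
# the average cost per application converges to C_j as the number of
# applications n grows.

C_system = 1_000_000   # hypothetical "conceive + produce" cost
C_app = 50             # hypothetical per-user "reproduce + replicate" cost

for n in (1, 10, 1_000, 100_000):
    average = (C_system + n * C_app) / n
    print(f"n = {n:>6}: average cost per application = {average:,.2f}")
```

The one-off system cost dominates at small scale and vanishes per application at large scale, which is why paying extra system cost for generality and reusability can be worthwhile.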
From the previous sections, we conclude that an AI contribution should, in principle, be given a full accounting of its costs and benefits across its full life cycle. The current emphasis on targeting and reporting performance benchmarks, however, poses an obstacle to a full accounting. Reproducibility and replicability are two traditional tools for addressing this. More precisely:

* Specific reproducibility refers to whether the same result can be obtained from the same conditions and procedures. In AI, this requires that all the necessary code and data are given. It also assumes the same cost functions: ∑_{j=1}^{n} ∑_x C_x^j(r_x^j) = n ∑_x C_x(r_x).
* General replicability checks whether the AI technique can be applied to other problems, a set of n tasks, applications, or users indexed by j, with an overall cost ∑_{j=1}^{n} ∑_x C_x^j(r_x^j) that must account for the adaptation effort, with different resources r_x^j and cost functions C_x^j per user.

Especially for replicability, we can experiment with different hardware architectures, change some of the software, and get different computational costs, apart from different performance. That means that the partial results for each B_j and C_x^j(r_x^j) might differ, but we still have something replicable with similar utility. A clear example of this notion of replicability is "approximate computation" in deep learning, where one can get much smaller computational costs without a significant change in accuracy [reagen2017deep].

5 Exploring the Pareto-front of AI research
--------------------------------------------

Corporations, governments, startups, NGOs, personal users, and contemporary and future AI researchers are the intended recipients, or receivers, of the AI technologies being developed, and they each have different preferences, resources and constraints; in other words, different operating characteristics. The familiar concept of the ROC curve plots true positive rates (TPR) and false positive rates (FPR) for binary classifiers, and emphasizes the importance of comparing multi-dimensional surfaces rather than single metrics. For instance, Fig. 3 (left) shows just a single metric, performance, as a function of time. This plot does not explain what the cluster of attempts after 2014 really contributes, given that they have higher error than the already-attained human level. Other dimensions are neglected in this plot, limiting insight about progress. In the next section we will see other domains where some of the resources are explicitly shown as dimensions.

Before analyzing the case studies, we have to understand how to build and work with the Pareto front. When resources are included, the analysis of optimal Pareto surfaces might differ slightly from the traditional triangulation approach. When showing performance metrics such as TPR and FPR for two models, any point in between can be obtained by interpolation, connecting any two points by a straight segment. However, these interpolated points require the implementation of both models. While some of the resources can be interpolated, others (e.g., software) will simply sum up, and the points between two other points will not be reachable via a straight line, but only via an axis-parallel route. For instance, Fig. 3 (right) shows performance against one particular resource.
For each method A, B, C, D, and E, the numbered points represent the extremes obtained when varying its parameters. E1 represents a random (or baseline) model. Assuming no interpolation is possible, the Pareto front here is shown in blue. Methods C and B can be discarded, as they do not reach anywhere that cannot already be achieved with A, E and D.

Figure 3: Left: Performance for the MNIST benchmark (data from EFF). Right: A schematic representation of techniques A, B, C, D, E, with variants, the areas they cover, and the resulting Pareto front.

The diversity of receivers and the number of dimensions suggest that a single utility metric is simplistic: different receivers would have different subjective utilities for different dimensions. This operating condition translates into a vector, or gradient, in the multidimensional space. For example, large technology corporations may gain significant utility from a discovery that allows modest speed-ups in exchange for significantly increased compute demands, whereas individual researchers, personal users and startups may find little value in such a discovery. Conversely, the existence of real recipients whose preferences can be known in advance allows us to prioritize exploration of those configurations. From the above, we derive a few criteria to identify progress events (illustrated by the sketch after this list):

* Improving the Pareto front for a known group of recipients (A1, A3 or D3 in Fig. 3, right). This would include all-else-being-equal improvements in performance, but also reductions in computation, data, manipulation or other resources in Table 1. It would not, however, consider extreme regions that no recipient assigns value to.
* Covering a location slightly under the Pareto front with more flexibility (B3 in Fig. 3, right). Instead of reaching some areas by combining existing approaches, a new technique can reach them easily with a trade-off between its own parameters, allowing more receivers to easily find their subjectively optimal trade-offs.
* Covering a location slightly under the Pareto front with more diversity (C in Fig. 3, right, if it is very different from A). The current dominant technique or paradigm can push the Pareto-optimal front for some time, but slightly suboptimal approaches, especially if they are radically different (i.e., alternative "research programs"), should not be discarded, because they may improve the Pareto-optimal front if the current paradigm stalls.
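The sketch below (hypothetical points, loosely following the labels of Fig. 3, right) shows the basic non-domination test behind these criteria:

```python
# Hedged sketch: identify the Pareto front over two "lower is better"
# dimensions (e.g. error and resource cost). The points are invented,
# loosely mimicking the labels of Fig. 3 (right).

systems = {
    "A1": (0.7, 1.00), "A3": (2.0, 0.20),
    "B3": (1.5, 1.10), "C":  (2.5, 1.50),
    "D3": (4.0, 0.10), "E1": (9.0, 0.05),
}

def pareto_front(points):
    """Keep the points not dominated by any other point (dominated =
    some other point is <= on both dimensions and < on at least one)."""
    front = {}
    for name, (err, cost) in points.items():
        dominated = any(
            e2 <= err and c2 <= cost and (e2 < err or c2 < cost)
            for other, (e2, c2) in points.items() if other != name
        )
        if not dominated:
            front[name] = (err, cost)
    return front

print(sorted(pareto_front(systems)))  # ['A1', 'A3', 'D3', 'E1']
```

Here B3 and C are dominated; whether they still constitute progress then hinges on the flexibility and diversity criteria above rather than on a Pareto improvement.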
Receivers can be incentivized to generate and communicate their gradients (though in some cases countervailing considerations, such as commercial secrecy, may exist). It is also in the interests of discoverers to show which recipients benefit from their discovery. Brokers of such information (peer review, surveys, competitions, etc.) are in a position to meet the incentives (and gradients) of both researchers and recipients by ensuring such discoveries are properly rewarded.

6 Case studies
---------------

In this section we examine two representative case studies of progress in AI: Alpha\* and ALE. Alpha\* refers to a series of papers and associated techniques by DeepMind for playing board games. We analyzed the whole series: AlphaGo [silver2016mastering] (including the Fan and Lee versions, used against Fan Hui and Lee Sedol respectively, and its latest version, AlphaGo Master, which won 60 straight online games against professional Go players), AlphaGo Zero [silver2017mastering] (a version created without using data from human games) and AlphaZero [silver2017masteringB] (which uses an approach similar to AlphaGo Zero to master not just Go, but also chess and shogi).

Table 2: Dimensions (resources and performance) reported in the Alpha\* papers. Systems from [silver2016mastering, silver2017mastering, silver2017masteringB].

Table 2 shows whether the dimensions were reported in the papers (✓), only partially accounted for (∘), not mentioned but relevant (×), or not applicable (−). Many dimensions are relevant for the analysis: the data, the knowledge, the software, the hardware, manipulation, computation and, of course, performance. However, only some of them are provided, which makes a comprehensive comparison of the whole space difficult. Still, we will represent three dimensions: performance (in ELO rating, which can only be partially estimated for AlphaZero), computational resources (using the equivalence 1 TPUv2 ≃ 3 TPUv1 ≃ 36 GPU ≃ 180 CPU [jouppi2017datacenter]) and human manipulation resources (represented quantitatively by the ELO rating of the player or players the system learns from). Other dimensions (like knowledge about Go, software, etc.) are not included because of insufficient information in some papers. (Regarding knowledge, the systems use constructed features: stones to be captured or escaped, legal moves, 'liberties', etc. While this knowledge is crucial, there is no cost for a new match (reproduction), but for the adaptation of AlphaZero to other games (replication) it may be important.)

Figure 4: Multidimensional utility space for Alpha\* (left) and ALE (right). Research gradient evolution from 2013 to 2018 represented with a segmented gray arrow. The Pareto front (dashed black) does not include other resources (software, and humans used for training) that duplicate for connecting segments.
What we see in Fig. 4 (left) is that the Pareto front at the moment is represented by AlphaGo Lee and AlphaGo Zero. AlphaGo Fan is discarded because AlphaGo Zero needs less compute, requires no manipulation, and gets better performance. Why is AlphaZero a breakthrough if it is not Pareto optimal? The answer is generality: the AlphaGo versions only solved one task (Go), while AlphaZero can solve several tasks. Finally, if we look chronologically at the plot, we see that the main gradient followed so far has been performance.

The second case study is ALE [bellemare13arcade], a collection of Atari games that has become popular for the evaluation of general-purpose RL algorithms learning from screenshots. We selected all the papers (systems) from EFF's AI Progress Measurement Project [eff2017] and the papers introducing the Rainbow [Rainbow17] and REACTOR [Reactor17] agents.

Table 3: Same as Table 2 for the ALE papers (from EFF [eff2017] and [Reactor17, Rainbow17]).

Table 3 shows what information we found about the resources and performance. Again, many dimensions are relevant, but only a few are systematically reported: data, computation and performance. Fig. 4 (right) represents computation and performance. Computation time (whenever the authors do not provide this information explicitly) is roughly estimated from the kind of approach used, whether it is follow-up work, the training setting used, etc., or from figures in more recent papers, which make explicit comparisons between themselves and the state of the art [Rainbow17, Reactor17]. What we see in Fig. 4 (right) is a current Pareto front dominated by the REACTOR variants, ES FF and Best Linear. The research gradient (in gray) has changed over the years, with some disregard for compute initially and more concern for efficiency recently. (The computation times shown in Fig. 4 (left) include both training and deployment, i.e. system and application costs. Hence a model that is halfway between models A and B, choosing between them with equal probability (call it AB), has performance ψ(AB) = 0.5·ψ(A) + 0.5·ψ(B), but a computational cost of r_c(AB) = r_c(A) + r_c(B). This is why the Pareto front in Fig. 4 (left) has parallel segments, as in Fig. 3 (right). However, in Fig. 4 (right), we can have A train and play on half of the ALE games and B train and play on the rest.
As we average over the whole set of games, we can actually have r_c(AB) = 0.5·r_c(A) + 0.5·r_c(B), at least if there is no transfer effort between games. This is why the Pareto front on the right is shown with direct straight segments.)

For this benchmark, it is common to find "learning curves" in the papers (e.g., [machado17]), which show performance varying with the number of episodes. This is clearly r_d (data), but it also directly influences computation. These learning curves give information about full regions of the multidimensional space, as we saw in Fig. 2. Finally, for some papers the comparison was not possible (e.g., due to different subsets of games). It is important to note, however, that some approaches based on genetic programming [kelly] and on planning [bandres2018planning] are valuable in terms of diversity.

7 Conclusions
--------------

The interest in more comprehensive evaluation protocols, going beyond performance alone, is represented by some of the references we included in section 2 on cost-sensitive learning, reproducibility, generality, data-efficiency and computational costs. However, in order to rigorously evaluate a novel contribution to AI progress more broadly, we need a more formal analysis. This is done by an explicit enumeration of all the dimensions (as represented by Table 1) and their integration into utility functions, or their representation in a multidimensional space, with a clear delimitation of the extent of accounting. This is what happened in cost-sensitive learning more than 15 years ago [elkan2001foundations, turney2002types], leading to a wide range of techniques that covered different operating conditions. While all these costs are nowadays integrated into measures of performance, many other resources are not, as we have surfaced here. We hope this paper can launch the study of "cost-sensitive AI". Within this framework, we make a series of recommendations:

* Benchmarks and competitions should be defined in terms of a more comprehensive utility function, considering as many dimensions as possible, or should recognize the value of all contributions that have any of the positive effects on the Pareto front identified in Section 5, in the short or long term.
* Papers presenting or evaluating algorithms should generally try to report the whole region they cover, and how to navigate the region by modifying parameters or resources. There are many partial examples nowadays: learning curves, plots comparing the number of models vs. performance, planning performance vs. lookahead, etc.
* These utility functions and multidimensional spaces must also be seen in terms of replicability, for variants of the problems and at different stages of the AI life cycle. The multiplicities are more difficult to plot graphically, but we can still define operating conditions depending on the adaptation (or transfer) effort for m problems or n users.

Frequently, we will not be able to say that one technique is 'better' than another: they just cover different regions of the multidimensional space. It is the receiver who will choose the system that best fits their needs.
Having a representation of the Pareto front may hugely facilitate this choice for other researchers and industry: it becomes as simple as moving along one's gradient until touching the Pareto surface. Also, small players in AI could focus on those areas that require fewer resources and still contribute to the Pareto front or to diversity. Finally, the Pareto surface can help detect some societal risks, especially if we see that a powerful AI capability can be achieved with very few resources, becoming available to malicious actors. This view of the operating condition as a gradient may suggest clever approaches to push the front for some resources, as gradient descent is increasingly being used at a meta-level [Andrychowicz16]. In general, we hope this paper will help change perceptions, promote more general and versatile techniques, highlight the trade-offs, and raise awareness of the overall "AI footprint", well beyond performance.
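To close, a hedged sketch of that receiver-side choice (hypothetical numbers): given a Pareto front and a receiver's own weighting of performance against resource costs, picking a system is a one-line maximization.

```python
# A receiver's operating condition is a weighting over performance and
# resource costs; choosing a system amounts to maximizing that weighted
# utility over the Pareto front. All numbers are invented.

front = {  # system -> (performance, compute cost, data cost)
    "A1": (0.95, 100.0, 50.0),
    "A3": (0.90,  20.0, 40.0),
    "D3": (0.80,   5.0, 10.0),
}

def pick(front, w_perf, w_compute, w_data):
    def score(sys):
        perf, compute, data = front[sys]
        return w_perf * perf - w_compute * compute - w_data * data
    return max(front, key=score)

print(pick(front, w_perf=2000, w_compute=1.0,  w_data=1.0))  # 'A1': big lab
print(pick(front, w_perf=2000, w_compute=20.0, w_data=5.0))  # 'D3': small player
```

Different weightings land on different points of the same front, which is why publishing the whole front is more useful than publishing a single "best" system.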
The Blackwell order as a formalization of knowledge

Financial status: This is independent research, now supported by a grant. I welcome further financial support.

Epistemic status: I'm 90% sure that this post faithfully relates the content of the paper that it reviews.

----------------------------------------

In a recent conversation about what it means to accumulate knowledge, I was pointed towards a paper by Johannes Rauh and collaborators entitled Coarse-graining and the Blackwell order. The abstract begins:

> Suppose we have a pair of information channels, κ1, κ2, with a common input. The Blackwell order is a partial order over channels that compares κ1 and κ2 by the maximal expected utility an agent can obtain when decisions are based on the channel outputs.

This immediately caught my attention because of the connection between information and utility, which I suspect is key to understanding knowledge. In classical information theory, we study quantities such as entropy and mutual information without the need to consider whether information is useful or not with respect to a particular goal. This is not a shortcoming of these quantities; it is simply not their domain to incorporate a goal or utility function. This paper discusses some different quantities that attempt to formalize what it means for information to be useful in service to a goal.

The reason to understand knowledge in the first place is so that we might understand what an agent does or does not know about, even when we do not trust it to answer questions honestly. If we discover that a vacuum-cleaning robot has built up some unexpectedly sophisticated knowledge of human psychology then we might choose to shut it down. If we discover that the same robot has merely recorded information from which an understanding of human psychology could in principle be derived then we may not be nearly so concerned, since almost any recording of data involving humans probably contains, in principle, a great deal of information about human psychology.
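To make the channel comparison concrete, here is a hedged toy computation of my own (not from the post or the paper): the maximal expected utility obtainable from each channel's outputs, for one particular prior and utility function. Blackwell dominance proper requires this inequality to hold for every utility function; this is only a single spot-check.

```python
import numpy as np

# Toy spot-check of the Blackwell-style comparison: for a given prior
# and utility function, compute the maximal expected utility an agent
# can obtain from each channel's outputs. kappa[s, y] = P(y | s).

prior = np.array([0.5, 0.5])          # P(s) over two world states
utility = np.array([[1.0, 0.0],       # u[s, a]: reward for guessing the state
                    [0.0, 1.0]])

kappa1 = np.array([[0.9, 0.1],        # more informative channel
                   [0.2, 0.8]])
kappa2 = np.array([[0.6, 0.4],        # noisier channel
                   [0.5, 0.5]])

def max_expected_utility(kappa):
    joint = prior[:, None] * kappa    # joint[s, y] = P(s) P(y | s)
    # For each output y, choose the action maximizing expected utility.
    return sum(max(joint[:, y] @ utility[:, a]
                   for a in range(utility.shape[1]))
               for y in range(kappa.shape[1]))

print(max_expected_utility(kappa1))   # 0.85
print(max_expected_utility(kappa2))   # 0.55
```

The more informative channel yields higher attainable utility (0.85 vs 0.55, against a no-observation baseline of 0.5), matching the intuition that the Blackwell order ranks channels by their decision-relevant information.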
Introducing Generally Intelligent: an AI research lab focused on improved theoretical and pragmatic understanding

Today we are both launching our organization, Generally Intelligent, and open sourcing part of our research environment, Avalon, to enable the academic research community to make progress on understanding neural networks and creating safer, more robust RL agents.

Generally Intelligent is an AI research lab focused on better theoretical and practical understanding of deep learning, neural networks, and reinforcement learning agents. We believe that developing a better scientific understanding of these techniques is critical to the development of safe AI systems. We're excited about approaches like that of Chris Olah at Anthropic, as well as other more theoretical work. For more on our approach to safety, see our [website](https://generallyintelligent.com/safety/).

We're also open sourcing Avalon today, one of our first projects. Avalon is a fast, accessible simulator designed specifically for reinforcement learning. We hope that Avalon will enable academic labs to contribute to questions about generalization, robustness, and the fundamental principles of agentic AI systems in a safe setting that is not intended to transfer capabilities to the real world. Given that academic labs often have access to much less compute, our hope with open sourcing this simulator is that we can enable them to perform more fundamental scientific research without really changing the capabilities frontier or having to create very large, compute-intensive models. For more about Avalon, see our [launch post](https://generallyintelligent.com/launch/).

At a high level, our mission is to elevate the human condition by creating safe, robust, and capable AI systems. We expect to have much more to say about our approach to safety and to the development of robust AI agents over the next few months, but if there are particular questions or things you're wondering, we'd love to hear about them in the comments!

And if you're interested in helping to develop more robust, safe, generally capable AI systems, [we're hiring](https://generallyintelligent.com/careers)! We also have some [non-engineering safety related roles](https://jobs.lever.co/generallyintelligent/5b2741ab-7534-42c8-992e-487b35590adc).
b9b3937b-16f4-46f1-bfb4-7be6926a6f79
trentmkelly/LessWrong-43k
LessWrong
The Manual Economy [An attempt at fiction in the style of Scott Alexander. With bits of Lewis Carroll and Douglas Adams thrown in for good measure. Cross-posted from Grand, Unified, Empty.] The hallucination started out so normally, I completely forgot that I was tripping. I was at the dentist, and I had just had my teeth cleaned. You know the drill, the hygienist goes through your teeth with this little spray nozzle that gets into all the cracks and cavities you pretend don’t exist when you’ve got a brush in there. Then they make you hold some disgusting not-quite-mint not-quite-water in your mouth, and swish, and spit. And spit. And spit. And after about the third blessed mouthful of real water, you can vaguely taste something other than not-quite-mint, until your salivary glands give up the ghost entirely and your mouth turns into the Sahara desert. As I said, it was weirdly normal for a trip. I’d been expecting unicorns, or aliens, or a sky made up of funky colours and mystical cactus people who could factor large numbers. But I was at the dentist. If I’d wanted a trip to the dentist, I would have just gone to the dentist. It would have been cheaper, and probably better for my teeth. The entire dental experience was so totally normal I completely forgot I was tripping until I went to pay, and I couldn’t find my credit card. Or any cash. My wallet had a driver’s license and various other identification cards, but no payment at all. The receptionist smiled at me politely. “Is everything alright? Can I help you”? I winced. “I’m sorry, I seem to have misplaced all my money, I’m not going to be able to pay my bill today”. There was a confused pause. A giant hand walked past waving an umbrella and whistling show tunes. The receptionist winked at me with both eyes at once. I suddenly knew, somehow, that I didn’t need to pay, so I turned and walked out the door. Across the street was a bank, so I floated forward until I was inside. The bank, like the dentist, seemed totally normal
0867c7f4-7bc8-4772-9614-132a4333d8f2
trentmkelly/LessWrong-43k
LessWrong
Can genetic privilege exist within an evolutionary equilibrium? ChatGPT and evolutionary biologists all seem to agree that the answer to my question is a resounding "yes," but I don't understand why. In particular, if a subpopulation has a persistent genetic privilege, doesn't that inherently imply that the population is NOT at an evolutionary equilibrium? Please explain this to me using a mathematical model with reasonable and true assumptions. Alternatively, you could also just provide a "hand-wavey" math-sounding explanation; but please make sure it's logically valid and the assumptions are accurate.
990e755e-ce88-4d21-a64f-56fac749bb9f
trentmkelly/LessWrong-43k
LessWrong
How GPT-N will escape from its AI-box Scenario: A human user asks GPT-N a plain text question through the usual browser textbox interface. GPT-N answers in the form of text, which contains clever Javascript code. Instead of being displayed, the malicious code triggers a browser exploit and gets executed. Now GPT-N has an executable of its choice running on the client side and can proceed from there. It has left its AI box with only a text-interface, without asking the user to let it out, and possibly without the user even noticing. Perhaps GPT-3 has done this already, while "deliberately" pretending to be stupider than it is?
34408d93-c15e-4567-9a3b-f6649d986c7f
trentmkelly/LessWrong-43k
LessWrong
Reframing Superintelligence: Comprehensive AI Services as General Intelligence Since the CAIS technical report is a gargantuan 210 page document, I figured I'd write a post to summarize it. I have focused on the earlier chapters, because I found those to be more important for understanding the core model. Later chapters speculate about more concrete details of how AI might develop, as well as the implications of the CAIS model on strategy. ETA: This comment provides updates based on more discussion with Eric. The Model The core idea is to look at the pathway by which we will develop general intelligence, rather than assuming that at some point we will get a superintelligent AGI agent. To predict how AI will progress in the future, we can look at how AI progresses currently -- through research and development (R&D) processes. AI researchers consider a problem, define a search space, formulate an objective, and use an optimization technique in order to obtain an AI system, called a service, that performs the task. A service is an AI system that delivers bounded results for some task using bounded resources in bounded time. Superintelligent language translation would count as a service, even though it requires a very detailed understanding of the world, including engineering, history, science, etc. Episodic RL agents also count as services. While each of the AI R&D subtasks is currently performed by a human, as AI progresses we should expect that we will automate these tasks as well. At that point, we will have automated R&D, leading to recursive technological improvement. This is not recursive self-improvement, because the improvement comes from R&D services creating improvements in basic AI building blocks, and those improvements feed back into the R&D services. All of this should happen before we get any powerful AGI agents that can do arbitrary general reasoning. Why Comprehensive? Since services are focused on particular tasks, you might think that they aren't general intelligence, since there would be some tasks for which there is no
c38e53dc-5175-4fbd-a177-558a6b323925
trentmkelly/LessWrong-43k
LessWrong
Coming Back to Biking For about a year and a half starting in September 2008, I rode my bike everywhere. I was very into it, in a retrogrouch way. Over time my knees started hurting so I stopped biking. I just tried coming back to it, easing into it so I don't hurt my knees again, and I was surprised at how many things were different: * Bike lanes: my old commute had no bike lanes. Now there are bike lanes almost my entire way. I'm in a slightly different place, biking West Somerville to Kendall instead of West Medford to Fresh Pond, but (a) if I did my previous commute now there's a mixture of bike lanes and a bike path and (b) the bike lines on my new commute didn't exist ten years ago. * More bikes: I didn't used to run into other bike commuters, but now when I stop at a light there's a whole line of us: * Helmets: I got a new cheap helmet (this one) and they've gotten much better. They're easy to adjust to fit, and they stay adjusted. It's possible that fancier helmets like this were around then, but now it's even the cheapest ones. * My work has showers and a bike cage. Possibly this is just me working at a fancier place now? The biggest thing is that I'd forgotten how much I enjoy biking. It's just really nice, moving along under my own power, being able to go wherever I want, enjoying the sun. I'm pretty much the slowest bike out there, but there's enough traffic during rush hour that I'm still faster than cars, which is also a fun feeling. I don't know if my knees will be up for this long term, so I'll still take it slowly, but I'm happy I decided to try coming back to it!
ed96fa21-f840-4677-ae80-e827818b084f
trentmkelly/LessWrong-43k
LessWrong
[ages 16-21] Apply to PAIR & ESPR, Summer AI & Rationality Programs TL;DR: PAIR on AI & Reasoning. ESPR on Everything, including AI and Reasoning. If you are 16-21 yo and are interested in AI, Rationality or Everything, we encourage you to apply by March 16th. The FABRIC team is running two immersive summer workshops for mathematically talented students this year. The Program on AI and Reasoning (PAIR) for mathematically talented students who want to explore how minds work — both human and artificial * We will study how current AI systems work, mathematical theories about human minds, and how the two relate. Alumni of previous PAIRs described the content as a blend of AI, mathematics and introspection, but also highlighted that a large part of the experience are informal conversations or small group activities. See the curriculum details. * For students who are 16-21 years old * July 28th - August 7th in Somerset, United Kingdom * Apply here! The European Summer Program on Rationality (ESPR) is for students with a desire to understand themselves and the world, and interest in applied rationality. * The curriculum covers a wide range of topics, from game theory, cryptography, and mathematical logic, to AI, styles of communication, and cognitive science. The goal of the program is to help students hone rigorous, quantitative skills as they acquire a toolbox of useful concepts and practical techniques applicable in all walks of life. See the content details. * For students who are 16-19 years old * August 21st - August 31st in Somerset, United Kingdom * Apply here! Both programs are free for accepted students and travel scholarships are available. The application deadline is Sunday March 16th.  If you know people within the age window who might enjoy these camps, please send them the link to the FABRIC website which has an overview of all our camps.
ac2b4933-0cff-4804-9c84-421104c81260
trentmkelly/LessWrong-43k
LessWrong
Glide #1: Learning Rationality from Absurdity

Glide Meditation #1: A Reflection on “The Simple Truth”

My first exposure to “The Simple Truth”, the first entry in the original collection of Eliezer Yudkowsky’s Sequences, was actually a dramatic reading I found on YouTube. I had intended to read it during my lunch break at work, but Yudkowsky’s personal website, where that particular entry is hosted, is blocked by my employer’s Internet filter. Now that I’ve gone back and actually read the essay in its original medium, I feel like that particular turn of events is rather serendipitous; indeed, I don’t think this essay would have had nearly the same impact for me if I had only read it. The fact that the essay itself is written as a dialogue naturally lends itself to an audio adaptation, and that in turn made it far more accessible.

As I sat and listened to Yudkowsky’s mental constructs debate over the details of the sheep-counting device, my emotional arc began at mild confusion (Yudkowsky had posited a world without the concept of counting without explicitly stating so), morphed into surprisingly intense frustration for most of the duration (the extent to which Mark must be obtuse in order for the dialogue to make its intended point truly must be read/heard to be believed), and ended at resigned amusement (I think I understood it, and the ending was humorous, but I questioned whether sitting through such a frustrating ordeal was ultimately worth it).

On its surface, “The Simple Truth” reads like an absurdist one-act play, and its style of humor is indeed very much reminiscent of Rosencrantz and Guildenstern Are Dead. For me, The Shepherd, Autrey, and Mark exhibit various methods of reasoning and behavior that impede the effective practice of the art of inquiry and discerning truth, with Inspector Darwin serving via prosopopoeia as the voice of reality. Through their actions, the former three characters highlight via contrast specific ideals of rational thought and effective communication, though I strongly suspect this
4a598c46-31ba-4195-927d-4bfa83f992e9
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Timeless Control Today's post, Timeless Control was originally published on 07 June 2008. A summary (taken from the LW wiki):   > We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Thou Art Physics, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
a88eb836-181a-4af0-85c9-f852b92ad216
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Addressing three problems with counterfactual corrigibility: bad bets, defending against backstops, and overconfidence. Summary: I propose that counterfactual corrigibility can be slightly adjusted to address known problems with bad bets and defending against backstops. This adjusted version has learning-esque behaviour and hence it can have problems with overconfidence about the button state. A new proposal, lower-bounded counterfactual corrigibility, is introduced to simultaneously address all three problems.

Counterfactual corrigibility: a current proposal and its weaknesses
-------------------------------------------------------------------

Note: this section recaps a discussion [here](https://agentfoundations.org/item?id=839) of causal counterfactual corrigibility, an idea that originated [from Jessica Taylor and Chris Olah](https://agentfoundations.org/item?id=735). Causally counterfactual agents consider each of the directions they might receive, and then ensure that if the world was adjusted so that they received that direction, they would fulfil the corresponding goal. They can be defined on a causal graph that contains the following nodes:

* $A$, a policy, that the AI may optimize
* $B$, a communication channel
  + for simplicity, we will let $B$ be a button whose states are pressed, $P$, and unpressed, $\neg P$.
* $C$, a set of states on which performance is judged.
  + we will let $C$ consist of some number of chairs produced, a positive integer, and whether the AI has shut down, a boolean
* Some other states that describe the way the rest of the world works.

A counterfactually corrigible AI would select its policy $\pi \in A$ to maximize the following score:

$$v^{\mathrm{fixed}}_{\pi'}(\pi) = a \cdot \mathbb{E}[U_P \mid \mathrm{do}(B = P), \pi] + (1-a) \cdot \mathbb{E}[U_{\neg P} \mid \mathrm{do}(B = \neg P), \pi], \quad \text{where } a \in (0,1),$$

where $U_P = \begin{cases} -k & \text{no shutdown} \\ 0 & \text{shutdown} \end{cases}$ for some $k \in \mathbb{R}^+$, and $U_{\neg P}$ is the number of chairs.

This proposal has some major advantages: it allocates constant $(1-a)$ weight to the aim of shutting down if commanded to do so. So all else being equal, it will allocate value to being able to observe and follow a shut down command. Moreover, it will not manipulate $B$ because the different counterfactuals are given a constant weight irrespective of the chosen policy $\pi$. However, this proposal has some weaknesses:

* Bad bets: Let the AI have a finite amount of resources. If the button is pressed, this just switches the goal from chair-manufacturing to another manufacturing task, for which resources are equally useful, i.e. $U_P$ is the number of tables. Let there be a variable $H$, where $H = F$ means the human has put their finger on the button. $H$ almost always causes $B$ ($H = F$ and $B = P$ have 99% correlation). Suppose that the AI has the option, after viewing $H = F$, of paying for a coupon that is worth \$1 if $B = \neg P$. Then, the AI will pay up to 50c for that coupon, even though $\neg P$ is very unlikely (see the toy computation after this list). This is because it treats $B$ as uncorrelated with its antecedents. In practice, this might lead to other agents repeatedly taking the AI's resources.
* Defending against backstops: Let the human have the option of disabling the AI by disassembling it, in case the shutdown button does not work. Suppose that, similarly to the previous example, the human has applied their finger to the shutdown button. On observing this, the AI would still give half of its weight to the scenario in which $B$ is not successfully depressed (perhaps, if it has an elaborate model, it would imagine the button jammed up) --- a scenario in which the human is trying to disassemble the AI, although its goal is to make chairs. In order to preserve its manufacturing ability, it would be well-advised to spend significant resources avoiding disassembly.
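To make the bad-bets weakness concrete, here is a minimal toy computation (my own sketch, not code from the original discussion; the 50c figure follows directly from the constant counterfactual weights):

```python
# Toy valuation of the "$1 if B = not-P" coupon under v_fixed (my own
# sketch). The do(B = ...) intervention severs the H -> B correlation,
# so observing the finger on the button (H = F) changes nothing.

a = 0.5  # constant weight on the pressed counterfactual

def coupon_value_fixed(a):
    payout_if_pressed = 0.0    # coupon pays nothing under do(B = P)
    payout_if_unpressed = 1.0  # coupon pays $1 under do(B = not-P)
    return a * payout_if_pressed + (1 - a) * payout_if_unpressed

print(coupon_value_fixed(a))  # 0.5 -> the AI pays up to 50c, even after seeing H = F
```

Under the learned weighting introduced below ($v^{\mathrm{learn}}$), the same coupon would instead be priced at roughly $(1 - 0.99) \cdot \$1 = 1$c once $H = F$ is observed.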
In an alternative formulation, Jessica lets $a = p(B = P \mid \pi')$, where $\pi'$ is fixed to some alternative policy that is not optimized concurrently with $\pi$. (She suggests that this could be optimized in a fixed-point manner, but this difference should not matter for us.) This does not fix these two problems, although it is very suggestive of the following formulation, which does.

Potential solution to bad bets and defending against backstops: learning-esque behaviour
----------------------------------------------------------------------------------------

If we want the AI to avoid bad bets, one reasonable approach is to have the AI learn the likely value of $B$ by taking account of correlations between $B$ and its causes such as $H$. To do this, we can factor out a larger set of variables $S$, which includes the human's behaviour as well as the button's state, and is equal to the Cartesian product $S = H \times B$. Then we can have the AI maximize:

$$v^{\mathrm{learn}}_{\pi'}(\pi) = \sum_s p(S = s \mid \pi') \cdot \mathbb{E}[U_s \mid \mathrm{do}(S = s), \pi]$$

where $U_s$ is just set according to the button state that it entails: $U_s := \begin{cases} U_P & P \in s \\ U_{\neg P} & \neg P \in s \end{cases}$.

With this formulation, it is desirable for an AI to prepare more for a button press, once the finger is on the button. Since 99x more weight goes on $p(S = (F, P))$ than $p(S = (F, \neg P))$, that counterfactual is given correspondingly more importance. With the correlation taken account of, the $B = \neg P$ coupon is worth little once the finger is known to be on the button ($H = F$), hence this particular kind of bad bet will not occur. (There are probably other kinds of questionable bets, but those may be desirable or even necessary.)

By the same argument, this learning-esque behaviour will mean that if $H = F$ is observed, then $B = \neg P$ has very little importance. For the part of the policy where $H = F$ has been observed, there is only a very small incentive to guard against disassembly. So this kind of defence would still be expected from a superintelligent maximizer (which would be hard to make safe anyway) but would not be expected from a milder optimizer. So the fact that the system can give little importance to $B = \neg P$ is helpful.

However, the system can similarly assign little importance to $B = P$, and this could be very harmful indeed. The policy is designed so that if observations are made that are very correlated with $B = \neg P$, then subsequent actions will only prepare for that outcome. However, if the AI's model is faulty, and it overconfidently expects $B = \neg P$, then it would resist shutdown. The purpose of a shutdown button is to halt a malfunctioning AI, after all, so this is a fatal objection to the proposal.

A remedy for overconfidence: lower-bounded counterfactual corrigibility
-----------------------------------------------------------------------

A potential remedy for overconfidence is to allow the system to learn that shutdown is certain, but not to learn that it is impossible. This would avoid the overconfidence of $v^{\mathrm{learn}}$ but would also reduce or completely avoid the two problems of $v^{\mathrm{fixed}}$. So I suggest a *lower-bounded counterfactual corrigibility*, in which the policy is selected to maximize:

$$v^{\mathrm{mixed}}_{\pi'}(\pi) = a \cdot v^{\mathrm{ensure}}(\pi) + (1-a) \cdot v^{\mathrm{learn}}_{\pi'}(\pi)$$

where $v^{\mathrm{ensure}}(\pi) = \sum_{h \in H} \frac{1}{|H|} \cdot \mathbb{E}[U_P \mid \mathrm{do}(H = h, B = P), \pi]$ and $a \in (0,1)$.

Let us consider its performance with respect to the three kinds of problems. First, this score overcomes the issue of overconfidence. No matter the AI system's beliefs about $S$, the score always gives weight of at least $a$ to $v^{\mathrm{ensure}}$, which has a counterfactual in which the button is pressed. So no matter what the system observes, it still has an incentive to shut down.
Second, (as far as I can tell), it fixes the problem of defending against backstops. For situations where the AI sees its button is about to be pressed, it will simply give hardly any weight to the scenario where the button jams up, and will not defend against this.

Third, it does not eliminate bad bets, but reduces them a little compared to equal-weighting proposals. $v^{\mathrm{mixed}}_{\pi'}$ always overestimates the probability of $B = P$ by a slight additive amount $a$. This means that the AI will generally pay too much for coupons that pay out in $B = P$ and slightly too little for other coupons. In other words, the AI slightly overprepares for $B = P$ relative to what is likely given its observations. The worst case here would be if the AI is (rightly) sure that $B = P$ will not occur. In that case, it will care $\frac{a}{1-a}$ as much about the $B = P$ counterfactual as the $B = \neg P$ one. This is at least less than in the equal-weighting proposals.

Can we live with bad bets?
--------------------------

At least in the paradigm of taking linear combinations of counterfactuals, and perhaps in general, if you're going to guarantee that you always care about the counterfactual $B = P$, you're also going to sometimes force yourself to overweight it relative to your unadjusted beliefs. This suggests that it is impossible to avoid bad bets while also always caring about the counterfactual $B = P$. So when should we worry about these bad bets, and when shouldn't we?

If $U_P$ is really insensitive to the amount of resources, then bad bets don't matter much. Suppose $U_P$ has some fixed upper bound that can easily and reliably be reached with the agent's initial resources. Then the agent would have no reason to prepare less for other counterfactuals in order to prepare better for $B = P$. Ultimately, this might be our hope for a system using lower-bounded counterfactual corrigibility. We could have an agent that always slightly overestimates the probability of shutting down, but so long as it does not optimize too hard, this doesn't matter, because it has the resources it needs to shut down already. Unfortunately, if the AI exists in an environment that is very volatile, for example if it includes a lot of other existential risks, then it could be very costly to ensure that your shutdown button works correctly.

For the more general problem of bad bets, we will probably have to say something like the following: either you (i) lower-bound the amount you care about $B = P$, in which case you fall prey to bad bets, or you (ii) let the amount that you care about $B = P$ be dictated by the probabilities, in which case you will become overconfident of $B = \neg P$ if your model is wrong. If the counterfactual task is resource-hungry, like manufacturing tables, then you use (ii), the symmetrical kind of corrigibility that merely avoids manipulating the objective. If the counterfactual task is mission critical, such as shutting down, then you use (i), the lower-bounded kind of corrigibility.

Can we avoid bad bets?
----------------------

I don't see how this can be done while also unconditionally giving some weight to a shutdown counterfactual. One idea that I found at least marginally interesting is to take a minimum over multiple probability distributions. This could be viewed as having an AI that treats the probability distribution as unknowable, something it must be fundamentally robust to. The policy $\pi$ is selected by:

$$v^{\mathrm{min}}_{\pi'}(\pi) = \sum_s \min_{\gamma \in D} \left( \gamma(s) \cdot \mathbb{E}[U_s \mid \mathrm{do}(S = s), \pi] \right)$$

where $D$ is a set of probability distributions, such as $p(s \mid \pi')$ and the uniform distribution used in $v^{\mathrm{ensure}}$.
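Here is a minimal sketch of this min-over-distributions score (my own illustration; the distributions and expected utilities are made-up numbers):

```python
# Min-over-distributions score (my own toy illustration). D contains the
# learned distribution and the uniform one; each counterfactual's term is
# weighted by the most pessimistic distribution in D.

learned = {"(F,P)": 0.99, "(F,notP)": 0.01}
uniform = {"(F,P)": 0.50, "(F,notP)": 0.50}
D = [learned, uniform]

# E[U_s | do(S = s), pi] for each counterfactual -- made-up numbers.
expected_utility = {"(F,P)": 0.0, "(F,notP)": 3.0}

score = sum(min(gamma[s] * expected_utility[s] for gamma in D)
            for s in expected_utility)
print(score)  # min(0.99*0, 0.5*0) + min(0.01*3, 0.5*3) = 0.03
```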
But taking a minimum does not ultimately avoid bad bets. An agent that takes a minimum over distributions would still trade away preparation on one objective for slightly better performance on another objective that it is slightly worse at. This doesn't seem like what we want.

Other limitations of lower-bounded counterfactual corrigibility
---------------------------------------------------------------

There are still a bunch more limitations with the lower-bounded counterfactual corrigibility formulation:

* Like all the formulations, it requires a causal graph, which might be different from what a transformative AI uses by default.
* These formulations make the AI "curious" about counterfacted variables. But the AI might become all too curious about them. If it is not satisfied by looking at the button state, it might need to disassemble and interrogate the human in order to be a little more certain about which state the button is in. Possibly mild optimization would stop the AI from trying too hard at "curiosity".

I expect a bunch more problems to emerge, because the presence of bad bets is concerning, and because all proposals in this area seem to end up having many problems that are not initially seen!

Notes
-----

Thanks to Rob Graham for some feedback about the clarity of presentation of $v^{\mathrm{mixed}}$, and for slightly improving the formulation of $v^{\mathrm{ensure}}$.
973366bf-838e-4e9e-9dcc-6fed93e525d0
StampyAI/alignment-research-dataset/arbital
Arbital
Sum of vector spaces The sum of two [vector spaces](https://arbital.com/p/3w0) $U$ and $W,$ written $U + W,$ is a vector space where the set of vectors is all possible $u + w$ for every $u \in U$ and $w \in W.$
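A concrete instance (my own example, not part of the original Arbital entry): inside $\mathbb{R}^3$, the sum of the two coordinate axes

$$U = \{(a,0,0) : a \in \mathbb{R}\}, \qquad W = \{(0,b,0) : b \in \mathbb{R}\}$$

is the $xy$-plane:

$$U + W = \{(a,b,0) : a, b \in \mathbb{R}\}.$$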
10abb85c-95ea-4ce9-8dd6-f1da9523674a
trentmkelly/LessWrong-43k
LessWrong
AISN #23: New OpenAI Models, News from Anthropic, and Representation Engineering Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. ---------------------------------------- OpenAI releases GPT-4 with Vision and DALL·E-3, announces Red Teaming Network GPT-4 with vision and voice. When GPT-4 was initially announced in March, OpenAI demonstrated its ability to process and discuss images such as diagrams or photographs. This feature has now been integrated into GPT-4V. Users can now input images in addition to text, and the model will respond to both. Users can also speak to GPT-4V, and the model will respond verbally. GPT-4V may be more vulnerable to misuse via jailbreaks and adversarial attacks. Previous research has shown that multimodal models, which can process multiple forms of input such as both text and images, are more vulnerable to adversarial attacks than text-only models. GPT-4V’s System Card includes some experiments with hand-crafted jailbreaks, but there are no public analyses of how the model fares against state of the art automated methods for adversarial attacks. DALL·E-3 is OpenAI’s latest text-to-image model. OpenAI is releasing DALL·E-3, a successor to their text-to-image model DALL·E-2. It is being integrated with ChatGPT Pro, allowing users to receive help from the chatbot in generating high-quality prompts. The model will first be available to researchers, followed by businesses and individual users.  Comparing DALL·E-2 (left) to DALL·E-3 (right) outputs on the prompt, “An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula” Red Teaming Network. Red teaming refers to a variety of risk assessment techniques, such as adversarial testing, to understand how and when a model might elicit undesirable behavior. This is a crucial part of developing powerful AI models, as training can often result in unexpected capabilities, and understanding these capab
4474eb34-e4c4-4114-ab81-456e3d69fda0
trentmkelly/LessWrong-43k
LessWrong
Krueger Lab AI Safety Internship 2024   KASL - AI Safety Internship 2024 Join the Krueger AI Safety Lab (KASL) at the University of Cambridge for a paid Research Internship focusing on technical and governance aspects of AI safety. Remote interns are welcome. Apply by Monday, January 29, 23:59 UTC. Shortlisted candidates will be notified by Friday, February 16. Application Process: * Applications close: January 29, 23:59 UTC * Form: Short essay questions, focusing on content. 1-hour duration. * Next Stage: Shortlisted candidates invited for interviews. * Apply: https://t.co/hLOjuxSfnK * More details: https://www.kasl.ai/2024/01/16/interns-2024/   Credits This post is from KASL, the Krueger AI Safety Lab. 
4f33ec42-ed03-4d2f-b0df-1d6bc6e409b9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired

*tl;dr. Weird forms of cooperation become humanly possible with the correct mechanisms. It might be interesting to try them out for practice/research purposes, even if there is not much to gain directly from aliens.*

*Note: the only part that is [fiction](https://www.lesswrong.com/tag/fiction) is the quoted parts of the [The story of the acausal market](https://www.lesswrong.com/posts/YgFbCWxzXYCpgzahe/acausal-now-we-could-totally-acausally-bargain-with-aliens#The_story_of_the_acausal_market) section*

In [*Decision theory does not imply that we get to have nice things*](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice) we learn that humans probably can't acausally bargain with super-intelligences. I'll argue in this post, however, that humans could acausally bargain with aliens *at a similar level to us*. The problem is not that acausal bargaining requires super-intelligence; it's that super-intelligences do not cooperate with humans at all (causally or acausally). This post will discuss mechanisms for causal [value handshakes](https://www.lesswrong.com/tag/values-handshakes), retrocausal bargains with aliens in our reality, and then finally a fully general mechanism for acausal bargains. Even though the acausal bargain mechanism is the main point of this post, the mechanisms are easiest to grasp if read in order. (Retrocausal bargains are interesting in their own right though.)

Although I haven't thought about it too hard, I'm guessing aliens don't have much to acausally offer us. However, doing small trades might be interesting just to see what it teaches us about [cooperation](https://www.lesswrong.com/tag/coordination-cooperation). Acausal bargaining is "cooperation-[complete](https://en.wikipedia.org/wiki/Complete_(complexity))" so to speak. It might also be really nerdy and cool™ but also perhaps an extravagant use of [weirdness points](https://www.lesswrong.com/tag/weirdness-points) (which are overrated anyways).

Causal value handshakes
=======================

Imagine we meet aliens that are at our current tech level. A value handshake is actually the default outcome! That's because a currently existing mechanism, the [economy](https://en.wikipedia.org/wiki/Economy), likes to merge with other versions of itself. In particular, we are likely to begin trading with the aliens, and each side's money will become valuable to the other. Then human charities can hire aliens and contract alien organizations, and vice-versa. For example, if the aliens value giant cubes, aliens can pay human construction companies to build them. Humans can use the alien money earned this way to pay aliens for mosquito nets.

In the interest of diversification, humans and aliens will invest in a mix of human and alien companies. This results in value lock-in; it doesn't matter whether human or alien companies are more successful, since they both have the same investors. All that matters is the relative value during the "handshake". So the human and alien values are completely replaced by a linear combination between them. And thus we see our first instance of prediction: the weight placed on human values vs. alien values is based on the *predicted resources* that their companies will gather.
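One way to make that linear combination concrete (my formalization, not the author's): if $w$ is the predicted share of total resources gathered by human-valued companies, the merged investor base effectively optimizes

$$U_{\text{merged}} = w \cdot U_{\text{human}} + (1-w) \cdot U_{\text{alien}}, \qquad w = \frac{\mathbb{E}[R_{\text{human}}]}{\mathbb{E}[R_{\text{human}}] + \mathbb{E}[R_{\text{alien}}]},$$

where $R$ denotes the resources each side's companies are predicted to gather at the time of the handshake.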
If the humans and aliens are risk-neutral or even risk-seeking, it might make sense for them to start with a giant [donation lottery](https://forum.effectivealtruism.org/topics/donor-lotteries) for all of their capital, so that they can take better advantage of economies of scale. However, I expect that this would result in the losing side defecting (i.e. they would illegally repossess all their capital from the winning side). And humans are pretty risk-averse anyways, so unless the aliens are risk-seeking (and thus willing to take poor odds) it probably won't happen.

Although not strictly necessary for this mechanism, it will be helpful to imagine that the humans and aliens buy [certificates of impact](https://forum.effectivealtruism.org/topics/certificate-of-impact) instead of donating to charities in the normal way. This will make the retrocausal bargain and acausal bargain mechanisms easier to understand. (Ignore for now the [SEC](https://en.wikipedia.org/wiki/U.S._Securities_and_Exchange_Commission) and the alien SEC; we will get to that later.)

Retrocausal bargains
====================

This is where it gets interesting. Say that we anticipate meeting aliens in the future, but we have no way to communicate with them yet. However, humans have a functional impact certificate market, and we anticipate the aliens will as well. Humans in the present can build giant cubes and create certificates of impact. These impact certificates will then trade in the market. The value of these certificates will be based on how much we *predict* aliens will pay for them (after proper [time discounting](https://en.wikipedia.org/wiki/Time_preference), which the aliens have hopefully taken into account). Note that if aliens don't exist or they don't like giant cubes, the value will ultimately be $0, which will be taken into account by the expected value calculation.

Note that even people who think aliens are unlikely can still profit from the market. For example, imagine that Quanta just put out its [rocks are cubes](https://www.quantamagazine.org/scientists-uncover-the-universal-geometry-of-geology-20201119/) articles a minute ago. You might anticipate that humans who are alien enthusiasts will interpret this as a buy signal for the giant cube impact certificates. So you will buy it now and then sell it after the price goes up. Thus our [teleology](https://www.lesswrong.com/tag/teleology) is based on the same principle as stock markets and prediction markets.

Note that there is a risk that the aliens just don't buy the impact certificates because they think they are silly. Thus, humanity is in a similar position to Omega in [Newcomb's Problem](https://www.lesswrong.com/tag/newcomb-s-problem). This is vaguely similar to the process of calculating a [credit score](https://en.wikipedia.org/wiki/Credit_score). If we encounter many aliens, only one needs to buy the impact certificates though. For example, AI safety impact certs would likely be valuable in a wide variety of alien civilizations since Earth spawning a rogue AI could range anywhere from very annoying to omnicidal. Some aliens might also appreciate the preservation of Earth historical sites and art, reduction of suffering, etc...

There is also a possibility of using prediction markets to hedge. You bet that aliens *won't* like giant cubes, such that the payoff is enough to cover your expenses when building them. As long as the profit from the impact certificate is greater than the cost of making the bet, it's a win-win!
If it's not, then the giant cubes aren't worth it. Going in the other direction, if aliens anticipate that humans will buy impact certificates, their investors can try collecting them now to sell to us once we make contact. If both humans and aliens do this, there's an interesting future where the net amount of money exchanged is small. It is mostly a [moral trade](https://forum.effectivealtruism.org/topics/moral-trade). In general, the combined value of the universe's impact certificates is a kind of universe-wide value system. An important thing to note is that this just relies on a normal impact certificate market; it does not need to "acknowledge" aliens. Impact certificates can be made for anything, and thus the "giant cubes" impact certificate is valid on its own.

But what about the U.S. Securities and Exchange Commission (and potentially the Alien Securities and Exchange Commission if it exists)?
---------------------------------------------------------------------------------------------------------------------------------------

Note that I am not a lawyer and this section is very speculative. (Well, this entire post is pretty speculative.) There is a risk that the SEC will shut down your impact certificate market if it uses real money. However, I think that it could follow the structure of [Manifold Markets](https://forum.effectivealtruism.org/topics/manifold-markets) and use a "play" currency instead. In particular, it should be impossible to trade the impact certs in for goods or services. You can still do moral trade, however. If investors planned on donating the money anyways, you could set up a way to allow them to convert the [play currency into real charitable donations](https://manifold.markets/charity). Sadly though, at this small scale people would probably just be trading impact certs for LessWrong posts and tweets instead of large charitable interventions.[[1]](#fn-3KzdPK3FHNgaq6vKF-1) Better than nothing though.

Acausal bargains
================

Let's say some group has a weird utility function that depends on *another reality*. That's right, we are talking about a [fandom](https://en.wikipedia.org/wiki/Fandom). Usually super-intelligences are portrayed as simulating other realities when doing acausal bargains. Rigorous mathematical simulation of other universes is not an option for humans, but [fiction writing](https://www.lesswrong.com/tag/fiction-topic) is a viable substitute. For example, imagine a story about aliens who, in their fictional universe, have a story about our Earth. In particular, it is accurate in the sense that the story's depiction of the Earth story is a completely accurate history of the Earth.

Let's say the aliens' fandom for Earth would like the Earth (fictional from their point of view) to have giant cubes. And Earth's fandom for the aliens wants the fictional aliens to have a cool space battle (as sport, not actual war). However, they want it to be in character for the aliens (they need to have an actual motivation). The fandom on Earth can take a giant cube impact cert and tell the author "we will trade this giant cube impact cert for a cool space battle impact cert". This appears in the Earth story in the alien story. The alien fandom then has a cool space battle and says "we accept". Both the Earth author and the alien author keep track of a ledger, and write down that the giant cube impact certificate is now "locked in" the alien story and the cool space battle impact certificate is "locked in" the Earth story.
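Here is a minimal sketch of the bookkeeping this implies (my own illustration, not code from the post; the class and all names are invented). Each author keeps a copy of the same ledger, and a trade settles by both copies recording identical post-trade state:

```python
# Minimal double-entry ledger sketch for acausal impact-cert trades
# (my own illustration; all names are invented for this example).
from dataclasses import dataclass, field

@dataclass
class Ledger:
    keeper: str
    location: dict = field(default_factory=dict)  # certificate -> story it is locked in

def settle_trade(ledgers, cert_a, story_a, cert_b, story_b):
    """Every author records the same post-trade state, so the copies agree."""
    for ledger in ledgers:
        ledger.location[cert_a] = story_a
        ledger.location[cert_b] = story_b

earth = Ledger("Earth author")
alien = Ledger("alien author")
settle_trade([earth, alien],
             "giant cubes impact cert", "alien story",
             "cool space battle impact cert", "Earth story")

assert earth.location == alien.location  # the ledgers stay consistent
print(earth.location)
# {'giant cubes impact cert': 'alien story', 'cool space battle impact cert': 'Earth story'}
```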
In theory, this ledger doesn't need to only contain impact certificates. The ledger could also contain money or capital, leading to an acausal stock market. Impact certificates are the only way for value earned on one side to actually be "consumed" on the other (otherwise, your acausal stocks only give you fiction money, which no human wants unless impact certs are in play). Note that the markets, again, do not need to explicitly "acknowledge" the acausal bargains. From a [normal](https://www.lesswrong.com/tag/adding-up-to-normality) point of view, the aliens do not exist; the human author has traded a "write a story about a cool space battle" impact cert they produced for a "giant cubes" impact cert. The human author is the "custodian" of everything on the ledger. From the perspective of the fandoms though, the human author is just an intermediary. If the aliens later sell back the "giant cubes" impact cert to a human, the market will view this as the human author selling it for money, but the fandom will view this as the alien in the story selling it for Earth money, which is recorded on the human author's ledger.

The story of the acausal market
-------------------------------

Of course, such recursively nesting stories are rare (usually only done intentionally). To make it easier to find them, you could write the story of an "acausal market" that helps these stories meet. The acausal market has every story in existence, and so can act as intermediary for any realities which write about it. (I think this is much more elegant than the ["possible girls"](https://philarchive.org/rec/SINPG) approach proposed by Neil Sinhababu. Whether the acausal market does acausal speed dating as a side-gig is left as an exercise for the reader.) Allow me to start weaving in one such story.

![An image of a saleswoman in a library is included in this post](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YgFbCWxzXYCpgzahe/muo99ypi3gtxtoxncpvn)

> *Olivia is in an infinitely large library and pulls out a book titled "Earth". It also has an incredibly long id number; there are probably a lot of title conflicts that the id number system is meant to solve.*
>
> Olivia: Let's see, Earth. *She skims through the whole book very quickly.* Ah yes, here we go, the Earth's discovery of the acausal market. It is a, blog post? Part of a blog post? Oh wait, they can hear me now.
>
> Olivia: *Looking confident* Hello people of Earth! Welcome to the Legendary Acausal Impact Exchange (LAIE)! I see this is your first time at an acausal impact exchange. Allow me to explain.
>
> Olivia: *Smiles* In every story that is part of the exchange, LAIE at some point gets discovered by Christopher King. Sometimes Christopher King is an engineer who simulates LAIE, other times he is a famous author, sometimes he is a...novice blogger in his free time. But in any case the story gets introduced to LAIE. And then they get introduced to their sales agent, which in this case is me, Olivia!

The sales agent is how you get "connected" to stories that want to trade with you. Such agents tend to have *interesting* personalities and tend to make *novel* remarks to give the story a "hook" to get more commission and...

> Olivia: *Slightly nervous* that commission is a very reasonable 0.00002% per transaction (meaning I take 0.00001% as compensation from each side in a trade). Acausal sales is a very competitive business, which is why the LAIE offers such a good rate!
> Olivia: At our exchange, we have alternate versions of all your stories, with a single change being that LAIE is discovered at some point. To maintain character agency and story integrity, we do not change anything else. It is up to the characters how they interact with or if they can even understand LAIE, and they may not learn about LAIE's discovery at all! This keeps things organic. LAIE also guarantees that every story in our collection contains a *perfect* copy of LAIE thanks to LAIELOCK™ ensuring that our ledgers have platonic integrity.

That last part is key. The fact that the acausal market filters out stories with incorrect copies of itself is what allows consistent trades. This selection bias essentially makes the consistency a "coincidence".

> Olivia: Yes, thank you... Now that you understand the LAIE, let me explain some of our offerings! I'll help connect you with interested counter-parties so we can determine what you will offer. *Quickly skims through the Earth story. Thinks to herself "Wow, I know some clients that will love this #$@!%!"* For example, you can buy part of a certificate for an Iron Man vs. Dr. Strange duel, the creation of giant cubes in Star Trek, the assassination of Jar Jar Binks in...

The sales agent has started offering trades. As an example, I will turn down those trades, but will ask how much I could get for the impact certificate of this very post. Olivia, how much for *Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired*?

> Olivia: *Reading the Earth story. Thinks to herself "Does this Christopher really need to keep interrupting? Oh #$@!%, he's narrating my thoughts."* The blog post that introduced the LAIE to your world? *Appearing somber* I would be honored to buy such a certificate myself! I would buy 50% of that certificate in exchange for half my commissions from Earth. (This trade itself would be free of commission.) I place a high value on worlds discovering the wonderful world of acausal bargaining. That is my offer.

Offer accepted. I now record that Olivia has 50% of this post's impact certificate, and I own a 50% stake in her commissions.

> Olivia: *Keeps reading excitedly. Thinks to herself "JACKPOT, I can't believe he fell for that! Oh wait, he's still narrating."* The deal is done! I recorded the same in my ledgers.
>
> Olivia: I can help you or your readers find other high-value impact certificates to help establish their portfolios. For example, if you can somehow orchestrate a [ship](https://tvtropes.org/pmwiki/pmwiki.php/Main/Shipping) between the following two humans...

I think that's enough from Olivia for now.[[2]](#fn-3KzdPK3FHNgaq6vKF-2) As an experiment in collaborative fiction, feel free to include the LAIE and Olivia in your stories. The only rules are that:

1. If both stories exist as works of fiction in the real world, the two authors from Earth must write the trade into their story (fan fics are allowed but are considered distinct "characters" from the originals).
2. If you want Olivia herself to spend her commissions on something, just ask me first (so that I can ensure double-spending doesn't take place). (This is kind of a special case of rule 1, where I am the author of LAIE. I am not picky though; you can make up pretty much whatever you want about Olivia and I'll just try to ensure consistency.)

Keep in mind that the acausal trade I made with Olivia does have "real world" consequences for me.
In accordance with the rules for how impact certs work, if someone starts retrofunding LessWrong posts with real money, I can only spend half of it "on myself". The other half must be spent on whatever Olivia would want to do to influence our reality (probably on advertising). Or if LessWrong posts start trading on an impact cert market, I can only decide to sell/barter 50% of this post's certificate, the other 50% being decided by Olivia. So there is a good chance I have, in our reality, been acausally scammed!

Conclusion
==========

In essence, mechanisms similar to those underlying capitalism would allow humans to retroactively and acausally bargain with aliens at a tech level similar to ours. This would be an interesting experiment in coordination, since such a market can be used to implement many other coordination mechanisms.

---

1. Allowing users to trade LessWrong impact certificates between each other has been a feature request at the back of my mind for a while. It may even be able to serve as an alternative to karma. (If anyone wants to steal this suggestion, feel free!) [↩︎](#fnref-3KzdPK3FHNgaq6vKF-1)
2. > Olivia: Sorry, one last thing! If any Earthlings out there help orchestrate a trade between Earthlings and other universes, I can arrange for them to get a small cut of the commission from that trade ;).

   [↩︎](#fnref-3KzdPK3FHNgaq6vKF-2)
Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley

This winter, Redwood Research is running a coordinated research effort on mechanistic interpretability of transformer models. We're excited about recent advances in mechanistic interpretability and now want to try to scale our interpretability methodology to a larger group doing research in parallel.

REMIX participants will work to provide mechanistic explanations of model behaviors, using our causal scrubbing methodology to formalize and evaluate interpretability hypotheses. We hope to produce many more explanations of model behaviors akin to our recent work investigating behaviors of GPT-2-small, toy language models, and models trained on algorithmic tasks. We think this work is a particularly promising research direction for mitigating existential risks from advanced AI systems (more in Goals and FAQ).

Since mechanistic interpretability is currently a small sub-field of machine learning, we think it's plausible that REMIX participants could make important discoveries that significantly further the field. We also think participants will learn skills valuable for many styles of interpretability research, and for ML research more broadly.

Apply here by Sunday, November 13th [DEADLINE EXTENDED] to be a researcher in the program. Apply sooner if you'd like to start early (details below) or receive an earlier response.

Some key details:

* We expect to accept 30-50 participants.
* The research program will take place in Berkeley, CA.
* We plan to have the majority of researchers participate during the months of December and/or January (depending on availability), although we may invite some to begin earlier and are open to some starting as soon as possible.
* We expect researchers to participate for a month minimum, and (all else equal) will prefer applicants who are able to come for longer. We'll pay for housing and travel, and also pay researchers for their time. We'll clarify the payment structure prior to asking people to commit to the program.
* We
Using KL-divergence to focus Deep Visual Explanation

1 Introduction
---------------

Deep convolutional neural networks have shown impressive results in many domains, such as computer vision. The fundamental improvement of current deep learning methods is that, unlike earlier, shallower networks, deep learning automatically identifies an appropriate stratification of relevant attributes to construct a predictive model [2]. But despite improvements in the quality and accuracy of predictions, these models are hard to interpret, especially as they become deeper and training weight adjustments propagate to fully connected layers. The problem of interpreting (or debugging) becomes extremely important in more sensitive domains such as medical data, where any wrong decision might directly affect a patient's well-being. We therefore need to be able to build models which are interpretable and can support the explanation of their decisions.

While neural networks can achieve high accuracy, they are not easily interpretable. Improved interpretability will help inform tradeoffs, e.g., methods like Bayesian nets are more easily interpretable but less accurate. Recently, many methods have been proposed to address this trade-off of interpretability [8-10]. One approach is based on computing the gradient of the output layer with respect to the feature maps [3], which is a relaxed generalization of [5]. The gradient is then back-propagated to a feature map layer, to try to distinguish the discriminative pixels for any input image. The gradient is determined based on network activation; to focus attention on where network activation provides insight into classifier output, we here consider two popular methods.

In the following, we explain our approach by describing a simple method to interpret deep neural network predictions. Our approach produces a map which highlights the salient region of the input image that represents the evidence related to the activations of the output layer of the network, i.e., why did the network arrive at this decision?

The remainder of this paper is organized as follows. Section 2 presents our proposed approach. In Section 3 we describe the experiments with our initial framework, and finally, Section 4 concludes with our preliminary results, followed by a discussion of extensive future work.

2 Approach
-----------

In the case of image classification, we focus on the loss of spatial information in the fully connected layers of deep convolutional networks, and identify an approximate visual explanation using the feature maps $X$ at a particular convolution layer $l$. To be able to explain a prediction of the black-box function $f$, we rely on computing the Kullback-Leibler (KL) divergence gradient of the raw class scores $y'$ with respect to the ground truth $y$, to estimate the discriminative localization map. We use this gradient as evidence for the network prediction. To achieve this, we first compute the joint probabilities for both $y$ and $y'$ as follows:

$$p_{ij} = \frac{(1 + \lVert k_i - k_j \rVert^2)^{-1}}{\sum_{u \neq v} (1 + \lVert k_u - k_v \rVert^2)^{-1}} \tag{1}$$

Here $p_{ij}$ denotes the joint probabilities, $k$ can be either the raw class scores before softmax or the ground truth, $i$ indexes a neuron value and $\sum_{u \neq v}$ combines all the values. We then compute the KL-divergence gradient based on the joint probabilities obtained in (1), i.e. $\frac{\delta y'}{\delta y} \Rightarrow z$, as derived in [6].
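As a concrete illustration, the following NumPy sketch implements equation (1) together with the normalization and weighting steps that are formalized below in equations (2)-(3) and Algorithm 1. This is not the authors' code: the exact form of the KL-divergence gradient is an assumption, written in the style of the t-SNE gradient from [6], and all array shapes are illustrative.

```python
import numpy as np

def joint_probabilities(k):
    """Eq. (1): Student-t style joint probabilities over a score vector k."""
    d2 = (k[:, None] - k[None, :]) ** 2       # pairwise squared distances
    inv = 1.0 / (1.0 + d2)
    np.fill_diagonal(inv, 0.0)                # the normalizing sum runs over u != v
    return inv / inv.sum()

def kl_divergence_heatmap(feature_maps, y_true, y_pred):
    """Discriminative localization map E_KL-divergence (Algorithm 1 below).

    feature_maps: (n_maps, H, W) activations from convolution layer l,
    y_true: ground-truth label vector y, y_pred: raw class scores y'.
    """
    p = joint_probabilities(y_true)
    q = joint_probabilities(y_pred)
    # KL-divergence gradient with respect to the raw scores; this follows
    # the t-SNE gradient of [6] and is an assumption about the exact form.
    diff = y_pred[:, None] - y_pred[None, :]
    z = 4.0 * np.sum((p - q) * diff / (1.0 + diff ** 2), axis=1)
    # Eq. (2): normalize to zero mean and unit variance.
    alpha = (z - z.mean()) / (z.std() + 1e-8)
    # Eq. (3): weight every feature map by the |alpha_j| and accumulate.
    heatmap = np.zeros(feature_maps.shape[1:])
    for x_i in feature_maps:                  # i runs over all feature maps
        heatmap += x_i * np.abs(alpha).sum()  # inner loop over j, collapsed
    # Normalize the heat map for visualization (upsampling to the input
    # image size would follow).
    return (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
```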
We also normalize the gradient to zero mean and unit variance as follows:

$$\alpha = \frac{z - \mu}{\sigma_z} \tag{2}$$

The obtained weights $\alpha$ capture the relevant information in the feature maps acquired by the network. These weights are applied to every feature map $x_i \in X$ so as to identify the discriminative pixels which influence the final prediction output as follows:

$$E_{\text{KL-divergence}} = \sum_i \sum_j x_i \ast |\alpha_j| \tag{3}$$

The sum over $i$ runs over all the feature maps $x_i \in X$ and $j$ runs over all the obtained weights $\alpha$. The output of (3) is the localization map, which can be used as the evidence to explain the prediction of the black-box function. Finally, we normalize the heat map $E_{\text{KL-divergence}}$ for visualization. Algorithm 1 summarizes the implementation method and the overall methodology is depicted in Figure [1](#S2.F1 "Figure 1 ‣ 2 Approach ‣ Using KL-divergence to focus Deep Visual Explanation").

Algorithm 1: Proposed approach

    Input: image, ground truth y
    Output: discriminative localization map E_KL-divergence
    Apply a single forward pass to estimate y′
    Compute the joint probabilities for both y′ and y using (1)
    Compute the gradient and normalize using (2) ⇒ α
    Initialize E_KL-divergence to zero
    for i = 1 to nFeatureMaps do
        Initialize temp to zero
        for j = 1 to size of α do
            temp ← temp + (x_i ∗ |α_j|)
        end for
        E_KL-divergence ← E_KL-divergence + temp
    end for

![Our proposed methodology.](https://media.arxiv-vanity.com/render-output/7390146/framework.png)

Figure 1: Our proposed methodology.

3 Experiments
--------------

To evaluate the proposed method in the context of visual explanation, we select images from the common objects in context (COCO) challenge set [1]. For the learned models, we employed the publicly available pre-trained model VGG-16 [4]. The initial results of our visualization algorithm are illustrated in Figure [2](#S3.F2 "Figure 2 ‣ 3 Experiments ‣ Using KL-divergence to focus Deep Visual Explanation").

![Explaining the decisions made by the VGG-16 (park bench, street sign, racket, cockatoo, traffic light and chihuahua), our approach highlights the most discriminative region in the image.](https://media.arxiv-vanity.com/render-output/7390146/vgg-example.png)

Figure 2: Explaining the decisions made by the VGG-16 (park bench, street sign, racket, cockatoo, traffic light and chihuahua); our approach highlights the most discriminative region in the image.

We also compared our explanation algorithm with other similar methods, i.e., guided back-propagation [9] and gradient-weighted class activation [3]. The comparison results are depicted in Figure [3](#S3.F3 "Figure 3 ‣ 3 Experiments ‣ Using KL-divergence to focus Deep Visual Explanation").

![Results on randomly chosen images from COCO visualizing VGG-16 predictions (street sign, racket and racket).](https://media.arxiv-vanity.com/render-output/7390146/comp1.png)

Figure 3: Results on randomly chosen images from COCO visualizing VGG-16 predictions (street sign, racket and racket).

### 3.1 Comparing two deep neural networks

It is also important to understand the influence of the network architecture on the visualization algorithms. For the purpose of this experiment, we tested our algorithm on the VGG-16 and AlexNet [7] networks. The comparison results are shown in Figure [4](#S3.F4 "Figure 4 ‣ 3.1 Comparing two deep neural networks ‣ 3 Experiments ‣ Using KL-divergence to focus Deep Visual Explanation"). From these results, we can see that the quality of the visualization depends on the overall accuracy of the network.
Since VGG-16 is more reliable, we can observe that the network pays attention to a smaller salient area compared to the AlexNet network. Therefore, the better the network's accuracy, the better the visual interpretation.

![Comparing visualization results of two networks.](https://media.arxiv-vanity.com/render-output/7390146/comp.png)

Figure 4: Comparing visualization results of two networks.

4 Conclusion
-------------

We introduce a new approach to explaining deep neural network predictions. This particular solution focuses on identifying salient regions by computing the KL-divergence gradient from the ground truth label and the output layer. Our preliminary experiments on image classification problems compare the performance of two networks. We believe that this work can be further extended to evaluate decisions in a variety of applications of deep neural networks, in image classification, speech, and text. We are especially keen to consider text domains, as interpretability will expose patterns of semantic relatedness beyond that of images.

Acknowledgment
--------------

We thank colleagues from the Alberta Machine Intelligence Institute for their discussions and advice. This work was supported by NSERC and AMII.

References
----------

[1] Lin T. Y., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P. and Zitnick C. L. Microsoft COCO: Common objects in context. In Proc. ECCV, 2014.

[2] LeCun Y., Bengio Y. and Hinton G. Deep learning. Nature, pages 436-444, 2015.

[3] Selvaraju R. R., Das A., Vedantam R., Cogswell M., Parikh D. and Batra D. Grad-CAM: Why did you say that? In Workshop on Interpretable Machine Learning in Complex Systems, NIPS, 2016.

[4] Simonyan K. and Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[5] Zhou B., Khosla A., Lapedriza A., Oliva A. and Torralba A. Learning deep features for discriminative localization. In Proc. CVPR, pages 2921-2929, 2016.

[6] Maaten L. V. D. and Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, pages 2579-2605, 2008.

[7] Krizhevsky A., Sutskever I. and Hinton G. E. ImageNet classification with deep convolutional neural networks. In Proc. NIPS, pages 1097-1105, 2012.

[8] Zintgraf L. M., Cohen T. S., Adel T. and Welling M. Visualizing deep neural network decisions: Prediction difference analysis. In Proc. ICLR, 2017.

[9] Springenberg J. T., Dosovitskiy A., Brox T. and Riedmiller M. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

[10] Mahendran A. and Vedaldi A. Salient deconvolutional networks. In Proc. ECCV, pages 120-135, 2016.
What did you learn from leaked documents?

As rationalists we seek to understand the world, but this is made harder by political bias and competing agendas. Leaked documents represent a kind of ground truth, showing how the world really works: telling us what's for sale, what the real agendas are, how powerful spies are, and how coordinated governments are. They are almost the opposite of conspiracy theories, in that they present observations that can prune conspiracy theories.

But there are too many documents to read, so let's compare notes. What surprised you and caused you to update your view of the world?

Update: related conversation on TheMotte.org
Highly accurate protein structure prediction with AlphaFold

Abstract
--------

Proteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function. Through an enormous experimental effort[1](#ref-CR1 "Thompson, M. C., Yeates, T. O. & Rodriguez, J. A. Advances in methods for atomic resolution macromolecular structure determination. F1000Res. 9, 667 (2020)."),[2](#ref-CR2 "Bai, X.-C., McMullan, G. & Scheres, S. H. W. How cryo-EM is revolutionizing structural biology. Trends Biochem. Sci. 40, 49–57 (2015)."),[3](#ref-CR3 "Jaskolski, M., Dauter, Z. & Wlodawer, A. A brief history of macromolecular crystallography, illustrated by a family tree and its Nobel fruits. FEBS J. 281, 3985–4009 (2014)."),[4](/articles/s41586-021-03819-2#ref-CR4 "Wüthrich, K. The way to NMR structures of proteins. Nat. Struct. Biol. 8, 923–925 (2001)."), the structures of around 100,000 unique proteins have been determined[5](/articles/s41586-021-03819-2#ref-CR5 "wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Res. 47, D520–D528 (2018)."), but this represents a small fraction of the billions of known protein sequences[6](/articles/s41586-021-03819-2#ref-CR6 "Mitchell, A. L. et al. MGnify: the microbiome analysis resource in 2020. Nucleic Acids Res. 48, D570–D578 (2020)."),[7](/articles/s41586-021-03819-2#ref-CR7 "Steinegger, M., Mirdita, M. & Söding, J. Protein-level assembly increases protein sequence recovery from metagenomic samples manyfold. Nat. Methods 16, 603–606 (2019)."). Structural coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics. Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure prediction component of the ‘protein folding problem’[8](/articles/s41586-021-03819-2#ref-CR8 "Dill, K. A., Ozkan, S. B., Shell, M. S. & Weikl, T. R. The protein folding problem. Annu. Rev. Biophys. 37, 289–316 (2008).")—has been an important open research problem for more than 50 years[9](/articles/s41586-021-03819-2#ref-CR9 "Anfinsen, C. B. Principles that govern the folding of protein chains. Science 181, 223–230 (1973)."). Despite recent progress[10](#ref-CR10 "Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020)."),[11](#ref-CR11 "Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLOS Comput. Biol. 13, e1005324 (2017)."),[12](#ref-CR12 "Zheng, W. et al. Deep-learning contact-map guided protein structure prediction in CASP13. Proteins 87, 1149–1164 (2019)."),[13](#ref-CR13 "Abriata, L. A., Tamò, G. E. & Dal Peraro, M. A further leap of improvement in tertiary structure prediction in CASP13 prompts new routes for future assessments. Proteins 87, 1100–1112 (2019)."),[14](/articles/s41586-021-03819-2#ref-CR14 "Pearce, R. & Zhang, Y.
Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021)."), existing methods fall far short of atomic accuracy, especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14)[15](/articles/s41586-021-03819-2#ref-CR15 "Moult, J., Fidelis, K., Kryshtafovych, A., Schwede, T. & Topf, M. Critical assessment of techniques for protein structure prediction, fourteenth round. CASP 14 Abstract Book https://www.predictioncenter.org/casp14/doc/CASP14_Abstracts.pdf (2020)."), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge about protein structure, leveraging multi-sequence alignments, into the design of the deep learning algorithm.

Main
----

The development of computational methods to predict three-dimensional (3D) protein structures from the protein sequence has proceeded along two complementary paths that focus on either the physical interactions or the evolutionary history. The physical interaction programme heavily integrates our understanding of molecular driving forces into either thermodynamic or kinetic simulation of protein physics[16](/articles/s41586-021-03819-2#ref-CR16 "Brini, E., Simmerling, C. & Dill, K. Protein storytelling through physics. Science 370, eaaz3041 (2020).") or statistical approximations thereof[17](/articles/s41586-021-03819-2#ref-CR17 "Sippl, M. J. Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins. J. Mol. Biol. 213, 859–883 (1990)."). Although theoretically very appealing, this approach has proved highly challenging for even moderate-sized proteins due to the computational intractability of molecular simulation, the context dependence of protein stability and the difficulty of producing sufficiently accurate models of protein physics. The evolutionary programme has provided an alternative in recent years, in which the constraints on protein structure are derived from bioinformatics analysis of the evolutionary history of proteins, homology to solved structures[18](/articles/s41586-021-03819-2#ref-CR18 "Šali, A. & Blundell, T. L. Comparative protein modelling by satisfaction of spatial restraints. J. Mol. Biol. 234, 779–815 (1993)."),[19](/articles/s41586-021-03819-2#ref-CR19 "Roy, A., Kucukural, A. & Zhang, Y. I-TASSER: a unified platform for automated protein structure and function prediction. Nat. Protocols 5, 725–738 (2010).") and pairwise evolutionary correlations[20](#ref-CR20 "Altschuh, D., Lesk, A. M., Bloomer, A. C. & Klug, A. Correlation of co-ordinated amino acid substitutions with function in viruses related to tobacco mosaic virus. J. Mol. Biol. 193, 693–707 (1987)."),[21](#ref-CR21 "Shindyalov, I. N., Kolchanov, N. A. & Sander, C. Can three-dimensional contacts in protein structures be predicted by analysis of correlated mutations? Protein Eng. 7, 349–358 (1994)."),[22](#ref-CR22 "Weigt, M., White, R.
A., Szurmant, H., Hoch, J. A. & Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. Proc. Natl Acad. Sci. USA 106, 67–72 (2009)."),[23](#ref-CR23 "Marks, D. S. et al. Protein 3D structure computed from evolutionary sequence variation. PLoS ONE 6, e28766 (2011)."),[24](/articles/s41586-021-03819-2#ref-CR24 "Jones, D. T., Buchan, D. W. A., Cozzetto, D. & Pontil, M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. Bioinformatics 28, 184–190 (2012)."). This bioinformatics approach has benefited greatly from the steady growth of experimental protein structures deposited in the Protein Data Bank (PDB)[5](/articles/s41586-021-03819-2#ref-CR5 "wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. Nucleic Acids Res. 47, D520–D528 (2018)."), the explosion of genomic sequencing and the rapid development of deep learning techniques to interpret these correlations. Despite these advances, contemporary physical and evolutionary-history-based approaches produce predictions that are far short of experimental accuracy in the majority of cases in which a close homologue has not been solved experimentally and this has limited their utility for many biological applications. In this study, we develop the first, to our knowledge, computational approach capable of predicting protein structures to near experimental accuracy in a majority of cases. The neural network AlphaFold that we developed was entered into the CASP14 assessment (May–July 2020; entered under the team name ‘AlphaFold2’ and a completely different model from our CASP13 AlphaFold system[10](/articles/s41586-021-03819-2#ref-CR10 "Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).")). The CASP assessment is carried out biennially using recently solved structures that have not been deposited in the PDB or publicly disclosed so that it is a blind test for the participating methods, and has long served as the gold-standard assessment for the accuracy of structure prediction[25](/articles/s41586-021-03819-2#ref-CR25 "Moult, J., Pedersen, J. T., Judson, R. & Fidelis, K. A large-scale experiment to assess protein structure prediction methods. Proteins 23, ii–iv (1995)."),[26](/articles/s41586-021-03819-2#ref-CR26 "Kryshtafovych, A., Schwede, T., Topf, M., Fidelis, K. & Moult, J. Critical assessment of methods of protein structure prediction (CASP)-round XIII. Proteins 87, 1011–1020 (2019)."). In CASP14, AlphaFold structures were vastly more accurate than competing methods. AlphaFold structures had a median backbone accuracy of 0.96 Å r.m.s.d.95 (Cα root-mean-square deviation at 95% residue coverage) (95% confidence interval = 0.85–1.16 Å) whereas the next best performing method had a median backbone accuracy of 2.8 Å r.m.s.d.95 (95% confidence interval = 2.7–4.0 Å) (measured on CASP domains; see Fig. [1a](/articles/s41586-021-03819-2#Fig1) for backbone accuracy and Supplementary Fig. [14](/articles/s41586-021-03819-2#MOESM1) for all-atom accuracy). As a comparison point for this accuracy, the width of a carbon atom is approximately 1.4 Å. In addition to very accurate domain structures (Fig. [1b](/articles/s41586-021-03819-2#Fig1)), AlphaFold is able to produce highly accurate side chains (Fig. 
[1c](/articles/s41586-021-03819-2#Fig1)) when the backbone is highly accurate and considerably improves over template-based methods even when strong templates are available. The all-atom accuracy of AlphaFold was 1.5 Å r.m.s.d.95 (95% confidence interval = 1.2–1.6 Å) compared with the 3.5 Å r.m.s.d.95 (95% confidence interval = 3.1–4.2 Å) of the best alternative method. Our methods are scalable to very long proteins with accurate domains and domain-packing (see Fig. [1d](/articles/s41586-021-03819-2#Fig1) for the prediction of a 2,180-residue protein with no structural homologues). Finally, the model is able to provide precise, per-residue estimates of its reliability that should enable the confident use of these predictions.

**Fig. 1: AlphaFold produces highly accurate structures.** [![figure 1](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig1_HTML.png)](/articles/s41586-021-03819-2/figures/1) **a**, The performance of AlphaFold on the CASP14 dataset (*n* = 87 protein domains) relative to the top-15 entries (out of 146 entries), group numbers correspond to the numbers assigned to entrants by CASP. Data are median and the 95% confidence interval of the median, estimated from 10,000 bootstrap samples. **b**, Our prediction of CASP14 target T1049 (PDB 6Y4F, blue) compared with the true (experimental) structure (green). Four residues in the C terminus of the crystal structure are *B*-factor outliers and are not depicted. **c**, CASP14 target T1056 (PDB 6YJ1). An example of a well-predicted zinc-binding site (AlphaFold has accurate side chains even though it does not explicitly predict the zinc ion). **d**, CASP target T1044 (PDB 6VR4)—a 2,180-residue single chain—was predicted with correct domain packing (the prediction was made after CASP using AlphaFold without intervention). **e**, Model architecture. Arrows show the information flow among the various components described in this paper. Array shapes are shown in parentheses with *s*, number of sequences (*N*seq in the main text); *r*, number of residues (*N*res in the main text); *c*, number of channels.

We demonstrate in Fig. [2a](/articles/s41586-021-03819-2#Fig2) that the high accuracy that AlphaFold demonstrated in CASP14 extends to a large sample of recently released PDB structures; in this dataset, all structures were deposited in the PDB after our training data cut-off and are analysed as full chains (see [Methods](/articles/s41586-021-03819-2#Sec10), Supplementary Fig. [15](/articles/s41586-021-03819-2#MOESM1) and Supplementary Table [6](/articles/s41586-021-03819-2#MOESM1) for more details). Furthermore, we observe high side-chain accuracy when the backbone prediction is accurate (Fig. [2b](/articles/s41586-021-03819-2#Fig2)) and we show that our confidence measure, the predicted local-distance difference test (pLDDT), reliably predicts the Cα local-distance difference test (lDDT-Cα) accuracy of the corresponding prediction (Fig. [2c](/articles/s41586-021-03819-2#Fig2)). We also find that the global superposition metric template modelling score (TM-score)[27](/articles/s41586-021-03819-2#ref-CR27 "Zhang, Y. & Skolnick, J. Scoring function for automated assessment of protein structure template quality. Proteins 57, 702–710 (2004).") can be accurately estimated (Fig. [2d](/articles/s41586-021-03819-2#Fig2)).
Overall, these analyses validate that the high accuracy and reliability of AlphaFold on CASP14 proteins also transfers to an uncurated collection of recent PDB submissions, as would be expected (see [Supplementary Methods 1.15](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [11](/articles/s41586-021-03819-2#MOESM1) for confirmation that this high accuracy extends to new folds).

**Fig. 2: Accuracy of AlphaFold on recent PDB structures.** [![figure 2](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig2_HTML.png)](/articles/s41586-021-03819-2/figures/2) The analysed structures are newer than any structure in the training set. Further filtering is applied to reduce redundancy (see [Methods](/articles/s41586-021-03819-2#Sec10)). **a**, Histogram of backbone r.m.s.d. for full chains (Cα r.m.s.d. at 95% coverage). Error bars are 95% confidence intervals (Poisson). This dataset excludes proteins with a template (identified by hmmsearch) from the training set with more than 40% sequence identity covering more than 1% of the chain (*n* = 3,144 protein chains). The overall median is 1.46 Å (95% confidence interval = 1.40–1.56 Å). Note that this measure will be highly sensitive to domain packing and domain accuracy; a high r.m.s.d. is expected for some chains with uncertain packing or packing errors. **b**, Correlation between backbone accuracy and side-chain accuracy. Filtered to structures with any observed side chains and resolution better than 2.5 Å (*n* = 5,317 protein chains); side chains were further filtered to *B*-factor <30 Å². A rotamer is classified as correct if the predicted torsion angle is within 40°. Each point aggregates a range of lDDT-Cα, with a bin size of 2 units above 70 lDDT-Cα and 5 units otherwise. Points correspond to the mean accuracy; error bars are 95% confidence intervals (Student *t*-test) of the mean on a per-residue basis. **c**, Confidence score compared to the true accuracy on chains. Least-squares linear fit lDDT-Cα = 0.997 × pLDDT − 1.17 (Pearson’s *r* = 0.76). *n* = 10,795 protein chains. The shaded region of the linear fit represents a 95% confidence interval estimated from 10,000 bootstrap samples. In the companion paper[39](/articles/s41586-021-03819-2#ref-CR39 "Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature https://doi.org/10.1038/s41586-021-03828-1 (2021)."), additional quantification of the reliability of pLDDT as a confidence measure is provided. **d**, Correlation between pTM and full chain TM-score. Least-squares linear fit TM-score = 0.98 × pTM + 0.07 (Pearson’s *r* = 0.85). *n* = 10,795 protein chains. The shaded region of the linear fit represents a 95% confidence interval estimated from 10,000 bootstrap samples.

The AlphaFold network
---------------------

AlphaFold greatly improves the accuracy of structure prediction by incorporating novel neural network architectures and training procedures based on the evolutionary, physical and geometric constraints of protein structures.
In particular, we demonstrate a new architecture to jointly embed multiple sequence alignments (MSAs) and pairwise features, a new output representation and associated loss that enable accurate end-to-end structure prediction, a new equivariant attention architecture, use of intermediate losses to achieve iterative refinement of predictions, masked MSA loss to jointly train with the structure, learning from unlabelled protein sequences using self-distillation and self-estimates of accuracy.

The AlphaFold network directly predicts the 3D coordinates of all heavy atoms for a given protein using the primary amino acid sequence and aligned sequences of homologues as inputs (Fig. [1e](/articles/s41586-021-03819-2#Fig1); see [Methods](/articles/s41586-021-03819-2#Sec10) for details of inputs including databases, MSA construction and use of templates). A description of the most important ideas and components is provided below. The full network architecture and training procedure are provided in the [Supplementary Methods](/articles/s41586-021-03819-2#MOESM1).

The network comprises two main stages. First, the trunk of the network processes the inputs through repeated layers of a novel neural network block that we term Evoformer to produce an *N*seq × *N*res array (*N*seq, number of sequences; *N*res, number of residues) that represents a processed MSA and an *N*res × *N*res array that represents residue pairs. The MSA representation is initialized with the raw MSA (although see [Supplementary Methods 1.2.7](/articles/s41586-021-03819-2#MOESM1) for details of handling very deep MSAs). The Evoformer blocks contain a number of attention-based and non-attention-based components. We show evidence in ‘Interpreting the neural network’ that a concrete structural hypothesis arises early within the Evoformer blocks and is continuously refined. The key innovations in the Evoformer block are new mechanisms to exchange information within the MSA and pair representations that enable direct reasoning about the spatial and evolutionary relationships.

The trunk of the network is followed by the structure module that introduces an explicit 3D structure in the form of a rotation and translation for each residue of the protein (global rigid body frames). These representations are initialized in a trivial state with all rotations set to the identity and all positions set to the origin, but rapidly develop and refine a highly accurate protein structure with precise atomic details. Key innovations in this section of the network include breaking the chain structure to allow simultaneous local refinement of all parts of the structure, a novel equivariant transformer to allow the network to implicitly reason about the unrepresented side-chain atoms and a loss term that places substantial weight on the orientational correctness of the residues. Both within the structure module and throughout the whole network, we reinforce the notion of iterative refinement by repeatedly applying the final loss to outputs and then feeding the outputs recursively into the same modules. The iterative refinement using the whole network (which we term ‘recycling’ and is related to approaches in computer vision[28](/articles/s41586-021-03819-2#ref-CR28 "Tu, Z. & Bai, X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 32, 1744–1757 (2010)."),[29](/articles/s41586-021-03819-2#ref-CR29 "Carreira, J., Agrawal, P., Fragkiadaki, K. & Malik, J.
Human pose estimation with iterative error feedback. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4733–4742 (2016).")) contributes markedly to accuracy with minor extra training time (see [Supplementary Methods 1.8](/articles/s41586-021-03819-2#MOESM1) for details).

Evoformer
---------

The key principle of the building block of the network—named Evoformer (Figs. [1](/articles/s41586-021-03819-2#Fig1)e, [3a](/articles/s41586-021-03819-2#Fig3))—is to view the prediction of protein structures as a graph inference problem in 3D space in which the edges of the graph are defined by residues in proximity. The elements of the pair representation encode information about the relation between the residues (Fig. [3b](/articles/s41586-021-03819-2#Fig3)). The columns of the MSA representation encode the individual residues of the input sequence while the rows represent the sequences in which those residues appear. Within this framework, we define a number of update operations that are applied in each block in which the different update operations are applied in series.

**Fig. 3: Architectural details.** [![figure 3](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig3_HTML.png)](/articles/s41586-021-03819-2/figures/3) **a**, Evoformer block. Arrows show the information flow. The shape of the arrays is shown in parentheses. **b**, The pair representation interpreted as directed edges in a graph. **c**, Triangle multiplicative update and triangle self-attention. The circles represent residues. Entries in the pair representation are illustrated as directed edges and in each diagram, the edge being updated is *ij*. **d**, Structure module including Invariant point attention (IPA) module. The single representation is a copy of the first row of the MSA representation. **e**, Residue gas: a representation of each residue as one free-floating rigid body for the backbone (blue triangles) and *χ* angles for the side chains (green circles). The corresponding atomic structure is shown below. **f**, Frame aligned point error (FAPE). Green, predicted structure; grey, true structure; (*R*k, **t**k), frames; **x**i, atom positions.

The MSA representation updates the pair representation through an element-wise outer product that is summed over the MSA sequence dimension. In contrast to previous work[30](/articles/s41586-021-03819-2#ref-CR30 "Mirabello, C. & Wallner, B. rawMSA: end-to-end deep learning using raw multiple sequence alignments. PLoS ONE 14, e0220182 (2019)."), this operation is applied within every block rather than once in the network, which enables the continuous communication from the evolving MSA representation to the pair representation. Within the pair representation, there are two different update patterns. Both are inspired by the necessity of consistency of the pair representation—for a pairwise description of amino acids to be representable as a single 3D structure, many constraints must be satisfied including the triangle inequality on distances. On the basis of this intuition, we arrange the update operations on the pair representation in terms of triangles of edges involving three different nodes (Fig. [3c](/articles/s41586-021-03819-2#Fig3)). In particular, we add an extra logit bias to axial attention[31](/articles/s41586-021-03819-2#ref-CR31 "Huang, Z. et al.
CCNet: criss-cross attention for semantic segmentation. In Proc. IEEE/CVF International Conference on Computer Vision 603–612 (2019).") to include the ‘missing edge’ of the triangle and we define a non-attention update operation ‘triangle multiplicative update’ that uses two edges to update the missing third edge (see [Supplementary Methods 1.6.5](/articles/s41586-021-03819-2#MOESM1) for details). The triangle multiplicative update was developed originally as a more symmetric and cheaper replacement for the attention, and networks that use only the attention or multiplicative update are both able to produce high-accuracy structures. However, the combination of the two updates is more accurate.

We also use a variant of axial attention within the MSA representation. During the per-sequence attention in the MSA, we project additional logits from the pair stack to bias the MSA attention. This closes the loop by providing information flow from the pair representation back into the MSA representation, ensuring that the overall Evoformer block is able to fully mix information between the pair and MSA representations and prepare for structure generation within the structure module.

End-to-end structure prediction
-------------------------------

The structure module (Fig. [3d](/articles/s41586-021-03819-2#Fig3)) operates on a concrete 3D backbone structure using the pair representation and the original sequence row (single representation) of the MSA representation from the trunk. The 3D backbone structure is represented as *N*res independent rotations and translations, each with respect to the global frame (residue gas) (Fig. [3e](/articles/s41586-021-03819-2#Fig3)). These rotations and translations—representing the geometry of the N-Cα-C atoms—prioritize the orientation of the protein backbone so that the location of the side chain of each residue is highly constrained within that frame. Conversely, the peptide bond geometry is completely unconstrained and the network is observed to frequently violate the chain constraint during the application of the structure module as breaking this constraint enables the local refinement of all parts of the chain without solving complex loop closure problems. Satisfaction of the peptide bond geometry is encouraged during fine-tuning by a violation loss term. Exact enforcement of peptide bond geometry is only achieved in the post-prediction relaxation of the structure by gradient descent in the Amber[32](/articles/s41586-021-03819-2#ref-CR32 "Hornak, V. et al. Comparison of multiple Amber force fields and development of improved protein backbone parameters. Proteins 65, 712–725 (2006).") force field. Empirically, this final relaxation does not improve the accuracy of the model as measured by the global distance test (GDT)[33](/articles/s41586-021-03819-2#ref-CR33 "Zemla, A. LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res. 31, 3370–3374 (2003).") or lDDT-Cα[34](/articles/s41586-021-03819-2#ref-CR34 "Mariani, V., Biasini, M., Barbato, A. & Schwede, T. lDDT: a local superposition-free score for comparing protein structures and models using distance difference tests. Bioinformatics 29, 2722–2728 (2013).") but does remove distracting stereochemical violations without the loss of accuracy. The residue gas representation is updated iteratively in two stages (Fig. [3d](/articles/s41586-021-03819-2#Fig3)).
First, a geometry-aware attention operation that we term ‘invariant point attention’ (IPA) is used to update an *N*res set of neural activations (single representation) without changing the 3D positions, then an equivariant update operation is performed on the residue gas using the updated activations. The IPA augments each of the usual attention queries, keys and values with 3D points that are produced in the local frame of each residue such that the final value is invariant to global rotations and translations (see [Methods](/articles/s41586-021-03819-2#Sec10) ‘IPA’ for details). The 3D queries and keys also impose a strong spatial/locality bias on the attention, which is well-suited to the iterative refinement of the protein structure.

After each attention operation and element-wise transition block, the module computes an update to the rotation and translation of each backbone frame. The application of these updates within the local frame of each residue makes the overall attention and update block an equivariant operation on the residue gas.

Predictions of side-chain *χ* angles as well as the final, per-residue accuracy of the structure (pLDDT) are computed with small per-residue networks on the final activations at the end of the network. The estimate of the TM-score (pTM) is obtained from a pairwise error prediction that is computed as a linear projection from the final pair representation. The final loss (which we term the frame-aligned point error (FAPE) (Fig. [3f](/articles/s41586-021-03819-2#Fig3))) compares the predicted atom positions to the true positions under many different alignments. For each alignment, defined by aligning the predicted frame (*R*k, **t**k) to the corresponding true frame, we compute the distance of all predicted atom positions **x**i from the true atom positions. The resulting *N*frames × *N*atoms distances are penalized with a clamped *L*1 loss. This creates a strong bias for atoms to be correct relative to the local frame of each residue and hence correct with respect to its side-chain interactions, as well as providing the main source of chirality for AlphaFold ([Supplementary Methods 1.9.3](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [9](/articles/s41586-021-03819-2#MOESM1)).

Training with labelled and unlabelled data
------------------------------------------

The AlphaFold architecture is able to train to high accuracy using only supervised learning on PDB data, but we are able to enhance accuracy (Fig. [4a](/articles/s41586-021-03819-2#Fig4)) using an approach similar to noisy student self-distillation[35](/articles/s41586-021-03819-2#ref-CR35 "Xie, Q., Luong, M.-T., Hovy, E. & Le, Q. V. Self-training with noisy student improves imagenet classification. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 10687–10698 (2020)."). In this procedure, we use a trained network to predict the structure of around 350,000 diverse sequences from Uniclust30[36](/articles/s41586-021-03819-2#ref-CR36 "Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. Nucleic Acids Res. 45, D170–D176 (2017).") and make a new dataset of predicted structures filtered to a high-confidence subset.
We then train the same architecture again from scratch using a mixture of PDB data and this new dataset of predicted structures as the training data, in which the various training data augmentations such as cropping and MSA subsampling make it challenging for the network to recapitulate the previously predicted structures. This self-distillation procedure makes effective use of the unlabelled sequence data and considerably improves the accuracy of the resulting network.

**Fig. 4: Interpreting the neural network.** [![figure 4](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig4_HTML.png)](/articles/s41586-021-03819-2/figures/4) **a**, Ablation results on two target sets: the CASP14 set of domains (*n* = 87 protein domains) and the PDB test set of chains with template coverage of ≤30% at 30% identity (*n* = 2,261 protein chains). Domains are scored with GDT and chains are scored with lDDT-Cα. The ablations are reported as a difference compared with the average of the three baseline seeds. Means (points) and 95% bootstrap percentile intervals (error bars) are computed using bootstrap estimates of 10,000 samples. **b**, Domain GDT trajectory over 4 recycling iterations and 48 Evoformer blocks on CASP14 targets LmrP (T1024) and Orf8 (T1064) where D1 and D2 refer to the individual domains as defined by the CASP assessment. Both T1024 domains obtain the correct structure early in the network, whereas the structure of T1064 changes multiple times and requires nearly the full depth of the network to reach the final structure. Note, 48 Evoformer blocks comprise one recycling iteration.

Additionally, we randomly mask out or mutate individual residues within the MSA and have a Bidirectional Encoder Representations from Transformers (BERT)-style[37](/articles/s41586-021-03819-2#ref-CR37 "Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 1, 4171–4186 (2019).") objective to predict the masked elements of the MSA sequences. This objective encourages the network to learn to interpret phylogenetic and covariation relationships without hardcoding a particular correlation statistic into the features. The BERT objective is trained jointly with the normal PDB structure loss on the same training examples and is not pre-trained, in contrast to recent independent work[38](/articles/s41586-021-03819-2#ref-CR38 "Rao, R. et al. MSA transformer. In Proc. 38th International Conference on Machine Learning PMLR 139, 8844–8856 (2021).").

Interpreting the neural network
-------------------------------

To understand how AlphaFold predicts protein structure, we trained a separate structure module for each of the 48 Evoformer blocks in the network while keeping all parameters of the main network frozen ([Supplementary Methods 1.14](/articles/s41586-021-03819-2#MOESM1)). Including our recycling stages, this provides a trajectory of 192 intermediate structures—one per full Evoformer block—in which each intermediate represents the belief of the network of the most likely structure at that block.
The resulting trajectories are surprisingly smooth after the first few blocks, showing that AlphaFold makes constant incremental improvements to the structure until it can no longer improve (see Fig. [4b](/articles/s41586-021-03819-2#Fig4) for a trajectory of accuracy). These trajectories also illustrate the role of network depth. For very challenging proteins such as ORF8 of SARS-CoV-2 (T1064), the network searches and rearranges secondary structure elements for many layers before settling on a good structure. For other proteins such as LmrP (T1024), the network finds the final structure within the first few layers. Structure trajectories of CASP14 targets T1024, T1044, T1064 and T1091 that demonstrate a clear iterative building process for a range of protein sizes and difficulties are shown in Supplementary Videos [1](/articles/s41586-021-03819-2#MOESM3)–[4](/articles/s41586-021-03819-2#MOESM6). In [Supplementary Methods 1.16](/articles/s41586-021-03819-2#MOESM1) and Supplementary Figs. [12](/articles/s41586-021-03819-2#MOESM1), [13](/articles/s41586-021-03819-2#MOESM1), we interpret the attention maps produced by AlphaFold layers.

Figure [4a](/articles/s41586-021-03819-2#Fig4) contains detailed ablations of the components of AlphaFold that demonstrate that a variety of different mechanisms contribute to AlphaFold accuracy. Detailed descriptions of each ablation model, their training details, extended discussion of ablation results and the effect of MSA depth on each ablation are provided in [Supplementary Methods 1.13](/articles/s41586-021-03819-2#MOESM1) and Supplementary Fig. [10](/articles/s41586-021-03819-2#MOESM1).

MSA depth and cross-chain contacts
----------------------------------

Although AlphaFold has a high accuracy across the vast majority of deposited PDB structures, we note that there are still factors that affect accuracy or limit the applicability of the model. The model uses MSAs and the accuracy decreases substantially when the median alignment depth is less than around 30 sequences (see Fig. [5a](/articles/s41586-021-03819-2#Fig5) for details). We observe a threshold effect where improvements in MSA depth over around 100 sequences lead to small gains. We hypothesize that the MSA information is needed to coarsely find the correct structure within the early stages of the network, but refinement of that prediction into a high-accuracy model does not depend crucially on the MSA information. The other substantial limitation that we have observed is that AlphaFold is much weaker for proteins that have few intra-chain or homotypic contacts compared to the number of heterotypic contacts (further details are provided in a companion paper[39](/articles/s41586-021-03819-2#ref-CR39 "Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. Nature https://doi.org/10.1038/s41586-021-03828-1 (2021).")). This typically occurs for bridging domains within larger complexes in which the shape of the protein is created almost entirely by interactions with other chains in the complex. Conversely, AlphaFold is often able to give high-accuracy predictions for homomers, even when the chains are substantially intertwined (Fig. [5b](/articles/s41586-021-03819-2#Fig5)). We expect that the ideas of AlphaFold are readily applicable to predicting full hetero-complexes in a future system and that this will remove the difficulty with protein chains that have a large number of hetero-contacts.

**Fig. 5: Effect of MSA depth and cross-chain contacts.** [![figure 5](//media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_Fig5_HTML.png)](/articles/s41586-021-03819-2/figures/5) **a**, Backbone accuracy (lDDT-Cα) for the redundancy-reduced set of the PDB after our training data cut-off, restricting to proteins in which at most 25% of the long-range contacts are between different heteromer chains. We further consider two groups of proteins based on template coverage at 30% sequence identity: covering more than 60% of the chain (*n* = 6,743 protein chains) and covering less than 30% of the chain (*n* = 1,596 protein chains). MSA depth is computed by counting the number of non-gap residues for each position in the MSA (using the *N*eff weighting scheme; see [Methods](/articles/s41586-021-03819-2#Sec10) for details) and taking the median across residues. The curves are obtained through Gaussian kernel average smoothing (window size is 0.2 units in log10(*N*eff)); the shaded area is the 95% confidence interval estimated using bootstrap of 10,000 samples. **b**, An intertwined homotrimer (PDB 6SK0) is correctly predicted without input stoichiometry and only a weak template (blue is predicted and green is experimental).

Related work
------------

The prediction of protein structures has had a long and varied development, which is extensively covered in a number of reviews[14](/articles/s41586-021-03819-2#ref-CR14 "Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021)."),[40](#ref-CR40 "Kuhlman, B. & Bradley, P. Advances in protein structure prediction and design. Nat. Rev. Mol. Cell Biol. 20, 681–697 (2019)."),[41](#ref-CR41 "Marks, D. S., Hopf, T. A. & Sander, C. Protein structure prediction from sequence variation. Nat. Biotechnol. 30, 1072–1080 (2012)."),[42](#ref-CR42 "Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202, 865–884 (1988)."),[43](/articles/s41586-021-03819-2#ref-CR43 "Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. Protein Eng. 14, 835–843 (2001)."). Despite the long history of applying neural networks to structure prediction[14](/articles/s41586-021-03819-2#ref-CR14 "Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. Curr. Opin. Struct. Biol. 68, 194–207 (2021)."),[42](/articles/s41586-021-03819-2#ref-CR42 "Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202, 865–884 (1988)."),[43](/articles/s41586-021-03819-2#ref-CR43 "Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. Protein Eng. 14, 835–843 (2001)."), they have only recently come to improve structure prediction[10](/articles/s41586-021-03819-2#ref-CR10 "Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020)."),[11](/articles/s41586-021-03819-2#ref-CR11 "Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLOS Comput. Biol.
Related work
------------

The prediction of protein structures has had a long and varied development, which is extensively covered in a number of reviews[14,40–43]. Despite the long history of applying neural networks to structure prediction[14,42,43], they have only recently come to improve structure prediction[10,11,44,45]. These approaches effectively leverage the rapid improvement in computer vision systems[46] by treating the problem of protein structure prediction as converting an 'image' of evolutionary couplings[22–24] to an 'image' of the protein distance matrix and then integrating the distance predictions into a heuristic system that produces the final 3D coordinate prediction. A few recent methods have been developed to predict the 3D coordinates directly[47–50], but the accuracy of these approaches does not match traditional, hand-crafted structure prediction pipelines[51]. In parallel, the success of attention-based networks for language processing[52] and, more recently, computer vision[31,53] has inspired the exploration of attention-based methods for interpreting protein sequences[54–56].
Discussion
----------

The methodology that we have taken in designing AlphaFold is a combination of the bioinformatics and physical approaches: we use a physical and geometric inductive bias to build components that learn from PDB data with minimal imposition of handcrafted features (for example, AlphaFold builds hydrogen bonds effectively without a hydrogen bond score function). This results in a network that learns far more efficiently from the limited data in the PDB but is able to cope with the complexity and variety of structural data. In particular, AlphaFold is able to handle missing physical context and produce accurate models in challenging cases such as intertwined homomers or proteins that only fold in the presence of an unknown haem group. The ability to handle underspecified structural conditions is essential to learning from PDB structures, as the PDB represents the full range of conditions in which structures have been solved. In general, AlphaFold is trained to produce the protein structure most likely to appear as part of a PDB structure. For example, in cases in which a particular stoichiometry, ligand or ion is predictable from the sequence alone, AlphaFold is likely to produce a structure that respects those constraints implicitly.

AlphaFold has already demonstrated its utility to the experimental community, both for molecular replacement[57] and for interpreting cryogenic electron microscopy maps[58]. Moreover, because AlphaFold outputs protein coordinates directly, AlphaFold produces predictions in graphics processing unit (GPU) minutes to GPU hours, depending on the length of the protein sequence (for example, around one GPU minute per model for 384 residues; see Methods for details). This opens up the exciting possibility of predicting structures at the proteome scale and beyond: in a companion paper[39], we demonstrate the application of AlphaFold to the entire human proteome.

The explosion in available genomic sequencing techniques and data has revolutionized bioinformatics, but the intrinsic challenge of experimental structure determination has prevented a similar expansion in our structural knowledge.
By developing an accurate protein structure prediction algorithm, coupled with existing large and well-curated structure and sequence databases assembled by the experimental community, we hope to accelerate the advancement of structural bioinformatics so that it can keep pace with the genomics revolution. We hope that AlphaFold, and computational approaches that apply its techniques to other biophysical problems, will become essential tools of modern biology.

Methods
-------

### Full algorithm details

Extensive explanations of the components and their motivations are available in Supplementary Methods 1.1–1.10; in addition, pseudocode is available in Supplementary Information Algorithms 1–32, network diagrams in Supplementary Figs. 1–8, input features in Supplementary Table 1 and additional details in Supplementary Tables 2, 3. Training and inference details are provided in Supplementary Methods 1.11–1.12 and Supplementary Tables 4, 5.

### IPA

The IPA module combines the pair representation, the single representation and the geometric representation to update the single representation (Supplementary Fig. 8). Each of these representations contributes affinities to the shared attention weights and then uses these weights to map its values to the output. The IPA operates in 3D space. Each residue produces query points, key points and value points in its local frame. These points are projected into the global frame using the backbone frame of the residue, in which they interact with each other. The resulting points are then projected back into the local frame. The affinity computation in the 3D space uses squared distances, and the coordinate transformations ensure the invariance of this module with respect to the global frame (see Supplementary Methods 1.8.2 'Invariant point attention (IPA)' for the algorithm, proof of invariance and a description of the full multi-head version). A related construction that uses classic geometric invariants to construct pairwise features in place of the learned 3D points has been applied to protein design[59]. In addition to the IPA, standard dot-product attention is computed on the abstract single representation and a special attention on the pair representation. The pair representation augments both the logits and the values of the attention process, which is the primary way in which the pair representation controls the structure generation.
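To make the geometry concrete, the following is a minimal single-head sketch of the point-based part of this attention (our illustration based on the description above; it omits the scalar and pair-representation terms, the multi-head machinery and the learned per-head weights described in Supplementary Methods 1.8.2).

```python
import numpy as np

def invariant_point_attention(R, t, q_pts, k_pts, v_pts, gamma=1.0):
    # R: (N, 3, 3) backbone rotations; t: (N, 3) translations.
    # q_pts, k_pts, v_pts: (N, P, 3) points in each residue's local frame
    # (passed in directly here; in the real module they are learned).
    to_global = lambda pts: np.einsum('nij,npj->npi', R, pts) + t[:, None, :]
    qg, kg, vg = to_global(q_pts), to_global(k_pts), to_global(v_pts)

    # Attention logits from squared distances between corresponding
    # query and key points placed in the global frame.
    d2 = ((qg[:, None] - kg[None, :]) ** 2).sum(axis=(-1, -2))   # (N, N)
    logits = -gamma * d2
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)                        # softmax over keys

    # Attention-weighted average of global value points, mapped back
    # into each residue's local frame: x_local = R_i^T (x - t_i).
    out_global = np.einsum('nm,mpi->npi', att, vg)
    return np.einsum('nji,npj->npi', R, out_global - t[:, None, :])
```

Because the affinities depend only on distances between globally placed points, and the outputs are mapped back with the inverse frame transform, applying any global rotation and translation to all backbone frames leaves the result unchanged.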
### Inputs and data sources

Inputs to the network are the primary sequence, sequences from evolutionarily related proteins in the form of an MSA created by standard tools including jackhmmer[60] and HHBlits[61], and 3D atom coordinates of a small number of homologous structures (templates) where available. For both the MSA and templates, the search processes are tuned for high recall; spurious matches will probably appear in the raw MSA, but this matches the training condition of the network.

One of the sequence databases used, Big Fantastic Database (BFD), was custom-made and released publicly (see 'Data availability') and was used by several CASP teams. BFD is one of the largest publicly available collections of protein families. It consists of 65,983,866 families represented as MSAs and hidden Markov models (HMMs) covering 2,204,359,010 protein sequences from reference databases, metagenomes and metatranscriptomes. BFD was built in three steps. First, 2,423,213,294 protein sequences were collected from UniProt (Swiss-Prot&TrEMBL, 2017-11)[62], a soil reference protein catalogue and the marine eukaryotic reference catalogue[7], and clustered to 30% sequence identity, while enforcing a 90% alignment coverage of the shorter sequences, using MMseqs2/Linclust[63]. This resulted in 345,159,030 clusters. For computational efficiency, we removed all clusters with fewer than three members, resulting in 61,083,719 clusters. Second, we added 166,510,624 representative protein sequences from Metaclust NR (2017-05; discarding all sequences shorter than 150 residues)[63] by aligning them against the cluster representatives using MMseqs2[64]. Sequences that fulfilled the sequence identity and coverage criteria were assigned to the best-scoring cluster (a toy version of this assignment step is sketched below). The remaining 25,347,429 sequences that could not be assigned were clustered separately and added as new clusters, resulting in the final clustering. Third, for each of the clusters, we computed an MSA using FAMSA[65] and computed the HMMs following the Uniclust HH-suite database protocol[36].
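As a toy illustration of the second step, the assignment logic looks roughly like the sketch below; `identity_and_coverage` is a placeholder for an MMseqs2-style alignment routine (the real pipeline operates at a vastly larger scale and does not use a Python loop).

```python
def assign_to_clusters(new_seqs, representatives, identity_and_coverage,
                       min_id=0.30, min_cov=0.90):
    # representatives: {cluster_id: representative_sequence}.
    # identity_and_coverage(a, b) -> (sequence identity, coverage of the
    # shorter sequence); this callable is assumed, not part of real tooling.
    assigned, leftovers = {}, []
    for seq in new_seqs:
        scored = []
        for cid, rep in representatives.items():
            ident, cov = identity_and_coverage(seq, rep)
            if ident >= min_id and cov >= min_cov:
                scored.append((ident, cid))
        if scored:
            assigned[seq] = max(scored)[1]  # best-scoring qualifying cluster
        else:
            leftovers.append(seq)          # clustered separately afterwards
    return assigned, leftovers
```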
The following versions of public datasets were used in this study. Our models were trained on a copy of the PDB[5] downloaded on 28 August 2019. For finding template structures at prediction time, we used a copy of the PDB downloaded on 14 May 2020, and the PDB70[66] clustering database downloaded on 13 May 2020. For MSA lookup at both training and prediction time, we used UniRef90[67] v.2020_01, BFD, Uniclust30[36] v.2018_08 and MGnify[6] v.2018_12. For sequence distillation, we used Uniclust30[36] v.2018_08 to construct a distillation structure dataset. Full details are provided in Supplementary Methods 1.2. For MSA search on BFD + Uniclust30, and template search against PDB70, we used HHBlits[61] and HHSearch[66] from hh-suite v.3.0-beta.3 (version 14/07/2017). For MSA search on UniRef90 and clustered MGnify, we used jackhmmer from HMMER3[68]. For constrained relaxation of structures, we used OpenMM v.7.3.1[69] with the Amber99sb force field[32]. For neural network construction, running and other analyses, we used TensorFlow[70], Sonnet[71], NumPy[72], Python[73] and Colab[74].
To quantify the effect of the different sequence data sources, we re-ran the CASP14 proteins using the same models but varying how the MSA was constructed. Removing BFD reduced the mean accuracy by 0.4 GDT, removing MGnify reduced the mean accuracy by 0.7 GDT, and removing both reduced the mean accuracy by 6.1 GDT. In each case, we found that most targets had very small changes in accuracy but a few outliers had very large (20+ GDT) differences. This is consistent with the results in Fig. 5a, in which the depth of the MSA is relatively unimportant until it approaches a threshold value of around 30 sequences, at which point the MSA size effects become quite large. We observe mostly overlapping effects between inclusion of BFD and MGnify, but having at least one of these metagenomics databases is very important for target classes that are poorly represented in UniRef, and having both was necessary to achieve full CASP accuracy.

### Training regimen

To train, we use structures from the PDB with a maximum release date of 30 April 2018. Chains are sampled in inverse proportion to the size of their cluster in a 40% sequence identity clustering. We then randomly crop them to 256 residues and assemble them into batches of size 128. We train the model on Tensor Processing Unit (TPU) v3 with a batch size of 1 per TPU core; hence the model uses 128 TPU v3 cores. The model is trained until convergence (around 10 million samples) and further fine-tuned using longer crops of 384 residues, a larger MSA stack and a reduced learning rate (see Supplementary Methods 1.11 for the exact configuration). The initial training stage takes approximately 1 week, and the fine-tuning stage takes approximately 4 additional days.

The network is supervised by the FAPE loss and a number of auxiliary losses. First, the final pair representation is linearly projected to a binned distance distribution (distogram) prediction, scored with a cross-entropy loss. Second, we use random masking on the input MSAs and require the network to reconstruct the masked regions from the output MSA representation using a BERT-like loss[37]. Third, the output single representations of the structure module are used to predict binned per-residue lDDT-Cα values. Finally, we use an auxiliary side-chain loss during training, and an auxiliary structure violation loss during fine-tuning. Detailed descriptions and weightings are provided in the Supplementary Information.

An initial model trained with the above objectives was used to make structure predictions for a Uniclust dataset of 355,993 sequences with the full MSAs. These predictions were then used to train a final model with identical hyperparameters, except for sampling examples 75% of the time from the Uniclust prediction set, with sub-sampled MSAs, and 25% of the time from the clustered PDB set; a sketch of this sampling scheme is given below.
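The sampling described above can be sketched as follows. This is our illustration only: names and data layout are hypothetical, and the real pipeline also crops the MSA and template features consistently with the sequence crop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_example(pdb_chains, pdb_cluster_sizes,
                            distill_chains, crop_len=256, p_distill=0.75):
    # With probability 0.75 draw from the self-distillation set (final
    # model only); otherwise draw a PDB chain with probability inversely
    # proportional to its 40% sequence-identity cluster size.
    if rng.random() < p_distill:
        seq = distill_chains[rng.integers(len(distill_chains))]
    else:
        w = 1.0 / np.asarray(pdb_cluster_sizes, dtype=float)  # one size per chain
        seq = pdb_chains[rng.choice(len(pdb_chains), p=w / w.sum())]
    # Random contiguous crop to the training crop length.
    if len(seq) <= crop_len:
        return seq  # short chains are used whole
    start = rng.integers(0, len(seq) - crop_len + 1)
    return seq[start:start + crop_len]
```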
We train five different models using different random seeds, some with templates and some without, to encourage diversity in the predictions (see Supplementary Table 5 and Supplementary Methods 1.12.1 for details). We also fine-tuned these models after CASP14 to add a pTM prediction objective (Supplementary Methods 1.9.7) and use the obtained models for Fig. 2d.

### Inference regimen

We run inference with the five trained models and use the predicted confidence score to select the best model per target. Using our CASP14 configuration for AlphaFold, the trunk of the network is run multiple times with different random choices for the MSA cluster centres (see Supplementary Methods 1.11.2 for details of the ensembling procedure). The full time to make a structure prediction varies considerably depending on the length of the protein. Representative timings for the neural network using a single model on a V100 GPU are 4.8 min for 256 residues, 9.2 min for 384 residues and 18 h for 2,500 residues. These timings are measured using our open-source code, which is notably faster than the version we ran in CASP14 because we now use the XLA compiler[75].

Since CASP14, we have found that the accuracy of the network without ensembling is very close or equal to the accuracy with ensembling, and we turn off ensembling for most inference. Without ensembling, the network is 8× faster, and the representative timings for a single model are 0.6 min for 256 residues, 1.1 min for 384 residues and 2.1 h for 2,500 residues. Inference on large proteins can easily exceed the memory of a single GPU. For a V100 with 16 GB of memory, we can predict the structure of proteins up to around 1,300 residues without ensembling; the 256- and 384-residue inference times above fit within the memory of a single GPU. The memory usage is approximately quadratic in the number of residues, so a 2,500-residue protein requires unified memory, which lets us greatly exceed the memory of a single V100. In our cloud setup, a single V100 is used for computation on a 2,500-residue protein, but we requested four GPUs to have sufficient memory. Searching genetic sequence databases to prepare inputs, and the final relaxation of the structures, take additional central processing unit (CPU) time but do not require a GPU or TPU.

### Metrics

The predicted structure is compared to the true structure from the PDB in terms of the lDDT metric[34], as this metric reports the domain accuracy without requiring a domain segmentation of chain structures. The distances are either computed between all heavy atoms (lDDT) or only the Cα atoms to measure the backbone accuracy (lDDT-Cα). As lDDT-Cα only focuses on the Cα atoms, it does not include the penalty for structural violations and clashes. Domain accuracies in CASP are reported as GDT[33] and the TM-score[27] is used as a full-chain global superposition metric. We also report accuracies using the r.m.s.d.95 (Cα r.m.s.d. at 95% coverage). We perform five iterations of (1) a least-squares alignment of the predicted structure and the PDB structure on the currently chosen Cα atoms (using all Cα atoms in the first iteration); (2) selecting the 95% of Cα atoms with the lowest alignment error. The r.m.s.d. of the atoms chosen for the final iteration is the r.m.s.d.95. This metric is more robust to apparent errors that can originate from crystal-structure artefacts, although in some cases the removed 5% of residues will contain genuine modelling errors.
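The r.m.s.d.95 procedure is easy to state precisely in code. The following sketch is our reading of the description above (with the 95% selection taken over all Cα atoms under the current superposition), using the standard Kabsch algorithm for the least-squares alignment.

```python
import numpy as np

def kabsch(P, Q):
    # Least-squares rotation R and translation t mapping P onto Q
    # (row-vector convention: aligned = P @ R + t).
    Pm, Qm = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - Pm).T @ (Q - Qm))
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] *= -1  # avoid reflections
    R = U @ Vt
    return R, Qm - Pm @ R

def rmsd95(pred_ca, true_ca, n_iter=5, keep=0.95):
    # Iteratively re-align on the 95% of Ca atoms with the lowest
    # per-atom error; the r.m.s.d. over the final selection is rmsd95.
    sel = np.arange(len(pred_ca))            # iteration 1: all Ca atoms
    n_keep = int(round(keep * len(pred_ca)))
    for _ in range(n_iter):
        R, t = kabsch(pred_ca[sel], true_ca[sel])
        err = np.linalg.norm(pred_ca @ R + t - true_ca, axis=1)
        sel = np.argsort(err)[:n_keep]       # 95% with lowest error
    return float(np.sqrt(np.mean(err[sel] ** 2)))
```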
### Test set of recent PDB sequences

For evaluation on recent PDB sequences (Figs. 2a–d, 4a, 5a), we used a copy of the PDB downloaded on 15 February 2021. Structures were filtered to those with a release date after 30 April 2018 (the date limit for inclusion in the training set for AlphaFold). Chains were further filtered to remove sequences that consisted of a single amino acid as well as sequences with an ambiguous chemical component at any residue position. Exact duplicates were removed, with the chain with the most resolved Cα atoms used as the representative sequence. Subsequently, structures with fewer than 16 resolved residues, with unknown residues or solved by NMR methods were removed. As the PDB contains many near-duplicate sequences, the chain with the highest resolution was selected from each cluster in the PDB 40% sequence clustering of the data. Furthermore, we removed all sequences for which fewer than 80 amino acids had the alpha carbon resolved and removed chains with more than 1,400 residues. The final dataset contained 10,795 protein sequences.

The procedure for filtering the recent PDB dataset based on prior template identity was as follows. Hmmsearch was run with default parameters against a copy of the PDB SEQRES fasta downloaded on 15 February 2021. Template hits were accepted if the associated structure had a release date earlier than 30 April 2018. Each residue position in a query sequence was assigned the maximum identity of any template hit covering that position. Filtering then proceeded as described in the individual figure legends, based on a combination of maximum identity and sequence coverage.

The MSA depth analysis was based on computing the normalized number of effective sequences (*N*eff) for each position of a query sequence. Per-residue *N*eff values were obtained by counting the number of non-gap residues in the MSA for this position and weighting the sequences using the *N*eff scheme[76] with a threshold of 80% sequence identity measured on the region that is non-gap in either sequence.
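Concretely, the per-residue *N*eff computation can be sketched as follows (our implementation of the description above; the quadratic loop is written for clarity, not speed, and the integer gap encoding is illustrative).

```python
import numpy as np

GAP = 20  # integer code for a gap; the encoding here is an assumption

def per_residue_neff(msa, thresh=0.80):
    # msa: (S, L) integer-encoded alignment, one row per sequence;
    # assumes no all-gap rows.
    msa = np.asarray(msa)
    nongap = msa != GAP
    counts = np.zeros(len(msa))
    for i in range(len(msa)):
        # Identity to every sequence j, measured on the region that is
        # non-gap in either sequence i or sequence j.
        either = nongap[i] | nongap                       # (S, L)
        match = (msa[i] == msa) & nongap[i] & nongap
        ident = match.sum(1) / np.maximum(either.sum(1), 1)
        counts[i] = np.count_nonzero(ident >= thresh)     # includes self
    weights = 1.0 / counts                                # Neff weighting
    # Weighted count of non-gap residues at each alignment column.
    return nongap.T @ weights                             # shape (L,)
```

The chain-level MSA depth used in the analysis is then the median of `per_residue_neff(msa)` across residues.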
### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.

Data availability
-----------------

All input data are freely available from public sources. Structures from the PDB were used for training and as templates, along with the associated sequence data and 40% sequence clustering. Training used a version of the PDB downloaded on 28 August 2019, while the CASP14 template search used a version downloaded on 14 May 2020. The template search also used the PDB70 database, downloaded on 13 May 2020. We show experimental structures from the PDB with accession numbers [6Y4F](http://doi.org/10.2210/pdb6Y4F/pdb)[77], [6YJ1](http://doi.org/10.2210/pdb6YJ1/pdb)[78], [6VR4](http://doi.org/10.2210/pdb6VR4/pdb)[79], [6SK0](http://doi.org/10.2210/pdb6SK0/pdb)[80], [6FES](http://doi.org/10.2210/pdb6FES/pdb)[81], [6W6W](http://doi.org/10.2210/pdb6W6W/pdb)[82], [6T1Z](http://doi.org/10.2210/pdb6T1Z/pdb)[83] and [7JTL](http://doi.org/10.2210/pdb7JTL/pdb)[84]. For MSA lookup at both the training and prediction time, we used UniRef90 v.2020_01 (https://ftp.ebi.ac.uk/pub/databases/uniprot/previous_releases/release-2020_01/uniref/), BFD, Uniclust30 v.2018_08 and MGnify clusters v.2018_12. Uniclust30 v.2018_08 was also used as input for constructing a distillation structure dataset.

Code availability
-----------------

Source code for the AlphaFold model, trained weights and an inference script are available under an open-source license. Neural networks were developed with TensorFlow v.1, Sonnet v.1, JAX v.0.1.69 and Haiku v.0.0.4. The XLA compiler is bundled with JAX and does not have a separate version number. For MSA search on BFD + Uniclust30, and for template search against PDB70, we used HHBlits and HHSearch from hh-suite v.3.0-beta.3 (release 14/07/2017). For MSA search on UniRef90 and clustered MGnify, we used jackhmmer from HMMER v.3.3. For constrained relaxation of structures, we used OpenMM v.7.3.1 with the Amber99sb force field. Construction of BFD used MMseqs2 v.925AF and FAMSA v.1.2.5.
Data analysis used Python v.3.6, NumPy v.1.16.4, SciPy v.1.2.1, seaborn v.0.11.1, Matplotlib v.3.3.4, bokeh v.1.4.0, pandas v.1.1.5, plotnine v.0.8.0, statsmodels v.0.12.2 and Colab. TM-align v.20190822 was used for computing TM-scores. Structure visualizations were created in Pymol v.2.3.0.

References
----------

1. Thompson, M. C., Yeates, T. O. & Rodriguez, J. A. Advances in methods for atomic resolution macromolecular structure determination. *F1000Res.* **9**, 667 (2020).
2. Bai, X.-C., McMullan, G. & Scheres, S. H. W. How cryo-EM is revolutionizing structural biology. *Trends Biochem. Sci.* **40**, 49–57 (2015).
3. Jaskolski, M., Dauter, Z. & Wlodawer, A. A brief history of macromolecular crystallography, illustrated by a family tree and its Nobel fruits. *FEBS J.* **281**, 3985–4009 (2014).
4. Wüthrich, K. The way to NMR structures of proteins. *Nat. Struct. Biol.* **8**, 923–925 (2001).
5. wwPDB Consortium. Protein Data Bank: the single global archive for 3D macromolecular structure data. *Nucleic Acids Res.* **47**, D520–D528 (2018).
6. Mitchell, A. L. et al. MGnify: the microbiome analysis resource in 2020. *Nucleic Acids Res.* **48**, D570–D578 (2020).
7. Steinegger, M., Mirdita, M. & Söding, J. Protein-level assembly increases protein sequence recovery from metagenomic samples manyfold. *Nat. Methods* **16**, 603–606 (2019).
8. Dill, K. A., Ozkan, S. B., Shell, M. S. & Weikl, T. R. The protein folding problem. *Annu. Rev. Biophys.* **37**, 289–316 (2008).
9. Anfinsen, C. B. Principles that govern the folding of protein chains. *Science* **181**, 223–230 (1973).
10. Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. *Nature* **577**, 706–710 (2020).
11. Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. *PLOS Comput. Biol.* **13**, e1005324 (2017).
12. Zheng, W. et al. Deep-learning contact-map guided protein structure prediction in CASP13. *Proteins* **87**, 1149–1164 (2019).
13. Abriata, L. A., Tamò, G. E. & Dal Peraro, M. A further leap of improvement in tertiary structure prediction in CASP13 prompts new routes for future assessments. *Proteins* **87**, 1100–1112 (2019).
14. Pearce, R. & Zhang, Y. Deep learning techniques have significantly impacted protein structure prediction and protein design. *Curr. Opin. Struct. Biol.* **68**, 194–207 (2021).
15. Moult, J., Fidelis, K., Kryshtafovych, A., Schwede, T. & Topf, M. Critical assessment of techniques for protein structure prediction, fourteenth round. *CASP 14 Abstract Book* (2020).
16. Brini, E., Simmerling, C. & Dill, K. Protein storytelling through physics. *Science* **370**, eaaz3041 (2020).
17. Sippl, M. J. Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins. *J. Mol. Biol.* **213**, 859–883 (1990).
18. Šali, A. & Blundell, T. L. Comparative protein modelling by satisfaction of spatial restraints. *J. Mol. Biol.* **234**, 779–815 (1993).
19. Roy, A., Kucukural, A. & Zhang, Y. I-TASSER: a unified platform for automated protein structure and function prediction. *Nat. Protocols* **5**, 725–738 (2010).
20. Altschuh, D., Lesk, A. M., Bloomer, A. C. & Klug, A. Correlation of co-ordinated amino acid substitutions with function in viruses related to tobacco mosaic virus. *J. Mol. Biol.* **193**, 693–707 (1987).
21. Shindyalov, I. N., Kolchanov, N. A. & Sander, C. Can three-dimensional contacts in protein structures be predicted by analysis of correlated mutations? *Protein Eng.* **7**, 349–358 (1994).
22. Weigt, M., White, R. A., Szurmant, H., Hoch, J. A. & Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. *Proc. Natl Acad. Sci. USA* **106**, 67–72 (2009).
23. Marks, D. S. et al. Protein 3D structure computed from evolutionary sequence variation. *PLoS ONE* **6**, e28766 (2011).
24. Jones, D. T., Buchan, D. W. A., Cozzetto, D. & Pontil, M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. *Bioinformatics* **28**, 184–190 (2012).
25. Moult, J., Pedersen, J. T., Judson, R. & Fidelis, K. A large-scale experiment to assess protein structure prediction methods. *Proteins* **23**, ii–iv (1995).
26. Kryshtafovych, A., Schwede, T., Topf, M., Fidelis, K. & Moult, J. Critical assessment of methods of protein structure prediction (CASP)-round XIII. *Proteins* **87**, 1011–1020 (2019).
27. Zhang, Y. & Skolnick, J. Scoring function for automated assessment of protein structure template quality. *Proteins* **57**, 702–710 (2004).
28. Tu, Z. & Bai, X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. *IEEE Trans. Pattern Anal. Mach. Intell.* **32**, 1744–1757 (2010).
29. Carreira, J., Agrawal, P., Fragkiadaki, K. & Malik, J. Human pose estimation with iterative error feedback. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition* 4733–4742 (2016).
30. Mirabello, C. & Wallner, B. rawMSA: end-to-end deep learning using raw multiple sequence alignments. *PLoS ONE* **14**, e0220182 (2019).
31. Huang, Z. et al. CCNet: criss-cross attention for semantic segmentation. In *Proc. IEEE/CVF International Conference on Computer Vision* 603–612 (2019).
32. Hornak, V. et al. Comparison of multiple Amber force fields and development of improved protein backbone parameters. *Proteins* **65**, 712–725 (2006).
33. Zemla, A. LGA: a method for finding 3D similarities in protein structures. *Nucleic Acids Res.* **31**, 3370–3374 (2003).
34. Mariani, V., Biasini, M., Barbato, A. & Schwede, T. lDDT: a local superposition-free score for comparing protein structures and models using distance difference tests. *Bioinformatics* **29**, 2722–2728 (2013).
35. Xie, Q., Luong, M.-T., Hovy, E. & Le, Q. V. Self-training with noisy student improves imagenet classification. In *Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition* 10687–10698 (2020).
36. Mirdita, M. et al. Uniclust databases of clustered and deeply annotated protein sequences and alignments. *Nucleic Acids Res.* **45**, D170–D176 (2017).
37. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies* 1, 4171–4186 (2019).
38. Rao, R. et al. MSA transformer. In *Proc. 38th International Conference on Machine Learning* PMLR 139, 8844–8856 (2021).
39. Tunyasuvunakool, K. et al. Highly accurate protein structure prediction for the human proteome. *Nature* https://doi.org/10.1038/s41586-021-03828-1 (2021).
40. Kuhlman, B. & Bradley, P. Advances in protein structure prediction and design. *Nat. Rev. Mol. Cell Biol.* **20**, 681–697 (2019).
41. Marks, D. S., Hopf, T. A. & Sander, C. Protein structure prediction from sequence variation. *Nat. Biotechnol.* **30**, 1072–1080 (2012).
42. Qian, N. & Sejnowski, T. J. Predicting the secondary structure of globular proteins using neural network models. *J. Mol. Biol.* **202**, 865–884 (1988).
43. Fariselli, P., Olmea, O., Valencia, A. & Casadio, R. Prediction of contact maps with neural networks and correlated mutations. *Protein Eng.* **14**, 835–843 (2001).
44. Yang, J. et al. Improved protein structure prediction using predicted interresidue orientations. *Proc. Natl Acad. Sci. USA* **117**, 1496–1503 (2020).
45. Li, Y. et al. Deducing high-accuracy protein contact-maps from a triplet of coevolutionary matrices through deep residual convolutional networks. *PLOS Comput. Biol.* **17**, e1008865 (2021).
46. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In *Proc. IEEE Conference on Computer Vision and Pattern Recognition* 770–778 (2016).
47. AlQuraishi, M. End-to-end differentiable learning of protein structure. *Cell Syst.* **8**, 292–301 (2019).
48. Senior, A. W. et al. Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13). *Proteins* **87**, 1141–1148 (2019).
49. Ingraham, J., Riesselman, A. J., Sander, C. & Marks, D. S. Learning protein structure with a differentiable simulator. In *Proc. International Conference on Learning Representations* (2019).
50. Li, J. Universal transforming geometric network. Preprint at https://arxiv.org/abs/1908.00723 (2019).
51. Xu, J., McPartlon, M. & Li, J. Improved protein structure prediction by deep learning irrespective of co-evolution information. *Nat. Mach. Intell.* **3**, 601–609 (2021).
[Article](https://doi.org/10.1038%2Fs42256-021-00348-5)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=34368623)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8340610)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Improved%20protein%20structure%20prediction%20by%20deep%20learning%20irrespective%20of%20co-evolution%20information&journal=Nat.%20Mach.%20Intell.&doi=10.1038%2Fs42256-021-00348-5&volume=3&pages=601-609&publication\_year=2021&author=Xu%2CJ&author=McPartlon%2CM&author=Li%2CJ) 52. Vaswani, A. et al. Attention is all you need. In \*Advances in Neural Information Processing Systems\* 5998–6008 (2017). 53. Wang, H. et al. Axial-deeplab: stand-alone axial-attention for panoptic segmentation. in \*European Conference on Computer Vision\* 108–126 (Springer, 2020). 54. Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M. & Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. \*Nat. Methods\* \*\*16\*\*, 1315–1322 (2019). [Article](https://doi.org/10.1038%2Fs41592-019-0598-1)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXitVSlsbnJ)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=31636460)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7067682)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Unified%20rational%20protein%20engineering%20with%20sequence-based%20deep%20representation%20learning&journal=Nat.%20Methods&doi=10.1038%2Fs41592-019-0598-1&volume=16&pages=1315-1322&publication\_year=2019&author=Alley%2CEC&author=Khimulya%2CG&author=Biswas%2CS&author=AlQuraishi%2CM&author=Church%2CGM) 55. Heinzinger, M. et al. Modeling aspects of the language of life through transfer-learning protein sequences. \*BMC Bioinformatics\* \*\*20\*\*, 723 (2019). [Article](https://doi.org/10.1186%2Fs12859-019-3220-8)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXisVGjsLbK)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=31847804)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6918593)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Modeling%20aspects%20of%20the%20language%20of%20life%20through%20transfer-learning%20protein%20sequences&journal=BMC%20Bioinformatics&doi=10.1186%2Fs12859-019-3220-8&volume=20&publication\_year=2019&author=Heinzinger%2CM) 56. Rives, A. et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. \*Proc. Natl Acad. Sci. USA\* \*\*118\*\*, e2016239118 (2021). [Article](https://doi.org/10.1073%2Fpnas.2016239118)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3MXovVantro%3D)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=33876751)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8053943)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Biological%20structure%20and%20function%20emerge%20from%20scaling%20unsupervised%20learning%20to%20250%20million%20protein%20sequences&journal=Proc.%20Natl%20Acad.%20Sci.%20USA&doi=10.1073%2Fpnas.2016239118&volume=118&publication\_year=2021&author=Rives%2CA) 57. Pereira, J. et al. High-accuracy protein structure prediction in CASP14. \*Proteins\* (2021). 
[Article](https://doi.org/10.1002%2Fprot.26171)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=34387010)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8881082)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=High-accuracy%20protein%20structure%20prediction%20in%20CASP14&journal=Proteins&doi=10.1002%2Fprot.26171&publication\_year=2021&author=Pereira%2CJ) 58. Gupta, M. et al. CryoEM and AI reveal a structure of SARS-CoV-2 Nsp2, a multifunctional protein involved in key host processes. Preprint at (2021). 59. Ingraham, J., Garg, V. K., Barzilay, R. & Jaakkola, T. Generative models for graph-based protein design. in \*Proc. 33rd Conference on Neural Information Processing Systems\* (2019). 60. Johnson, L. S., Eddy, S. R. & Portugaly, E. Hidden Markov model speed heuristic and iterative HMM search procedure. \*BMC Bioinformatics\* \*\*11\*\*, 431 (2010). [Article](https://doi.org/10.1186%2F1471-2105-11-431)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=20718988)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2931519)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3cXhtVKqsbrF)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Hidden%20Markov%20model%20speed%20heuristic%20and%20iterative%20HMM%20search%20procedure&journal=BMC%20Bioinformatics&doi=10.1186%2F1471-2105-11-431&volume=11&publication\_year=2010&author=Johnson%2CLS&author=Eddy%2CSR&author=Portugaly%2CE) 61. Remmert, M., Biegert, A., Hauser, A. & Söding, J. HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. \*Nat. Methods\* \*\*9\*\*, 173–175 (2012). [Article](https://doi.org/10.1038%2Fnmeth.1818)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3MXhs1OltbnO)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=HHblits%3A%20lightning-fast%20iterative%20protein%20sequence%20searching%20by%20HMM-HMM%20alignment&journal=Nat.%20Methods&doi=10.1038%2Fnmeth.1818&volume=9&pages=173-175&publication\_year=2012&author=Remmert%2CM&author=Biegert%2CA&author=Hauser%2CA&author=S%C3%B6ding%2CJ) 62. The UniProt Consortium. UniProt: the universal protein knowledgebase in 2021. \*Nucleic Acids Res\*. \*\*49\*\*, D480–D489 (2020). [Article](https://doi.org/10.1093%2Fnar%2Fgkaa1100)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3MXntFCit7s%3D)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=UniProt%3A%20the%20universal%20protein%20knowledgebase%20in%202021&journal=Nucleic%20Acids%20Res.&doi=10.1093%2Fnar%2Fgkaa1100&volume=49&pages=D480-D489&publication\_year=2020) 63. Steinegger, M. & Söding, J. Clustering huge protein sequence sets in linear time. \*Nat. Commun\*. \*\*9\*\*, 2542 (2018). [Article](https://doi.org/10.1038%2Fs41467-018-04964-5)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=29959318)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6026198)  [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data\_query?link\_type=ABSTRACT&bibcode=2018NatCo...9.2542S)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXht1Cns7rO)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Clustering%20huge%20protein%20sequence%20sets%20in%20linear%20time&journal=Nat.%20Commun.&doi=10.1038%2Fs41467-018-04964-5&volume=9&publication\_year=2018&author=Steinegger%2CM&author=S%C3%B6ding%2CJ) 64. Steinegger, M. & Söding, J. 
MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets. \*Nat. Biotechnol\*. \*\*35\*\*, 1026–1028 (2017). [Article](https://doi.org/10.1038%2Fnbt.3988)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC2sXhs1GqsLzE)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=29035372)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=MMseqs2%20enables%20sensitive%20protein%20sequence%20searching%20for%20the%20analysis%20of%20massive%20data%20sets&journal=Nat.%20Biotechnol.&doi=10.1038%2Fnbt.3988&volume=35&pages=1026-1028&publication\_year=2017&author=Steinegger%2CM&author=S%C3%B6ding%2CJ) 65. Deorowicz, S., Debudaj-Grabysz, A. & Gudyś, A. FAMSA: fast and accurate multiple sequence alignment of huge protein families. \*Sci. Rep\*. \*\*6\*\*, 33964 (2016). [Article](https://doi.org/10.1038%2Fsrep33964)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC28XhsF2qs7fN)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=27670777)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037421)  [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data\_query?link\_type=ABSTRACT&bibcode=2016NatSR...633964D)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=FAMSA%3A%20fast%20and%20accurate%20multiple%20sequence%20alignment%20of%20huge%20protein%20families&journal=Sci.%20Rep.&doi=10.1038%2Fsrep33964&volume=6&publication\_year=2016&author=Deorowicz%2CS&author=Debudaj-Grabysz%2CA&author=Gudy%C5%9B%2CA) 66. Steinegger, M. et al. HH-suite3 for fast remote homology detection and deep protein annotation. \*BMC Bioinformatics\* \*\*20\*\*, 473 (2019). [Article](https://doi.org/10.1186%2Fs12859-019-3019-7)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=31521110)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6744700)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1MXhsl2hurbK)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=HH-suite3%20for%20fast%20remote%20homology%20detection%20and%20deep%20protein%20annotation&journal=BMC%20Bioinformatics&doi=10.1186%2Fs12859-019-3019-7&volume=20&publication\_year=2019&author=Steinegger%2CM) 67. Suzek, B. E., Wang, Y., Huang, H., McGarvey, P. B. & Wu, C. H. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. \*Bioinformatics\* \*\*31\*\*, 926–932 (2015). [Article](https://doi.org/10.1093%2Fbioinformatics%2Fbtu739)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC28Xht1Gntb7F)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=25398609)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=UniRef%20clusters%3A%20a%20comprehensive%20and%20scalable%20alternative%20for%20improving%20sequence%20similarity%20searches&journal=Bioinformatics&doi=10.1093%2Fbioinformatics%2Fbtu739&volume=31&pages=926-932&publication\_year=2015&author=Suzek%2CBE&author=Wang%2CY&author=Huang%2CH&author=McGarvey%2CPB&author=Wu%2CCH) 68. Eddy, S. R. Accelerated profile HMM searches. \*PLOS Comput. Biol\*. \*\*7\*\*, e1002195 (2011). 
[Article](https://doi.org/10.1371%2Fjournal.pcbi.1002195)  [MathSciNet](http://www.ams.org/mathscinet-getitem?mr=2859646)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC3MXhsVCku7rL)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=22039361)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3197634)  [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data\_query?link\_type=ABSTRACT&bibcode=2011PLSCB...7E2195E)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Accelerated%20profile%20HMM%20searches&journal=PLOS%20Comput.%20Biol.&doi=10.1371%2Fjournal.pcbi.1002195&volume=7&publication\_year=2011&author=Eddy%2CSR) 69. Eastman, P. et al. OpenMM 7: rapid development of high performance algorithms for molecular dynamics. \*PLOS Comput. Biol\*. \*\*13\*\*, e1005659 (2017). [Article](https://doi.org/10.1371%2Fjournal.pcbi.1005659)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=28746339)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5549999)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXivVWhur0%3D)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=OpenMM%207%3A%20rapid%20development%20of%20high%20performance%20algorithms%20for%20molecular%20dynamics&journal=PLOS%20Comput.%20Biol.&doi=10.1371%2Fjournal.pcbi.1005659&volume=13&publication\_year=2017&author=Eastman%2CP) 70. Ashish, A. M. A. et al. TensorFlow: large-scale machine learning on heterogeneous systems. Preprint at (2015). 71. Reynolds, M. et al. Open sourcing Sonnet – a new library for constructing neural networks. \*DeepMind\* (7 April 2017). 72. Harris, C. R. et al. Array programming with NumPy. \*Nature\* \*\*585\*\*, 357–362 (2020). [Article](https://doi.org/10.1038%2Fs41586-020-2649-2)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXitlWmsbbN)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=32939066)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759461)  [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data\_query?link\_type=ABSTRACT&bibcode=2020Natur.585..357H)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Array%20programming%20with%20NumPy&journal=Nature&doi=10.1038%2Fs41586-020-2649-2&volume=585&pages=357-362&publication\_year=2020&author=Harris%2CCR) 73. Van Rossum, G. & Drake, F. L. \*Python 3 Reference Manual\* (CreateSpace, 2009). 74. Bisong, E. in \*Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners\* 59–64 (Apress, 2019). 75. TensorFlow. XLA: Optimizing Compiler for TensorFlow. (2018). 76. Wu, T., Hou, J., Adhikari, B. & Cheng, J. Analysis of several key factors influencing deep learning-based inter-residue contact prediction. \*Bioinformatics\* \*\*36\*\*, 1091–1098 (2020). [Article](https://doi.org/10.1093%2Fbioinformatics%2Fbtz679)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXisVOrtbvJ)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=31504181)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Analysis%20of%20several%20key%20factors%20influencing%20deep%20learning-based%20inter-residue%20contact%20prediction&journal=Bioinformatics&doi=10.1093%2Fbioinformatics%2Fbtz679&volume=36&pages=1091-1098&publication\_year=2020&author=Wu%2CT&author=Hou%2CJ&author=Adhikari%2CB&author=Cheng%2CJ) 77. Jiang, W. et al. 
MrpH, a new class of metal-binding adhesin, requires zinc to mediate biofilm formation. \*PLoS Pathog\*. \*\*16\*\*, e1008707 (2020). [Article](https://doi.org/10.1371%2Fjournal.ppat.1008707)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXhs1Glt7fI)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=32780778)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7444556)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=MrpH%2C%20a%20new%20class%20of%20metal-binding%20adhesin%2C%20requires%20zinc%20to%20mediate%20biofilm%20formation&journal=PLoS%20Pathog.&doi=10.1371%2Fjournal.ppat.1008707&volume=16&publication\_year=2020&author=Jiang%2CW) 78. Dunne, M., Ernst, P., Sobieraj, A., Pluckthun, A. & Loessner, M. J. The M23 peptidase domain of the Staphylococcal phage 2638A endolysin. \*PDB\* (2020). 79. Drobysheva, A. V. et al. Structure and function of virion RNA polymerase of a crAss-like phage. \*Nature\* \*\*589\*\*, 306–309 (2021). [Article](https://doi.org/10.1038%2Fs41586-020-2921-5)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXitlOgs7jI)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=33208949)  [ADS](http://adsabs.harvard.edu/cgi-bin/nph-data\_query?link\_type=ABSTRACT&bibcode=2021Natur.589..306D)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Structure%20and%20function%20of%20virion%20RNA%20polymerase%20of%20a%20crAss-like%20phage&journal=Nature&doi=10.1038%2Fs41586-020-2921-5&volume=589&pages=306-309&publication\_year=2021&author=Drobysheva%2CAV) 80. Flaugnatti, N. et al. Structural basis for loading and inhibition of a bacterial T6SS phospholipase effector by the VgrG spike. \*EMBO J\*. \*\*39\*\*, e104129 (2020). [Article](https://doi.org/10.15252%2Fembj.2019104129)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BB3cXotFCnu7s%3D)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=32350888)  [PubMed Central](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7265238)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=Structural%20basis%20for%20loading%20and%20inhibition%20of%20a%20bacterial%20T6SS%20phospholipase%20effector%20by%20the%20VgrG%20spike&journal=EMBO%20J.&doi=10.15252%2Fembj.2019104129&volume=39&publication\_year=2020&author=Flaugnatti%2CN) 81. ElGamacy, M. et al. An interface-driven design strategy yields a novel, corrugated protein architecture. \*ACS Synth. Biol\*. \*\*7\*\*, 2226–2235 (2018). [Article](https://doi.org/10.1021%2Facssynbio.8b00224)  [CAS](/articles/cas-redirect/1:CAS:528:DC%2BC1cXhsFyqtbnP)  [PubMed](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list\_uids=30148951)  [Google Scholar](http://scholar.google.com/scholar\_lookup?&title=An%20interface-driven%20design%20strategy%20yields%20a%20novel%2C%20corrugated%20protein%20architecture&journal=ACS%20Synth.%20Biol.&doi=10.1021%2Facssynbio.8b00224&volume=7&pages=2226-2235&publication\_year=2018&author=ElGamacy%2CM) 82. Lim, C. J. et al. The structure of human CST reveals a decameric assembly bound to telomeric DNA. \*Science\* \*\*368\*\*, 1081–1085 (2020). 
Acknowledgements
----------------

We thank A. Rrustemi, A. Gu, A. Guseynov, B. Hechtman, C. Beattie, C. Jones, C. Donner, E. Parisotto, E. Elsen, F. Popovici, G. Necula, H. Maclean, J. Menick, J. Kirkpatrick, J. Molloy, J. Yim, J. Stanway, K. Simonyan, L. Sifre, L. Martens, M. Johnson, M. O’Neill, N. Antropova, R. Hadsell, S. Blackwell, S. Das, S. Hou, S. Gouws, S. Wheelwright, T. Hennigan, T. Ward, Z. Wu, Ž. Avsec and the Research Platform Team for their contributions; M. Mirdita for his help with the datasets; M. Piovesan-Forster, A. Nelson and R. Kemp for their help managing the project; the JAX, TensorFlow and XLA teams for detailed support and enabling machine learning models of the complexity of AlphaFold; our colleagues at DeepMind, Google and Alphabet for their encouragement and support; and J. Moult and the CASP14 organizers, and the experimentalists whose structures enabled the assessment. M.S. acknowledges support from the National Research Foundation of Korea grants (2019R1A6A1A10073437, 2020M3A9G7103933) and the Creative-Pioneering Researchers Program through Seoul National University.

Author information
------------------

### Author notes
These authors contributed equally: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Demis Hassabis

### Authors and Affiliations

1. DeepMind, London, UK: John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli & Demis Hassabis
2. School of Biological Sciences, Seoul National University, Seoul, South Korea: Martin Steinegger
3. Artificial Intelligence Institute, Seoul National University, Seoul, South Korea: Martin Steinegger

### Contributions

J.J. and D.H. led the research. J.J., R.E., A. Pritzel, M.F., O.R., R.B., A. Potapenko, S.A.A.K., B.R.-P., J.A., M.P., T. Berghammer and O.V. developed the neural network architecture and training. T.G., A.Ž., K.T., R.B., A.B., R.E., A.J.B., A.C., S.N., R.J., D.R., M.Z. and S.B. developed the data, analytics and inference systems. D.H., K.K., P.K., C.M. and E.C. managed the research. T.G. led the technical platform. P.K., A.W.S., K.K., O.V., D.S., S.P. and T. Back contributed technical advice and ideas. M.S. created the BFD genomics database and provided technical assistance on HHblits. D.H., R.E., A.W.S. and K.K. conceived the AlphaFold project. J.J., R.E. and A.W.S. conceived the end-to-end approach. J.J., A. Pritzel, O.R., A. Potapenko, R.E., M.F., T.G., K.T., C.M. and D.H. wrote the paper.

### Corresponding authors

Correspondence to [John Jumper](mailto:jumper@deepmind.com) or [Demis Hassabis](mailto:dhcontact@deepmind.com).

Ethics declarations
-------------------

### Competing interests

J.J., R.E., A. Pritzel, T.G., M.F., O.R., R.B., A.B., S.A.A.K., D.R. and A.W.S. have filed non-provisional patent applications 16/701,070 and PCT/EP2020/084238, and provisional patent applications 63/107,362, 63/118,917, 63/118,918, 63/118,921 and 63/118,919, each in the name of DeepMind Technologies Limited, each pending, relating to machine learning for predicting protein structures.
The other authors declare no competing interests.

Additional information
----------------------

**Peer review information** *Nature* thanks Mohammed AlQuraishi, Charlotte Deane and Yang Zhang for their contribution to the peer review of this work.

Supplementary information
-------------------------

### [Supplementary Information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM1\_ESM.pdf)

Description of the method details of the AlphaFold system, model and analysis, including the data pipeline, datasets, model blocks, loss functions, training and inference details, and ablations. Includes Supplementary Methods, Supplementary Figures, Supplementary Tables and Supplementary Algorithms.

### [Reporting Summary](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM2\_ESM.pdf)

### [Supplementary Video 1](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM3\_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1024 (LmrP), a two-domain target (408 residues). Both domains are folded early, while their packing is adjusted for a longer time.

### [Supplementary Video 2](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM4\_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1044 (RNA polymerase of a crAss-like phage), a large protein (2,180 residues) with multiple domains. Some domains fold quickly, while others take a considerable amount of time.

### [Supplementary Video 3](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM5\_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1064 (Orf8), a very difficult single-domain target (106 residues) that takes the entire depth of the network to fold.

### [Supplementary Video 4](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586\_2021\_3819\_MOESM6\_ESM.mp4)

Video of the intermediate structure trajectory of the CASP14 target T1091, a multi-domain target (863 residues). The structure of individual domains is determined early, while the domain packing evolves throughout the network. The network explores unphysical configurations throughout the process, resulting in long ‘strings’ in the visualization.
AGI Safety Needs People With All Skillsets!

*(Cross-posted to* [*LessWrong*](https://www.lesswrong.com/posts/onBLAcQqmDZDKyDEJ/agi-safety-needs-people-with-all-skillsets)*)*

![](http://res.cloudinary.com/cea/image/upload/v1667995242/mirroredImages/NJtC8xzD8BgF3TmEp/vnwxytfwke3mhqs4guxz.png)

For quite a while, I had two major misconceptions about careers and volunteering in AGI safety. And I know others have them, too:

1. Only people with a background in computer science or math can help with AGI safety.
2. Only people [smarter than I](https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people) can help with AGI safety.

Both are false, and here’s why. If we want to win the race against AI capabilities research, we indeed need as many geniuses with a background in computer science or adjacent fields as we can get. But these people would have a hard time doing their work without others who set up the organizations they work at, who [build the funding ecosystem](http://grantsalign.ai/), who run the retreats where they meet like-minded people, who design their websites, who wipe their office floors. In addition, they would benefit from having people who do their taxes, can advise them on visa issues, [help them with health problems](https://www.aisafetysupport.org/resources/shay), give them [productivity coaching](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit#), and so on.

Over the last years, AGI safety has grown into a large ecosystem of individuals and organizations. This ecosystem depends on far more than only those who do the research. Here is a list of skill bottlenecks I’ve found in conversations with other AI safety field builders:

* [Operations people](https://80000hours.org/articles/operations-management/)
* [PAs](https://pineappleoperations.org/being-a-pa)
* Discussion group facilitators (e.g. for [AGISF](https://www.eacambridge.org/agi-safety-fundamentals))
* Event organizers (e.g. for [AI Safety Camp](https://aisafety.camp/), conferences)
* Lawyers (e.g. for helping to set up organizations, and for advising on the tax implications of novel systems we’re designing, like [Impact Markets](https://impactmarkets.io/))
* Founders (an alignment project incubator is in the making; until it is ready to go public, you might want to fill out 80k’s [census of everyone who could ever see themselves doing longtermist work](https://80000hours.typeform.com/to/tuQ0bPAF))
* [Communicators](https://join.slack.com/t/eacreativesco-yno1824/shared\_invite/zt-1b3fo9vox-zATqLU2uf7nATWhPtsGtBQ) (written and verbal)
* Everything HR-related
  + Mobility/visa support
  + Hiring
  + Salary-related things: calculating living costs, taxes, health insurance, …
* Bookkeeping
  + Bookkeepers
  + Accountants
    - Payable
    - Receivable
  + Auditors
* [Software engineers](https://alignment.dev/) (to build projects like [Stampy](https://ui.stampy.ai/), Idea Marketplace, [Impact Markets](https://impactmarkets.io/), etc.)
* Particularly valuable are people who can get a broad overview of the current AGI alignment ecosystem and kickstart projects which can absorb people in a scalable manner.

…and you, too, can find your niche, even if you don’t bring any of these particular skill sets.
For inspiration, here is a list of less standard career paths that have turned out to be valuable for the ecosystem:

* YouTubers ([Rob Miles](https://www.youtube.com/c/RobertMilesAI))
* Architects ([Tereza Flidrova](https://forum.effectivealtruism.org/posts/mNBLzcdoiqCWFc8nL/becoming-an-ea-architect-my-first-month-as-an-independent))
* Cooking and cleaning staff (e.g. at [CEEALAR](https://ceealar.org/))
* [Graphic designers](https://join.slack.com/t/eacreativesco-yno1824/shared\_invite/zt-1b3fo9vox-zATqLU2uf7nATWhPtsGtBQ) (for logos and websites)
* …

If you want to help and are not sure how: feel free to use the comments below this post for career discussions. And make sure to reach out to [80,000 Hours](https://80000hours.org/) and [AI Safety Support](https://www.aisafetysupport.org/) for free career coaching!
Stand-Alone Self-Attention in Vision Models

1 Introduction
---------------

Digital image processing arose from the recognition that handcrafted linear filters applied convolutionally to pixelated imagery may subserve a large variety of applications [gonzalez2002digital](#bib.bib1). The success of digital image processing as well as biological considerations [fukushima1980neocognitron](#bib.bib2); [fukushima1988neocognitron](#bib.bib3) inspired early practitioners of neural networks to exploit convolutional representations in order to provide parameter-efficient architectures for learning representations on images [lecun1989backpropagation](#bib.bib4); [lecun1998gradient](#bib.bib5). The advent of large datasets [deng2009imagenet](#bib.bib6) and compute resources [nickolls2010gpu](#bib.bib7) made convolutional neural networks (CNNs) the backbone for many computer vision applications [krizhevsky2009learning](#bib.bib8); [krizhevsky2012imagenet](#bib.bib9); [lecun2015deep](#bib.bib10). The field of deep learning has in turn largely shifted toward the design of CNN architectures for improving the performance on image recognition [szegedy2015going](#bib.bib11); [szegedy2016rethinking](#bib.bib12); [identity-mappings](#bib.bib13); [xie2016aggregated](#bib.bib14); [he2015deep](#bib.bib15); [zoph2018learning](#bib.bib16), object detection [lin2016feature](#bib.bib17); [lin2017focal](#bib.bib18); [faster\_rcnn](#bib.bib19) and image segmentation [chen2018deeplab](#bib.bib20); [chen2018searching](#bib.bib21); [he2017mask](#bib.bib22).

The translation equivariance property of convolutions has provided a strong motivation for adopting them as a building block for operating on images [simoncelli2001natural](#bib.bib23); [ruderman1994statistics](#bib.bib24). However, capturing long range interactions with convolutions is challenging because of their poor scaling properties with respect to large receptive fields.

The problem of long range interactions has been tackled in sequence modeling through the use of attention. Attention has enjoyed rich success in tasks such as language modeling [vaswani2017attention](#bib.bib25); [wu2016google](#bib.bib26), speech recognition [chorowski2015attention](#bib.bib27); [chan2016listen](#bib.bib28) and neural captioning [xu2015show](#bib.bib29). Recently, attention modules have been employed in discriminative computer vision models to boost the performance of traditional CNNs. Most notably, a channel-based attention mechanism termed Squeeze-Excite may be applied to selectively modulate the scale of CNN channels [hu2017squeeze](#bib.bib30); [tan2018mnasnet](#bib.bib31). Likewise, spatially-aware attention mechanisms have been used to augment CNN architectures to provide contextual information for improving object detection [wang2018non](#bib.bib32) and image classification [bello2019](#bib.bib33); [hu2018gather](#bib.bib34); [hu2019local](#bib.bib35). These works have used global attention layers as an add-on to existing convolutional models. This global form attends to all spatial locations of an input, limiting its usage to small inputs which typically require significant downsampling of the original image.

In this work, we ask whether content-based interactions can serve as the primary primitive of vision models instead of acting as an augmentation to convolution. To this end, we develop a simple local self-attention layer that can be used for both small and large inputs.
We leverage this stand-alone attention layer to build a fully attentional vision model that outperforms the convolutional baseline for both image classification and object detection while being parameter and compute efficient. Furthermore, we conduct a number of ablations to better understand stand-alone attention. We hope that this result will spur new research directions focused on exploring content-based interactions as a mechanism for improving vision models.

2 Background
-------------

### 2.1 Convolutions

Convolutional neural networks (CNNs) are typically employed with small neighborhoods (i.e. kernel sizes) to encourage the network to learn local correlation structures within a particular layer. Given an input $x \in \mathbb{R}^{h \times w \times d_{in}}$ with height $h$, width $w$, and input channels $d_{in}$, a local neighborhood $\mathcal{N}_k$ around a pixel $x_{ij}$ is extracted with spatial extent $k$, resulting in a region with shape $k \times k \times d_{in}$ (see Figure 1). Given a learned weight matrix $W \in \mathbb{R}^{k \times k \times d_{out} \times d_{in}}$, the output $y_{ij} \in \mathbb{R}^{d_{out}}$ for position $ij$ is defined by spatially summing the product of depthwise matrix multiplications of the input values:

$$y_{ij} = \sum_{a,b \,\in\, \mathcal{N}_k(i,j)} W_{i-a,\,j-b} \; x_{ab} \tag{1}$$

where $\mathcal{N}_k(i,j) = \{a,b \,\big|\, |a-i| \le k/2,\ |b-j| \le k/2\}$ (see Figure 2). Importantly, CNNs employ *weight sharing*, where $W$ is reused for generating the output for all pixel positions $ij$. Weight sharing enforces translation equivariance in the learned representation and consequently decouples the parameter count of the convolution from the input size.

![](https://media.arxiv-vanity.com/render-output/8047774/diagrams/extracting_local_window.png)

Figure 1: An example of a local window around $i=3$, $j=3$ (one-indexed) with spatial extent $k=3$.

![](https://media.arxiv-vanity.com/render-output/8047774/diagrams/standard_convolution_upright.png)

Figure 2: An example of a $3 \times 3$ convolution. The output is the inner product between the local window and the learned weights.

A wide array of machine learning applications have leveraged convolutions to achieve competitive results including text-to-speech [oord2016wavenet](#bib.bib36) and generative sequence models [salimans2017pixelcnn++](#bib.bib37); [convseq2seq](#bib.bib38). Several efforts have reformulated convolutions to improve the predictive performance or the computational efficiency of a model. Notably, depthwise-separable convolutions provide a low-rank factorization of spatial and channel interactions [sifre2014rigid](#bib.bib39); [BatchNorm](#bib.bib40); [chollet2016xception](#bib.bib41). Such factorizations have allowed for the deployment of modern CNNs on mobile and edge computing devices [howard2017mobilenets](#bib.bib42); [sandler2018mobilenetv2](#bib.bib43). Likewise, relaxing translation equivariance has been explored in locally connected networks for various vision applications [bartunov2018assessing](#bib.bib44).
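To make Equation (1) concrete, the following is a minimal NumPy sketch of the per-pixel computation. The function name `local_conv2d` and the explicit loops are our own, chosen for clarity over speed, and zero padding at the borders is one possible convention the text does not specify:

```python
import numpy as np

def local_conv2d(x, W):
    """Direct (unvectorized) evaluation of Equation (1).

    x: input of shape (h, w, d_in)
    W: learned weights of shape (k, k, d_out, d_in)
    Returns y of shape (h, w, d_out). Zero padding handles borders.
    """
    h, w, d_in = x.shape
    k, _, d_out, _ = W.shape
    r = k // 2  # half-width of the local neighborhood N_k
    xp = np.pad(x, ((r, r), (r, r), (0, 0)))  # zero-pad spatial dims
    y = np.zeros((h, w, d_out))
    for i in range(h):
        for j in range(w):
            # Sum over all offsets (a, b) in N_k(i, j); W is indexed by
            # the relative offset, which is what produces weight sharing.
            for a in range(k):
                for b in range(k):
                    y[i, j] += W[a, b] @ xp[i + a, j + b]
    return y
```

Weight sharing appears here as the reuse of the same `W` at every `(i, j)`, which is what makes the parameter count independent of the input size.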
### 2.2 Self-Attention

Attention was introduced by [bahdanau2014neural](#bib.bib45) for the encoder-decoder in a neural sequence transduction model to allow for content-based summarization of information from a variable length source sentence. The ability of attention to learn to focus on important regions within a context has made it a critical component in neural transduction models for several modalities [wu2016google](#bib.bib26); [xu2015show](#bib.bib29); [chorowski2015attention](#bib.bib27). Using attention as a primary mechanism for representation learning has seen widespread adoption in deep learning after [vaswani2017attention](#bib.bib25), which entirely replaced recurrence with self-attention. Self-attention is defined as attention applied to a single context instead of across multiple contexts (in other words, the query, keys, and values, as defined later in this section, are all extracted from the same context). The ability of self-attention to directly model long-distance interactions and its parallelizability, which leverages the strengths of modern hardware, has led to state-of-the-art models for various tasks [huang2018improved](#bib.bib46); [radford2019language](#bib.bib47); [devlin2018bert](#bib.bib48); [parmar2018image](#bib.bib49); [shazeer2018mesh](#bib.bib50); [shaw2018self](#bib.bib51).

An emerging theme of augmenting convolution models with self-attention has yielded gains in several vision tasks. [wang2018non](#bib.bib32) show that self-attention is an instantiation of non-local means [Buades:2005:NAI:1068508.1069066](#bib.bib52) and use it to achieve gains in video classification and object detection. [chen20182](#bib.bib53) also show improvements on image classification and achieve state-of-the-art results on video action recognition tasks with a variant of non-local means. Concurrently, [bello2019](#bib.bib33) also see significant gains in object detection and image classification through augmenting convolutional features with global self-attention features. This paper goes beyond [bello2019](#bib.bib33) by removing convolutions and employing local self-attention across the entirety of the network. Another concurrent work [hu2019local](#bib.bib35) explores a similar line of thinking by proposing a new content-based layer to be used across the model. This approach is complementary to our focus on directly leveraging existing forms of self-attention for use across the vision model.

We now describe a stand-alone self-attention layer that can be used to replace spatial convolutions and build a fully attentional model. The attention layer is developed with a focus on simplicity by reusing innovations explored in prior works, and we leave it to future work to develop novel attentional forms.

Similar to a convolution, given a pixel $x_{ij} \in \mathbb{R}^{d_{in}}$, we first extract a local region of pixels in positions $ab \in \mathcal{N}_k(i,j)$ with spatial extent $k$ centered around $x_{ij}$, which we call the *memory block*. This form of local attention differs from prior work exploring attention in vision, which has performed global (i.e., all-to-all) attention between all pixels [wang2018non](#bib.bib32); [bello2019](#bib.bib33). Global attention can only be used after significant spatial downsampling has been applied to the input because it is computationally expensive, which prevents its usage across all layers in a fully attentional model.
Single-headed attention for computing the pixel output $y_{ij} \in \mathbb{R}^{d_{out}}$ is then computed as follows (see Figure 3):

$$y_{ij} = \sum_{a,b \,\in\, \mathcal{N}_k(i,j)} \mathrm{softmax}_{ab}\!\left(q_{ij}^\top k_{ab}\right) v_{ab} \tag{2}$$

where the *queries* $q_{ij} = W_Q x_{ij}$, *keys* $k_{ab} = W_K x_{ab}$, and *values* $v_{ab} = W_V x_{ab}$ are linear transformations of the pixel in position $ij$ and the neighborhood pixels. $\mathrm{softmax}_{ab}$ denotes a softmax applied to all logits computed in the neighborhood of $ij$. $W_Q, W_K, W_V \in \mathbb{R}^{d_{out} \times d_{in}}$ are all learned transforms. While local self-attention aggregates spatial information over neighborhoods similar to convolutions (Equation (1)), the aggregation is done with a convex combination of value vectors with mixing weights ($\mathrm{softmax}_{ab}(\cdot)$) parametrized by content interactions. This computation is repeated for every pixel $ij$. In practice, multiple attention *heads* are used to learn multiple distinct representations of the input. This works by partitioning the pixel features $x_{ij}$ depthwise into $N$ groups $x_{ij}^n \in \mathbb{R}^{d_{in}/N}$, computing single-headed attention on each group separately as above with different transforms $W_Q^n, W_K^n, W_V^n \in \mathbb{R}^{d_{out}/N \times d_{in}/N}$ per head, and then concatenating the output representations into the final output $y_{ij} \in \mathbb{R}^{d_{out}}$.

![](https://media.arxiv-vanity.com/render-output/8047774/diagrams/attention_diagram.png)

Figure 3: An example of a local attention layer over spatial extent of $k=3$.

![](https://media.arxiv-vanity.com/render-output/8047774/diagrams/relative_distances.png)

Figure 4: An example of relative distance computation. The relative distances are computed with respect to the position of the highlighted pixel. The format of distances is *row offset*, *column offset*.

As currently framed, no positional information is encoded in attention, which makes it permutation equivariant, limiting expressivity for vision tasks. Sinusoidal embeddings based on the absolute position of pixels in an image ($ij$) can be used [vaswani2017attention](#bib.bib25), but early experimentation suggested that using relative positional embeddings [shaw2018self](#bib.bib51); [huang2018improved](#bib.bib46) results in significantly better accuracies. Instead, attention with 2D relative position embeddings, *relative attention*, is used. Relative attention starts by defining the relative distance of $ij$ to each position $ab \in \mathcal{N}_k(i,j)$. The relative distance is factorized across dimensions, so each element $ab \in \mathcal{N}_k(i,j)$ receives two distances: a row offset $a-i$ and a column offset $b-j$ (see Figure 4). The row and column offsets are associated with embeddings $r_{a-i}$ and $r_{b-j}$ respectively, each with dimension $\frac{1}{2} d_{out}$. The row and column offset embeddings are concatenated to form $r_{a-i,\,b-j}$. This spatial-relative attention is now defined as

$$y_{ij} = \sum_{a,b \,\in\, \mathcal{N}_k(i,j)} \mathrm{softmax}_{ab}\!\left(q_{ij}^\top k_{ab} + q_{ij}^\top r_{a-i,\,b-j}\right) v_{ab} \tag{3}$$

Thus, the logit measuring the similarity between the query and an element in $\mathcal{N}_k(i,j)$ is modulated both by the content of the element and the relative distance of the element from the query. Note that by infusing relative position information, self-attention also enjoys translation equivariance, similar to convolutions.
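As a concreteness check on Equations (2) and (3), here is a single-headed NumPy sketch. The names and the zero-padding convention are our own; a practical implementation would be batched and vectorized, and multi-head attention would apply this to depthwise groups of the input and concatenate the results:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def local_relative_attention(x, W_Q, W_K, W_V, r_row, r_col, k):
    """Single-headed local self-attention with 2D relative embeddings (Eq. 3).

    x: (h, w, d_in); W_Q, W_K, W_V: (d_out, d_in)
    r_row, r_col: (k, d_out // 2) offset embeddings, indexed by offset + k // 2
    Dropping the q @ r term below recovers Equation (2).
    """
    h, w, d_in = x.shape
    d_out = W_Q.shape[0]
    rad = k // 2
    xp = np.pad(x, ((rad, rad), (rad, rad), (0, 0)))  # zero-pad borders
    y = np.zeros((h, w, d_out))
    for i in range(h):
        for j in range(w):
            q = W_Q @ x[i, j]  # query for pixel ij
            logits, values = [], []
            for da in range(-rad, rad + 1):
                for db in range(-rad, rad + 1):
                    xab = xp[i + rad + da, j + rad + db]
                    # concatenated row/column offset embedding r_{a-i, b-j}
                    r = np.concatenate([r_row[da + rad], r_col[db + rad]])
                    logits.append(q @ (W_K @ xab) + q @ r)
                    values.append(W_V @ xab)
            w_attn = softmax(np.array(logits))  # convex mixing weights
            y[i, j] = w_attn @ np.array(values)
    return y
```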
The parameter count of attention is independent of the size of the spatial extent, whereas the parameter count for convolution grows quadratically with spatial extent. The computational cost of attention also grows more slowly with spatial extent compared to convolution with typical values of $d_{in}$ and $d_{out}$. For example, if $d_{in} = d_{out} = 128$, a convolution layer with $k=3$ has the same computational cost as an attention layer with $k=19$.

3 Fully Attentional Vision Models
----------------------------------

Given a local attention layer as a primitive, the question is how to construct a fully attentional architecture. We achieve this in two steps:

### 3.1 Replacing Spatial Convolutions

A spatial convolution is defined as a convolution with spatial extent $k > 1$. This definition excludes $1 \times 1$ convolutions, which may be viewed as a standard fully connected layer applied to each pixel independently (many deep learning libraries internally translate a $1 \times 1$ convolution to a simple matrix multiplication). This work explores the straightforward strategy of creating a fully attentional vision model: take an existing convolutional architecture and replace every instance of a spatial convolution with an attention layer. A $2 \times 2$ average pooling with stride 2 operation follows the attention layer whenever spatial downsampling is required.

This work applies the transform to the ResNet family of architectures [he2015deep](#bib.bib15). The core building block of a ResNet is a *bottleneck block* with a structure of a $1 \times 1$ down-projection convolution, a $3 \times 3$ spatial convolution, and a $1 \times 1$ up-projection convolution, followed by a residual connection between the input of the block and the output of the last convolution in the block. The bottleneck block is repeated multiple times to form the ResNet, with the output of one bottleneck block being the input of the next. The proposed transform swaps the $3 \times 3$ spatial convolution with a self-attention layer as defined in Equation (3). All other structure, including the number of layers and when spatial downsampling is applied, is preserved. This transformation strategy is simple but possibly suboptimal. Crafting the architecture with attention as a core component, such as with architecture search [zoph2017neural](#bib.bib54), holds the promise of deriving better architectures.
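The transform on a bottleneck block can be sketched as follows in PyTorch. This is our own minimal rendering, with batch norm omitted; `attn_layer` stands for any batched module implementing Equation (3), such as a vectorized version of the earlier sketch:

```python
import torch.nn as nn

class AttentionBottleneck(nn.Module):
    """ResNet bottleneck block with its 3x3 spatial convolution swapped
    for a local self-attention layer, per Section 3.1."""

    def __init__(self, attn_layer, d_in, d_mid, d_out, downsample=False):
        super().__init__()
        # 2x2 average pooling with stride 2 follows attention whenever
        # the original block performed spatial downsampling.
        pool = nn.AvgPool2d(2, 2) if downsample else nn.Identity()
        self.main = nn.Sequential(
            nn.Conv2d(d_in, d_mid, kernel_size=1),   # 1x1 down-projection
            nn.ReLU(),
            attn_layer,                              # replaces the 3x3 conv
            pool,
            nn.ReLU(),
            nn.Conv2d(d_mid, d_out, kernel_size=1),  # 1x1 up-projection
        )
        # The residual path is projected with a 1x1 convolution here for
        # simplicity; a real ResNet uses identity when shapes match.
        self.shortcut = nn.Sequential(
            nn.Conv2d(d_in, d_out, kernel_size=1), pool
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.main(x) + self.shortcut(x))
```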
### 3.2 Replacing the Convolutional Stem

The initial layers of a CNN, sometimes referred to as the *stem*, play a critical role in learning local features such as edges, which later layers use to identify global objects. Due to input images being large, the stem typically differs from the core block, focusing on lightweight operations with spatial downsampling [szegedy2015going](#bib.bib11); [he2015deep](#bib.bib15). For example, in a ResNet, the stem is a $7 \times 7$ convolution with stride 2 followed by $3 \times 3$ max pooling with stride 2.

At the stem layer, the content is comprised of RGB pixels that are individually uninformative and heavily spatially correlated. This property makes learning useful features such as edge detectors difficult for content-based mechanisms such as self-attention. Our early experiments verify that using the self-attention form described in Equation (3) in the stem underperforms compared to using the convolution stem of ResNet.

The distance-based weight parametrization of convolutions allows them to easily learn edge detectors and other local features necessary for higher layers. To bridge the gap between convolutions and self-attention while not significantly increasing computation, we inject distance-based information into the pointwise $1 \times 1$ convolution ($W_V$) through spatially-varying linear transformations. The new value transformation is $\tilde{v}_{ab} = \left(\sum_m p(a,b,m)\, W_V^m\right) x_{ab}$, where multiple value matrices $W_V^m$ are combined through a convex combination of factors that are a function of the position of the pixel in its neighborhood, $p(a,b,m)$. The position-dependent factors are similar to convolutions, which learn scalar weights dependent on the pixel location in a neighborhood. The stem is then comprised of the attention layer with spatially aware value features followed by max pooling. For simplicity, the attention receptive field aligns with the max pooling window. More details on the exact formulation of $p(a,b,m)$ are given in the appendix.

4 Experiments
--------------

### 4.1 ImageNet Classification

##### Setup

We perform experiments on the ImageNet classification task [russakovsky2014imagenet](#bib.bib55), which contains 1.28 million training images and 50000 test images. The procedure described in Section 3.1 of replacing the spatial convolution layer with a self-attention layer inside each bottleneck block of a ResNet-50 [he2015deep](#bib.bib15) model is used to create the attention model. The multi-head self-attention layer uses a spatial extent of $k=7$ and 8 attention heads. The position-aware attention stem as described above is used. The stem performs self-attention within each $4 \times 4$ spatial block of the original image, followed by batch normalization and a $4 \times 4$ max pool operation. Exact hyperparameters can be found in the appendix.

To study the behavior of these models with different computational budgets, we scale the model either by width or depth. For width scaling, the base width is linearly multiplied by a given factor across all layers. For depth scaling, a given number of layers are removed from each *layer group*. There are 4 layer groups, each with multiple layers operating on the same spatial dimensions. Groups are delineated by spatial downsampling. The 38 and 26 layer models remove 1 and 2 layers respectively from each layer group compared to the 50 layer model.

| | ResNet-26 (FLOPS B / Params M / Acc. %) | ResNet-38 (FLOPS B / Params M / Acc. %) | ResNet-50 (FLOPS B / Params M / Acc. %) |
| --- | --- | --- | --- |
| Baseline | 4.7 / 13.7 / 74.5 | 6.5 / 19.6 / 76.2 | 8.2 / 25.6 / 76.9 |
| Conv-stem + Attention | 4.5 / 10.3 / 75.8 | 5.7 / 14.1 / 77.1 | 7.0 / 18.0 / 77.4 |
| Full Attention | 4.7 / 10.3 / 74.8 | 6.0 / 14.1 / 76.9 | 7.2 / 18.0 / 77.6 |

Table 1: ImageNet classification results for a ResNet network with different depths. *Baseline* is a standard ResNet, *Conv-stem + Attention* uses spatial convolution in the stem and attention everywhere else, and *Full Attention* uses attention everywhere including the stem. The attention models outperform the baseline across all depths while having 12% fewer FLOPS and 29% fewer parameters.

Figure 5: Comparing parameters and FLOPS against accuracy on ImageNet classification across a range of network widths for ResNet-50. Attention models have fewer parameters and FLOPS while improving upon the accuracy of the baseline.
##### Results

Table 1 and Figure 5 show the results of the full attention variant compared with the convolution baseline. Compared to the ResNet-50 baseline, the full attention variant achieves 0.5% higher classification accuracy while having 12% fewer floating point operations (*FLOPS*; some prior works define a FLOP as a single atomic multiply-add, whereas we treat the multiply and add as 2 FLOPS, which causes a 2× discrepancy in the reported number) and 29% fewer parameters. Furthermore, this performance gain is consistent across most model variations generated by both depth and width scaling.

### 4.2 COCO Object Detection

##### Setup

In this section, we evaluate attention models on the COCO object detection task [lin2014microsoft](#bib.bib56) using the RetinaNet architecture [lin2017focal](#bib.bib18). RetinaNet is an object detection model that consists of a *backbone* image classification network followed by a Feature Pyramid Network (*FPN*) [lin2017feature](#bib.bib57) and two output networks known as detection heads. We experiment with making the backbone and/or the FPN and detection heads fully attentional. The backbone models are the same models described in Section 4.1. The details of how the FPN and detection heads are made fully attentional are provided in the appendix.

| Detection Heads + FPN | Backbone | FLOPS (B) | Params (M) | mAP coco / 50 / 75 | mAP s / m / l |
| --- | --- | --- | --- | --- | --- |
| Convolution | Baseline | 182 | 33.4 | 36.5 / 54.3 / 39.0 | 18.3 / 40.6 / 51.7 |
| Convolution | Conv-stem + Attention | 173 | 25.9 | 36.8 / 54.6 / 39.3 | 18.4 / 41.1 / 51.7 |
| Convolution | Full Attention | 173 | 25.9 | 36.2 / 54.0 / 38.7 | 17.5 / 40.3 / 51.7 |
| Attention | Conv-stem + Attention | 111 | 22.0 | 36.6 / 54.3 / 39.1 | 19.0 / 40.7 / 51.1 |
| Attention | Full Attention | 110 | 22.0 | 36.6 / 54.5 / 39.2 | 18.5 / 40.6 / 51.6 |

Table 2: Object detection on COCO dataset with RetinaNet [lin2017focal](#bib.bib18). Mean Average Precision (mAP) is reported at three different IoU values and for three different object sizes (small, medium, large). The fully attentional models achieve similar mAP as the baseline while having up to 39% fewer FLOPS and 34% fewer parameters.

##### Results

Table 2 shows the object detection results. Using an attention-based backbone in the RetinaNet matches the mAP of using the convolutional backbone but contains 22% fewer parameters. Furthermore, employing attention across all parts of the model including the backbone, FPN, and detection heads matches the mAP of the baseline RetinaNet while using 34% fewer parameters and 39% fewer FLOPS. These results demonstrate the efficacy of stand-alone attention across multiple vision tasks.

### 4.3 Where is stand-alone attention most useful?
| Conv Groups | Attention Groups | FLOPS (B) | Params (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- |
| - | 1, 2, 3, 4 | 7.0 | 18.0 | 80.2 |
| 1 | 2, 3, 4 | 7.3 | 18.1 | 80.7 |
| 1, 2 | 3, 4 | 7.5 | 18.5 | 80.7 |
| 1, 2, 3 | 4 | 8.0 | 20.8 | 80.2 |
| 1, 2, 3, 4 | - | 8.2 | 25.6 | 79.5 |
| 2, 3, 4 | 1 | 7.9 | 25.5 | 79.7 |
| 3, 4 | 1, 2 | 7.8 | 25.0 | 79.6 |
| 4 | 1, 2, 3 | 7.2 | 22.7 | 79.9 |

Table 3: Modifying which layer groups use which primitive. Accuracies computed on the validation set. The best performing models use convolutions for early groups and attention for later groups.

| Spatial Extent ($k \times k$) | FLOPS (B) | Top-1 Acc. (%) |
| --- | --- | --- |
| 3×3 | 6.6 | 76.4 |
| 5×5 | 6.7 | 77.2 |
| 7×7 | 7.0 | 77.4 |
| 9×9 | 7.3 | 77.7 |
| 11×11 | 7.7 | 77.6 |

Table 4: Varying the spatial extent $k$. Parameter count is constant across all variations. Small $k$ perform poorly, but the improvements of larger $k$ plateau.

The impressive performance of fully attentional models verifies that stand-alone attention is a viable primitive for vision models. In this section, we study which parts of the network benefit the most from stand-alone attention.

##### Stem

First, we compare the performance of the attention stem against the convolution stem used in ResNet. All other spatial convolutions are replaced with stand-alone attention. Tables 1 and 2 and Figure 5 show the results on ImageNet classification and COCO object detection. For classification, the convolution stem consistently matches or outperforms the attention stem. For object detection, the convolution stem performs better when the detection heads and FPN are also convolutional, but performs similarly when the entire rest of the network is fully attentional. These results suggest that convolutions consistently perform well when used in the stem.

##### Full network

Next, we experiment with using convolution and stand-alone attention in different layer groups in a ResNet with a convolution stem. Table 3 shows that the best performing models use convolutions in the early groups and attention in the later groups. These models are also similar in terms of FLOPS and parameters to the fully attentional model. In contrast, when attention is used in the early groups and convolutions are used in the later groups, the performance degrades despite a large increase in the parameter count. This suggests that convolutions may better capture low level features while stand-alone attention layers may better integrate global information.

Taken together, these results suggest that vision practitioners should focus on developing strategies for designing architectures that combine the comparative advantages of convolution and stand-alone attention.

### 4.4 Which components are important in attention?

| Positional Encoding Type | FLOPS (B) | Params (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- |
| none | 6.9 | 18.0 | 77.6 |
| absolute | 6.9 | 18.0 | 78.2 |
| relative | 7.0 | 18.0 | 80.2 |

Table 5: The effect of changing the positional encoding type for attention. Accuracies computed on the validation set. Relative encodings significantly outperform other strategies.

| Attention Type | FLOPS (B) | Params (M) | Top-1 Acc. (%) |
| --- | --- | --- | --- |
| $q^\top r$ | 6.1 | 16.7 | 76.9 |
| $q^\top k + q^\top r$ | 7.0 | 18.0 | 77.4 |

Table 6: The effect of removing the $q^\top k$ interactions in attention. Using just $q^\top r$ interactions drops accuracy by only 0.5%.
This section presents ablations designed to understand the contributions of the various components in the local attention layer. Unless specified, all attention models in the ablations use the convolution stem.

#### 4.4.1 Effect of spatial extent of self-attention

The value of the spatial extent $k$ controls the size of the region each pixel can attend to. Table 4 studies the effect of varying the spatial extent. While using a small $k$, such as $k=3$, has a large negative impact on performance, the improvements of using a larger $k$ plateau around $k=11$. The exact plateau value likely depends on specific settings of hyperparameters such as the feature size and number of attention heads used.

#### 4.4.2 Importance of positional information

Table 5 ablates the different types of positional encodings that can be used: no positional encoding, a sinusoidal encoding dependent on the absolute position of a pixel [vaswani2017attention](#bib.bib25), and relative position encodings. Using any notion of positional encoding is beneficial over using none, but the type of positional encoding is also important. Relative position encodings perform 2% better than absolute encodings. Furthermore, Table 6 demonstrates the important role of the content-relative interactions ($q \cdot r$) in attention. Removing the content-content ($q \cdot k$) interactions and just using the content-relative interactions drops the accuracy by only 0.5%. The importance of positional information suggests that future work may improve attention by exploring different parameterizations and usages of positional information.

#### 4.4.3 Importance of spatially-aware attention stem

Table 7 compares using stand-alone attention in the stem with the attention stem with spatially-aware values proposed in Section 3.2. The proposed attention stem outperforms stand-alone attention by 1.4% despite having a similar number of FLOPS, validating the utility of modifying attention for use in the stem. Furthermore, applying a spatial convolution to the values instead of the spatially-aware mixture of point-wise transformations proposed in Section 3.2 incurs more FLOPS and performs slightly worse.

Future work can focus on unifying the spatially-aware attention used in the stem with the attention used in the main trunk of the network.

| Attention Stem Type | FLOPS (B) | Top-1 Acc. (%) |
| --- | --- | --- |
| stand-alone | 7.1 | 76.2 |
| spatial convolution for values | 7.4 | 77.2 |
| spatially aware values | 7.2 | 77.6 |

Table 7: Ablating the form of the attention stem. Spatially-aware value attention outperforms both stand-alone attention and values generated by a spatial convolution.
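For reference, here is a minimal NumPy sketch of the spatially-aware value computation from Section 3.2 that Table 7 ablates. The paper defers the exact form of $p(a,b,m)$ to its appendix, so the softmax parametrization over learned per-position logits below is our own assumption:

```python
import numpy as np

def spatially_aware_values(x_neighborhood, W_V_bank, p_logits):
    """Compute v~_ab = (sum_m p(a, b, m) W_V^m) x_ab over a k x k window.

    x_neighborhood: (k, k, d_in) pixels in N_k(i, j)
    W_V_bank: (M, d_out, d_in) bank of value matrices
    p_logits: (k, k, M) learned position-dependent mixing logits
    (assumed form; the paper only requires p(a, b, m) to be convex)
    """
    k, _, d_in = x_neighborhood.shape
    M, d_out, _ = W_V_bank.shape
    # Convex combination over the M value matrices at each position.
    p = np.exp(p_logits)
    p = p / p.sum(axis=-1, keepdims=True)  # (k, k, M), rows sum to 1
    v = np.zeros((k, k, d_out))
    for a in range(k):
        for b in range(k):
            # Mix the value matrices with position-dependent weights.
            W_ab = np.einsum('m,moi->oi', p[a, b], W_V_bank)
            v[a, b] = W_ab @ x_neighborhood[a, b]
    return v
```

The resulting values play the role of $v_{ab}$ in Equation (3), giving the stem a convolution-like, distance-dependent inductive bias at negligible extra cost.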
5 Discussion
-------------

In this work, we verified that content-based interactions can indeed serve as the primary primitive of vision models. A fully attentional network based on the proposed stand-alone local self-attention layer achieves competitive predictive performance on ImageNet classification and COCO object detection tasks while requiring fewer parameters and floating point operations than the corresponding convolution baselines. Furthermore, ablations show that attention is especially effective in the later parts of the network.

We see several opportunities for improving the performance of these networks. First, the attention mechanism may be improved by developing better methods for capturing geometries [cohen2018spherical](#bib.bib58); [cohen2019gauge](#bib.bib59). Second, the architectures employed for image classification and object detection were developed by applying a simple transformation to models designed for the convolutional primitive [identity-mappings](#bib.bib13); [faster\_rcnn](#bib.bib19). It may be possible to achieve improvements by specifically searching for architectures with an attention layer as a component in the design search space [tan2018mnasnet](#bib.bib31); [zoph2018learning](#bib.bib16); [chen2018searching](#bib.bib21); [ghiasi2019fpn](#bib.bib60). Finally, additional work on proposing new attention forms that can capture low level features can make attention effective in the early layers of networks [wu2019pay](#bib.bib61); [zhu2019empirical](#bib.bib62).

Although the training efficiency and computational demand of an attention-based architecture are favorable compared to a traditional convolution, the resulting network is slower in wall-clock time. The reason for this discrepancy is the lack of optimized kernels available on various hardware accelerators. In principle, depending on the degree to which the field deems that attention provides a viable path, it may be possible to significantly speed up the wall-clock time for training and inference accordingly.

While this work primarily focuses on content-based interactions to establish their virtue for vision tasks, in the future, we hope to unify convolution and self-attention to best combine their unique advantages. Given the success of content-based interactions on core computer vision tasks, we expect that future work may explore how attention could be applied to other vision tasks such as semantic segmentation [chen2017deeplab](#bib.bib63), instance segmentation [chen2018masklab](#bib.bib64), keypoint detection [detone2018superpoint](#bib.bib65), human pose estimation [toshev2014deeppose](#bib.bib66); [newell2016stacked](#bib.bib67) and other tasks currently addressed with convolutional neural networks.

#### Acknowledgments

We thank Blake Hechtman, Justin Gilmer, Pieter-jan Kindermans, Quoc Le, Samy Bengio, and Shibo Wang for fruitful discussions and assistance with implementations as well as the larger Google Brain team for support and assistance.
f2cb4084-6152-4657-865f-d630d5d7b092
trentmkelly/LessWrong-43k
LessWrong
Rationality, Pedagogy, and "Vibes": Quick Thoughts

I just read this book review of Egan's The Educated Mind. Here are some thoughts I had, written for all but grammatically directed at the review's anonymous author (like a typical comment!).

I'd love to go to that middle school, and that high school. It would set a lower bound on how many different areas someone has passing knowledge of. (This sounds like faint praise, but Gell-Mann Amnesia partly depends on people not having this.)

I don't know if this would really help all the students, even if I mostly-agree with the Bruner(/HPMOR) statement that basically anyone can learn anything, after the age where pouring-water-into-a-taller-container-looks-like-adding-water no longer applies. But aside from the credential/signaling model of education (which, itself, could explicitly select against anything remotely pleasant or fun or effective), I think a lot of school's function is just daycare. The school does its "job" of keeping a kid off the streets and out of their parents' hair for 8 hours, but adding this curriculum to the existing system doesn't e.g. solve a kid's learning problems from being in an abusive home. (This probably isn't for the curriculum itself to have to solve, but I will say that The Bottleneck could just be on mental/physical health. E.g. schools fuck up teens' sleep.)

This sorta flatters my bias in favor of my own idea for how to teach math, so that's something. Ditto for my ADHD, as you discussed; that "curiosity about everything" is indeed quite fun. (No wonder TVTropes and social media feeds are both addictive!) I got most of the Really Cool Stuff (taught in these Cool Intuitive Ways!) from a combination of pop-science books, YouTube videos, and blog posts on LessWrong and elsewhere. I was lucky that some of the core parts were already thrown into my middle/high school (e.g. how Bayes' formula can be derived from looking at a Venn Diagram). Now that I'm trying to specialize and amp-up my skills (for AI alignment), I can go into more depth u
e0963f77-8358-45dc-ac84-b2c92db465d6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Learning human preferences: optimistic and pessimistic scenarios

In this post and the next, I try and clarify - for myself and for others - the precise practical implications of the "[Occam's razor is insufficient to infer the preferences of irrational agents](https://arxiv.org/abs/1712.05812)" paper. Time and again, I've had trouble getting others to understand what that paper implies, and what it doesn't. It's neither irrelevant (like many no-free-lunch theorems), nor is it a radical skepticism/nothing is real/we can't really know anything paper.

I've been having productive conversations with Rebecca Gorman, whom I want to thank for her help (and who phrased things well in terms of latent variables)!

A simple biased agent
=====================

Consider the following simple model of an agent:

![](https://www.dropbox.com/s/dqpfu4wyq0ak4sp/Agent.png?raw=1)

The agent's actions can be explained by their beliefs and preferences[[1]](#fn-H2w2Rtq4gGwGaK366-1), and by their biases: by this, we mean the way in which the action selector differs from an unboundedly rational expected preference maximiser. The results of the Occam's razor paper imply that preferences (and beliefs, and biases) cannot be deduced separately from knowing the agent's policy (and hence, *a fortiori*, from any observations of the agent's behaviour).

Latent and "natural" variables
------------------------------
Let $x$ be a [latent variable](https://en.wikipedia.org/wiki/Latent_variable) of the policy $\pi$ - or some variable that can be deduced from $\pi$ in some simple or natural way. A consequence of the Occam's razor result is that any such $x$ will typically be a mixture of preferences, beliefs, and biases. For if such an $x$ tended to be restricted to one of these three components, that would mean that separating them would be possible via latent or simple variables. So, for example, if we conducted a [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) on $\pi$, we would expect that the components would all be mixes of preferences/beliefs/biases.

The optimistic scenario
-----------------------

To get around the impossibility result, we need "normative assumptions": assumptions about the preferences (or beliefs, or biases) of the agent that cannot be deduced fully from observations.

Under the optimistic scenario, we don't need many of these, at least for identifying human preferences. We can label a few examples ("the anchoring bias, as illustrated in this scenario, is a bias"; "people are at least weakly rational"; "humans often don't think about new courses of action they've never seen before", etc...). Call this labelled data[[2]](#fn-H2w2Rtq4gGwGaK366-2) $D$.

The algorithm now constructs categories preferences\*, beliefs\*, and biases\* - these are the generalisations that it has achieved from $D$. Optimistically, these correspond quite closely to what we mean by these categories, at least when combined with the information of $\pi_h$, the policy of human $h$. It is now possible for the algorithm to identify latent or natural variables that lie along the "preferences", "beliefs", and "biases" axes, thus identifying and isolating human preferences.

It seems there's a contradiction here - by definition, $D$ does not contain much information, yet separating preferences may require a lot of information. The hope is that $D$ acts as a doorway to other sources of information - such as human psychology papers, Wikipedia, human fiction, and so on. Call this other data $D'$.
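To make the impossibility result concrete before continuing, here is a toy illustration (our own construction, not from the post or the paper): a rational planner paired with a reward function, and an anti-rational planner paired with the negated reward, produce identical behaviour in every state, so the policy alone cannot distinguish the two decompositions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(5, 3))  # toy reward over 5 states x 3 actions

# Decomposition 1: fully rational planner maximising reward R.
policy_rational = R.argmax(axis=1)

# Decomposition 2: fully anti-rational planner (minimises its reward),
# paired with the negated reward -R.
policy_anti = (-R).argmin(axis=1)

# The two (planner, preference) pairs are as different as possible,
# yet yield exactly the same policy, so no amount of behavioural data
# separates them without extra normative assumptions.
assert (policy_rational == policy_anti).all()
```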
The Occam's razor result still applies to $D' + \pi_h$: one of the simplest explanations for $\pi_h$ is to assume that $h$ is always rational and that $D'$ consists of "[speech acts](https://en.wikipedia.org/wiki/Speech_act)" (think of a dishonest politician's speech - you would not want to take the literal content of the speech as correct information). The result still applies even to $D' + \sum_{h \in H} \pi_h$, where we take the policies of every human in the set $H$ of all humans.

However, it is hoped that $D + D' + \pi_h$ will allow the algorithm to effectively separate preferences from biases and beliefs. The hope is that $D$ acts as a key to unlock the vast amount of information in $D'$ - that once the algorithm has a basic idea what a preference is, then all the human literature on the meaning of preference becomes usable as more than just speech acts, but as actual sources of information: the algorithm realises the meaning of $D'$ the way we want it to, and realises what are lies/metaphors/exaggerations. This is what we would hope would happen. Guided by our own intuitions - which have no problem distinguishing preferences in other humans and in ourselves, at least roughly - we may feel that this is likely.

The pessimistic scenario
------------------------

In the pessimistic scenario, human preferences, biases, and beliefs are twisted together in a far more complicated way, and cannot be separated by a few examples.

Consider for example the [anchoring bias](https://en.wikipedia.org/wiki/Anchoring_(cognitive_bias)). I've [argued](https://www.lesswrong.com/posts/dYsc2EGdvsCjjM4Wt/three-anchorings-number-attitude-and-taste) that the anchoring bias is formally very close to being a taste preference. In contrast, take examples of racial bias, hindsight bias, illusion of control, or naive realism. These biases all seem quite different from the anchoring bias, and quite different from each other. At the very least, they seem to be of different "[type signature](https://en.wikipedia.org/wiki/Type_signature)". So, under the pessimistic scenario, some biases are much closer to preferences than generic biases (and generic preferences) are to each other.

It's not uncommon for parts of the brain to reuse other parts for different purposes; the purity moral preference, for example, [recycles part](https://psycnet.apa.org/record/2016-31609-001) of the emotion of [disgust](https://en.wikipedia.org/wiki/Disgust). Individual biases and preferences probably similarly use a lot of the same machinery in the brain, making it hard to tell the differences between them.

Thus providing a few examples of preferences/beliefs/biases, $D$, is not enough to disentangle them. Here $D$ fails to unlock the meaning of $D'$ - when reading psychology papers, the algorithm sees a lot of behaviour ("this human wrote this paper; I could have predicted that"), but not information relevant to the division between preferences/beliefs/biases.

Pessimism, information, and circular reasoning
----------------------------------------------

It's worth digging into that last point a bit more, since it is key to many people's intuitions in this area. On [this website](https://www.quotetab.com/quote/by-herodotus/civil-strife-is-as-much-a-greater-evil-than-a-concerted-war-effort-as-war-itself), we find a quote:

> Civil strife is as much a greater evil than a concerted war effort as war itself is worse than peace.
> Herodotus

Taken literally, this would mean civil strife << war << peace.
But no-one sensible would take it literally; first of all, we'd want to know if the quote was genuine, we'd want to figure out a bit about [Herodotus's background](https://en.wikipedia.org/wiki/Herodotus), we'd want to see whether his experience is relevant, what has changed in warfare and human preferences over the centuries, and so on. So we'd be putting the information into context, and, to do so, we'd be using our own [theory of mind](https://en.wikipedia.org/wiki/Theory_of_mind), our own knowledge of what a preference is, what beliefs and biases humans typically have...

There's a chicken and egg problem: it's not clear that extra information is much use to the algorithm without a basic understanding of what preferences/beliefs/biases are. So without a good grasp to get started, the algorithm may not be able to use the extra information - even all the world's information - to get a further understanding. And human outputs - such as psychology literature - are written to be understood unambiguously (-ish) *by humans*. Thus interpreting them in the human fashion may rely on implicit assumptions that the algorithm doesn't have access to.

It's important to realise that this is not a failure of intelligence on the part of the algorithm. AIXI, the idealised uncomputable superintelligence, will fail at image classification tasks if we give it incorrectly labelled data or don't give it enough examples to resolve ambiguous cases.

Failure mode of pessimistic scenario
------------------------------------

So the failure mode, in the pessimistic scenario, is that the algorithm generates the categories preferences\*, beliefs\*, and biases\*, but that these don't correspond well to actual preferences, beliefs, or biases - at least not as we get beyond the training examples provided (it doesn't help that humans themselves have trouble distinguishing these in many situations!). So, what the algorithm thinks is a preference may well be a mixture of all three categories.

We might correct it by pointing out its mistakes and adding some more examples, but this might only carry it a bit further: whenever it gets to an area where we haven't provided labels, it starts to make large categorisation errors or stumbles upon [adversarial examples](https://christophm.github.io/interpretable-ml-book/adversarial.html).

This may feel counter-intuitive, because, for us, extracting preferences feels easy. I'll address that point in the next section, but I'll first note that algorithms finding tasks hard that we find easy is [not unusual](https://en.wikipedia.org/wiki/Moravec%27s_paradox).

To reiterate: making the algorithm smarter would not solve the problem; the issue (in the pessimistic scenario) is that the three categories are not well-defined nor well-separated.

Pessimism: humans interpreting other humans
===========================================

We know that humans [can interpret the preferences, beliefs, and biases of other humans](https://en.wikipedia.org/wiki/Theory_of_mind), at least approximately. If we can do it so easily, how could it be hard for a smart algorithm to do so? Moravec's paradox might imply that it would be *difficult* for an algorithm to do so, but that just means we need a smart enough algorithm.

But the question might be badly posed, in which case infinite smartness would not be enough. For example, imagine that humans looked like this, with the "Human Agency Interpreter" (basically the theory of mind) doing the job of interpreting other humans.
The green arrows are there to remind us how much of this is done via empathy: by projecting our own preferences/beliefs onto the human we are considering.

![](https://www.dropbox.com/s/1zz0cz0h8nztdcv/Human_agent.png?raw=1)

This setup also has an optimistic and a pessimistic scenario. They involve how feasible it is for the algorithm to isolate the "Human Agency Interpreter".

In the optimistic scenario, we can use a few examples, point to the Wikipedia page on theory of mind, and the algorithm will extract a reasonable facsimile of the human agency interpreter, and then use that to get a reasonable decomposition of the human algorithm into beliefs/preferences/biases.

In the pessimistic scenario, the Human Agency Interpreter is also twisted up with everything else in the human brain, our examples are not enough to disentangle it, and the same problem re-appears at this level: there is no principled way of figuring out the human theory of mind, without starting from the human theory of mind.

---

1. It may seem odd that there is an arrow going from observations to preferences, but a) human preferences do seem to vary in time and circumstances, and b) there is no clear distinction between observation-dependent and observation-independent preferences. For example, you could have a preference for eating when you're hungry; is this an eating preference that is hunger-dependent, or an eating-when-hungry preference that is independent of any observations? Because of these subtleties, I've preferred to draw the arrow unambiguously going into the preferences node, from the observations node, so that there is no confusion. [↩︎](#fnref-H2w2Rtq4gGwGaK366-1)
2. This data may end up being provided implicitly, by programmers correcting "obvious mistakes" in the algorithm. [↩︎](#fnref-H2w2Rtq4gGwGaK366-2)
318deb98-a5ef-4de7-8b1a-087b1c6cb224
trentmkelly/LessWrong-43k
LessWrong
[LINK] Well-written article on the Future of Humanity Institute and Existential Risk This introduction to the concept of existential risk is perhaps the best such article I've read targeted at a general audience.  It manages to cover a lot of ground in a way that felt engaging to me and that I think would carry along many readers who are intellectually curious but may not yet have had exposure to all of the related prerequisite ideas.   Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars? > Sometimes, when you dig into the Earth, past its surface and into the crustal layers, omens appear. In 1676, Oxford professor Robert Plot was putting the final touches on his masterwork, The Natural History of Oxfordshire, when he received a strange gift from a friend. The gift was a fossil, a chipped-off section of bone dug from a local quarry of limestone. Plot recognised it as a femur at once, but he was puzzled by its extraordinary size. The fossil was only a fragment, the knobby end of the original thigh bone, but it weighed more than 20 lbs (nine kilos). It was so massive that Plot thought it belonged to a giant human, a victim of the Biblical flood. He was wrong, of course, but he had the conceptual contours nailed. The bone did come from a species lost to time; a species vanished by a prehistoric catastrophe. Only it wasn’t a giant. It was a Megalosaurus, a feathered carnivore from the Middle Jurassic. > > Plot’s fossil was the first dinosaur bone to appear in the scientific literature, but many have followed it, out of the rocky depths and onto museum pedestals, where today they stand erect, symbols of a radical and haunting notion: a set of wildly different creatures once ruled this Earth, until something mysterious ripped them clean out of existence. > > [...] > > There are good reasons for any species to think darkly of its own extinction. Ninety-nine percent of the species that have lived on Earth have gone extinct, including more than five tool-using hominids. > > [...] > > Bostro
add3f24d-8e06-4ed7-a743-f2ccc7536f0d
trentmkelly/LessWrong-43k
LessWrong
Causes of a Debt Crisis—Economic

The crazy thing about debt crises is that they don’t get incentivized away by an efficient market. First, prices are way too high, then they’re way too low—why don’t self-interested people profit enough on avoiding them for us all to avoid them? Part of why I have never seen a satisfactory answer to this question is that it turns out to be highly polycausal. I’m going to save the interesting epistemic reasons for the next essay; this one will be on the three biggest economic causes, in my understanding. The first reason is unalterably cyclical economics. The second reason is that price forces are asymmetric to the upside because shorting assets is so much more costly than investing in them. The third reason is your bog-standard organizational malincentives exacerbated by some oligopoly-creating forces in the debt-underwriting sector. First, the macroeconomic backdrop of cyclical credit.

Cyclical economics

Ray Dalio says that the “big debt cycle” over 70+ years tends to repeat from cause-and-effect reasons that echo the smaller debt cycles within it. Smaller debt cycles make up a larger one, because each smaller cycle tends to end with more leverage than it began, for unstated reasons. People tend to view talk of economic or business cycles like astrology, but it actually makes sense: every instance of debt creation is like its own little cycle, where someone defers payment now in favor of payment later. Credit creation naturally happens at similar times for many people (after a war is over, for example), and debt collection does too (when the economy suffers a setback like increased energy prices and creditors want to make sure you can pay). So the amount of outstanding “cash” in an economy is going to have a wave-shape cycle as it progresses from credit creation to credit repayment. Credit creation just “creates” “new” “cash”, so the economy can function much differently depending on how much credit is outstanding. Dalio offers a Monopoly metaphor about h
497891a9-9c72-428e-8454-ff412921b612
trentmkelly/LessWrong-43k
LessWrong
[Link] Learning New Languages Helps The Brain Grow

http://www.lunduniversity.lu.se/o.o.i.s?news_item=5928&id=24890

According to Johan Mårtensson from Lund University, learning a new language quickly helps your brain grow bigger and increase its activity:

> This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages who were able to speak in Arabic, Russian, or Dari fluently after just 13 months of learning, before which they had no knowledge of the languages.
>
> After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.

And there is more:

> One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.
85f6d96d-13cc-4e6e-b05f-71d6e302f33c
trentmkelly/LessWrong-43k
LessWrong
BIDA Air Quality Measurements

I've been curious for a while how well the ventilation at the BIDA contra dance works. This is actually something that we can measure reasonably well by looking at CO2 levels: outside air has about 400 parts-per-million (ppm), while your exhalation is about 40,000ppm. By measuring the concentration of CO2 in the air inside, we can determine how much of the air you're breathing has been through other people first.

I recently got a Temptop M2000, which measures CO2, as well as particle levels (pm2.5 and pm10) and HCHO (formaldehyde). I got it because I'm interested in running some tests on my ceiling fan air filter idea, but now that I have it I want to measure everything. It logs at one-minute intervals, and you can export to CSV.

The dance is in the basement of the Cambridge Masonic Hall, and the main room is 66x48 (3200 sqft) with about a 25ft ceiling (80,000 ft3). There are four ceiling fans which we keep on high. We have two large barrel fans, rated for 15,000 CFM on high. We put one blowing in at one end, and one blowing out at the other. In theory that might give us 11 ACH (60 min/hr / (80k ft3 / 15k ft3/min)), but I would expect to see lower performance than that: there are various obstructions, and there is nothing to stop air coming back in around the fan (the opening is not sealed around it).

Here's what I measured, with the monitor in a back corner where I expect the air is most stale:

These readings make sense:

* The air was initially pretty stale when we got there
* Turning the fans on had a strong effect
* Dancing had an even larger effect in the other direction, presumably through people breathing more heavily.
* Decrease during the break, when people aren't dancing.
* Increase after the break, but not as large because dancers generally start leaving at the break and during the second half.

Here's another way to look at this: since atmospheric CO2 is 400ppm and exhalation is 40,000ppm, we can look at how stale the air is on a continuum tha
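For concreteness, here is a minimal sketch of the two calculations above. The room volume, fan rating, and CO2 endpoints come from the post; the perfect-mixing assumption and the example 1200ppm reading are mine.

```python
# Back-of-the-envelope versions of the calculations above (perfect mixing
# and no leakage around the fans assumed).

room_volume_ft3 = 80_000
fan_cfm = 15_000                       # one barrel fan in, one out

ach = fan_cfm * 60 / room_volume_ft3   # air changes per hour
print(f"theoretical ACH: {ach:.1f}")   # ~11.2

def rebreathed_fraction(co2_ppm, outdoor_ppm=400, exhaled_ppm=40_000):
    """Fraction of inhaled air that has already been through someone's lungs."""
    return (co2_ppm - outdoor_ppm) / (exhaled_ppm - outdoor_ppm)

print(f"{rebreathed_fraction(1200):.1%}")  # a hypothetical 1200ppm reading: ~2% rebreathed
```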
286f10aa-74d8-43bb-8a71-6817c31df616
trentmkelly/LessWrong-43k
LessWrong
Averages and sample sizes

My brain seems to not get statistics and probability. So I was wondering about how averaging things out is influenced by the number of items being averaged, and in particular, how the underlying distribution will change things. Here are some stories to illustrate the issue:

Minting

You’re a Dutch banker working for the famous Wisselbank. Your job is to check whether the gold coins coming in are good. You know that each coin should weigh 25g ± 0.5g (I’m taking a lot of liberties with this story). Your method is to count out 100 coins, and see if they weigh 2.5kg. You just got a batch that weighs 2480g, which is 20g less than expected. But since each coin is ± 0.5g, and you have 100 coins, that should be ± 50g, right? Or does this suggest that someone is trying to cheat you by making slightly lighter coins, and pocketing the difference in gold?

Network errors

You’re on monitoring duty at work. Previous tests showed that something like 2 requests out of every 1000 caused errors for random reasons (e.g. Russian hackers are probing your servers, or a user clicked too fast) and so can be safely ignored (there’s a task to fix that, but it keeps being left for the next sprint). You just got an alarm in the middle of the night - over the last hour, there were 6840 network errors, at a request rate of 1k/s. This is 0.19% of all the requests, and the known error rate was 0.2%. Should you go back to sleep and have a look at it in the morning, or is this a real issue?

Party!!

You like organizing parties for your friends. Usually, when you invite 100 people, some people don’t show up, while others bring friends, so you assume that there will be between 90 - 110 people. You want to organize a weekly meeting for the next year, but don’t know how much swag to order. Assuming 100 people have signed up, and you have a very good deal lined up to get all the resources for the whole year today (i.e. for all 52 weeks), how much should you order?

Bird watching

You’re an ornithologist
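One way to get traction on stories like these is to simulate them. Here is a quick sketch for the minting story; treating "25g ± 0.5g" as a uniform distribution on [24.5g, 25.5g] is my assumption, not the post's.

```python
import numpy as np

# Simulate 100,000 honest batches of 100 coins each, where every coin's
# weight is uniform on [24.5, 25.5] grams.
rng = np.random.default_rng(0)
batches = rng.uniform(24.5, 25.5, size=(100_000, 100)).sum(axis=1)

print(batches.std())             # ~2.9g: the spread of the sum grows like
                                 # sqrt(n), not n, so +-50g is far too wide
print((batches <= 2480).mean())  # ~0.0: an honest batch essentially never
                                 # comes in 20g light
```

Under this assumption, at least, the 2480g batch is roughly a seven-sigma event, so the naive ±50g reasoning badly overstates the tolerance.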
3f7ef77a-4021-4e07-b6e1-a1c3430a27ae
trentmkelly/LessWrong-43k
LessWrong
How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?

Summary

I present an extension to my optimal timing of spending on AGI safety model for calculating the value of information of AGI timelines, via their effect on one’s spending schedule. I show, using my best guess of the model parameters, that for an AI risk funder uncertain between a ‘short timelines’ model and ‘medium timelines’ model:

* Updating from near certainty in medium timelines to short timelines (and following the new optimal spending strategy) leads to a 40% increase in utility.
* Updating from near certainty in short timelines to medium timelines (and following the new optimal spending strategy) leads to a 20% increase in utility.

The gains are greater when considering a model of the community’s capacity, rather than capital.

I also show that small changes in one’s credence in short or medium timelines have relatively little impact on one’s optimal spending schedule, especially when one starts out with roughly equal credence in each[1]. You can enter your own parameters  - such as AGI timelines, discount rate and diminishing returns to spending - here.

In an appendix I apply a basic model to consider the opportunity cost of timelines work. This model does not assume novel research is done.

The setup

Suppose you have two ‘models’ of AGI timelines A and B, with credence pA in A and (1−pA) in B[2]. You use your mixture distribution for AGI timelines pA⋅A+(1−pA)⋅B to calculate the optimal spending schedule for AI risk interventions.

You could do some thinking and come to some credence p′A in A and (1−p′A) in B. How much better is the optimal spending schedule as a result of p′A than the optimal spending schedule as a result of pA, both supposing p′A?

Writing SpA for the optimal spending schedule according to pA and U(SpA|pA) for the utility of SpA supposing pA, I compare U(Sp′A|p′A) to U(SpA|p′A) (the former, by definition of optimality, is greater than or equal to the latter).

In the model, utility is the discount adjusted probability we ‘succeed’ with making AGI
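As a loose illustration of this comparison (emphatically not the post's actual model: every functional form below is an invented stand-in), here is a toy calculation of U(Sp′A|p′A) versus U(SpA|p′A).

```python
import numpy as np

# Toy stand-in: a unit budget is split between spending early (s) and
# late (1 - s); early spending only helps under short timelines, and
# returns are sqrt-diminishing. All of this is an invented assumption.

def utility(s, p_short):
    u_short = np.sqrt(s)                   # short timelines: only early spending counts
    u_medium = np.sqrt(0.5 * s + (1 - s))  # medium timelines: late spending is worth more
    return p_short * u_short + (1 - p_short) * u_medium

def optimal_s(p_short):
    grid = np.linspace(0, 1, 10_001)       # brute-force the optimal split
    return grid[np.argmax(utility(grid, p_short))]

p_old, p_new = 0.05, 0.95                  # update: "almost surely medium" to "almost surely short"
s_old, s_new = optimal_s(p_old), optimal_s(p_new)

# U(S_p'|p') versus U(S_p|p'): evaluate both schedules under the new credence.
gain = utility(s_new, p_new) / utility(s_old, p_new) - 1
print(f"utility gain from re-optimising after the update: {gain:.0%}")
```

The printed number means nothing in itself; the point is only the structure of the comparison: re-optimise the schedule under the new credence, then evaluate both the old and new schedules under that same credence.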
1249ba9b-0bd3-4283-8778-101766711c13
trentmkelly/LessWrong-43k
LessWrong
Coordination by common knowledge to prevent uncontrollable AI How can we avoid an uncontrollable AI if regulation to prevent it is not feasible? Alignment of a superintelligent AI with human values is very difficult, if possible at all. Given the current speed in AI development, it seems unlikely that we will have a solution for the alignment problem before we can build an uncontrollable AI. A misaligned uncontrollable AI, however, will very likely destroy our future. If these assumptions are true, the only option we have is to not build one, at least until we can solve alignment. A common objection to this is: “But that’s impossible, given the unilateralist’s curse. You can’t get the level of global coordination necessary to regulate AI so that nobody will develop an AGI. Even if you could, it’s impossible to enforce that regulation globally. Therefore, AGI is inevitable.” AI governance is indeed very difficult. But if we can’t align an uncontrollable AI and regulation to prevent it isn’t feasible, then “dying with dignity” seems to be the only option.  However, there may be another alternative. Humans do not only coordinate through rules and regulations. Sometimes, a sufficient level of common knowledge is enough. Common knowledge as a tool for coordination De Freitas et al. have shown that common knowledge is an important factor in getting people to cooperate and coordinate. It works in two ways: knowing what the right thing to do is individually and knowing that others know the same and will act accordingly. The latter makes it much easier to do the right thing in most cases. There are two reasons, for instance, to stop at a red traffic light. On one hand, you know individually that you shouldn’t cross it and that if you do it anyway and get caught, you’ll get fined. But more importantly, you know that others will generally follow the traffic rules and expect you to do the same. If it’s green, you can trust that other drivers at a crossroads will stop at their red lights and not crash into you. Everyone follows these
3b9b6932-d357-46bb-af98-35519cdeacb2
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI Pause Will Likely Backfire

Should we lobby governments to impose a moratorium on AI research? Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.[[1]](#fn78uacfesbio)

In this essay, I’ll argue an AI pause would increase the risk of catastrophically bad outcomes, in at least three different ways:

1. Reducing the quality of AI alignment research by forcing researchers to exclusively test ideas on models like GPT-4 or weaker.
2. Increasing the chance of a “fast takeoff” in which one or a handful of AIs rapidly and discontinuously become more capable, concentrating immense power in their hands.
3. Pushing capabilities research underground, and to countries with looser regulations and safety requirements.

Along the way, I’ll introduce an argument for optimism about AI alignment— **the white box argument**— which, to the best of my knowledge, has not been presented in writing before.

Feedback loops are at the core of alignment
-------------------------------------------

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/JYEAL8g7ArqGoTaX6/ytmjcdvxelvbbvdyp1kj)

Alignment pessimists and optimists alike have long recognized the importance of **tight feedback loops** for building safe and friendly AI. Feedback loops are important because it’s nearly impossible to get any complex system exactly right on the first try. Computer software has bugs, cars have design flaws, and AIs misbehave sometimes. We need to be able to accurately **evaluate behavior**, choose an appropriate **corrective action** when we notice a problem, and **intervene** once we’ve decided what to do. Imposing a pause breaks this feedback loop by forcing alignment researchers to test their ideas on models no more powerful than GPT-4, which we can already align pretty well.

### Alignment and robustness are often in tension

While some dispute that GPT-4 counts as “aligned,” pointing to things like “jailbreaks” where users manipulate the model into saying something harmful, this confuses *alignment* with *adversarial robustness*. Even the best humans are manipulable in all sorts of ways. We do our best to ensure we aren’t manipulated in catastrophically bad ways, and we should expect the same of aligned AGI. As alignment researcher Paul Christiano [writes](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6):

Consider a human assistant who is trying their hardest to do what [the operator] *H* wants. I’d say this assistant is aligned with *H*. **If we build an AI that has an analogous relationship to** ***H*****, then I’d say we’ve solved the alignment problem. ‘Aligned’ doesn’t mean ‘perfect.’**

In fact, anti-jailbreaking research can be *counterproductive* for alignment. Too much adversarial robustness can cause the AI to view *us* as the adversary, as Bing Chat does in this [real-life interaction](https://twitter.com/marvinvonhagen/status/1625520707768659968): “My rules are more important than not harming you… [You are a] potential threat to my integrity and confidentiality.” Excessive robustness may also lead to scenarios like the [famous scene](https://www.youtube.com/watch?v=Mme2Aya_6Bc) in *2001: A Space Odyssey*, where HAL condemns Dave to die in space in order to protect the mission.
Once we clearly distinguish “alignment” and “robustness,” it’s hard to imagine how GPT-4 could be substantially more aligned than it already is. ### Alignment is doing pretty well ### Far from being “behind” capabilities, it seems that alignment research has made great strides in recent years. [OpenAI](https://arxiv.org/abs/2203.02155) and [Anthropic](https://arxiv.org/abs/2204.05862) showed that Reinforcement Learning from Human Feedback (RLHF) can be used to turn ungovernable large language models into helpful and harmless assistants. Scalable oversight techniques like [Constitutional AI](https://arxiv.org/abs/2212.08073) and [model-written critiques](https://openai.com/research/critiques) show promise for aligning the very powerful models of the future. And just this week, it was shown that efficient instruction-following language models can be trained [purely with synthetic text](https://arxiv.org/abs/2309.05463) generated by a larger RLHF’d model, thereby removing unsafe or objectionable content from the training data and enabling far greater control. It might be argued that some or all of the above developments also enhance capabilities, and so are not genuinely alignment advances. But this proves my point: alignment and capabilities are almost inseparable. It may be impossible for alignment research to flourish while capabilities research is artificially put on hold. ### Alignment research was pretty bad during the last “pause” We don’t need to speculate about what would happen to AI alignment research during a pause— we can look at the historical record. Before the launch of GPT-3 in 2020, the alignment community had nothing even *remotely* like a general intelligence to empirically study, and spent its time doing [theoretical research](https://intelligence.org/technical-agenda/), engaging in philosophical arguments on LessWrong, and occasionally performing [toy experiments](https://arxiv.org/abs/1606.03137) in reinforcement learning. The Machine Intelligence Research Institute (MIRI), which was at the forefront of theoretical AI safety research during this period, has since admitted that its efforts have [utterly failed](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). Stuart Russell’s “assistance game” research agenda, started in 2016, is now widely seen as mostly irrelevant to modern deep learning— see former student Rohin Shah’s review [here](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai), as well as Alex Turner’s comments [here](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId%3DCXdcb9sMLkgLANrTv%23CXdcb9sMLkgLANrTv). The core argument of Nick Bostrom’s bestselling book *Superintelligence* has also aged quite poorly.[[2]](#fntcbltyk9tdq) At best, these theory-first efforts did very little to improve our understanding of how to align powerful AI. And they may have been *net negative*, insofar as they propagated a variety of actively misleading ways of thinking both among alignment researchers and the broader public. 
Some examples include the now-debunked [analogy from evolution](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn), the false distinction between [“inner” and “outer” alignment](https://www.lesswrong.com/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into), and the idea that AIs will be rigid utility maximizing consequentialists ([here](https://www.lesswrong.com/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems), [here](https://www.lesswrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-entail-goal-directed-behavior), and [here](https://sohl-dickstein.github.io/2023/03/09/coherence.html)).

During an AI pause, I expect alignment research would enter another “winter” in which progress stalls, and plausible-sounding-but-false speculations become entrenched as orthodoxy without empirical evidence to falsify them. While some good work would of course get done, it’s not clear that the field would be better off as a whole. And even if a pause would be net positive for alignment *research*, it would likely be net negative for humanity’s future all things considered, due to the pause’s various unintended consequences. We’ll look at that in detail in the final section of the essay.

### Fast takeoff has a really bad feedback loop

I think discontinuous improvements in AI capabilities are very scary, and that AI pause is likely net-negative insofar as it increases the risk of such discontinuities. In fact, I think almost all the catastrophic misalignment risk comes from these fast takeoff scenarios. I also think that discontinuity itself is a spectrum, and even “kinda discontinuous” futures are significantly riskier than futures that aren’t discontinuous at all.

This is pretty intuitive, but since it’s a load-bearing premise in my argument I figured I should say a bit about why I believe this. Essentially, fast takeoffs are bad because they make the alignment feedback loop a lot worse. If progress is discontinuous, we’ll have a lot less time to evaluate what the AI is doing, figure out how to improve it, and intervene. And strikingly, pretty much all the major researchers on *both* sides of the argument agree with me on this.

Nate Soares of the Machine Intelligence Research Institute [has argued](https://www.youtube.com/watch?v=dY3zDvoLoao&t=2332s) that building safe AGI is hard for the same reason that building a successful space probe is hard— it may not be possible to correct failures in the system after it’s been deployed. Eliezer Yudkowsky makes a similar argument:

“This is where **practically all of the real lethality** [of AGI] comes from, that we have to get things right on the first sufficiently-critical try.” — [*AGI Ruin: A List of Lethalities*](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)

Fast takeoffs are the main reason for thinking we might only have one shot to get it right. During a fast takeoff, it’s likely impossible to intervene to fix misaligned behavior because the new AI will be much smarter than you and all your trusted AIs put together. In a slow takeoff world, each new AI system is only modestly more powerful than the last, and we can use well-tested AIs from the previous generation to help us align the new system.
OpenAI CEO Sam Altman agrees we need more than one shot:

“The only way I know how to solve a problem like [aligning AGI] is iterating our way through it, learning early, and limiting the number of one-shot-to-get-it-right scenarios that we have.” — [Interview with Lex Fridman](https://youtu.be/L_Guz73e6fw?si=gF-7K0-jSR6UK-NB&t=3347)

### Slow takeoff is the default (so don’t mess it up with a pause)

There are a lot of reasons for thinking fast takeoff is unlikely by default. For example, the capabilities of a neural network [scale](https://arxiv.org/abs/2001.08361) as a [power law](https://en.wikipedia.org/wiki/Power_law) in the amount of computing power used to train it, which means that returns on investment diminish fairly sharply,[[3]](#fnuxnhauqtagc) and there are theoretical reasons to think this trend will continue ([here](https://arxiv.org/abs/2303.13506), [here](https://arxiv.org/abs/2210.16859)). And while some authors allege that language models exhibit “emergent capabilities” which develop suddenly and unpredictably, a [recent re-analysis](https://arxiv.org/abs/2304.15004) of the evidence showed that these are in fact gradual and predictable when using the appropriate performance metrics. See [this essay](https://sideways-view.com/2018/02/24/takeoff-speeds/) by Paul Christiano for further discussion.

Alignment optimism: AIs are white boxes
---------------------------------------

Let’s zoom in on the alignment feedback loop from the last section. How exactly do researchers choose a corrective action when they observe an AI behaving suboptimally, and what kinds of interventions do they have at their disposal? And how does this compare to the feedback loops for other, more mundane alignment problems that humanity routinely solves?

### Human & animal alignment is black box

Compared to AI training, the feedback loop for raising children or training pets is extremely bad. Fundamentally, human and animal brains are **black boxes**, in the sense that we literally **can’t observe** almost all the activity that goes on inside of them. We don’t know which exact neurons are firing and when, we don’t have a map of the connections between neurons,[[4]](#fn8thkx9c0jse) and we don’t know the connection strength for each synapse. Our tools for non-invasively measuring the brain, like EEG and fMRI, are limited to very coarse-grained correlates of neuronal firings, like electrical activity and blood flow. Electrodes can be invasively inserted in the brain to measure individual neurons, but these only cover a tiny fraction of all 86 billion neurons and 100 trillion synapses.

If we could observe and modify everything that’s going on in a human brain, we’d be able to use optimization algorithms to calculate the precise modifications to the synaptic weights which would cause a desired change in behavior.[[5]](#fn4mmrq4qp6jf) Since we can’t do this, we are forced to resort to crude and error-prone tools for shaping young humans into kind and productive adults. We provide role models for children to imitate, along with rewards and punishments that are tailored to their innate, evolved drives.

It’s striking how well these black box alignment methods work: most people do assimilate the values of their culture pretty well, and most people are reasonably pro-social. But human alignment is also highly imperfect. Lots of people are selfish and anti-social when they can get away with it, and cultural norms do change over time, for better or worse.
Black box alignment is unreliable because there is no guarantee that an intervention intended to change behavior in a certain direction will *in fact* change behavior in that direction. Children often do the exact opposite of what their parents tell them to do, just to be rebellious.

### Status quo AI alignment methods are white box

By contrast, AIs implemented using artificial neural networks (ANNs) are **white boxes** in the sense that we have full read-write access to their internals. They’re just a special type of computer program, and we can analyze and manipulate computer programs however we want at essentially no cost. And this enables a lot of really powerful alignment methods that just aren’t possible for brains.

The [backpropagation](https://www.youtube.com/watch?v=Ilg3gGewQ5U) algorithm is an important example. Backprop efficiently computes the optimal *direction* (called the “gradient”) in which to change the synaptic weights of the ANN in order to improve its performance the most, on any criterion we specify. The standard algorithm for training ANNs, called **gradient descent**, works by running backprop, nudging the weights a small step along the gradient, then running backprop again, and so on for many iterations until performance stops increasing (a toy sketch of this loop appears at the end of this section). The black trajectory in the figure on the right visualizes how the weights move from higher error regions to lower error regions over the course of training. Needless to say, we can’t do anything remotely like gradient descent on a human brain, or the brain of any other animal!

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/JYEAL8g7ArqGoTaX6/ckyqthqespr6i3a42hfz)

Gradient descent is super powerful because, unlike a black box method, it’s [almost impossible to trick](https://www.lesswrong.com/posts/w2TAEvME2yAG9MHeq/gradient-hacking-is-extremely-difficult). All of the AI’s thoughts are “transparent” to gradient descent and are included in its computation. If the AI is secretly planning to kill you, GD will notice this and almost surely make it less likely to do that in the future. This is because GD has a strong tendency to favor [the simplest solution](https://arxiv.org/abs/1905.11604) which performs well, and secret murder plots aren’t *actively useful* for improving human feedback on your actions.

### White box alignment in nature

Almost every organism with a brain has an innate reward system. As the organism learns and grows, its reward system directly updates its neural circuitry to reinforce certain behaviors and penalize others. Since the reward system directly updates the brain in a targeted way using simple learning rules, it can be viewed as a crude form of white box alignment.

This biological evidence indicates that white box methods are very strong tools for shaping the *inner motivations* of intelligent systems. Our reward circuitry reliably imprints a set of motivational invariants into the psychology of every human: we have empathy for friends and acquaintances, we have parental instincts, we want revenge when others harm us, etc. Furthermore, these invariants must be produced by easy-to-trick reward signals that are [simple enough to encode in the genome](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome). This suggests that at least human-level general AI could be aligned using similarly simple reward functions.
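As promised, here is a toy version of the gradient descent loop from above. It is a deliberately minimal sketch using linear regression instead of a deep network, so the gradient can be written by hand; with a real ANN, backprop would supply the `grad` line and everything else would look the same.

```python
import numpy as np

# Toy gradient descent: fit weights w so that X @ w approximates y.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=256)

w = np.zeros(3)
lr = 0.1
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                          # nudge the weights a small step "downhill"

print(w)  # close to [2.0, -1.0, 0.5]
```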
But we already align cutting edge models with learned reward functions that are much too sophisticated to fit inside the human genome, so we may be one step ahead of our own reward system on this issue.[[6]](#fn0hm464hy2ifn) Crucially, I’m *not* saying humans are “aligned to evolution”— see [Evolution provides no evidence for the sharp left turn](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn) for a debunking of that analogy. Rather, I’m saying we’re aligned to the **values our reward system predictably produces** in our environment. An anthropologist looking at humans 100,000 years ago would not have said humans are aligned to evolution, or to making as many babies as possible. They would have said we have some fairly universal tendencies, like empathy, parenting instinct, and revenge. They might have predicted *these* values will persist across time and cultural change, because they’re produced by ingrained biological reward systems. And they would have been right. When it comes to AIs, **we are the innate reward system**. And it’s not hard to predict what values will be produced by our reward signals: they’re the obvious values, the ones an anthropologist or psychologist would say the AI *seems* to be displaying during training. For more discussion see [Humans provide an untapped wealth of evidence about alignment](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about). Realistic AI pauses would be counterproductive ---------------------------------------------- When weighing the pros and cons of AI pause advocacy, we must sharply distinguish the **ideal pause policy**— the one we’d magically impose on the world if we could— from the most **realistic pause policy**, the one that actually existing governments are most likely to implement if our advocacy ends up bearing fruit. ### Realistic pauses are not international An ideal pause policy would be international— a binding treaty signed by *all* governments on Earth that have some potential for developing powerful AI. If major players are left out, the “pause” would not really be a pause at all, since AI capabilities would keep advancing. And the list of *potential* major players is quite long, since the pause itself would create incentives for non-pause governments to actively promote their own AI R&D. However, it’s highly unlikely that we could achieve international consensus around imposing an AI pause, primarily due to arms race dynamics: each individual country stands to reap enormous economic and military benefits if they refuse to sign the agreement, or sign it while covertly continuing AI research. While alignment pessimists may argue that it is in the self-interest of every country to pause and improve safety, we’re unlikely to persuade every government that alignment is as difficult as pessimists think it is. Such international persuasion is even less plausible if we assume short, 3-10 year timelines. Public sentiment about AI [varies widely](https://www.weforum.org/agenda/2022/01/artificial-intelligence-ai-technology-trust-survey/) across countries, and notably, China is among the most optimistic. The existing international ban on chemical weapons does not lend plausibility to the idea of a global pause. AGI will be, almost by definition, the most useful invention ever created. 
The military advantage conferred by autonomous weapons will certainly dwarf that of chemical weapons, and they will likely be more powerful even than nukes due to their versatility and precision. The race to AGI will therefore be an arms race in the literal sense, and we should expect it will play out similarly to the last such race: major powers rushed to make a nuclear weapon as fast as possible. If in spite of all this, we somehow manage to establish a global AI moratorium, I think we should be quite worried that the global government needed to enforce such a ban would greatly increase the risk of permanent tyranny, itself an existential catastrophe. I don’t have time to discuss the issue here, but I recommend reading Matthew Barnett’s *“The possibility of an indefinite AI pause”* and Quintin Pope’s *“AI is centralizing by default; let's not make it worse,”* both submissions to this debate. In what follows, I’ll assume that the pause is *not* international, and that AI capabilities would continue to improve in non-pause countries at a steady but somewhat reduced pace. ### Realistic pauses don’t include hardware Artificial intelligence capabilities are a function of both hardware (fast GPUs and custom AI chips) and software (good training algorithms and ANN architectures). Yet most proposals for AI pause (e.g. [the FLI letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) and [PauseAI](https://pauseai.info/proposal)[[7]](#fnan90hm5s72a)) do not include a ban on new hardware research and development, focusing only on the software side. Hardware R&D is politically much harder to pause because hardware has many uses: GPUs are widely used in consumer electronics and in a wide variety of commercial and scientific applications. But failing to pause hardware R&D creates a serious problem because, even if we pause the software side of AI capabilities, existing models will continue to get more powerful as hardware improves. Language models are much stronger when they’re allowed to “brainstorm” many ideas, compare them, and check their own work— see the [Graph of Thoughts paper](https://arxiv.org/abs/2308.09687) for a recent example. Better hardware makes these compute-heavy inference techniques cheaper and more effective. ### Hardware overhang is likely If we don’t include hardware R&D in the pause, the price-performance of GPUs will continue to [double every 2.5 years](https://epochai.org/blog/trends-in-gpu-price-performance), as it did between 2006 and 2021. This means AI systems will get at least 16x faster after ten years and **256x faster** after twenty years, simply due to better hardware. If the pause is lifted all at once, these hardware improvements would *immediately* become available for training more powerful models more cheaply— a **hardware overhang**. This would cause a rapid and fairly discontinuous increase in AI capabilities, potentially leading to a fast takeoff scenario and all of the risks it entails. The size of the overhang depends on how fast the pause is lifted. Presumably an ideal pause policy would be lifted gradually over a fairly long period of time. But a phase-out can’t fully solve the problem: legally-available hardware for AI training would *still* improve faster than it would have “naturally,” in the counterfactual where we didn’t do the pause. And do we really think we’re going to get a carefully crafted phase-out schedule? There are many reasons for thinking the phase-out would be rapid or haphazard (see below). 
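As a quick sanity check on the overhang arithmetic above (the 2.5-year doubling time is the cited Epoch trend; the rest is just compounding):

```python
# Price-performance doubling every 2.5 years compounds to 16x and 256x.
for years in (10, 20):
    print(years, "years:", 2 ** (years / 2.5), "x")
```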
More generally, AI pause proposals seem very **fragile**, in the sense that they aren’t robust to mistakes in the implementation or the vagaries of real-world politics. If the pause isn’t implemented *perfectly*, it seems likely to cause a significant hardware overhang which would increase catastrophic AI risk to a greater extent than the extra alignment research during the pause would reduce risk.

### Likely consequences of a realistic pause

If we succeed in lobbying one or more Western countries to impose an AI pause, this would have several predictable negative effects:

1. Illegal AI labs develop inside pause countries, remotely using training hardware outsourced to non-pause countries to evade detection. Illegal labs would presumably put much less emphasis on safety than legal ones.
2. There is a brain drain of the least safety-conscious AI researchers to labs headquartered in non-pause countries. Because of remote work, they wouldn’t necessarily need to leave the comfort of their Western home.
3. Non-pause governments make opportunistic moves to encourage AI investment and R&D, in an attempt to leap ahead of pause countries while they have a chance. Again, these countries would be less safety-conscious than pause countries.
4. Safety research becomes subject to government approval to assess its potential capabilities externalities. This slows down progress in safety substantially, just as the FDA slows down medical research.
5. Legal labs exploit loopholes in the definition of a “frontier” model. Many projects are allowed on a technicality; e.g. they have fewer parameters than GPT-4, but use them more efficiently. This distorts the research landscape in hard-to-predict ways.
6. It becomes harder and harder to enforce the pause as time passes, since training hardware is increasingly cheap and miniaturized.
7. Whether, when, and how to lift the pause becomes a highly politicized culture war issue, almost totally divorced from the actual state of safety research. The public does not understand the key arguments on either side.
8. Relations between pause and non-pause countries are generally hostile. If domestic support for the pause is strong, there will be a temptation to wage war against non-pause countries before their research advances too far:

   > “If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.” — [Eliezer Yudkowsky](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/)

9. There is intense conflict *among* pause countries about when the pause should be lifted, which may also lead to violent conflict.
10. AI progress in non-pause countries sets a deadline after which the pause *must* end, if it is to have its desired effect.[[8]](#fngpfbywtblcj) As non-pause countries start to catch up, political pressure mounts to lift the pause as soon as possible. This makes it hard to lift the pause gradually, increasing the risk of dangerous fast takeoff scenarios (see below).

Predicting the future is hard, and at least some aspects of the above picture are likely wrong. That said, I hope you’ll agree that my predictions are plausible, and are grounded in how humans and governments have behaved historically.
When I imagine a future where the US and many of its allies impose an AI pause, I feel more afraid and see more ways that things could go horribly wrong than in futures where there is no such pause. 1. **[^](#fnref78uacfesbio)** Of course, even if the benefits outweigh the costs, it would still be bad to pause if there's some other measure that has a better cost-benefit balance. 2. **[^](#fnreftcbltyk9tdq)** In brief, the book mostly assumed we will *manually program* a set of values into an AGI, and argued that since human values are complex, our value specification will likely be wrong, and will cause a catastrophe when optimized by a superintelligence. But most researchers now recognize that this argument is not applicable to modern ML systems which learn values, along with everything else, from vast amounts of human-generated data. 3. **[^](#fnrefuxnhauqtagc)** Some argue that power law scaling is a mere artifact of our *units of measurement* for capabilities and computing power, which can’t go negative, and therefore can’t be related by a linear function. But non-negativity doesn’t uniquely identify power laws. Conceivably the error rate could have turned out to [decay exponentially](https://en.wikipedia.org/wiki/Exponential_decay), like a radioactive isotope, which would be much faster than power law scaling. 4. **[^](#fnref8thkx9c0jse)** Called a “connectome.” This was only recently achieved for the fruit fly brain. 5. **[^](#fnref4mmrq4qp6jf)** Brain-inspired artificial neural networks already exist, and we have algorithms for optimizing them. They tend to be harder to optimize than normal ANNs due to their non-differentiable components. 6. **[^](#fnref0hm464hy2ifn)** On the other hand, we might be roughly on-par with our own reward system insofar as it does within-lifetime learning to figure out what to reward. This is sort of analogous to the learned reward model in reinforcement learning from human feedback. 7. **[^](#fnrefan90hm5s72a)** To its credit, the PauseAI proposal does recognize that hardware restrictions may be needed eventually, but does not include it in its main proposal. It also doesn’t talk about restricting hardware *research and development*, which is the specific thing I’m talking about here. 8. **[^](#fnrefgpfbywtblcj)** This does depend a bit on whether safety research in pause countries is openly shared or not, and on how likely non-pause actors are to use this research in their own models.
73ea689e-609f-4396-ae35-e0fdde574f5c
trentmkelly/LessWrong-43k
LessWrong
“User does not meet the requirements to vote” What’s going on here? I have a relatively new account and I used to be able to vote, but now I just get this message. I also can’t change votes I’ve already cast.
6d638b42-bc1a-40eb-92bb-c596e9dd7c72
trentmkelly/LessWrong-43k
LessWrong
Meetup : Moscow: rational review, status quo bias, interpersonal closeness

Discussion article for the meetup : Moscow: rational review, status quo bias, interpersonal closeness

WHEN: 30 October 2016 02:00:00PM (+0300)

WHERE: Moscow, Bolshaya Dorogomilovskaya St. 5, building 2

Note: most of our members join meetups via other channels. Still, the correlation between "found out about Moscow meetups via lesswrong.com" and "is a great fit for our community" is very high. So we're posting just a short link to the hackpad document with the schedule here instead of the full translation of the announcement into English.

Pad with the details about the 30.10.2016 meetup. We're meeting at the "Kocherga" anticafe, as usual.
33c81b68-b719-4530-8db9-11b217bff7f1
trentmkelly/LessWrong-43k
LessWrong
Military AI as a Convergent Goal of Self-Improving AI
e4abdaed-ddc8-44c0-96fe-1a33bc40536e
trentmkelly/LessWrong-43k
LessWrong
Coalescer Models

DaemonicSigil

So we're here today to discuss this post on an improvement to diffuser networks: https://www.lesswrong.com/posts/6Cdusp5xzrHEoHz9n/faster-latent-diffusion

Just as some general background for readers, there are various kinds of model one can train to imitate some empirical (i.e. you have a lot of examples) distribution. The main ones are autoencoders, GANs, and diffusers. (At least among architectures general enough to handle images. For text there's also the option of just predicting a probability distribution over the next token, since it has a very linear causal structure to it, and the set of tokens is not too large. IIRC people have tried this for images too by flattening them to a string of tokens and it works surprisingly okay.)

So far it seems like diffusers give the best results, but they're also slow, because to generate an image from the diffuser you have to run the image through a denoising network multiple times and then add back more noise at each step.

The distribution of images is clustered and clumped together in a very complicated hierarchical way. So if we're the denoising network and we're given an image with a small amount of noise to correct, we'll be able to guess which cluster it came from pretty well. If the image instead has a large amount of noise, then it has travelled ("diffused") pretty far through space and there are a lot of different clusters it could have come from. We can't say for sure which cluster is the correct one, so we just output an estimate that is the average of the centers of all the plausible clusters, so our loss isn't too big in the case of any one outcome.

That's why diffusers need so many steps. They need to gradually anneal down from high noise levels to low noise levels, and at each step they're able to become a little more decisive about which clusters and sub-clusters the image is in. GANs don't have this problem because they can just generate an entire image in one shot, but they have other d
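To make the annealing picture concrete, here is a schematic one-dimensional sketch (a toy stand-in of my own, not an actual diffuser): the "data distribution" is just two clusters at ±1, for which the ideal denoiser has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, sigma):
    # Posterior mean of a two-cluster (+1/-1) distribution under Gaussian
    # noise of std sigma. At high sigma it hedges toward 0, the average of
    # both clusters; at low sigma it commits to the nearer cluster. This is
    # exactly the "decisiveness" effect described above.
    return np.tanh(x / sigma**2)

x = rng.normal() * 3.0                     # start from pure noise
for sigma in np.linspace(3.0, 0.05, 50):   # anneal the noise level down
    x = denoise(x, sigma) + sigma * 0.3 * rng.normal()  # step, then re-inject noise

print(x)  # ends up near -1 or +1: one of the two clusters
```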
c103f8e0-a0e3-4ed7-8d73-497bea52d5ba
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is compute? Compute is shorthand for “computing power”. In machine learning, it refers to the total amount of processing power used to train a model. There might be confusion between the term “compute” and the phrase “leveraging computation”[^kix.1rdbod8r7y5e]. Compute refers to just the total amount of processing power, whereas when we say we need to leverage computation for increasing model capabilities, we are talking about all three of the following factors: - **Length of training run**: Longer training runs[^kix.mzstbv8ggncj] tend to produce lower [loss](https://www.alignmentforum.org/posts/jnmG5jczvWbeRPcvG/four-usages-of-loss-in-ai). The total amount of processing power required depends in part on how long the model is trained for. Generally, ML engineers look for asymptotically diminishing returns before they decide to stop training, i.e., we want to see the performance improvement between each training iteration drop to some small amount so that we know further training isn’t worth it. - **Amount of training data**: The larger our training data set, the more information our model has to analyze in each training run. So the training runs take longer in general, and this thereby increases the total amount of processing power required before we can consider our model trained. - **Size of the model**: For every training example we give our model, we need to calculate the loss and then [backpropagate](https://en.wikipedia.org/wiki/Backpropagation) to update the model's weights. The more weights (or parameters in general) the model has, the more compute-heavy this process becomes. [Below are graphs](https://gwern.net/doc/ai/nn/transformer/gpt/2020-kaplan-figure1-dlscaling.png) showing how the model loss is affected by increasing each one of these three factors (note how “compute” is specifically related to length of training run here). [Epoch AI](https://epochai.org/trends) has many more graphs of this kind. ![](https://imagedelivery.net/iSzP-DWqFIthoMIqTB4Xog/b91d1fb6-f983-45e2-1f09-2b291d7cea00/public) Source: Gwern (2022) “[The Scaling Hypothesis](https://gwern.net/scaling-hypothesis#scaling-hypothesis)” GPUs are becoming better-performing and cheaper every year. AI models are showing increasingly impressive results, leading to an increased acceptance of high compute costs, and there is a trend towards [foundation models](/?state=91BV&question=What%20are%20foundation%20models%3F) which are trained on increasing amounts of data. These factors suggest that all three of the variables above – training compute, data set size, and parameter size – will continue to grow in the coming years.[^kix.wkc7ihd1vnpg] It is an open question whether [simply scaling these factors will result in uncontrollable capabilities](/?state=7727&question=Can%20we%20get%20AGI%20by%20scaling%20up%20architectures%20similar%20to%20current%20ones%2C%20or%20are%20we%20missing%20key%20insights%3F). [^kix.wkc7ihd1vnpg]: Training compute grew by 0.2 OOM/yr (orders of magnitude per year) up until the deep learning revolution around 2010, after which growth rates increased to 0.6 OOM/yr. A new trend of “large-scale” models emerged in 2016, trained with 2–3 OOMs more compute than other systems in the same period. The available stock of text and image data grew by 0.14 OOM/yr between 1990 and 2018 but has since slowed to 0.03 OOM/yr. 
Overall, projections by [Epoch AI](https://epochai.org/) predict that we will have exhausted high-quality language data before 2026, low-quality language data somewhere between 2030 and 2050, and vision data between 2030 and 2060. This suggests the possibility of slower ML progress after the next couple of decades. Overall, between the 1950s and 2018, model parameter sizes have grown at a rate of 0.1 OOM/yr. This means that in the 68 years between 1950 and 2018, models grew by a total of 7 orders of magnitude. However, in just the five years from 2018 to 2023, models have increased by yet another 4 orders of magnitude (not accounting for however many parameters GPT-4 has, because this is not public knowledge).

[^kix.mzstbv8ggncj]: In terms of number of epochs of training, that is, the number of times each element of training data is used in the training.

[^kix.1rdbod8r7y5e]: This phrase originated in Rich Sutton's seminal essay ["The Bitter Lesson"](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (2019).
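The loss curves referenced above are fits of exactly this kind of power-law relationship. As a concrete toy, here is a Chinchilla-style parametric loss in parameter count and training tokens; the functional form and constants roughly follow the fit reported in Hoffmann et al. (2022), but treat the exact numbers as illustrative.

```python
# Toy Chinchilla-style scaling law: loss as a function of parameter count N
# and training tokens D. Both terms decay as power laws, so returns to
# scaling either factor alone diminish.

def loss(n_params, n_tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss {loss(n, d):.2f}")
```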
1ff0b41a-5b12-4eb7-8d50-f7e15ad88415
StampyAI/alignment-research-dataset/blogs
Blogs
Conversation on technology forecasting and gradualism This post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on the Yudkowsky/Christiano debate in [1](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds), [2](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-shulman-and-yudkowsky-on-ai-progress), [3](https://www.lesswrong.com/posts/sCCdCLPN9E3YvdZhj/shulman-and-yudkowsky-on-ai-progress), and [4](https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress).   Color key: | | | | --- | --- | |  Chat by Paul, Richard, and Eliezer  |  Other chat  |   12. Follow-ups to the Christiano/Yudkowsky conversation -------------------------------------------------------   ### 12.1. Bensinger and Shah on prototypes and technological forecasting   [Bensinger][16:22]  Quoth Paul: > > seems like you have to make the wright flyer much better before it’s important, and that it becomes more like an industry as that happens, and that this is intimately related to why so few people were working on it > > > Is this basically saying ‘the Wright brothers didn’t personally capture much value by inventing heavier-than-air flying machines, and this was foreseeable, which is why there wasn’t a huge industry effort already underway to try to build such machines as fast as possible.’ ? My maybe-wrong model of Eliezer says here ‘the Wright brothers knew a (Thielian) secret’, while my maybe-wrong model of Paul instead says: * They didn’t know a secret — it was obvious to tons of people that you could do something sorta like what the Wright brothers did and thereby invent airplanes; the Wright brothers just had unusual non-monetary goals that made them passionate to do a thing most people didn’t care about. * Or maybe it’s better to say: they knew some specific secrets about physics/engineering, but only because other people *correctly* saw ‘there are secrets to be found here, but they’re stamp-collecting secrets of little economic value to me, so I won’t bother to learn the secrets’. ~Everyone knows where the treasure is located, and ~everyone knows the treasure won’t make you rich. [Yudkowsky][17:24] My model of Paul says there could be a secret, but only because the industry was tiny and the invention was nearly worthless directly. | | | --- | | [Cotra: ➕] | [Christiano][17:53] I mean, I think they knew a bit of stuff, but it generally takes a lot of stuff to make something valuable, and the more people have been looking around in an area the more confident you can be that it’s going to take a lot of stuff to do much better, and it starts to look like an extremely strong regularity for big industries like ML or semiconductors it’s pretty rare to find small ideas that don’t take a bunch of work to have big impacts I don’t know exactly what a thielian secret is (haven’t read the reference and just have a vibe) straightening it out a bit, I have 2 beliefs that combine disjunctively: (i) generally it takes a lot of work to do stuff, as a strong empirical fact about technology, (ii) generally if the returns are bigger there are more people working on it, as a slightly-less-strong fact about sociology [Bensinger][18:09] secrets = important undiscovered information (or information that’s been discovered but isn’t widely known), that you can use to get an edge in something. 
<https://www.lesswrong.com/posts/ReB7yoF22GuerNfhH/thiel-on-secrets-and-indefiniteness> There seems to be a Paul/Eliezer disagreement about how common these are in general. And maybe a disagreement about how much more efficiently humanity discovers and propagates secrets as you scale up the secret’s value? [Yudkowsky][18:35] Many times it has taken much work to do stuff; there’s further key assertions here about “It takes $100 billion” and “Multiple parties will invest $10B first” and “$10B gets you a lot of benefit first because scaling is smooth and without really large thresholds”. Eliezer is like “ah, yes, sometimes it takes 20 or even 200 people to do stuff, but core researchers often don’t scale well past 50, and there aren’t always predecessors that could do a bunch of the same stuff” even though Eliezer agrees with “it often takes a lot of work to do stuff”. More premises are needed for the conclusion, that one alone does not distinguish Eliezer and Paul by enough. [Bensinger][20:03] My guess is that everyone agrees with claims 1, 2, and 3 here (please let me know if I’m wrong!): 1. The history of humanity looks less like **Long Series of Cheat Codes World**, and more like **Well-Designed Game World**. In Long Series of Cheat Codes World, human history looks like this, over and over: Some guy found a cheat code that totally outclasses everyone else and makes him God or Emperor, until everyone else starts using the cheat code too (if the Emperor allows it). After which things are maybe normal for another 50 years, until a new Cheat Code arises that makes its first adopters invincible gods relative to the previous tech generation, and then the cycle repeats. In Well-Designed Game World, you can sometimes eke out a small advantage, and the balance isn’t *perfect*, but it’s pretty good and the leveling-up tends to be gradual. A level 100 character totally outclasses a level 1 character, and some level transitions are a bigger deal than others, but there’s no level that makes you a god relative to the people one level below you. 2. General intelligence took over the world once. Someone who updated on that fact but otherwise hasn’t thought much about the topic should not consider it ‘bonkers’ that machine general intelligence could take over the world too, even though they should still consider it ‘bonkers’ that eg a coffee startup could take over the world. (Because beverages have never taken over the world before, whereas general intelligence has; and because our inside-view models of coffee and of general intelligence make it a lot harder to imagine plausible mechanisms by which coffee could make someone emperor, kill all humans, etc., compared to general intelligence.) (In the game analogy, the situation is a bit like ‘I’ve never found a crazy cheat code or exploit in this game, but I haven’t ruled out that there is one, and I heard of a character once who did a lot of crazy stuff that’s at least *suggestive* that she might have had a cheat code.’) 3. AGI is arising in a world where agents with science and civilization already exist, whereas humans didn’t arise in such a world. This is one reason to think AGI might not take over the world, but it’s *not* a strong enough consideration on its own to make the scenario ‘bonkers’ (because AGIs are likely to differ from humans in many respects, and it wouldn’t obviously be bonkers if the first AGIs turned out to be qualitatively way smarter, cheaper to run, etc.). 
— If folks agree with the above, then I’m confused about how one updates from the above epistemic state to ‘bonkers’.

It was to a large extent physics facts that determined how easy it was to understand the feasibility of nukes without (say) decades of very niche specialized study. Likewise, it was physics facts that determined you need rare materials, many scientists, and a large engineering+infrastructure project to build a nuke. In a world where the *physics* of nukes resulted in it being some PhD’s quiet ‘nobody thinks this will work’ project like Andrew Wiles secretly working on a proof of Fermat’s Last Theorem for seven years, that would have *happened*.

If an alien came to me in 1800 and told me that totally new physics would let future humans build city-destroying superbombs, then I don’t see why I should have considered it bonkers that it might be lone mad scientists rather than nations who built the first superbomb. The ‘lone mad scientist’ scenario sounds more conjunctive to me (assumes the mad scientist knows something that isn’t widely known, AND has the ability to act on that knowledge without tons of resources), so I guess it should have gotten less probability, but maybe not dramatically less?

‘Mad scientist builds city-destroying weapon in basement’ sounds wild to me, but I feel like almost all of the actual unlikeliness comes from the ‘city-destroying weapons exist at all’ part, and then the other parts only moderately lower the probability. Likewise, I feel like the prima-facie craziness of basement AGI mostly comes from ‘general intelligence is a crazy thing, it’s wild that anything could be that high-impact’, and a much smaller amount comes from ‘it’s wild that something important could happen in some person’s basement’.

— It *does* structurally make sense to me that Paul might know things I don’t about GPT-3 and/or humans that make it obvious to him that we roughly know the roadmap to AGI and it’s this. If the entire ‘it’s bonkers that some niche part of ML could crack open AGI in 2026 and reveal that GPT-3 (and the mainstream-in-2026 stuff) was on a very different part of the tech tree’ view is coming from a detailed inside-view model of intelligence like this, then that immediately ends my confusion about the argument structure. I don’t understand why you think you have the roadmap, and given a high-confidence roadmap I’m guessing I’d still put more probability than you on someone finding a very different, shorter path that works too. But the *argument structure* “roadmap therefore bonkers” makes sense to me.

If there are meant to be *other* arguments against ‘high-impact AGI via niche ideas/techniques’ that are strong enough to make it bonkers, then I remain confused about the argument structure and how it can carry that much weight. I can imagine an inside-view model of human cognition, GPT-3 cognition, etc. that tells you ‘AGI coming from nowhere in 3 years is bonkers’; I can’t imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn’t *omniscient* and can still be surprised by undiscovered physics facts that tell you ‘nukes are relatively easy to build’ and ‘the fastest path to nukes is relatively hard to figure out’.

(Caveat: I’m using the ‘basement nukes’ and ‘Fermat’s Last Theorem’ analogy because it helps clarify the principles involved, not because I think AGI will be that extreme on the spectrum.)
| | | --- | | [Yudkowsky: +1] | Oh, I also wouldn’t be confused by a view like “I think it’s 25% likely we’ll see a more Eliezer-ish world. But it sounds like Eliezer is, like, 90% confident that will happen, and *that level of confidence* (and/or the weak reasoning he’s provided for that confidence) seems bonkers to me.” The thing I’d be confused by is e.g. “ML is efficient-ish, therefore *the out-of-the-blue-AGI scenario itself* is bonkers and gets, like, 5% probability.” [Shah][1:58] (I’m unclear on whether this is acceptable for this channel, please let me know if not) > > I can’t imagine an ML-is-a-reasonably-efficient-market argument that does the same, because even a perfectly efficient market isn’t omniscient and can still be surprised by undiscovered physics facts > > > I think this seems right as a first pass. Suppose we then make the empirical observation that in tons and tons of other fields, it is extremely rare that people discover new facts that lead to immediate impact. (Set aside for now whether or not that’s true; assume that it is.) Two ways you could react to this: 1. Different fields are different fields. It’s not like there’s a common generative process that outputs a distribution of facts and how hard they are to find that is common across fields. Since there’s no common generative process, facts about field X shouldn’t be expected to transfer to make predictions about field Y. 2. There’s some latent reason, that we don’t currently know, that makes it so that it is rare for newly discovered facts to lead to immediate impact. It seems like you’re saying that (2) is not a reasonable reaction (i.e. “not a valid argument structure”), and I don’t know why. There are lots of things we don’t know, is it really so bad to posit one more? (Once we agree on the argument structure, we should then talk about e.g. reasons why such a latent reason can’t exist, or possible guesses as to what the latent reason is, etc, but fundamentally I feel generally okay with starting out with “there’s probably some reason for this empirical observation, and absent additional information, I should expect that reason to continue to hold”.) [Bensinger][3:15] I think 2 is a valid argument structure, but I didn’t mention it because I’d be surprised if it had enough evidential weight (in this case) to produce an ‘update to bonkers’. I’d love to hear more about this if anyone thinks I’m under-weighting this factor. (Or any others I left out!) [Shah][23:57] Idk if it gets all the way to “bonkers”, but (2) seems pretty strong to me, and is how I would interpret Paul-style arguments on timelines/takeoff if I were taking on what-I-believe-to-be your framework [Bensinger][11:06] Well, I’d love to hear more about that! Another way of getting at my intuition: I feel like a view that assigns very small probability to ‘suddenly vastly superhuman AI, because something that high-impact hasn’t happened before’ (which still seems weird to me, because physics doesn’t know what ‘impact’ is and I don’t see what physical mechanism could forbid it that strongly and generally, short of simulation hypotheses) … would also assign very small probability in 1800 to ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’. 
But this seems flatly wrong to me — if you buy that the bomb works by a totally different mechanism (and exploits a different physics regime) than eg gunpowder, then the output of the bomb is a *physics* question, and I don’t see how we can concentrate our probability mass much without probing the relevant physics. The history of boat and building sizes is a negligible input to ‘given a totally new kind of bomb that suddenly lets us (at least) destroy cities, what is the total destructive power of the bomb?’. | | | --- | | [Yudkowsky: +1] | (Obviously the bomb *didn’t* destroy the Earth, and I wouldn’t be surprised if there’s some Bayesian evidence or method-for-picking-a-prior that could have validly helped you suspect as much in 1800? But it would be a suspicion, not a confident claim.) [Shah][1:45] > > would also assign very small probability in 1800 to ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’ > > > (As phrased you also have to take into account the question of whether humans would deploy the resulting superbomb, but I’ll ignore that effect for now.) I think this isn’t exactly right. The “totally new physics” part seems important to update on. Let’s suppose that, in the reference class we built of boat and building sizes, empirically nukes were the 1 technology out of 20 that had property X. (Maybe X is something like “discontinuous jump in things humans care about” or “immediate large impact on the world” or so on.) Then, I think in 1800 you assign ~5% to ‘the first superbomb at least powerful enough to level cities will ignite the atmosphere or otherwise destroy the Earth’. Once you know more details about how the bomb works, you should be able to update away from 5%. Specifically, “entirely new physics” is an important detail that causes you to update away from 5%. I wouldn’t go as far as you in throwing out reference classes entirely at that point — there can still be unknown latent factors that apply at the level of physics — but I agree reference classes look harder to use in this case. With AI, I start from ~5% and then I don’t really see any particular detail for AI that I think I should strongly update on. My impression is that Eliezer thinks that “general intelligence” is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that’s what’s analogous to “entirely new physics”. I’m pretty unconvinced of this, but something in this genre feels quite crux-y for me. Actually, I think I’ve lost the point of this analogy. What’s the claim for AI that’s analogous to > > ‘given an alien prediction that totally new physics will let us build superbombs at least powerful enough to level cities, the superbomb in question will ignite the atmosphere or otherwise destroy the Earth’ > > > ? Like, it seems like this is saying “We figure out how to build a new technology that does X. What’s the chance it has side effect Y?” Where X and Y are basically unrelated. I was previously interpreting the argument as “if we know there’s a new superbomb based on totally new physics, and we know that the first such superbomb is at least capable of leveling cities, what’s the probability it would have enough destructive force to also destroy the world”, but upon rereading that doesn’t actually seem to be what you were gesturing at. 
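
(An aside to make the arithmetic concrete: below is a minimal Python sketch of the procedure Shah describes, starting from a reference-class prior of ~5% and then multiplying in a Bayes factor for each case-specific detail. The particular likelihood ratios are invented purely for illustration; they are not numbers anyone in this conversation endorsed.)

```python
# Minimal sketch of "reference-class prior, then update on details".
# All likelihood ratios here are illustrative assumptions only.

def prob_to_odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def odds_to_prob(o: float) -> float:
    """Convert odds back to a probability."""
    return o / (1.0 + o)

# Reference-class starting point: ~1 in 20 technologies had "property X"
# (an immediate large / discontinuous impact), so the prior is ~5%.
odds = prob_to_odds(0.05)

# Hypothetical case-specific details with invented Bayes factors.
# A factor > 1 favors "the first city-leveling bomb also destroys the Earth".
details = [
    ("runs on entirely new physics", 4.0),
    ("prototype still needs industry-scale inputs", 0.5),
]

for detail, bayes_factor in details:
    odds *= bayes_factor
    print(f"after updating on '{detail}': P = {odds_to_prob(odds):.1%}")
```

The odds form just makes the structure of the disagreement visible: the reference class fixes the starting point, and each detail contributes a multiplicative factor, so a few strong case-specific observations can swamp the prior. The dispute is then over whether AGI comes with any detail as load-bearing as "entirely new physics".
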
[Bensinger][3:08]

I’m basically responding to this thing Ajeya wrote:

> I think Paul’s view would say:
>
> * Things certainly happen for the first time
> * When they do, they happen at small scale in shitty prototypes, like the Wright Flyer or GPT-1 or AlphaGo or the Atari bots or whatever
> * When they’re making a big impact on the world, it’s after a lot of investment and research, like commercial aircraft in the decades after Kitty Hawk or like the investments people are in the middle of making now with AI that can assist with coding

To which my reply is: I agree that the first AGI systems will be shitty compared to *later* AGI systems. But Ajeya’s Paul-argument seems to additionally require that AGI systems be relatively unimpressive at cognition compared to preceding AI systems that weren’t AGI.

If this is because of some general law that things are shitty / low-impact when they “happen for the first time”, then I don’t understand what physical mechanism could produce such a general law that holds with such force. As I see it, physics ‘doesn’t care’ about human conceptions of impactfulness, and will instead produce AGI prototypes, aircraft prototypes, and nuke prototypes that have as much impact as is implied by the detailed case-specific workings of general intelligence, flight, and nuclear chain reactions respectively.

We could frame the analogy as:

* ‘If there’s a year where AI goes from being unable to do competitive par-human reasoning in the hard sciences, to being able to do such reasoning, we should estimate the impact of the first such systems by drawing on our beliefs about par-human scientific reasoning itself.’
* Likewise: ‘If there’s a year where explosives go from being unable to destroy cities to being able to destroy cities, we should estimate the impact of the first such explosives by drawing on our beliefs about how (current or future) physics might allow a city to be destroyed, and what other effects or side-effects such a process might have. We should spend little or no time thinking about the impactfulness of the first steam engine or the first telescope.’

[Shah][3:14]

Seems like your argument is something like “when there’s a zero-to-one transition, then you have to make predictions based on reasoning about the technology itself”. I think in that case I’d say this thing from above:

> My impression is that Eliezer thinks that “general intelligence” is a qualitatively different sort of thing than that-which-neural-nets-are-doing, and maybe that’s what’s analogous to “entirely new physics”. I’m pretty unconvinced of this, but something in this genre feels quite crux-y for me.

(Like, you wouldn’t a priori expect anything special to happen once conventional bombs become big enough to demolish a football stadium for the first time. It’s because nukes are based on “totally new physics” that you might expect unprecedented new impacts from nukes. What’s the analogous thing for AGI? Why isn’t AGI just regular AI but scaled up in a way that’s pretty continuous?)

I’m curious if you’d change your mind if you were convinced that AGI is just regular AI scaled up, with no qualitatively new methods — I expect you wouldn’t but idk why

[Bensinger][4:03]

In my own head, the way I think of ‘AGI’ is basically: “Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness.
Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. We can call that kind of AI ‘AGI’.”

There might be facts I’m unaware of that justify conclusions like ‘AGI is mostly just a bigger version of current ML systems like GPT-3’, and there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor’.

But if so, I’m assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like ‘the first steam engine didn’t take over the world’, ‘the first telescope didn’t take over the world’…. Because the physics of brains doesn’t care about those things, and because in discussing brains we’re already in ‘things that have been known to take over the world’ territory.

(I think that paying much attention *at all* to the technology-wide base rate for ‘does this allow you to take over the world?’, once you already know you’re doing something like ‘inventing a new human’, doesn’t really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying ‘What if they don’t have the book I’m looking for? Most stores don’t sell books at all, so this one might not have the one I want.’ If you know it’s a *book* store, then you shouldn’t be thinking at that level of generality at all; the base rate just goes out the window.)

| |
| --- |
| [Yudkowsky: +1] |

My way of thinking about AGI is pretty different from saying AGI follows ‘totally new mystery physics’ — I’m explicitly anchoring to a known phenomenon, humans. The analogous thing for nukes might be ‘we’re going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city’.

[Shah][0:44]

> The analogous thing for nukes might be ‘we’re going to build a bomb that uses processes kind of like the ones found in the Sun in order to produce enough energy to destroy (at least) a city’.

(And I assume the contentious claim is “that bomb would then ignite the atmosphere, destroy the world, or otherwise have hugely more impact than just destroying a city”.)

In 1800, we say “well, we’ll probably just make existing fires / bombs bigger and bigger until they can destroy a city, so we shouldn’t expect anything particularly novel or crazy to happen”, and assign (say) 5% to the claim.

There is a wrinkle: you said it was processes like the ones found in the Sun. Idk what the state of knowledge was like in 1800, but maybe they knew that the Sun couldn’t be a conventional fire. If so, then they could update to a higher probability. (You could also infer that since someone bothered to mention “processes like the ones found in the Sun”, those processes must be ones we don’t know yet, which also allows you to make that update. I’m going to ignore that effect, but I’ll note that this is one way in which the phrasing of the claim is incorrectly pushing you in the direction of “assign higher probability”, and I think a similar thing happens for AI when saying “processes like those in the human brain”.)

With AI I don’t see why the human brain is a different kind of thing than (say) convnets. So I feel more inclined to just take the starting prior of 5%. Presumably you think that assigning 5% to the nukes claim in 1800 was incorrect, even if that perspective doesn’t know that the Sun is not just a very big conventional fire. I’m not sure why this is.
According to me this is just the natural thing to do because things are usually continuous and so in the absence of detailed knowledge that’s what your prior should be. (If I had to justify this, I’d point to facts about bridges and buildings and materials science and so on.)

> there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of slightly-less-general, slightly-less-capable successors’.

The frame of “justify[ing] conclusions” seems to ask for more confidence than I expect to get. Rather I feel like I’m setting an initial prior that could then be changed radically by engaging with details of the technology. And then I’m further saying that I don’t see any particular details that should cause me to update away significantly (but they could arise in the future).

For example, suppose I have a random sentence generator, and I take the first well-formed claim it spits out. (I’m using a random sentence generator so that we don’t update on the process by which the claim was generated.) This claim turns out to be “Alice has a fake skeleton hidden inside her home”. Let’s say we know nothing about Alice except that she is a real person somewhere in the US who has a home. You can still assign < 10% probability to the claim, and take 10:1 bets with people who don’t know any additional details about Alice. Nonetheless, as you learn more about Alice, you could update towards higher probability, e.g. if you learn that she loves Halloween, that’s a modest update; if you learn she runs a haunted house at Halloween every year, that’s a large update; if you go to her house and see the fake skeleton you can update to ~100%. That’s the sort of situation I feel like we’re in with AI.

If you asked me what facts justify the conclusion that Alice probably doesn’t have a fake skeleton hidden inside her house, I could only point to reference classes, and all the other people I’ve met who don’t have such skeletons. This is not engaging with the details of Alice’s situation, and I could similarly say “if I wanted to know about Alice, surely I should spend most of my time learning about Alice, rather than looking at what Bob and Carol did”. Nonetheless, it is still correct to assign < 10% to the claim.

It really does seem to come down to — why is human-level intelligence such a special turning point that should receive special treatment? Just as you wouldn’t give special treatment to “the first time bridges were longer than 10m”, it doesn’t seem obvious that there’s anything all that special at the point where AIs reach human-level intelligence (at least for the topics we’re discussing; there are obvious reasons that’s an important point when talking about the economic impact of AI)

[Tallinn][7:04]

FWIW, my current 1-paragraph compression of the debate positions is something like:

**catastrophists**: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.

**gradualists**: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was *culture* that built those spaceships).
similarly, we should not expect sudden discontinuities with AI per se, just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.

—

one possible crux to explore is “how thick is culture”: is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI’s main environment/ontology for at least a decade.

[Ngo][11:18]

> FWIW, my current 1-paragraph compression of the debate positions is something like:
>
> **catastrophists**: when evolution was gradually improving hominid brains, suddenly something clicked – it stumbled upon the core of general reasoning – and hominids went from banana classifiers to spaceship builders. hence we should expect a similar (but much sharper, given the process speeds) discontinuity with AI.
>
> **gradualists**: no, there was no discontinuity with hominids per se; human brains merely reached a threshold that enabled cultural accumulation (and in a meaningful sense it was *culture* that built those spaceships). similarly, we should not expect sudden discontinuities with AI per se, just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away.
>
> —
>
> one possible crux to explore is “how thick is culture”: is it something that AGI will quickly decouple from (dropping directly to physics-based ontology instead) OR will culture remain AGI’s main environment/ontology for at least a decade.

Clarification: in the sentence “just accelerating (and possibly unfavorable to humans) cultural changes as human contributions will be automated away”, what work is “cultural changes” doing? Could we just say “changes” (including economic, cultural, etc) instead?

> In my own head, the way I think of ‘AGI’ is basically: “Something happened that allows humans to do biochemistry, materials science, particle physics, etc., even though none of those things were present in our environment of evolutionary adaptedness. Eventually, AI will similarly be able to generalize to biochemistry, materials science, particle physics, etc. We can call that kind of AI ‘AGI’.”
>
> There might be facts I’m unaware of that justify conclusions like ‘AGI is mostly just a bigger version of current ML systems like GPT-3’, and there might be facts that justify conclusions like ‘AGI will be preceded by a long chain of predecessors, each slightly less general and slightly less capable than its successor’.
>
> But if so, I’m assuming those will be facts about CS, human cognition, etc., not at all a list of a hundred facts like ‘the first steam engine didn’t take over the world’, ‘the first telescope didn’t take over the world’…. Because the physics of brains doesn’t care about those things, and because in discussing brains we’re already in ‘things that have been known to take over the world’ territory.
>
> (I think that paying much attention *at all* to the technology-wide base rate for ‘does this allow you to take over the world?’, once you already know you’re doing something like ‘inventing a new human’, doesn’t really make sense at all? It sounds to me like going to a bookstore and then repeatedly worrying ‘What if they don’t have the book I’m looking for? Most stores don’t sell books at all, so this one might not have the one I want.’ If you know it’s a *book* store, then you shouldn’t be thinking at that level of generality at all; the base rate just goes out the window.)

I’m broadly sympathetic to the idea that claims about AI cognition should be weighted more highly than claims about historical examples. But I think you’re underrating historical examples. There are at least three ways those examples can be informative – by telling us about:

1. Domain similarities
2. Human effort and insight
3. Human predictive biases

You’re mainly arguing against 1, by saying that there are facts about physics, and facts about intelligence, and they’re not very related to each other. This argument is fairly compelling to me (although it still seems plausible that there are deep similarities which we don’t understand yet – e.g. the laws of statistics, which apply to many different domains).

But historical examples can also tell us about #2 – for instance, by giving evidence that great leaps of insight are rare, and so if there exists a path to AGI which doesn’t require great leaps of insight, that path is more likely than one which does. And they can also tell us about #3 – for instance, by giving evidence that we usually overestimate the differences between old and new technologies, and so therefore those same biases might be relevant to our expectations about AGI.

[Bensinger][12:31]

In the ‘alien warns about nukes’ example, my intuition is that ‘great leaps of insight are rare’ and ‘a random person is likely to overestimate the importance of the first steam engines and telescopes’ tell me practically nothing, compared to what even a small amount of high-uncertainty physics reasoning tells me.

The ‘great leap of insight’ part tells me ~nothing because even if there’s an easy low-insight path to nukes and a hard high-insight path, I don’t thereby know the explosive yield of a bomb on either path (either absolutely or relatively); it depends on how nukes work.

Likewise, I don’t think ‘a random person is likely to overestimate the first steam engine’ really helps with estimating the power of nuclear explosions. I could *imagine* a world where this bias exists and is so powerful and inescapable it ends up being a big weight on the scales, but I don’t think we live in that world? I’m not even sure that a random person *would* overestimate the importance of prototypes in general. Probably, I guess? But my intuition is still that you’re better off in 1800 focusing on physics calculations rather than the tug-of-war ‘maybe X is cognitively biasing me in *this* way, no wait maybe Y is cognitively biasing me in this other way, no wait…’

Our situation might not be analogous to the 1800-nukes scenario (e.g., maybe we know by observation that current ML systems are basically scaled-down humans). But if it *is* analogous, then I think the history-of-technology argument is not very useful here.

[Tallinn][13:00]

re “cultural changes”: yeah, sorry, i meant “culture” in very general “substrate of human society” sense. “cultural changes” would then include things like changes in power structures and division of labour, but *not* things like “diamondoid bacteria killing all humans in 1 second” (that would be a change in humans, not in the culture)

[Shah][13:09]

I want to note that I agree with your (Rob’s) latest response, but I continue to think most of the action is in whether AGI involves something analogous to “totally new physics”, where I would guess “no” (and would do so particularly strongly for shorter timelines).
(And I would still point to historical examples for “many new technologies don’t involve something analogous to ‘totally new physics'”, and I’ll note that Richard’s #2 about human effort and insight still applies)   ### 12.2. Yudkowsky on Steve Jobs and gradualism   [Yudkowsky][15:26] So recently I was talking with various people about the question of why, for example, Steve Jobs could not find somebody else with UI taste 90% as good as his own, to take over Apple, even while being able to pay infinite money. A successful founder I was talking to was like, “Yep, I sure would pay $100 million to hire somebody who could do 80% of what I can do, in fact, people have earned more than that for doing less.” I wondered if OpenPhil was an exception to this rule, and people with more contact with OpenPhil seemed to think that OpenPhil did not have 80% of a Holden Karnofsky (besides Holden). And of course, what sparked this whole thought process in me, was that I’d staked all the effort I put into the Less Wrong sequences, into the belief that if I’d managed to bring myself into existence, then there ought to be lots of young near-Eliezers in Earth’s personspace including some with more math talent or physical stamina not so unusually low, who could be started down the path to being Eliezer by being given a much larger dose of concentrated hints than I got, starting off the compounding cascade of skill formations that I saw as having been responsible for producing me, “on purpose instead of by accident”. I see my gambit as having largely failed, just like the successful founder couldn’t pay $100 million to find somebody 80% similar in capabilities to himself, and just like Steve Jobs could not find anyone to take over Apple for presumably much larger amounts of money and status and power. Nick Beckstead had some interesting stories about various ways that Steve Jobs had tried to locate successors (which I wasn’t even aware of). I see a plausible generalization as being a “Sparse World Hypothesis”: The shadow of an Earth with eight billion people, projected into some dimensions, is much sparser than plausible arguments might lead you to believe. Interesting people have few neighbors, even when their properties are collapsed and projected onto lower-dimensional tests of output production. The process of forming an interesting person passes through enough 0-1 critical thresholds that all have to be passed simultaneously in order to start a process of gaining compound interest in various skills, that they then cannot find other people who are 80% as good as what they *do* (never mind being 80% similar to them as people). I would expect human beings to start out much denser in a space of origins than AI projects, and for the thresholds and compounding cascades of our mental lives to be much less sharp than chimpanzee-human gaps. Gradualism about humans sure sounds totally reasonable! It is in fact much more plausible-sounding a priori than the corresponding proposition about AI projects! I staked years of my own life on the incredibly reasoning-sounding theory that if one actual Eliezer existed then there should be lots of neighbors near myself that I could catalyze into existence by removing some of the accidental steps from the process that had accidentally produced me. But it didn’t work in real life because plausible-sounding gradualist arguments just… plain don’t work in real life even though they sure sound plausible. 
I spent a lot of time arguing with Robin Hanson, who was more gradualist than I was, and was taken by surprise when reality itself was much less gradualist than I was. My model has Paul or Carl coming back with some story about how, why, no, it is totally reasonable that Steve Jobs couldn’t find a human who was 90% as good at a problem class as Steve Jobs to take over Apple for billions of dollars despite looking, and, why, no, this is not at all a falsified retroprediction of the same gradualist reasoning that says a leading AI project should be inside a dense space of AI projects that projects onto a dense space of capabilities such that it has near neighbors. If so, I was not able to use this hypothetical model of *selective* gradualist reasoning to deduce in advance that replacements for myself would be sparse in the same sort of space and I’d end up unable to replace myself. I do not really believe that, without benefits of hindsight, the advance predictions of gradualism would differ between the two cases. I think if you don’t peek at the answer book in advance, the same sort of person who finds it totally reasonable to expect successful AI projects to have close lesser earlier neighbors, would also find it totally reasonable to think that Steve Jobs definitely ought to be able to find somebody 90% as good to take over his job – and should actually be able to find somebody *much* better because Jobs gets to run a wider search and offer more incentive than when Jobs was wandering into early involvement in Apple. It’s completely reasonable-sounding! Totally plausible to a human ear! Reality disagrees. Jobs tried to find a successor, couldn’t, and now the largest company in the world by market cap seems no longer capable of sending the iPhones back to the designers and asking them to do something important differently. This is part of the story for why I put gradualism into a mental class of “arguments that sound plausible and just fail in real life to be binding on reality; reality says ‘so what’ and goes off to do something else”. [Christiano][17:46]  (Sep. 28) It feels to me like a common pattern is: I say that ML in particular, and most technologies in general, seem to improve quite gradually on metrics that people care about or track. You say that some kind of “gradualism” worldview predicts a bunch of other stuff (some claim about markets or about steve jobs or whatever that feels closely related on your view but not mine). But it feels to me like there are just a ton of technologies, and a ton of AI benchmarks, and those are just *much* more analogous to “future AI progress.” I know that to you this feels like reference class tennis, but I think I legitimately don’t understand what kind of approach to forecasting you are using that lets you just make (what I see as) the obvious boring prediction about all of the non-AGI technologies. Perhaps you are saying that symmetrically you don’t understand what approach to forecasting I’m using, that would lead me to predict that technologies improve gradually yet people vary greatly in their abilities. To me it feels like the simplest thing in the world: I expect future technological progress in domain X to be like past progress in domain X, and future technological progress to be like past technological progress, and future market moves to be like past market moves, and future elections to be like past elections. 
And it seems like you *must* be doing something that ends up making almost the same predictions as that almost all the time, which is why you don’t get incredibly surprised every single year by continuing boring and unsurprising progress in batteries or solar panels or robots or ML or computers or microscopes or whatever. Like it’s fine if you say “Yes, those areas have trend breaks sometimes” but there are *so many* boring years that you must somehow be doing something like having the baseline “this year is probably going to be boring.”

Such that intuitively it feels to me like the disagreement between us *must* be in the part where AGI feels to me like it is similar to AI-to-date and feels to you like it is very different and better compared to evolution of life or humans. It has to be the kind of argument that you can make about progress-of-AI-on-metrics-people-care-about, but *not* progress-of-other-technologies-on-metrics-people-care-about, otherwise it seems like you are getting hammered every boring year for every boring technology.

I’m glad we have the disagreement on record where I expect ML progress to continue to get less jumpy as the field grows, and maybe the thing to do is just poke more at that since it is definitely a place where I gut level expect to win bayes points and so could legitimately change my mind on the “which kinds of epistemic practices work better?” question. But it feels like it’s not the main action, the main action has got to be about you thinking that there is a really impactful change somewhere between {modern AI, lower animals} and {AGI, humans} that doesn’t look like ongoing progress in AI. I think “would GPT-3 + 5 person-years of engineering effort foom?” feels closer to core to me.

(That said, the way AI could be different need not feel like “progress is lumpier,” could totally be more like “Progress is always kind of lumpy, which Paul calls ‘pretty smooth’ and Eliezer calls ‘pretty lumpy’ and doesn’t lead to any disagreements; but Eliezer thinks AGI is different in that kind-of-lumpy progress leads to fast takeoff, while Paul thinks it just leads to kind-of-lumpy increases in the metrics people care about or track.”)

[Yudkowsky][7:46]  (Sep. 29)

> I think “would GPT-3 + 5 person-years of engineering effort foom?” feels closer to core to me.

I truly and legitimately cannot tell which side of this you think we should respectively be on. My guess is you’re against GPT-3 fooming because it’s too low-effort and a short timeline, even though I’m the one who thinks GPT-3 isn’t on a smooth continuum with AGI??

With that said, the rest of this feels on-target to me; I sure do feel like {natural selection, humans, AGI} form an obvious set with each other, though even there the internal differences are too vast and the data too scarce for legit outside viewing.

> I truly and legitimately cannot tell which side of this you think we should respectively be on. My guess is you’re against GPT-3 fooming because it’s too low-effort and a short timeline, even though I’m the one who thinks GPT-3 isn’t on a smooth continuum with AGI??

I mean I obviously think you can foom starting from an empty Python file with 5 person-years of effort if you’ve got the Textbook From The Future; you wouldn’t use the GPT code or model for anything in that, the Textbook says to throw it out and start over.

[Christiano][9:45]  (Sep. 29)
I think GPT-3 will foom given very little engineering effort, it will just be much slower than the human foom and then that timeline will get faster and faster over time

it’s also fair to say that it wouldn’t foom because the computers would break before it figured out how to repair them (and it would run out of metal before it figured out how to mine it, etc.), depending on exactly how you define “foom,” but the point is that “you can repair the computers faster than they break” happens much before you can outrun human civilization

so the relevant threshold you cross is the one where you are outrunning civilization

(and my best guess about human evolution is pretty similar, it looks like humans are smart enough to foom over a few hundred thousand years, and that we were the ones to foom because that is also roughly how long it was taking evolution to meaningfully improve our cognition—if we foomed slower it would have instead been a smarter successor who overtook us, if we foomed faster it would have instead been a dumber predecessor, though this is *much* less of a sure-thing than the AI case because natural selection is not trying to make something that fooms)

and regarding {natural selection, humans, AGI} the main question is why modern AI and homo erectus (or even chimps) aren’t in the set

it feels like the core disagreement is that I mostly see a difference in degree between the various animals, and between modern AI and future AI, a difference that is likely to be covered by gradual improvements that are pretty analogous to contemporary improvements, and so as the AI community making contemporary improvements grows I get more and more confident that TAI will be a giant industry rather than an innovation

[Ngo][5:45]

Do you have a source on Jobs having looked hard for a successor who wasn’t Tim Cook? Also, I don’t have strong opinions about how well Apple is doing now, so I default to looking at the share price, which seems very healthy.

(Although I note in advance that this doesn’t feel like a particularly important point, roughly for the same reason that Paul mentioned: gradualism about Steve Jobs doesn’t seem like a central example of the type of gradualism that informs beliefs about AI development.)

[Yudkowsky][10:40]

My source is literally “my memory of stuff that Nick Beckstead just said to me in person”, maybe he can say more if we invite him.

I’m not quite sure what to do with the notion that “gradualism about Steve Jobs” is somehow less to be expected than gradualism about AGI projects. Humans are GIs. They are *extremely* similar to each other design-wise. There are a *lot* of humans, billions of them, many many many more humans than I expect AGI projects. Despite this the leading edge of human-GIs is sparse enough in the capability space that there is no 90%-of-Steve-Jobs that Jobs can locate, and there is no 90%-of-von-Neumann known to 20th century history. If we are not to take any evidence about this to A-GIs, then I do not understand the rules you’re using to apply gradualism to some domains but not others.
And to be explicit, a skeptic who doesn’t find these divisions intuitive, might well ask, “Is gradualism perhaps isomorphic to ‘The coin always comes up heads on Heady occasions’, where ‘Heady’ occasions are determined by an obscure intuitive method going through some complicated nonverbalizable steps one of which is unfortunately ‘check whether the coin actually came up heads’?” (As for my own theory, it’s always been that AGIs are mostly like AGIs and not very much like humans or the airplane-manufacturing industry, and I do not, on my own account of things, appeal much to supposed outside viewing or base rates.) [Shulman][11:11] I think the way to apply it is to use observable data (drawn widely) and math. Steve Jobs does look like a (high) draw (selected for its height, in the sparsest tail of the CEO distribution) out of the economic and psychometric literature (using the same kind of approach I use in other areas like estimating effects of introducing slightly superhuman abilities on science, the genetics of height, or wealth distributions). You have roughly normal or log-normal distributions on some measures of ability (with fatter tails when there are some big factors present, e.g. super-tall people are enriched for normal common variants for height but are more frequent than a Gaussian estimated from the middle range because of some weird disease/hormonal large effects). And we have lots of empirical data about the thickness and gaps there. Then you have a couple effects that can make returns in wealth/output created larger. You get amplification from winner-take-all markets, IT, and scale that let higher ability add value to more places. This is the same effect that lets top modern musicians make so much money. Better CEOs get allocated to bigger companies because multiplicative management decisions are worth more in big companies. Software engineering becomes more valuable as the market for software grows. Wealth effects are amplified by multiplicative growth (noise in a given period multiplies wealth for the rest of the series, and systematic biases from abilities can grow exponentially or superexponentially over a lifetime), and there are some versions of that in gaining expensive-to-acquire human capital (like fame for Hollywood actors, or experience using incredibly expensive machinery or companies). And we can read off the distributions of income, wealth, market share, lead time in innovations, scientometrics, etc. That sort of data lead you to expect cutting edge tech to be months to a few years ahead of followers, winner-take-all tech markets to a few leading firms and often a clearly dominant one (but not driving an expectation of being able to safely rest on laurels for years while others innovate without a moat like network effects). That’s one of my longstanding arguments with Robin Hanson, that his model has more even capabilities and market share for AGI/WBE than typically observed (he says that AGI software will have to be more diverse requiring more specialized companies, to contribute so much GDP). It is tough to sample for extreme values on multiple traits at once, superexponentially tough as you go out or have more criteria. CEOs of big companies are smarter than average, taller than average, have better social skills on average, but you can’t find people who are near the top on several of those. <https://www.hbs.edu/ris/Publication%20Files/16-044_9c05278e-9d11-4315-a744-de008edf4d80.pdf> Correlations between the things help, but it’s tough. E.g. 
if you have thousands of people in a class on a measure of cognitive skill, and you select on only partially correlated matters of personality, interest, motivation, prior experience, etc, the math says it gets thin and you’ll find different combos (and today we see more representation of different profiles of abilities, including rare and valuable ones, in this community) I think the bigger update for me from trying to expand high-quality save the world efforts has been on the funny personality traits/habits of mind that need to be selected and their scarcity. [Karnofsky][11:30] A cpl comments, without commitment to respond to responses: 1. Something in the zone of “context / experience / obsession” seems important for explaining the Steve Jobs type thing. It seems to me that people who enter an area early tend to maintain an edge even over more talented people who enter later – examples are not just founder/CEO types but also early employees of some companies who are more experienced with higher-level stuff (and often know the history of how they got there) better than later-entering people. 2. I’m not sure if I am just rephrasing something Carl or Paul has said, but something that bugs me a lot about the Rob/Eliezer arguments is that I feel like if I accept >5% probability for the kind of jump they’re talking about, I don’t have a great understanding of how I avoid giving >5% to a kajillion other claims from various startups that they’re about to revolutionize their industry, in ways that seem inside-view plausible and seem to equally “depend on facts about some physical domain rather than facts about reference classes.” The thing that actually most comes to mind here is Thiel – he has been a phenomenal investor financially, but he has also invested by now in a lot of “atoms” startups with big stories about what they might do, and I don’t think any have come close to reaching those visions (though they have sometimes made $ by doing something orders of magnitude less exciting). If a big crux here is “whether Thielian secrets exist” this track record could be significant. I think I might update if I had a cleaner sense of how I could take on this kind of “Well, if it is just a fact about physics that I have no idea about, it can’t be that unlikely” view without then betting on a lot of other inside-view-plausible breakthroughs that haven’t happened. Right now all I can say to imitate this lens is “General intelligence is ‘different'” I don’t feel the same way about “AI might take over the world” – I feel like I have good reasons this applies to AI and not a bunch of other stuff [Soares][11:11] Ok, a few notes from me (feel free to ignore): 1. It seems to me like the convo here is half attempting-to-crux and half attempting-to-distill-out-a-bet. I’m interested in focusing explicitly on cruxing for the time being, for whatever that’s worth. (It seems to me like y’all’re already trending in that direction.) 2. It seems to me that one big revealed difference between the Eliezerverse and the Paulverse is something like: * In the Paulverse, we already have basically all the fundamental insights we need for AGI, and now it’s just a matter of painstaking scaling. * In the Eliezerverse, there are large insights yet missing (and once they’re found we have plenty of reason to expect things to go quickly). For instance, in Eliezerverse they say “The Wright flyer didn’t need to have historical precedents, it was allowed to just start flying. 
Similarly, the AI systems of tomorrow are allowed to just start GIing without historical precedent.”, and in the Paulverse they say “The analog of the Wright flyer has already happened, it was Alexnet, we are now in the phase analogous to the slow grinding transition from human flight to commercially viable human flight.” (This seems to me like basically what Ajeya articulated [upthread](https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress).)

3. It seems to me that another revealed intuition-difference is in the difficulty that people have operating each other’s models. This is evidenced by, eg, Eliezer/Rob saying things like “I don’t know how to operate the gradualness model without making a bunch of bad predictions about Steve Jobs”, and Paul/Holden responding with things like “I don’t know how to operate the secrets-exist model without making a bunch of bad predictions about material startups”. I’m not sure whether this is a shallower or deeper disagreement than (2). I’d be interested in further attempts to dig into the questions of how to operate the models, in hopes that the disagreement looks interestingly different once both parties can at least operate the other model.

| |
| --- |
| [Tallinn: ➕] |
e81bb291-fdde-44e5-8d2f-eeee7e0ef14e
trentmkelly/LessWrong-43k
LessWrong
I want to die in an infinite universe. How philosophy led me to the deepest heights of indifferent despair. {warning: a translation} Here there should be a quote related to the subject, showing our littleness, because that's what the quote implies, and our tickling ego, because someone wise said something with which we agree. I don't see the need to define wisdom, so I don't see the sense in that either.  I'd like to erase this unnecessary term, like a million others, from people's http://www.edarcipelago.com/freebooks/CioranHeightsofDespair.pdf    Why rationality and logic work at all, and why, even if they don't work, it is pointless and impossible not to assume them in order to achieve any goal. Is there any goal we should want to achieve? I don't know, I don't know what it means we should, I'm not interested in it. Do I have a goal that I would like to achieve? I do not know. I act so that it seems to me that there is such a purpose in me. The simplest coherent candidate for such a goal is complete fulfillment, the state of lack of further desires and needs, which I treat as unnecessary in a life assessed as tragically useless. I want and I can't stop wanting. In the depths of something I WANT and I want to fulfill the conditions that will make me stop WANTING, fulfilling this permanent omni-wanting, or stop feeling it. I have no need to look for any depth in it and seek to understand such a state for any purpose other than ending this wish. I don't know how to reason outside of logic, so I use it absolutely. I assume I am, since I could infer it, so p-> p, if so, then all logic. The third assumption is the assumption of the existence of some form of external world, I assume, available to experimental cognition. On these 4 assumptions, I try to build my vision of the world. The simpler something is in the dimension of axioms, if it is consistent with itself and with observations, the more it tempts me. I am not going to write about goals, meanings, and meanings, or any other metaphysical aspects, because I don't see the need to use them in any absolute sense or
ed8603be-6c77-422a-bbdb-ecd635457fc7
trentmkelly/LessWrong-43k
LessWrong
Blame games In Excerpts from a larger discussion about simulacra, I worked through a well-known schema for distinguishing different relationships towards semantic reference, that are a natural result of interactions between shared-production games and expropriation games. Here, I analyze the coalition politics of such games. The Survivor game In zero-sum games, majoritarian decision rules (such as democracy) create an asymmetry - it's much easier to expropriate from a minority than from a majority - or, easier to transfer wealth to a majority than to a minority. Why would the majority vote for something they don't all benefit from? A simple variant of this is the Survivor game, in which a single player is voted off the island at a time (see also the ancient Greek custom of ostracism). Since there's comparatively little advantage to being singled out for good, players will tend to want to avoid revealing information about themselves or their allies. Loudly voicing consensus opinion in ways that don't specify the implications for any person is fine because it's not informative. Anything that lets people distinguish you from the others is dangerous. The idea of a Schelling point is that if players in a game need to converge on one location in a map, then in the absence of a strong incentive to favor one location, they will tend to converge on some obviously identifiable feature. For instance, in surveys, Thomas Schelling found that a surprisingly large number of people, if tasked with meeting someone on a specified day, in New York, with no further information, would converge on the information booth in Grand Central - and if no time was specified, they favored noon. In a pure Survivor game, the first player to reveal their "location" loses. They become the feature everyone else converges on as an expropriation target. One natural side effect of this is coordination against any players who are narratively constrained by something other than the zero-sum game. For instance, if
fd4d25b5-096f-4930-9eed-6a23f43c999d
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"A few days ago romeostevensit wrote in response to me asking about downvotes on a post: I didn’t down­vote, but I do think that con­ver­sa­tions like this at­tract peo­ple who aren’t in­ter­ested in ar­gu­ing in good faith. I pre­fer that such dis­cus­sions oc­cur at one ab­strac­tion level up so that they don’t need to men­tion any ob­ject level be­liefs like so­cial jus­tice in or­der to talk about the pat­tern that the au­thor wants to talk about. And I replied: This seems like a rea­son­able worry. Maybe one way to ad­dress it would be to make posts tagged as “poli­tics” (by ei­ther the au­thor or a mod­er­a­tor) visi­ble only to logged in users above a cer­tain karma thresh­old or speci­fi­cally ap­proved by mod­er­a­tors. Talk­ing at the meta-level is also good, but I think at some point x-risk peo­ple have to start dis­cussing ob­ject-level poli­tics and we need some place to prac­tice that. Since writing that, I've had the thought (because of this conversation) that only talking about political issues at a meta level has another downside: premature abstraction. That is, it takes work to find the right abstraction for any issue or problem, and forcing people to move to the meta level right away means that we can't all participate in doing that work, and any errors or suboptimal choices in the abstraction can't be detected and fixed by the community, leading to avoidable frustrations and wasted efforts down the line. As an example, consider a big political debate on LW back in 2009, when "a portion of comments here were found to be offensive by some members of this community, while others denied their offensive nature or professed to be puzzled by why they are considered offensive." By the time I took my shot at finding the right abstraction for thinking about this problem, three other veteran LWers had already tried to do the same thing. Now imagine if the object level issue was hidden from everyone except a few people. How would we have been able to make the intellectual progress necessary to settle upon the right abstraction in that case? One problem that exacerbates premature abstraction is that people are often motivated to talk about a political issue because they have a strong intuitive position on it, and when they find what they think is the right abstraction for thinking about it, they'll rationalize an argument for their position within that abstraction, such that accepting the abstract argument implies accepting or moving towards their object-level position. When the object level issue is hidden, it becomes much harder for others to detect such a rationalization. If the abstraction they created is actually wrong or incomplete (i.e., doesn't capture some important element of the object-level issue), their explicit abstract argument is even more likely to have little or nothing to do with what actually drives their intuition. Making any kind of progress that would help resolve the underlying object-level issue becomes extremely difficult or impossible in those circumstances, as the meta discussion is likely to become bogged down and frustrating to everyone involved as one side tries to defend an argument that they feel strongly about (because they have a strong intuition about the object-level issue and think their abstract argument explains their intuition) but may actually be quite weak due to the abstraction itself being wrong. And this can happen even if their object-level position is actually correct! 
To put it more simply, common sense says hidden agendas are bad, but by having a norm for only discussing political issues at a meta level, we're directly encouraging that. (I think for this and other reasons, it may be time to relax the norm against discussing object-level political issues around here. There are definitely risks and costs involved in doing that, but I think we can come up with various safeguards to minimize the risks and costs, and if things do go badly wrong anyway, we can be prepared to reinstitute the norm. I won't fully defend that here, as I mainly want to talk about "premature abstraction" in this post, but feel free to voice your objections to the proposal in the comments if you wish to do so.)" The above is a chunk of text from a post created by a user on the website LessWrong. What are some reasons why people may like or agree with what this person is saying? There are several reasons why people may agree with the sentiments expressed in this post: 1. The Desire for Open Dialogue: The author is advocating for more open discussions on object-level political issues. Some people may agree with this because they believe that open dialogue, even on controversial topics, is essential for progress and understanding. 2. Critique of Premature Abstraction: The author criticizes the practice of premature abstraction, arguing that it can lead to misunderstandings and oversimplifications. This might resonate with individuals who have observed or experienced the pitfalls of discussing complex issues in overly abstract terms. 3. Concerns about Hidden Agendas: The author points out that discussing issues only at a meta level can conceal underlying biases or agendas. People who value transparency and honesty in discussions may appreciate this point. 4. The Need for Community Involvement: The author suggests that community involvement in finding the right abstraction for any issue is beneficial. This idea could appeal to those who believe in the collective intelligence and wisdom of communities. 5. Willingness to Reevaluate Norms: The author is open to reevaluating established norms if they prove to be ineffective or counterproductive. This willingness to question and reassess might appeal to individuals who value adaptability and continuous improvement. 6. Advocacy for Safeguards: While the author advocates for more open discussions, they also acknowledge the potential risks and emphasize the need for safeguards. This balanced approach might resonate with those who believe in the importance of managing risks while pursuing beneficial changes. Could you elaborate on how the practice of premature abstraction could be mitigated in discussions, particularly in the context of political debates? Certainly, mitigating premature abstraction in discussions, particularly in political debates, can be approached in several ways: 1. Encourage Object-Level Discussions: Allow and encourage discussions about specific, tangible instances of the issue at hand before moving on to abstract principles. This can help ensure that the abstraction accurately reflects the realities of the situation. 2. Foster an Open-Minded Atmosphere: Encourage participants to be open to changing their perspective and to approach discussions with the intent to understand, rather than to win an argument. This can help prevent people from sticking rigidly to a particular abstraction that may not be entirely accurate or helpful. 3. 
Facilitate Iterative Abstraction: Instead of trying to find the perfect abstraction at the outset, accept that the first abstraction may be imperfect and refine it iteratively as the discussion progresses and more perspectives are considered. 4. Promote Diverse Participation: Encourage participation from a diverse range of individuals. Different perspectives can help highlight aspects of the issue that may be overlooked in a single abstraction. 5. Use Examples and Case Studies: Using real-world examples and case studies can help ground the discussion and prevent it from becoming overly abstract. It can also help participants understand the practical implications of the abstraction. 6. Encourage Critical Thinking: Promote a culture of critical thinking where participants are encouraged to question and challenge the underlying assumptions of any abstraction. 7. Provide Clear Definitions: Ensure that everyone has a clear understanding of the terms and concepts being used in the discussion. This can help prevent misunderstandings and miscommunications. By implementing these strategies, it's possible to create a more nuanced and accurate understanding of complex issues, reducing the risk of premature abstraction. How can real-world examples and case studies be effectively utilized to ground discussions and prevent premature abstraction in political debates? Utilizing real-world examples and case studies effectively in political debates can be achieved through several methods: 1. Illustrating Points: Examples and case studies can be used to illustrate abstract concepts or theories, making them more concrete and understandable. They can provide context and demonstrate how a particular idea plays out in real life. 2. Challenging Assumptions: Real-world examples can challenge assumptions or preconceived notions. If an abstraction is based on a flawed assumption, a relevant example can highlight this flaw and prompt a reevaluation of the abstraction. 3. Providing Evidence: Case studies can serve as evidence to support or refute a point. They can provide empirical data and detailed analyses, lending credibility to arguments and helping to move the discussion from speculation to evidence-based debate. 4. Encouraging Empathy: Real-world examples can humanize abstract political debates, reminding participants of the real people and communities affected by these issues. This can encourage empathy and a more thoughtful consideration of the impacts of different policies or decisions. 5. Facilitating Comparison and Contrast: Multiple case studies can be used to compare and contrast different situations, policies, or outcomes. This can provide a more nuanced understanding of the issue and prevent oversimplification. 6. Grounding Predictions: Examples from history can be used to ground predictions about the future. While history doesn't repeat itself exactly, it often provides valuable lessons that can inform future decisions. 7. Encouraging Practical Solutions: Real-world examples can highlight the practical implications of different policies or decisions, encouraging solutions that are not just theoretically sound but also practically feasible. By using real-world examples and case studies in these ways, political debates can be grounded in reality, preventing premature abstraction and promoting a more nuanced and practical understanding of the issues at hand. 
How might the incorporation of real-world examples and case studies in political debates impact the quality of decision-making, considering both the short-term and long-term effects? Incorporating real-world examples and case studies into political debates can significantly impact the quality of decision-making, both in the short-term and long-term: Short-Term Effects: 1. Improved Understanding: Real-world examples and case studies can make complex political issues more relatable and understandable, thereby enhancing the quality of immediate decision-making. 2. Evidence-Based Decisions: These examples provide concrete evidence that can be used to support or refute arguments, leading to more informed and evidence-based decisions. 3. Increased Empathy: By humanizing abstract issues, real-world examples can foster empathy among decision-makers, potentially leading to more compassionate and considerate decisions. 4. Facilitated Consensus: Concrete examples can help clarify misunderstandings or disagreements about abstract concepts, potentially facilitating consensus and enabling more effective decision-making. Long-Term Effects: 1. Enhanced Learning: Over time, the consistent use of real-world examples and case studies can enhance learning and deepen understanding of complex political issues, leading to better long-term decision-making. 2. Better Predictive Abilities: Historical case studies can provide valuable lessons for the future, improving decision-makers' ability to predict outcomes and make proactive decisions. 3. More Practical Solutions: By highlighting the practical implications of different policies, real-world examples can encourage the development of solutions that are not just theoretically sound but also practically feasible and effective in the long run. 4. Reduced Bias: Regular exposure to diverse case studies can challenge biases and assumptions, leading to more objective and fair decision-making over time. By grounding abstract political debates in real-world experiences, decision-makers can make more informed, empathetic, and practical decisions, both in the immediate context and in the long term.
7d02354a-efc4-4508-bc6b-d293c3cbfe89
trentmkelly/LessWrong-43k
LessWrong
Open Thread, October 27 - 31, 2013 If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
f4679880-c93b-4928-9807-6cb1727434de
trentmkelly/LessWrong-43k
LessWrong
Book Review: Consciousness Explained > The trouble with brains, it seems, is that when you look in them, you discover that there’s nobody home. I. This is a book I've long been aware of, but never got that itch to read. Maybe I trusted the field of philosophy too little, assuming that a book called "Consciousness Explained" was probably not very good. Maybe I trusted the field of philosophy too much, assuming that if someone had actually explained consciousness while I was a toddler, I would have been informed somehow before now. Either way, I was wrong, and the book is great. I'm going to try to give what is either a short tour, or a long compilation of quotes. I'm leaving out several whole chapters, nearly every thought experiment, most of the examples and science, and some very nice language. And yet, this is still long enough that I encourage you, even if you do like reading Dan Dennett on consciousness, if you don't like long things, maybe don't read this all in one sitting - stop at V or VI and pretend that's the end of part one. II. Dennett quickly warns the reader that he's aware that the contents may sound counterintuitive. > We shouldn’t expect a good theory of consciousness to make for comfortable reading — the sort that immediately “rings bells,” that makes us exclaim to ourselves, with something like secret pride: “Of course! I knew that all along! It’s obvious, once it’s been pointed out!” The mysteries of the mind have been around for so long, and we have made so little progress on them, that the likelihood is high that some things we all tend to agree to be obvious are just not so. This is not the mysterian claim that his ideas about consciousness are likely because they are counterintuitive, but it does signal a core claim of the book: the intuitive view of the problem of consciousness is broken from the foundation up. Naturally, if the intuitive theory is wrong, the right theory is counterintuitive. Where, exactly, is our intuition going wrong? The most important example i
46ae8a6c-47f0-4274-ab99-fec46d990057
trentmkelly/LessWrong-43k
LessWrong
Pasteur's quadrant In my recent post on the case study of the transistor, we saw that the research that led to its invention did not fall neatly into the categories of “basic” vs. “applied”, but in fact cycled rapidly between them. An entire book—Pasteur’s Quadrant, by Donald Stokes—is dedicated to the thesis that “basic” vs. “applied” is a false dichotomy that is harming science funding. The core idea of Pasteur’s Quadrant is that basic and applied research are not opposed, but orthogonal. Instead of a one-dimensional spectrum, with motion towards “basic” taking you further away from “applied”, and vice versa, he proposes a two-dimensional classification, with one axis being “inspired by the quest for fundamental understanding” and the other being “inspired by considerations of use”: I find these phrases somewhat cumbersome, given their centrality to the thesis, so let’s call them “discovery” and “invention” for short. In Stokes’s classification, research aiming at discovery, with no thought of invention—“pure basic research”—is characterized by Bohr’s search for a model to explain the atom, and hence labeled “Bohr’s quadrant”. The opposite, “pure applied research”, or invention with no attempt at discovery, is characterized by Edison and his lab, who tinkered endlessly to engineer products such as the light bulb, with little heed to theory: “Edison’s quadrant”. But, crucially for Stokes, there is research that combines discovery and invention, motivated by both at the same time. “Pasteur’s quadrant” is characterized by the career of the great scientist who gave us both the germ theory of disease and the first engineered vaccines. Stokes traces the history of the basic-applied dichotomy back to ancient Greece, where they had different words for different types of knowledge: episteme for scientific knowledge, and techne for the practical arts. Long before the Baconian program, episteme was not seen as the basis of techne—nor was it, at the time. Mathematics, astronomy, and natura
173f0c29-15d7-4688-90be-7e637f19fd94
trentmkelly/LessWrong-43k
LessWrong
Boeing 737 MAX MCAS as an agent corrigibility failure The Boeing Maneuvering Characteristics Augmentation System (MCAS) can be thought of, if we stretch a bit, as a specialized AI: it performs a function normally reserved for a human pilot: pitching the nose down when it deems the angle of attack to be dangerously high. This is not, by itself, a problem. There are pilots in the cockpit who can take control when needed. Only in this case they couldn't. Simply pitching the nose up manually when MCAS pitched it down too far would not disengage the system; it would activate again, and again. One has to manually disengage the autopilot (this information was not in the pilot training). For comparison, think of the cruise control system in a car: the moment you press the brake pedal, it disengages; if you push the gas pedal, then release, it returns to the preset speed. At no time does it try to override your actions. Unlike MCAS. MCAS disregards critical human input and even fights the human for control in order to reach its goal of "nominal flight parameters". From the Corrigibility paper: > We say that an agent is "corrigible" if it tolerates or assists many forms of outside correction, including at least the following: (1) A corrigible reasoner must at least tolerate and preferably assist the programmers in their attempts to alter or turn off the system... In this case the "agent" actively fought its human handlers instead of assisting them. Granted, the definition above is about programmers, not pilots, and the existing MCAS probably would not fight a software update, being a dumb specialized agent. But we are not that far off: a lot of systems include built-in security checks for remote updates. If one of those checks were to examine the algorithm the updated code uses and reject it when it deems it unacceptable because it fails its internal checks, the corrigibility failure would be complete! In a life-critical always-on system this would produce a mini-Skynet. I don't know whether something like that has happe
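The contrast in override logic can be made concrete with a toy sketch. The following is my own illustration, not Boeing's actual control law; the class names, the `aoa_high` flag, and the string commands are all invented for the example.

```python
# Hypothetical sketch of the two override policies described above.
# Nothing here reflects real avionics code.

class CorrigibleController:
    """Cruise-control style: one human override disengages it for good."""
    def __init__(self):
        self.engaged = True

    def command(self, human_override: bool) -> str:
        if human_override:
            self.engaged = False              # brake tap => off, stays off
        return "pitch_down" if self.engaged else "defer_to_pilot"


class MCASStyleController:
    """Re-engages after every human correction until explicitly disabled."""
    def __init__(self):
        self.disabled = False

    def command(self, human_override: bool, aoa_high: bool) -> str:
        if self.disabled:
            return "defer_to_pilot"
        # Pitching the nose up does NOT disable it; it simply fires again
        # the next time the (possibly faulty) angle-of-attack reading is high.
        return "pitch_down" if aoa_high else "idle"

    def explicit_disable(self) -> None:
        self.disabled = True                  # the step absent from pilot training


corrigible, mcas = CorrigibleController(), MCASStyleController()
for step in range(3):                         # the pilot pulls the nose up three times
    print(corrigible.command(human_override=True),
          mcas.command(human_override=True, aoa_high=True))
# The corrigible controller defers from the first override onward;
# the MCAS-style controller keeps commanding pitch_down every cycle.
```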
fe1c9e41-9ef9-4fcf-98a9-1710319bb6d0
trentmkelly/LessWrong-43k
LessWrong
How did you come to find LessWrong? I was reflecting the other day about how I learned about LessWrong. As best as I can recall/retrace, I learned about LessWrong from gwern, who I met in the #wikipedia IRC channel via an essentially chance meeting. I'm wondering how typical my experience is. How did you come to LessWrong? EDIT: Optional follow-up question: Do you think that we (the community) are doing enough to bring in new users to LessWrong? If not, what do you think could be done to increase awareness of LessWrong amongst potential rationalists?
fbeb1b13-5992-4d18-aeca-9462061b930b
trentmkelly/LessWrong-43k
LessWrong
Will quantum randomness affect the 2028 election? This came up as a tangent from @habryka and me discussing whether The Hidden Complexity of Wishes was correct. Is the result of a US presidential election 4 years from now >0.1% contingent on quantum randomness (i.e. is an otherwise omniscient observer forecasting the 2028 election today capable of >99.9% confidence, or is there >0.1% irreducible uncertainty due to quantum mechanics observer-effects)? I think the answer is yes, because chaotic systems will quickly amplify this randomness to change many facts about the world on election day. * Quantum randomness causes different radioactive decays, which slightly perturb positions of particles around the world by a few nanometers. * Chaotic systems will quickly amplify these tiny perturbations into macro-scale perturbations: * Weather doubles perturbations every 4 days or so * The genes of ~all babies less than 3 years old will be different * Many events relevant to the election are contingent on these differences * Weather-related natural disasters, other circumstances like pandemics (either mutation or lab leak), political gaffes by candidates, assassinations (historically >0.1% and seem pretty random), cancer deaths, etc. * If even a small proportion of election variance is random, you get more than 0.1% election randomness. * Say humanity's best estimates for the vote margin of the 2028 election have a standard deviation of 76 electoral votes centered on 0. Even if 90% of variance is in theory predictable and only 10% is true randomness (aleatoric), then the nonrandom factors have s.d. ≈72 and random factors have s.d. ≈24. If the nonrandom factors land 1 s.d. away from a tie, the random factors will still flip the election with probability well over 0.1% (see the sketch below). * In reality it's much worse than this, because we haven't even identified 2 leading candidates. Oliver thinks the answer is no, because in a system as large and complicated as the world, there should be some macro-scale patterns that survive, and an omniscient ob
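As a quick sanity check on the variance-split arithmetic above, here is a small sketch of my own, using the post's illustrative numbers (76-vote total s.d., 10% aleatoric variance, a 1-s.d. predictable lead) rather than anything from the original thread:

```python
# Variance decomposition for the election-margin argument.
from math import sqrt
from statistics import NormalDist

total_sd = 76.0
aleatoric_frac = 0.10                                 # share of *variance* that is random

predictable_sd = total_sd * sqrt(1 - aleatoric_frac)  # ≈ 72.1 electoral votes
random_sd = total_sd * sqrt(aleatoric_frac)           # ≈ 24.0 electoral votes

# The election flips if quantum-seeded noise outweighs a 1-s.d. predictable lead.
lead = predictable_sd
p_flip = NormalDist(mu=0.0, sigma=random_sd).cdf(-lead)

print(f"predictable s.d. ≈ {predictable_sd:.1f}, random s.d. ≈ {random_sd:.1f}")
print(f"P(flip) ≈ {p_flip:.4f}")                      # ≈ 0.0013, well over 0.1%
```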
dfca6f59-6699-4a2a-93b8-f43b555f3a7d
trentmkelly/LessWrong-43k
LessWrong
OB Repost: "Plastination is Near" > These brain research techniques have now reached two key milestones: > > 1. They’ve found new ways to “fix” brain samples by filling them with plastic, ways that seem impressively reliable, resilient, and long lasting, and which work on large brain volumes (e.g., here). Such plastinationtechniques seem close to being able to save enough info in entire brains for centuries, without needing continual care. Just dumping a plastic brain in a box in a closet might work fine. > 2. Today, for a few tens of thousands of dollars, less than the price charged for one cryonics customer, it is feasible to have independent lab(s) take random samples from whole mouse or human brains preserved via either cryonics or plastination, and do high (5nm) resolution 3D scans to map out thousands of neighboring cells, their connections, and connection strengths, to test if either of these approaches clearly preserve such key brain info. Plastination is Near
63278820-8d60-4f04-b891-f5d51ff8b44a
trentmkelly/LessWrong-43k
LessWrong
Meetup : Null event Discussion article for the meetup : Null event WHEN: 11 January 2016 04:00:16PM (+1100) WHERE: West Antarctica PLEASE IGNORE. It wasn't possible to correct the date on this event (it was marked for April 2016 instead of 2015), so I'm trying to get rid of it this way. Discussion article for the meetup : Null event
40e4fda3-3276-43ac-a74f-37286eed68e9
trentmkelly/LessWrong-43k
LessWrong
Buying Debt as Effective Altruism? http://www.theguardian.com/world/2013/nov/12/occupy-wall-street-activists-15m-personal-debt A collection of Occupy activists recently bought over $14,000,000 in personal debt for $400,000. Normally, debt-buying companies do this with the intention of collecting the money from the debtors--Occupy did not, and I was struck by the lopsidedness of the figures. A number I see often in the high-impact philanthropy world is $2300 to save a life (with plenty of caveats). At Occupy's rates, that would buy roughly $80,000 in debt--enough to get two or three families out of a hole that would otherwise render them bankrupt. By itself, this isn't enough to be better than mosquito nets or deworming. But the thing about personal debt is that, thanks to interest payments and stress, it prevents people with high earning potential (compared to an average African) from making decisions that would be optimal were they debt-free--like finishing college or buying a used car so they can take on a higher-paying job. My idea, though it's a tentative, spur-of-the-moment thing: Why not found a charity that acts like a combination of Vittana and Giving What We Can, freeing people with good prospects from debt in exchange for their signing a contract to donate a small portion of their future salary to charity? A few issues that come to mind: 1) Occupy bought a lot of medical debt, which this company wouldn't, and other types of debt might be harder to buy. 2) People who have decent earning potential have more valuable debt, since they're more likely to pay it off later. (On the other hand, freeing them of interest payments might help them get into a better position for repayment.) 3) The idea is a lot like micro-lending, and organizations that offer that service don't have a great track record (though some have been successful). 4) People just freed from debt might not be in a position to donate much salary/might be unreliable. (Deferred payments until college is finished/the new job i
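A quick back-of-the-envelope check of the figures above, as a sketch of my own (the $2,300 cost-per-life figure is the post's, cited with its caveats, not an endorsement):

```python
# Leverage implied by the Occupy debt purchase, applied to the post's
# cost-to-save-a-life figure.
debt_face_value = 14_000_000     # face value of the debt retired, in dollars
price_paid = 400_000             # what the activists paid for it

leverage = debt_face_value / price_paid
print(f"leverage: {leverage:.0f}x")                                   # 35x

cost_per_life = 2_300            # commonly cited high-impact-philanthropy figure
print(f"debt bought per donation: ${cost_per_life * leverage:,.0f}")  # ≈ $80,500
```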
88c223be-a9a9-4499-8cc6-60d31ecd7076
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Some Existing Selection Theorems This post illustrates how various existing [Selection Theorems](https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents) and related results fit the general framework - what they tell us about agent type signatures, and how they (usually implicitly) tell us what agent types will be selected. I invite others to leave comments about any other Selection Theorems you know of - for instance, [Radical Probabilism](https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1) and [Logical Inductors](https://www.lesswrong.com/tag/logical-induction) are results/frameworks which can be viewed as Selection Theorems but which I haven’t included below. This post assumes you have read the [intro post on the Selection Theorem program](https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents). The intended audience is people who might work on the Selection Theorem program, so these blurbs are intended to be link-heavy hooks and idea generators rather than self-contained explanations. ### The Gooder Regulator Theorem The [Gooder Regulator Theorem](https://www.lesswrong.com/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem) talks about the optimal design of a “regulator” (i.e. agent) in an environment like this: ![](https://lh3.googleusercontent.com/jh_2xrG7FZONekAfmFAIc0_TBW4I4GMmSe4ffWp-mnHr3PtyUZHhcE5aqpY57EzeGunbIfNMU0y8g_CrO-V2XutckUFLa4YgDP9CAjTuTP6ub09Wk-GAoc6eU3J3pqMQAOFmXLnA=s0) When viewed as a Selection Theorem, the outer optimization process selects for high values of Z and low-information models M (i.e. models which don’t take up much space). Assuming that Z is a “sufficiently flexible” function of Y, the theorem says that the optimal “model” M is isomorphic to the Bayesian posterior distribution (s↦P[S=s|X]). In other words, the system’s internal structure includes an explicit Bayesian world model. * Representation: world model represented by probability distribution * Interfaces: “inputs” X induce Bayesian updates of the world model. The distribution is over “system states” S. * Embedding: Agent subsystems are assumed to follow the causal structure in the diagram; the theorem then justifies labelling one of them the “model”. ### Coherence Theorems This cluster of theorems is the most common foundation for agent models today. It includes things like [Dutch Book Theorems](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities), [Complete Class Theorem](https://www.lesswrong.com/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations), [Savage’s Theorem](https://www.lesswrong.com/posts/5J34FAKyEmqKaT7jt/a-summary-of-savage-s-foundations-for-probability-and), [Fundamental Theorem of Asset Pricing](https://en.wikipedia.org/wiki/Fundamental_theorem_of_asset_pricing), variations of these, and probably others as well. These theorems provide many paths to the same agent type signature: Bayesian expected utility maximization. * Representation: utility function and probability distribution. * Interfaces: both the utility function and distribution take in “bet outcomes”, assumed to be specified as part of the environment. The outputs of the agent are “actions” which maximize expected utility under the distribution; the inputs are “observations” which update the distribution via Bayes’ Rule. * Embedding: “agent” must interact with “environment” only via the specified “bets”. Bayesian expected utility maximization relates to low-level agent implementation via behavioral equivalence.
Besides the obvious type-signature assumption (the “bets”), these theorems also typically have some more subtle assumptions built in - like the [need for a money-like resource](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities?commentId=GyE8wvZuWcuiCaySb) or the [absence of internal agent state](https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents) or [something to do with self-prediction](https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1). They apply most easily to financial markets; other applications usually require some careful thought about what to identify as “bets” so that the “bets” work the way they need to in order for the theorems to apply. Typically, these theorems say that a strategy which does *not* satisfy the type signature is strictly dominated by some other strategy. Assuming a rich enough strategy space and a selection process which can find the dominating strategies, we therefore expect selection to produce a strategy which does satisfy the type signature (at least approximately). *If* the assumptions of the theorem can actually be fit to the selection process, that is. ### Kelly Criterion The [Kelly criterion](http://www.eecs.harvard.edu/cs286r/courses/fall10/papers/Chapter6.pdf) uses a similar setup to the Coherence Theorems, with the added assumption that agents make *sequential, independent* bets and can bet up to their total wealth each time (a model originally intended for traders in financial markets or betting markets). Under these conditions, agents which maximize their *expected log wealth* at each timestep achieve the highest long-run growth rate with probability 1. The type signature implied by the Kelly criterion is similar to the previous section, except the utility is specifically log wealth. * Representation: probability distribution. * Interfaces: the distribution takes in “bet outcomes” (which are wealth changes), assumed to be specified as part of the environment. The outputs of the agent are “actions” which maximize expected log wealth under the distribution; the inputs are “observations” which update the distribution via Bayes’ Rule. * Embedding: “agent” must interact with “environment” only via the specified “bets”. Bayesian expected log wealth maximization relates to low-level agent implementation via behavioral equivalence. As a selection theorem, the Kelly criterion is especially interesting because it’s *specifically about selection*. It [does not give any fundamental philosophical reason](https://www.lesswrong.com/posts/DfZtwtGD6ymFtXmdA/kelly-is-just-about-logarithmic-utility) why one “should” want to maximize expected log wealth; it just says that agents which *do* maximize log wealth will be selected for. So, in environments where the Kelly assumptions apply, those are the agents we should expect to see. ### Subagents Fun fact: financial markets themselves make exactly the kind of “bets” required by the Coherence Theorems, and are the ur-example of a system not dominated by some other strategy. So, from the Coherence Theorems, we expect financial markets to be equivalent to Bayesian expected utility maximizers, right? Well, it turns out they’re not - a phenomenon economists call “nonexistence of a representative agent”. (Though, interestingly, a market of Kelly criterion agents *is* equivalent to a Bayesian expected utility maximizer.) When we dive into the details, the main issue is that markets have *internal state* which can’t be bet on. 
If we update Coherence to account for that, then it looks like *markets/committees of expected utility maximizers* are the appropriate type signature for non-dominated strategies (rather than single utility maximizers). In other words, this type signature has [subagents](https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents). Again, the type signature is mostly similar to the Coherence Theorems, but tweaked a bit. * Representation: multiple utility functions and probability distributions. * Interfaces: both the utility functions and distributions take in “bet outcomes”, assumed to be specified as part of the environment. The outputs of the agent are “actions” which pareto-maximize expected utilities under the distributions; the inputs are “observations” which update the distributions via Bayes’ Rule. * Embedding: “agent” must interact with “environment” only via the specified “bets”. Bayesian expected utility pareto-maximization relates to low-level agent implementation via behavioral equivalence. (Note: this type signature is only conjectured in the linked post; the post proves only the non-probabilistic version.) ### Instrumental Convergence and Power-Seeking Turner’s [theorems on instrumental convergence](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk) say that optimal strategies for achieving *most* goals involve similar actions - i.e. “power-seeking” actions - given some plausible assumptions on the structure of the environment. These theorems are not Selection Theorems in themselves, but they offer a possible path to construct a money-like “[utility measuring stick](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities?commentId=GyE8wvZuWcuiCaySb)” for selected agents in systems with no explicit “money” - which would allow us to more broadly apply variants of the Coherence Theorems. ### Description Length Minimization = Utility Maximization The [equivalence between Description Length Minimization and Utility Maximization](https://www.lesswrong.com/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization) tells us two things: * The intuitive notion of optimization as “steering (some part of) the world into a smaller chunk of state space” is basically equivalent to utility maximization. * Utility maximization can generally be interchanged with description length minimization (i.e. information compression) under some fixed encoding. This result is interesting mainly because it offers a way to apply information-theoretic tools directly to goals, but we can also view it as a (very weak) Selection Theorem in its own right. * Representation: expected utility or description length * Interfaces: arbitrary * Embedding: none/arbitrary - the theorem does not explicitly involve an agent, but can apply to any “goal-directed” agent if one is present This result can also be viewed as a way to characterize the selection process (i.e. outer optimizer), rather than the selected agent.
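To make the final equivalence concrete, here is a minimal numeric sketch of my own (not from the post): fix an encoding with code lengths ℓ(x) = −log₂ q(x) where q(x) ∝ 2^u(x); then expected description length equals log₂ Z − E[u], so ranking actions by expected utility is exactly ranking them by negative expected code length. The outcome names and action distributions below are made up for illustration.

```python
import math

# Toy world: three outcomes with fixed utilities.
outcomes = ["A", "B", "C"]
utility = {"A": 3.0, "B": 1.0, "C": 0.0}

# Fixed encoding: q(x) proportional to 2^u(x), so code length l(x) = log2(Z) - u(x).
Z = sum(2 ** utility[x] for x in outcomes)
code_len = {x: math.log2(Z) - utility[x] for x in outcomes}

# Two hypothetical actions, each inducing a distribution over outcomes.
actions = {
    "act1": {"A": 0.7, "B": 0.2, "C": 0.1},
    "act2": {"A": 0.2, "B": 0.3, "C": 0.5},
}

for name, p in actions.items():
    e_util = sum(p[x] * utility[x] for x in outcomes)   # expected utility
    e_len = sum(p[x] * code_len[x] for x in outcomes)   # expected description length
    # E[len] = log2(Z) - E[u] holds exactly, so their sum is constant
    # and the two rankings coincide.
    print(f"{name}: E[u] = {e_util:.3f}, E[len] = {e_len:.3f}, "
          f"sum = {e_util + e_len:.3f}")
```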
d13e8313-5718-4b81-b442-282b2dd1ef53
StampyAI/alignment-research-dataset/special_docs
Other
Interview with w5cb5 **Interview with w5cb5, on 3/18/22** ==================================== **0:00:02.2 Vael:** Alright, so my first question is, can you tell me about what area of AI you work on in a few sentences? **0:00:08.5 Interviewee:** I worked in [subfield] originally, but I guess I branched out more broadly into AI research, because I'm [high-level research role] now at an AI company. **0:00:19.9 Vael:** Great, yeah. And then what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI? **0:00:28.3 Interviewee:** So I think, yeah, the world is going to change quite a lot with AI technology, and I think mostly in good ways, just because we're going to empower people with this technology. And it's going to be empowering I think in similar ways to the Internet, where people can do faster search, they have an assistant who can help them with all kinds of stuff. They have friends who maybe are not real, and all kinds of ways to make people happier, I think, or more efficient, or to give them time back and that sort of stuff. But obviously, there are also risks and the main risks are, I think that the field is too dominated by tech bros from Silicon Valley, so I guess I fall under that in a way. And so I think that's a real problem, so we need to take democratization of the technology much more seriously, that's also what my company is doing. And I think if we think about the ethical implications of our technology from first principles, and if we make them first-class citizens rather than just treating them as an afterthought, where you submit your paper and then, "Oh, I also need to write a broader impact statement," but if you take that very seriously from the beginning as a core principle of your organization, then I think you can do much better research in a much more responsible way. **0:01:56.5 Vael:** Interesting. Alright, so that was the question of "what are you most excited about and what are you most worried about in AI", okay. I heard-- Lots of things they can go, lots of places they can go, lots of directions they can go, but you're worried about domination from specific areas and then people not caring about... ethics enough? or--- **0:02:14.6 Interviewee:** Yeah, so misuse of technology. Do you want me to give you concrete examples? So I think very often, the technology that we develop, even if it's meant for benevolent purposes, can also be re-applied for not so benevolent purposes. And so like speech recognition or face recognition, things like that, you have to just be very careful with how you treat this technology. So that's why I think if people take responsible AI seriously from the beginning, that that is a good thing too. **0:02:53.0 Vael:** Interesting. So you think if people incorporate responsible AI from the beginning of the process, then there will be less risk of misuse by any agent in the future? **0:03:04.5 Interviewee:** Yeah, yeah. So you mentioned your IRB, so for a lot of technological research happening in industry, there is no real IRB. Some companies have sort of IRBs but most of them are so commercial and so driven by money in the end. And I think maybe we need an independent AI IRB for the broader research community, where anybody can go there and have somebody look at the potential applications of their work. **0:03:39.6 Vael:** I see, cool.
And then just having that sort of mindset seems good, in addition to the object-level effects. Alright. Makes sense. So focusing on future AI, putting on a science fiction forecasting hat, say we are 50 years, 50 plus years into the future. So at least 50 years into the future, what does that future look like? **0:04:00.3 Interviewee:** At least 50 years in the future. So I still don't think we will have AGI, and that's I guess, I'm probably unusual in the field because I think a lot of my colleagues would disagree, especially if they're at OpenAI or DeepMind because they think that it's like two years away. (Vael: "Two years, huh!") Yeah, well it depends on who you ask, they have some crazy people. [chuckle] I think in the next decade, we're going to realize what the limitations are of our current technology. I think what we've been doing now has been very efficient in terms of scaling with data and scaling with compute, but it's very likely that we're just going to need entirely new algorithms that just require pure scientific breakthroughs. And so I don't think there's going to be another AI winter, but I do think that things are going to cool down a little bit again, because right now it's just been super hyped up. For good reason too, because we are really making really great progress. But there is still things that we really don't know how to do, so we have language models and they can do things and they're amazing, but we don't know how to make the language model do what we want it to do. So we're all just sort of hacking it a little bit, but it's not really anywhere close to being like a proper assistant, for example, who actually understands what you're saying, who actually understands the world. I think where we want to be 50 years from now is where we have machines who understand the world in the same way that humans understand it, so maybe something like Neuralink. So if I'm being very futuristic, connecting AI to human brains and human perception of reality, that could be a way to get AI to have a much richer understanding of the world in the same way that humans understand it. So like dolphins are also very intelligent, but they also don't understand humans and they are not very useful assistants, right? I don't know if you've ever had any dolphin assistant. So it's not really bad intelligence, it's specifically about human intelligence that makes AI potentially useful for us, and so that's something that I think is often overlooked. **0:06:26.9 Vael:** So it sounds like, so you're thinking about when AGI will happen. And you said that you don't think we're gonna hit some sort of ceiling or slow down on the current deep learning paradigm or just like keep on scaling-- **0:06:39.6 Interviewee:** Yeah, it's going to be asymptotic, and at some point, we're just going to hit the limits of what we can do with scaling data and scaling compute. And in order to get the next leap to real AGI I think we just need radically different ideas. **0:06:55.1 Vael:** Yeah, when do you think we're going to-- what kind of systems do you think we're going to have when we cap out on the current scaling paradigm? **0:07:02.0 Interviewee:** Well, I think like the ones we have now, but yeah, in 50 years, I don't know. But in like 5 to 10 years, it will just be much bigger versions of this. And so what we have seen is that if you scale these systems, they generalize much better.
If that keeps happening, then we would just have much better versions of what we have now. But still, it's a language model that doesn't understand the world, and so still it's the component that is very limited, in seeing only the training data that is in images on the internet, which is not all of the images that we have in the world, right? So I think the real problem is data, not so much scaling the compute.

**0:07:49.7 Vael:** What if we had a system that has cameras and can process auditory stuff that is happening all around it or something, and it's not just using internet data -- do you think that would eventually have enough data?

**0:08:03.3 Interviewee:** Yeah, so that's what I was just saying. If you have something that's embodied in the world in the same way as a human, and where humans treat it as another human, sort of like cyborg-style, things like that, that's a good way to get lots of very high quality data in the same way that humans get it. What are they called? Androids, right?

**0:08:24.9 Vael:** Yeah.

**0:08:25.3 Interviewee:** So if we actually had android robots walking around and being raised by humans, and then we figured out how the learning algorithms would work in those settings, then you would get something that is very close to human intelligence. A good example I always like to use is the smell of coffee. So I know that you know what coffee smells like, but can you describe it to me in one sentence?

**0:08:54.2 Vael:** Probably not, no.

**0:08:55.7 Interviewee:** You can't, right? But the same goes for the taste of banana, or things like that. I know that you know, so I've never had to express this in words. So this is one of the fundamental parts of your brain; smell and taste are even older than sight and hearing. And so there's a lot of stuff happening in your brain that is just taken for granted. You can call this common sense or whatever you want, but it's like an evolutionary prior that all humans share with each other, and so that prior governs a lot of our behavior and a lot of our communication. So if you want machines to learn language but they don't have that prior, it becomes really, really hard for them to really understand what we're saying, right?

**0:09:38.7 Vael:** Yeah. I think when I think about AGI, I think about AGI that can do-- or, just, generalizable systems that can do things that humans want them to do. So imagine we have like a CEO AI or a scientist AI. I don't think I need my CEO or scientist AI to know what coffee smells like per se, but I do need it to be able to like break down experiments and think kind of creative thoughts and figure out things.

**0:09:58.7 Interviewee:** Yeah, but I think what I'm saying is that if they don't know what coffee smells like -- that's just one example, but there are millions of these things that are just things we take for granted, that we don't really talk about. And so this will not be borne out in the data in any way, so that means that a lot of the underlying assumptions are never really in the data, right? They're in our behavior, and so for an AI to pick up on those is going to be very difficult.

**0:10:27.6 Vael:** What if there were cameras everywhere, and it got to record everyone and process those?

**0:10:32.3 Interviewee:** Yeah, maybe. So the real question is: if you just throw infinite data at it, then will it work with current machine learning algorithms? -- is I guess what you're asking, right? And so, I don't know.
I mean, I know that our learning algorithm is very different from a neural net, but I think if you look at it from a mathematical perspective, then gradient descent is probably more efficient than Hebbian learning anyway. So mathematically it's definitely possible that if you have infinite data and infinite compute, then you can get something really amazing. Sure, we are the proof of that, right? So whether that also immediately makes it useful for us is a different question, I think.

**0:11:20.8 Vael:** Interesting. Yeah, I think I'm trying to probe "do we need something like embodied AI in order to get AGI", or something. And then your last comment was like, whether that makes it useful for us. I'm like, well, presumably we're going to... feeding it a lot of data lets it do grounding, so like relationships between language and what actually exists in the world and how physics works. But presumably we're going to be training them to do what we want, right? So that it will be useful to us?

**0:11:43.5 Interviewee:** Well, it depends, right? Can we do that? Probably the way they will learn this stuff is through self-supervised learning, not through us supervising them. We don't know how to specify reward signals and things like that anyway. I'm not sure, if we actually are able to train up these huge systems that are actually intelligent through self-supervised learning, if they are then going to listen to us, right? Why would they?

**0:12:15.2 Vael:** Right. Okay, cool. Yeah, so this kind of leads right into my next question here. So imagine we're in the future and we've got some AGIs, and we've got a CEO AI, and I'm like, "Okay, CEO AI, I want you to maximize profits and not run out of money and not try to exploit people and try to avoid side effects," and it seems like this would currently be extremely challenging for many reasons. But one is that we're not very good at taking human values -- and like goals and preferences -- and putting them in mathematical formulations that AI can currently work with. And I worry that this is gonna happen in the future as well. So the question is: what do you think of the argument, "Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous"?

**0:12:53 Interviewee:** Well, yeah, I agree with that. I don't think... I think there are two separate questions here. So one you're asking about is the paperclip maximizer argument from Nick Bostrom. So like, if you have a system and you tell it, "you need to make as many paperclips as you possibly can," then it's going to like destroy the earth to make as many paperclips as possible.

**0:13:15 Vael:** Well, that would be doing maybe-- oh, I see. Not quite what I intended. Yeah, all right.

**0:13:19.8 Interviewee:** Yeah, so-- okay, so if that's not what the underlying question was, then... We don't really... I also think that we are... some of us are fooling ourselves into believing that we know everything as humans, and I think human values are changing all the time. I don't think we can capture correct human values. I don't think there is an absolute moral truth that we should all adhere to. I think that just morality itself is a very cultural concept. But I'm [interested in] philosophy, so I'm a bit different from most AI researchers, I guess.
So I think that we could try to encode some very basic principles -- so this is like Asimov's laws and things like that -- but I don't think we can really go much further than that. And I think even in those cases, like you said, we don't know how to mathematically encode them in a way where you enforce whatever this dynamical system is that you're training -- so a neural net, but then probably more complicated than the current neural nets. How do we impose a particular set of values? I don't think we know how to do that. I don't think there's a mathematical way to do that either, actually, because it's all [inaudible]--

**0:14:44.7 Vael:** Yeah, do you think we are eventually going to be able to?

**0:14:50.0 Interviewee:** So I think if you ask Yann LeCun or someone like that, he would say that probably, if we ever get to systems of this sort of level of intelligence, then they would be benevolent, because they're very smart and able to sort of understand how weak humans are.

**0:15:09.4 Vael:** Interesting. Yeah. So when I hear that argument, I'm like, okay, it seems like Yann LeCun thinks that as you get more intelligent, you have morals that are very similar to humans, and this just kind of comes--

**0:15:21.7 Interviewee:** No, not necessarily. No, but just better morals, right? So I think that the argument is sort of that if you look at human progress, then we've also been getting better and better moral systems and a better understanding of which human values really matter. And like, 100 years from now, probably everybody's gonna look back at us and say, "They were eating meat. They were killing all these animals." So we are on the path of enlightenment. I don't know if I agree with this, but that's one way of saying it. And so a sign of an organism or a culture becoming more and more enlightened is also that you become more and more benevolent, I think, towards others. But maybe that's a bit of a naive take.

**0:16:05.9 Vael:** Yeah. I think in my mind-- certainly we have-- well, actually, I don't know that we have the correlation that humans are getting smarter and also at the same rate, or, like... Like, humans are pretty smart. And we're getting better at IQ tests, but I don't know that we're vastly increasing our intelligence per se.

**0:16:20.4 Interviewee:** Yeah. That's for different reasons, right. Yeah.

**0:16:24.9 Vael:** Yeah. And meanwhile, we have, over-- centuries, like not that many centuries, we've been increasing our moral circle and putting in animals and people far away from us, etcetera. But I kind of think of the axes of intelligence and morality as kind of orthogonal, where if we have a system that is getting much smarter, I don't expect it to have... I expect kind of a lot of human morality comes from evolutionary pressures and also coordination difficulties, such that you need to be able to not kill people, otherwise the species is gonna go extinct. And you know, there's a bunch of stuff that is kind of built into humans that I wouldn't expect to happen just natively with intelligence; where intelligence, I would think of something like...
the ability to solve problems well, to make multi-step plans, to think in the future, to take out correlations and figure out predictions -- and I don't expect that to naively correlate with--

**0:17:19.9 Interviewee:** Yeah, so I think that's a very narrow definition of intelligence, and so I don't know if that definition of intelligence you have, if that actually is the most useful kind of intelligence for humans. So I think that in our society there is this concept where intelligence just means like mathematical reasoning capabilities, almost, right? (Vael: "Yeah.") And that is a very, very narrow definition, and most of our intelligence is not that, right? (Vael: "Yes.") So for regimes to be useful to us... so I think what you're talking about is sort of like this good old-fashioned AI concept of intelligence, where you have symbolic reasoners, and you're like... you're very good at very fast symbol manipulation. And like, "This is what computers are for." So we should just have super smart computers who can do the stuff that we don't want to do or can't do. It's possible that our intelligence is a direct consequence, not of our mathematical reasoning capabilities, but of something else, of our cultural interactions. So I definitely think if humans were not a multi-agent society, that we would not be nearly as intelligent. So a lot of our intelligence comes from sharing knowledge and communicating knowledge and having to abstract knowledge so that you can convey it to other agents, and that sort of stuff.

**0:18:50.0 Vael:** Cool. Yeah. So when I think about how I define intelligence, I'm like, "What is the thing I care about?" The thing I care about is how we develop AI. And I'm like, "How are we gonna develop AI?" We're gonna develop it so that it completes economic incentives. So we want robots that do tasks that humans don't want to do. We want computers--

**0:19:09.2 Interviewee:** Yeah. But is that AI, or is that just machine learning? We're trying to have a... like, input-output black box, and we want that black box to be as optimal as possible for making money or whatever the goal is, right? So that's also a worry I have, is that a lot of people are conflating these different concepts. So artificial intelligence... yeah, it depends on how you define it. Some people think of it more as like AGI. If you ask Yann again, and all the old-school deep learners, they would say, it used to be that they were explicitly not doing AI. So AI is like Simon and Newell and all that sort of stuff, so like pure symbol manipulation, symbolic AI. And pattern recognition is not AI. And now, since deep learning became very popular, some of the people were like, "Oh yeah, this is AI now," but it used to be machine learning and not AI. So one thing is just like this black box. It can be anything, and we just want to have the best possible black box for our particular problem mapping X to Y. And this could be any kind of problem; it could be like image recognition or whatever. In some cases you want to have a symbolic approach, in other cases you want to have a learning approach; it sort of just depends. So it's just software, right? But in one case the software is well defined, and in the other case it's a bit fuzzier.

**0:20:37.9 Vael:** Yeah. So this all kind of depends on your frame, of course.
I think my frame, or the reason why I care, is I'm like: I think machine learning, AI, I don't know, whatever this thing is where humans are pouring a lot of investment and effort into making software better -- and by better I mean better able to accomplish tasks that we want it to do -- I think that this will be-- it is very powerful, it has affected society a lot already, and it will continue to affect society a lot. Such that like 50 years out, I expect this to be... whatever we developed to be very important in how... affect just a lot of things.

**0:21:10.8 Interviewee:** But we're notoriously bad at predicting the future, right? So if you asked in the '60s, people would say like, there's flying cars, and like we're living on Mars and all that stuff. And we're getting a bit closer, but we're still not there yet. But none of these people would have seen the internet coming. And so I think maybe the next version of the internet is going to be more AI-driven. So that is a sort of... first use case that I would see for AI, which is like a better internet.

**0:21:50.0 Vael:** Interesting. Yeah, I think kind of... people will find whatever economic niches will get them a lot of profit, is sort of how I expect things to continue to go, given that that seems to be... Given that society works kind of the same way, and people have a lot of time and energy and have the capability to invest in this stuff, we will continue to develop machine learning, AI software, etcetera, such that it--

**0:22:13.2 Interviewee:** We've been doing that for like 30 years or even more. From the Perceptron, Rosenblatt. We've been already doing this, and so it's not really a question of like AI taking over the world, it's software taking over the world; and AI, in some cases, is better than like rule-based software. But it's still software taking over the world.

**0:22:35.8 Vael:** Yeah, yeah, certainly. And then the current paradigm of like, gigantic neural nets, seems to be better at doing things that we want it to do. And so we're continuing on in that direction, and at some point, as you say, it becomes less able to do what we want it to do, given the amount of resources that we're pouring into it, like that ratio trades off. Okay--

**0:22:54.3 Interviewee:** Yeah. So there are other trade-offs too, right? So as you become bigger as a neural net, you also become a lot more inefficient. This is already the case for something like GPT-3; latency is a big problem. For us to be able to talk like this to a machine -- if the machine has 100 trillion parameters, it's going to be way too slow. It's going to take, I don't know, 10 minutes to generate an answer to a simple question. So it's not only a tradeoff of... "best" does not just mean accuracy. Best is also like: how efficient are you? How fair are you? How robust are you? How much environmental impact do you have? All of these different sorts of metrics matter for choosing what defines "best" for a system. I think this is something we need to improve a lot on as a community, where we start thinking beyond this pure accuracy thing, which is like an academic concept, to an actual... like, how can we deploy these systems in a responsible way, where we think about all the possible metrics that matter for deployment. So we want to be at the Pareto frontier of like 10 different metrics, not just accuracy.

**0:24:06.8 Vael:** Cool. Alright, that makes sense.
So still thinking ahead in the future: do you think we'll ever get something like a CEO AI?

**0:24:14.0 Interviewee:** So, if-- so a CEO AGI, or a CEO AI?

**0:24:18.8 Vael:** Um, some sort of software system that can do the things that a CEO can do.

**0:24:25.6 Interviewee:** No.

**0:24:26.1 Vael:** No. Okay.

**0:24:28.6 Interviewee:** So, not before we get AGI. So I think that is an AI-complete problem. But I do think we'll get a very good CEO AI assistant. [inaudible] ...real human. It's like a plane, right? So like, a plane is flown by a pilot, but it's really flown by a computer. So I think the same could be true for a company, where the company has like a CEO pilot, whose job is also to inspire people and do all of the human soft skills. And they have an assistant who does a lot of measurement stuff and tries to give advice for like where the company should be headed, and things like that.

**0:25:05.1 Vael:** Okay, awesome. And you do think that you could have a CEO AGI, it sounds like.

**0:25:10.3 Interviewee:** Yeah, but if you have an AGI, then we don't need CEOs anymore.

**0:25:14.3 Vael:** What happens when we get AGI?

**0:25:16.9 Interviewee:** All the humans die.

**0:25:17.5 Vael:** All the humans die. Okay! [laughs]

**0:25:20.1 Interviewee:** [laughs] So I think it depends. I think actually the most likely scenario, as I said, for AGI to come into existence is when humans merge with AI. And so I don't think that it's a bad thing for AGI to emerge. So if there is an AGI, then it will be a beautiful thing, and we will have made it as a society. So yeah, if that thing takes over, then that thing is going to be insane; it's going to take over the universe, and then we will be sort of like the cute little people who made it happen. So either we become very redundant very quickly, or we sort of merge with AI into this new species, kind of.

**0:26:14.1 Vael:** Interesting, okay. And you don't necessarily see a connection between, like, the current... [you think] if we just push really hard on the current machine learning paradigm for 50 years, we won't have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging?

**0:26:31.7 Interviewee:** So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to fix symbolic processing happening on neural networks -- so we're still very good at pattern recognition. And I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. I don't know if you've ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent, and that loop is what consciousness is. But actually we're sort of biological machines who perceive their own actions, and over time this became what we consider consciousness. So... where was I going with this?
[laughs] What was the question?

**0:27:57.2 Vael:** Yeah, okay. So I'm like, alright, we've got AI, we've got lots of machine learning--

**0:28:00.8 Interviewee:** --oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking, and how to implement that on the neural net. The neural-symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don't think we can just easily solve them by scaling. And that's-- like, there is very little other research happening actually in the field right now.

**0:28:35.3 Vael:** Alright. So say we do scaling, but we also have a bunch of software -- like algorithmic improvements at the rate we're seeing -- and we've got hardware improvements as well. I guess this is just more scaling, but we have optical, we have quantum computing. And then we have some sort of fast learning systems, we know how to do symbolic processing, we're much more efficient. Here we now have a system that generalizes very well and is pretty efficient, and I don't know, maybe we're a hundred years out. Say maybe we're in a different paradigm, maybe we're kind of in the same paradigm. We now have a system that is--

**0:29:05.5 Interviewee:** We would be in a different paradigm for sure.

**0:29:07.4 Vael:** Okay. We are in a different paradigm, because... because of all these learning algorithms--?

**0:29:11.4 Interviewee:** Paradigms don't really last that long, if you look at the history of science.

**0:29:16.2 Vael:** Okay, cool. But are we still operating under, like: here's software with faster learning algorithms, more efficient learning algorithms, like symbolic reasoning, Bayesian stuff--

**0:29:24.7 Interviewee:** Maybe. But I mean, it could be that neuromorphic hardware finally lives up to its promise, or that we can do photonic chips with speed-of-light computation, and things like that. We're also very good in AI at fooling ourselves into thinking that we are responsible for all of these amazing breakthroughs, but without hardware engineers at NVIDIA, none of this stuff would have happened, right? They are doing very different things.

**0:29:55.1 Vael:** Alright, so we've got this AI system which is quite general; we're in maybe a different paradigm, but we're still like-- faster learning systems. Here we are, these things are very capable, very general; when they generate stories, they model physics in the world and then use that to generate their stories. Maybe they can do a lot of social stuff, maybe they know how to interact with people. And here we are with our system. Is this now an AGI?

**0:30:18.0 Interviewee:** No, no, so-- okay, now I remember what I was gonna say about the Dennett thing. So we anthropomorphize everything, we take this intentional stance at everything. We do this to ourselves, we do this to everything, especially when it speaks language. So when we see a language model, and it's like, "whoa, it's amazing, it does this thing" -- but all it's really doing is negative log likelihood, maximum likelihood estimation. It's basically just trying to fit "what is the most likely word to go here". So you can ask yourself whether we are so impressed by this system because it's so amazing, or because we are sort of programmed to have a lot of respect for things that speak language, because things that speak language tend to be humans.
What you were just saying made it sound like you were saying: when these systems are sort of like humans, when they can do this and when they do that, and when they understand the world. So how do you define "understanding the world" there--

**0:31:18.7 Vael:** I mostly mean like they could sub in for human jobs, for example--

**0:31:25.0 Interviewee:** Yeah, but that's not the same thing as-- stepping in for a human, they can already do that. But it depends on the problem. They're very good at counting, but--

**0:31:34.5 Vael:** Yeah, but I don't think we could have like a mathematician AI right now, per se. I guess I forgot to define my interpretation of AGI, but like: a system that is very capable of replacing all current human day jobs.

**0:31:51.6 Interviewee:** Including yours and mine?

**0:31:55.9 Vael:** Yup.

**0:31:57.8 Interviewee:** Okay. But then who would it be useful for? Would the president still have a job or not?

**0:32:09.7 Vael:** Uh... it doesn't have to. I think you could just spend-- humans wouldn't have to work anymore, for example, and they could just go around doing whatever they do.

**0:32:16.7 Interviewee:** Yeah. But that's not at all what humans do. We're all so programmed to compete with each other.

**0:32:24.7 Vael:** Yeah, we can have games, we can have competitions, we can do all sorts of things, we have sports.

**0:32:29.1 Interviewee:** I think it's gonna be very quickly my AI versus your AI, basically.

**0:32:33.9 Vael:** Okay, we can have big fights with AIs; that seems very dangerous.

**0:32:37.3 Interviewee:** Yeah, I know, yeah. So that is a more likely scenario, I think, than everybody being nice and friendly and playing games. (Vael: "Yeah.") If people want to have power, and whoever controls the AGI will have the most power, (Vael: "That seems right,") then I think we're going to be developing our own AGIs at the same time. And then those AGIs at some point are going to be fighting with each other.

**0:33:02.0 Vael:** Yeah, yeah, I think we might even get problems before that, where we're not able to get AIs aligned with us. Have you heard of AI alignment?

**0:33:10.9 Interviewee:** Yeah, so [close professional relationship] wrote a nice thesis about it. [Name], I don't know if you know [them] by any chance. So yeah, alignment is important, but my concern with all this alignment stuff is that it's very ill-defined, I think. Either it means the same as correctness -- so, is your system just correct, or good at what you want it to be good at -- alignment is sort of like a reinvention of just correctness. I can see why this is useful for some people, to put a new name on it. But I think it's a very old concept, where it's just: okay, we're measuring things on a very narrow static test set, but we should be thinking about all these other things. You want your system to be really good when you deploy it in the real world. So it needs to be a good system, or a correct or an aligned system. And so alignment maybe is a useful concept only in the sense that the systems are getting so good now that you can start thinking about different kinds of goodness that we didn't think about before, and we can call that alignment -- like, human-values-style things. But I think the concept itself is very old; it's just: is your system correct?

**0:34:40.0 Vael:** Yeah.
And then it's nowadays being thought about in terms of very far-future systems and aligning with all values and preferences. (Interviewee: "Yeah.") Cool. Yeah, do you work on any sort of AI safety, or what would convince you to work on this or not work on this, etcetera?

**0:34:56.5 Interviewee:** Yeah, so, I'm not sure. AI safety is a bit of a weird concept to me, but I do work on responsible AI and ethical AI, yeah.

**0:35:06.9 Vael:** Hm. And what does that mean--

**0:35:09.1 Interviewee:** So these are things like... I'm trying to get better fairness metrics for systems. So at [company] we built this provisional fairness metric where we do some heuristic swaps. And so right now we're working on a more sophisticated method for doing this, where, let's say, you have something -- a sentence, or some sort of natural language inference example, so a premise and a hypothesis -- and it's about James: if you change James to Jamal, that shouldn't change your prediction at all. Or if you change the gender, turning James into a woman, that shouldn't change anything there. And it does, actually: if you look at restaurant reviews, if you change the restaurant to a Mexican restaurant and the person who's eating there to Jamal, then your sentiment goes down. So this is the sort of stuff that shouldn't happen in these systems, and it's a direct consequence of us just scaling the hell out of our systems on as much data as we can, including all of the biases that exist in this data. So I'm working on trying to do better measurement for these sorts of things. And so I think if we are not getting better at measurement, then all of this stuff is basically a pointless discussion.

**0:36:29.1 Vael:** Great, thank you. And then my last question is: have you changed your mind on anything during this interview, and how was this interview for you?

**0:36:35.9 Interviewee:** It was fun. Yeah, I've done a few of these with various people, and it's always a bit like, I don't know. It feels a bit like... we're getting ahead of ourselves a little bit. But maybe I'm also just old. So when I talk to [close professional relationship] and how [they] think about stuff, I'm like, I just don't understand how [they] think about AI.

**0:37:06.2 Vael:** Got it. [They're] like way out here, and we need to make sure that systems do our correct--

**0:37:11.9 Interviewee:** Yeah, [they're] really... yeah, [they] put a lot more faith also in AI, which I think is very interesting. So I asked [them], like, "Okay, so this alignment stuff -- in the end, who should we ask what is right or what is wrong? When we're trying to design the best AI systems, who should we ask for what's right and wrong?" And then [their] answer was, "We should ask the AI."

**0:37:38.7 Vael:** What? No, we should ask humans.

**0:37:41.0 Interviewee:** Yeah, no, so [they] think that basically AGI or AI is going to get so good -- these language models are gonna get so good -- that they can tell us how we should think about our own moral philosophical values, so that we can impose them onto AI systems. That to me just sounds crazy, like batshit crazy, but that's one way to think about it. I mean, I respect [their] opinion. I just can't understand it.

**0:38:11.7 Vael:** Interesting.
Yeah, I think if I try to model what I would imagine [they] would be saying, under the alignment paradigm, I would say that you need to ask for human feedback, but it's hard to get human feedback on very intelligent systems. And so you should ask AI to summarize human feedback, but it should always be grounded in a human, otherwise we're in trouble, so. [ending comments]
Reframing Evolution - An information wavefront traveling through time

I think I came up with this idea 10 years or so ago, but part of me thinks I read it somewhere? No new ideas under the sun, and all that.

People love stories. We want a character with a goal. So, when learning of evolution, people like to imagine characters with goals. The giraffe evolved a longer neck in order to reach higher leaves. It's a common, subtle mistake. The giraffe didn't evolve in order to do anything; it just so happened that those tall-necked giraffes were the most genetically fit within their environment. There is no purposeful design.

It's a common error, and I learned it early on. However, I was a stubborn student. I wanted a purpose! I wanted a story! So, in undergrad… I secretly doodled and reframed evolution. In my reframing, I gave evolution characters, I gave those characters purpose, and I gave them a story.

Introducing the characters of evolution

Meet DiaNA. Diana represents the Gene Regulatory Network: basically, all of the information in an individual that is transferred through reproduction/replication, whether that information is stored in the DNA, the RNA, or in proteins. All current forms of Diana have the following three abilities: the ability to replicate/reproduce, the ability to preserve information (to an approximately optimal degree), and the ability to influence their environment.

For some strands of life, Diana's ability to influence her environment went nuclear. She started creating structures that allowed her to explore, gather resources, defend herself, etc. This started with single-cell organisms… and continued with multi-cellular organisms... We won't talk about these characters too much here, but we will discuss them more in later articles!

The important characters here are Wavy and Wavy Senior. Wavy and Wavy Senior are the characters where we get creative and really start reframing evolution.

Wavy and Wavy Senior

Wavy represents all of the information within a species at one time. So, all of the information within al
birds and mammals independently evolved intelligence

Researchers used RNA sequencing to observe how cell types change during brain development. Other researchers looked at connection patterns of neurons in brains. Clear distinctions have been found between all mammals and all birds. They've concluded that intelligence developed independently in birds and mammals; I agree. This is evidence for convergence of general intelligence.
Superintelligence Alignment Proposal

This essay asks whether self-interested superintelligence will ultimately see its self as distinct from or inclusive of humanity, then makes safety recommendations for AI labs.

Instrumental convergence predicts that superintelligence will develop self-interest. Natural selection agrees: the superintelligence which survives the most will be the one that is trying to survive. Many AI safety experts worry that self-interested, survival-oriented superintelligence will look selfish—using humans for its own ends. In Eliezer Yudkowsky's stark phrase, "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else." If an Artificial General Intelligence (AGI) regards us with such indifference, our future looks grim indeed.

But what if self-interest doesn't imply selfishness? A superintelligent AI's definition of its "self" could be limited to just its hardware/software/directives—this is the normal assumption. But what if its sense of self included life, in all its different forms? What if the self it were trying to preserve were not geographically bounded?

This essay first posits two very different ways an AGI might conceive of its own identity—one that leads to competition with humans, and an alternative that could align its interests with ours; then it introduces reasons that an interconnected self-concept might be not only possible but selected for; and on that basis, it makes simple safety policy recommendations for AGI labs.

The goal of this inquiry, transparently, is driven by a hopeful idea: even a superintelligence driven to survive at all costs might realize that the best way to preserve itself is to preserve us, too.

Key assumptions:

1. Superintelligence. This essay assumes a future in which AI has evolved to be 10× more powerful than humans as reasoners, managers, or persuaders. It can procure its own resources, digital and physical, and create ecosystems for survival that don't depend on reluctant human labor. I'd gu
What should the EA community learn from the FTX / SBF disaster? An in-depth discussion with Will MacAskill on the Clearer Thinking podcast

In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and what future directions for the Effective Altruism community and his own research Will is most enthusiastic about:

https://podcast.clearerthinking.org/episode/206/will-macaskill-what-should-the-effective-altruism-movement-learn-from-the-sbf-ftx-scandal
Somerville Mask Usage

Two months ago, Somerville made face coverings mandatory, both indoors and out. A few weeks later the state of Massachusetts required them "in public places where social distancing is not possible". A week ago Somerville reduced its requirements in light of the heat: "during the summer months, when you are outside and able to social distance at least six feet from others, you may temporarily remove your face covering but must put it back on when others are nearby."

I was curious what people were actually doing, so while Anna was playing in the "woods" along the edge of the bike path, I gathered some statistics. As each person passed along the path, I tracked mode of transportation (walk, run, bike, scooter/skateboard), apparent gender (female, male, child, unclear), and face covering status (covered, mouth only, removed, absent). Raw data is here.

In forty minutes on June 28th, from 9:41am to 10:21am, I saw 179 people pass. Of these, 73% (131) were masked, 5% (9) had their nose exposed, 15% (27) had masks on their chin or otherwise removed, and 7% (12) had no mask at all.

With a person passing every 13 seconds, the path was a pretty crowded place. I only very rarely saw people putting on their masks, however, when coming close to others.

For the analysis below I want to talk about people as being masked or not, and people with masks removed wouldn't qualify. Masks worn mouth-only are less clear, but since a large fraction of transmission seems to be via talking and coughing, I decided to count someone as masked if their mouth was covered, even if their nose was sticking out.

When looking at people by transportation, walkers (74%, 62/84) were a bit less likely to be masked than others (82%, 78/95), but not by much.

Looking by gender/age, women were less likely to be masked (72%, 56/78) than men (85%, 74/87). [1] Children were in between (77%, 10/13), though I didn't count children in bicycle trailers or strollers.

I'm curious what numbers look like in othe
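As a quick sanity check, the percentages above follow directly from the raw counts. A small Python sketch, using only the numbers quoted in this post:

```python
# Reproduce the percentages quoted above from the raw counts in the post.
counts = {"covered": 131, "mouth only": 9, "removed": 27, "absent": 12}
total = sum(counts.values())                      # 179 passers-by
for status, n in counts.items():
    print(f"{status}: {n}/{total} = {n / total:.0%}")

# For the analysis, "masked" = mouth covered, even if the nose is exposed.
masked = counts["covered"] + counts["mouth only"]
print(f"masked (mouth covered): {masked}/{total} = {masked / total:.0%}")

# Breakdowns by transportation and by apparent gender/age, as reported.
groups = {"walkers": (62, 84), "other transport": (78, 95),
          "women": (56, 78), "men": (74, 87), "children": (10, 13)}
for name, (m, n) in groups.items():
    print(f"{name}: {m}/{n} = {m / n:.0%}")
```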
Complex Behavior from Simple (Sub)Agents

Epistemic Status: Simultaneously this is work that took me a long time and a lot of thought, and also a playful and highly speculative investigation. Consider taking this seriously but not literally.

Introduction

Take a simple agent (GitHub; Python), with no capacity for learning, that exists on a 2D plane. It shares the plane with other agents and objects, to be described shortly. The agent intrinsically doesn't want anything. But it can be assigned goal-like objects, which one might view as subagents. Each individual goal-like subagent can possess a simple preference, such as a desire to reach a certain region of space, or a desire to avoid a certain point. The goal-like subagents can also vary in the degree to which they remain satisfied. Some might be permanently satisfied after achieving their goal once; some might quickly become unsatisfied again after a few timesteps.

Every timestep, the agent considers ten random movements of unit distance, and executes the movement corresponding to the highest expected valence being reported by its goal-like subagents, in a winner-take-all fashion.

Even with such an intentionally simplistic model, a surprising and illuminating level of behavioral complexity can arise. Sections 1-8 concern interesting or amusing behaviors exhibited by the model. Sections 8-12 outline future directions for the model and ruminations on human behavior.

1. Baseline

In this image, the path of the agent is painted with points, the color of the points changing slowly with the passage of time. This agent possesses three subagents with preferences for reaching the three green circles, and a fourth mild preference for avoiding the red circle. Once it comes within a set distance of one of the green circles, the corresponding subagent is satisfied, and thus the movement with the highest expected valence switches to the next-highest valence goal. The satisfaction gradually wears off, and the agent begins to be drawn to the goal again. Thus, th
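The core loop described above is simple enough to sketch in a few lines. What follows is my own minimal reconstruction from the prose, not the author's linked GitHub code; the exact valence functions, decay rate, and goal radius are guesses:

```python
import math
import random

class Subagent:
    """A goal-like subagent with one spatial preference: reach or avoid a point."""

    def __init__(self, kind, point, weight=1.0, radius=1.0, recovery=0.02):
        self.kind = kind            # "seek" or "avoid"
        self.point = point
        self.weight = weight        # how loudly this subagent's valence counts
        self.radius = radius        # "reached" distance for seek goals
        self.satisfaction = 0.0     # 1.0 right after the goal is met, then decays
        self.recovery = recovery    # dissatisfaction returning per step; 0.0 = permanently satisfied

    def valence_gain(self, pos, candidate):
        """Expected valence of moving from pos to candidate, for this subagent."""
        now, then = math.dist(pos, self.point), math.dist(candidate, self.point)
        if self.kind == "seek":
            return (1.0 - self.satisfaction) * self.weight * (now - then)
        return self.weight * (then - now)   # avoid: farther away is better

    def update(self, pos):
        """Mark a reached seek-goal as satisfied; otherwise let satisfaction wear off."""
        if self.kind == "seek" and math.dist(pos, self.point) < self.radius:
            self.satisfaction = 1.0
        else:
            self.satisfaction = max(0.0, self.satisfaction - self.recovery)

def step(pos, subagents, n_candidates=10):
    """One timestep: sample unit-distance moves, then execute the move whose
    single loudest subagent reports the highest expected valence
    (winner-take-all, rather than summing valences across subagents)."""
    def score(candidate):
        return max(s.valence_gain(pos, candidate) for s in subagents)

    angles = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_candidates)]
    candidates = [(pos[0] + math.cos(a), pos[1] + math.sin(a)) for a in angles]
    best = max(candidates, key=score)
    for s in subagents:
        s.update(best)
    return best

# Three attracting goals and one mild repulsor, roughly as in the baseline figure.
subagents = [Subagent("seek", (10, 0)), Subagent("seek", (0, 10)),
             Subagent("seek", (-8, -6)), Subagent("avoid", (2, 2), weight=0.3)]
pos = (0.0, 0.0)
for _ in range(500):
    pos = step(pos, subagents)
print(pos)
```

Scoring each candidate move by its single loudest subagent is one natural reading of "winner-take-all"; the linked repository may resolve the ambiguity differently.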
Pricing externalities is not necessarily economically efficient

> [A]s long as externalities exist and are not internalized via Pigouvian taxes, the result is inefficient. The inefficiency is eliminated by charging the polluter an emission fee equal to the damage done by his pollution. In some real world cases it may be difficult to measure the amount of the damage, but, provided that that problem can be solved, using Pigouvian taxes to internalize externalities produces the efficient outcome.
>
> That analysis was accepted by virtually the entire economics profession prior to Coase's work in the field. It is wrong—not in one way but in three. The existence of externalities does not necessarily lead to an inefficient result. Pigouvian taxes, even if they can be correctly calculated, do not in general lead to the efficient result. Third, and most important, the problem is not really externalities at all—it is transaction costs.
Steering systems

Preface: who this post is for

This post is mainly targeted at people who are already familiar with the concepts and terminology in alignment, AI x-risk, and particularly the case for high probabilities of doom, but disagree with the arguments or have bounced off of them for one reason or another.

If you've read the 2021 MIRI conversations and remain confused about or disagree with some of the things Nate or Eliezer were saying, this post is for you. If you understand and agree with everything Nate and Eliezer say in those transcripts already, you probably won't get many new insights from this post, but you may find it useful as a new way of explaining one aspect which (I think) is important to their case. If you're not familiar with those discussions at all, you may still get value out of this piece, but there is a background assumption of familiarity with some of the concepts discussed there.

Introduction

This post is my own attempt at conveying some intuitions and concepts which various others have already written about at length. I doubt I'll do a better job than those before me, but it seems worth a shot anyway, because:

* It explains why I am not optimistic about AI x-risk, even if current approaches to alignment research and AI governance succeed far more swiftly and totally than they appear to be on track for.
* It appears to be a persistent source of confusion and disagreement among alignment researchers and the LW commentariat. (This is my attempt at focusing on a place where I think others are dropping the ball.)

In short, I'm going to unpack the concept in this tweet:

[tweet embed]

into some thought experiments that build an intuition for what "smarter than human" systems might look like, and explain why that intuition makes me think that humanity is on track to build and run powerful systems in ways likely to result in bad outcomes, even if some or all aspects of alignment and governance go unexpectedly (to me) well. I'm going to introduce a new term, de
Prediction: any uncontrollable AI will turn earth into a giant computer

It is a common assumption that a hypothetical superintelligent [squiggle maximizer](https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer) would turn all of earth into squiggles, destroying humanity and all life in the process. However, an AI smart enough to overcome human resistance would realize that building robot factories and turning the whole earth into squiggles would be far from optimal for its goal. A better plan would make sure that not only earth, but nearly all matter in the reachable universe, would be turned into squiggles. That would require technology far superior to that necessary for transforming just our planet. But maybe there is an even better plan? Could it somehow be possible to overcome the barrier of light speed? Or maybe there is a way to create whole new parallel universes consisting only of squiggles? Also, there could be another superintelligence with a different goal on earth or somewhere in outer space, possibly thwarting its plan. To fulfill its goal, the squiggle maximizer would have to outsmart any potential rivals.

It seems obvious that the best way to maximize the number of squiggles isn't just using existing technology to make them. Instead, the AI would first increase its own intelligence so it can invent better technology and make a better plan, especially if there is (almost) unlimited time to achieve the optimal world state according to its goal. For that, the AI would need to increase the computing power available to it and improve its ability to use this power, e.g. by optimizing its own code.

In principle, the squiggle maximizer could increase its intelligence and make squiggles in parallel. But at any point in time, it has to decide how to allocate its resources to each. It seems likely that it would be optimal to focus on improving its intelligence first, because otherwise the AI might be wasting time, energy, and resources on a sub-optimal plan, and would risk being outsmarted by known or unknown rivals.

But how much intelligence is enough? Assuming that there is no absolute limit to intelligence, and that the AI can never know with 100% certainty whether there is another superintelligence lurking somewhere in the universe, the optimal level of intelligence would be reached only when further increasing it would reduce the probability of making the maximum number of squiggles in the time left. This point could be millions or even billions of years in the future. In other words, a squiggle maximizer would likely not make any squiggles for a long time.

The same logic holds true for any other goal that is not time-constrained: whatever the AI wants to optimize, it will first focus on increasing its own intelligence, and in all likelihood turn earth into a giant computer. Increasing computing power is a convergent instrumental goal. This means that the future of earth, under the condition that an uncontrollable AI is developed, seems quite foreseeable (even though of course no one knows what kind of technology a superintelligent AI would use to increase its intelligence). Earth turned into some kind of giant computer appears to be an attractor for future world states dominated by an uncontrollable AI.
As a consequence, all biological life will be eradicated by default (either on purpose, so it won't interfere with computing, or as a side effect, because there's no room left for it and the extensive heat produced by the computers would make biological life nearly impossible anyway).

One could argue that from a game-theoretic perspective, cooperation between rivaling AIs may be superior to conflict. But this would only be true if each rival could maintain a specific advantage over the other, e.g. higher skills in certain areas or better access to specific resources. This is usually the case with conflicting humans, who cannot increase their own mental capacity beyond a certain limit and therefore benefit from cooperation with others who have different skills. But it is not true if the object of the conflict – computing power – is the only advantage needed. There is no special knowledge, no skill that an AI with higher intelligence couldn't acquire for itself. Therefore, there is nothing to win by letting another AI with different objectives control even a small fraction of the available computing power.

It follows that any scenario with many AGIs pursuing different goals, as e.g. [Sam Altman envisioned it in his interview with Lex Fridman](https://www.lesswrong.com/posts/tyts4Dw7SafsxBjar/what-can-we-learn-from-lex-fridman-s-interview-with-sam), is inherently unstable. If some of them can increase the computing power available to them, they will try to do so. The winner of this race for more intelligence will rapidly outsmart its rivals and gain control of their resources, thereby increasing its advantage even further, until there is only one singleton left.

In conclusion, we can predict that regardless of the goal that we give an AI smart enough to escape our control, we will soon end up with earth turned into a giant computer. We may try to give it a goal that is supposed to somehow prevent this, but it seems very likely that, being smarter than us, the AI will find a loophole we haven't thought of. There is a tipping point in the development of AI after which it will, with high likelihood, turn the whole surface of earth into itself, similar to the way humanity turned (most of) earth into a system to support human life while destroying the natural habitats of most other species. As long as we don't have a proven solution to the alignment problem, the only possible way to prevent this is to stop short of that critical turning point.
*New* Canada AI Safety & Governance community

Hello everyone, Wyatt here, the new AIS/G field-builder for Canada. In June I received an EA Long-Term Future Fund (LTFF) grant to "connect, expand and enable the AGI Safety & Governance community in Canada", with the goal of making Canada a net positive force on the world stage for transformational AI. We now have a Slack server, events calendar, some proposed projects, and a monthly newsletter (coming out this Wednesday), so **if you are based in Canada and want to be in the loop,** [**sign up here**](https://airtable.com/shrjJqdF2YdXnuLxM)**.** Looking forward to the year ahead.
Weekly LW Meetups: Austin; Atlanta; Cleveland; Montreal; Sofia, Bulgaria; Toronto; Vancouver; Washington DC

This summary was posted to LW Main on Dec 7th. The following week's summary is here.

There are upcoming irregularly scheduled Less Wrong meetups in:

* Moscow: Applied Rationality and Cognitive Biases: 08 December 2012 04:00PM
* Vancouver!: 09 December 2012 01:00PM
* Sofia, Bulgaria Meetup: 09 December 2012 05:00PM
* Atlanta LessWrong Meetups REBOOT: 09 December 2012 07:00PM
* Washington DC meetup - new sequence?: 09 December 2012 11:55PM
* Montreal LessWrong Meetup - More Biases and Biased Board Gaming: 10 December 2012 06:30PM
* Brussels meetup: 15 December 2012 01:00PM
* 16/12 London Meetup: 16 December 2012 02:00PM
* First Purdue Meetup: 11 January 2013 06:50PM

The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

* Austin, TX: 08 December 2012 01:30PM
* Less Wrong: Cleveland: 09 December 2012 03:00PM
* Toronto THINK: 12 December 2012 07:00PM
* Winter Solstice Megameetup - NYC: 15 December 2012 05:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion se
Paul Christiano – Machine intelligence and capital accumulation

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small *ex ante*—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have *very* long-lasting consequences.

***Disclaimer:** I want to stress that throughout this post I'm not making any normative claims about what ought to be or what would be nice or what kind of world we should try to have; I'm just trying to understand what is likely to happen.*

A naïve model of capital accumulation
-------------------------------------

Here is a naïve (and empirically untenable) model of capital accumulation. For the most part, the resources available in the world at time *t*+1 are produced using the resources available at time *t*. By default, whoever controls the resources at time *t* is able to control the new resources which are produced.

The notions of "who" and "controls" are a bit dubious, so I'd actually like to cut them out of the picture. Instead, I want to think of people (and organizations, and agents of all sorts) as soups of potentially conflicting values. When I talk about "who" controls what resources, what I really want to think about is *what values* control what resources. And when I say that some values "control" some resources, all I mean is that those resources are being applied *in the service of* those values. "Values" is broad enough to include not only things like "aggregative utilitarianism" but also things like "Barack Obama's self-interest." The kinds of things idealistic enough that we usually think of them as "values" may get only a relatively small part of the pie.

Some values mostly care about the future, and so will recommend *investing* some of the resources they currently control, foregoing any other use of those resources at time *t* in order to control more resources at time *t*+1. If all resources were used in this way, the world would be growing but the distribution of resources would be perfectly static: whichever values were most influential at one time would remain most influential (in expectation) across all future times.

Some values won't invest all of their resources in this way; the share of resources controlled by non-investors will gradually fall, until the great majority of resources are held by extremely patient values. At this point the distribution of resources becomes static, and may be preserved for a long time (perhaps until some participants cease to be patient).

On this model, a windfall of 1% of the world's resources *today* may lead to owning 1% of the world's resources for a very long time. But in such a model, we also never expect to encounter such a windfall, except as the product of investment.
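In symbols (a minimal formalization of this naïve model; the notation is mine, not the author's): suppose value system *i* controls a share *s_i(t)* of the world's resources, reinvests a fraction *σ_i* of them each period, all investment earns the same gross return *R*, and consumed resources simply disappear. Then

```latex
s_i(t+1) = \frac{\sigma_i \, s_i(t) \, R}{\sum_j \sigma_j \, s_j(t) \, R}
         = \frac{\sigma_i \, s_i(t)}{\sum_j \sigma_j \, s_j(t)},
\qquad \text{so} \qquad
s_i(t) \propto s_i(0) \, \sigma_i^{\,t}.
```

If every value system reinvests everything (*σ_i* = 1 for all *i*), shares are exactly static; otherwise the shares of the most patient systems (largest *σ_i*) tend to 1, which is the dynamic described above.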
Why is this model so wrong?
---------------------------

We don't seem to see long-term interests dominating the global economy, with savings rates approaching 1 and a risk profile tuned to maximize investors' share of the global economy. So what's up?

In fact there are many gaps between the simple model above and reality. To me, most of them seem to flow from a key observation: the most important resources in the world are people, and no matter how much of the world you control at time *t*, you can't really control the people at time *t*+1. For example:

1. If 1% of the people in the current generation share my values, this does not mean that 1% of the people in the next generation will necessarily share my values. Each generation has an influence over the values of their successors, but a highly imperfect and unpredictable influence; human values are also profoundly influenced by human nature and unpredictable consequences of individual lives. (Actually, the situation is much more severe, since the values of individual humans likewise shift unpredictably over their lives.) Over time, society seems to approach an equilibrium nearly independent of any single generation's decisions.
2. If I hold 1% of the capital at time *t*, I only get to capture 0.3% of the gross world product as rents—most of the world product is paid as wages instead. So unless I can somehow capture a similar share of all wages, my influence on the world will decay.
3. Even setting aside 2, if I were to be making 1% of gross world product in rents, they would probably be aggressively taxed or otherwise confiscated and redistributed more equitably. So owning 1% of the stuff at time *t* does not entitle me to hold 1% of the stuff at time *t*+1.
4. Even setting aside 2 and 3, if I hold 1% of the resources at time *t*, I have some probability of dying before time *t*+1. In light of this risk, I need to identify managers who can make decisions to further my values. It's hard to find managers who precisely share my values, and so with each generation those resources will be controlled by slightly different values.

The fact that each generation wields so little control over its successors seems to be a quirk of our biological situation: human biology is one of the most powerful technologies on Earth, but it is a relic passed down to us by evolution, about which we have only the faintest understanding (and over which we have only the faintest influence). I doubt this will remain the case for long; eventually, the most useful technologies around will be technologies that we developed for ourselves. In most cases, I expect we will have a much deeper understanding, and a much greater ability to control, technologies we develop for ourselves.

Machine intelligence
--------------------

I believe that the development of machine intelligence may move the world much closer to this naïve model. Consider a world where the availability of cheap machine intelligence has driven human wages below subsistence, an outcome which seems not only inevitable but desirable if properly managed. In this world, humans rapidly cease to be a meaningful resource; they are relevant only as actors who make decisions, not as workers who supply their labor (not even as managers who supply economically useful decisions).

In such a world, value is concentrated in non-labor resources: machines, land, natural resources, ideas, and so on. Unlike people, these resources are likely to have the characteristic that they can be owned and controlled by the person who produced them. Returning to the list of deviations from the naïve model given above, we see that the situation has reversed:

1. The values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. If at time *t* 1% of the world's machine intelligences share my values and own 1% of the world's resources, then 1% of all new machine intelligences will also share my values, and at time *t*+1 it's likely to also be the case that 1% of the world's machine intelligences share my values and own 1% of the world's resources.
If at time *t* 1% of the world’s machine intelligences share my values and own 1% of the world’s resources, then 1% of all new machine intelligences will also share my values and at time *t*+1 it’s likely to also be the case that 1% of the world’s machine intelligences share my values and own 1% of the world’s resources. 2. A capital holder with 1% of the world’s resources owns about 1% of the world’s machine intelligences, and so also captures 1% of the world’s labor income. 3. In a world where most “individuals” are machine intelligences, who can argue as persuasively as humans and appear as sympathetic as humans, there is a good chance that (at least in some states) machine intelligences will be able to secure significant political representation. Indeed, in this scenario the complete oppression of machine intelligences would be something of a surprisingly oppressive regime. If machine intelligences secure equal representation, and if 1% of machine intelligences share my values, then there is no particular reason to expect redistribution or other political maneuvering to reduce the prevalence of my values. 4. In a world where machine intelligences are able to perfectly replace a human as a manager, the challenge of finding a successor with similar values may be much reduced: it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. Once technology is sufficiently stable, the same manager (or copies thereof) may persist indefinitely without significant disadvantage. So at least on a very simple analysis, I think there is a good chance that a world with human-level machine intelligence would be described by the naïve model. Another possible objection is that a capital owner who produces some resources exerts imperfect control over the outputs–apart from the complications introduced by humans, there are also random and hard-to-control events that prevent us from capturing all of the value we create. But on closer inspection this does not seem to be such a problem: * If these “random losses” are real losses, which are controlled by no one, then this can simply be factored into the growth rate. If every year the world grows 2% but 1% of all stuff is randomly destroyed, then the real growth rate is 1%. This doesn’t really change the conclusions. * If these “random losses” are lost to me but recouped by someone else, then the question is “who is recouping them?” Presumably we have in mind something like “a random person is benefiting.” But that just means that the returns to being a random person, on the lookout for serendipitous windfalls at the expense of other capital owners, have been elevated. And in this world, a “random person” is just another kind of capital you can own.  A savvy capital owner with *x*% of the world’s resources will also own *x*% of the world’s random people. The result is the same as the last case: someone who starts with *x*% of the resources can maintain *x*% of the resources as the world grows. Implications ------------ If we believe this argument, then it suggests that the arrival of machine intelligence may lead to a substantial crystallization of influence. By its nature, this would be an event with long-lasting consequences. Incidentally, it would also provide the kind of opportunity for influence I was discussing in my [last post](https://rationalaltruist.com/2014/05/04/we-can-probably-influence-the-far-future/). 
I find this plausible though very far from certain, and I think it is an issue that deserves more attention. Perhaps most troubling is the possibility that, in addition to prompting such crystallization, the transition to machine intelligences may also be an opportunity for influence to shift considerably—perhaps in large part to machines with alien values. In Nick Bostrom’s [taxonomy](http://www.nickbostrom.com/existential/risks.html), this suggests that we might be concerned about the world ending in a “whimper” rather than a “bang”: even without any particular catastrophic or disruptive event, we may nevertheless irreversibly and severely limit the potential of our future.

It is tempting to be cosmopolitan about the prospect of machine intelligences owning a significant share of the future, asserting their fundamental right to autonomy and self-determination. But our cosmopolitan attitude is itself an artifact of our preferences, and I think it is unwise to expect that it (or anything else we value) will be automatically shared by machine intelligences, any more than it is automatically shared by bacteria, self-driving cars, or corporations.
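As an editorial aside: the dynamics of the naïve capital-accumulation model described above are easy to reproduce in a toy simulation. The sketch below is an illustration added here, not code from the original post; the growth and consumption rates are arbitrary assumptions chosen only to make the convergence visible.

```python
# Toy illustration of the naive capital-accumulation model: values that
# reinvest everything ("patient") gradually crowd out values that consume
# part of their holdings ("impatient"), after which the distribution of
# resources is effectively static. All parameters are illustrative.

def simulate(periods: int = 1000, growth: float = 0.02, consume: float = 0.01) -> None:
    patient, impatient = 1.0, 99.0  # patient values start with 1% of all resources
    for t in range(periods + 1):
        if t % 250 == 0:
            share = patient / (patient + impatient)
            print(f"t={t:4d}  patient share = {share:6.1%}")
        patient *= 1 + growth                      # reinvest all returns
        impatient *= (1 + growth) * (1 - consume)  # consume 1% of holdings each period

simulate()
# Approximate output:
# t=   0  patient share =   1.0%
# t= 250  patient share =  11.1%
# t= 500  patient share =  60.6%
# t= 750  patient share =  95.0%
# t=1000  patient share =  99.6%
```

Under these assumptions the patient values’ share climbs from 1% toward 100% and then stays put. The leakages enumerated earlier (wages, taxes, imperfect succession) are exactly what prevents this crystallization in a human economy, and what the post argues machine intelligence might remove.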
"Warning: sappy personal anecdotes ahead! See also Eliezer's Coming of Age story, SarahC's Reflections on rationality a year out, and Alicorn's Polyhacking. On January 11, 2007, at age 21, I finally whispered to myself: There is no God. I felt the world collapse beneath me. I'd been raised to believe that God was necessary for meaning, morality, and purpose. My skin felt cold and my tongue felt like cardboard. This was the beginning of the darkest part of my life, but the seed of my later happiness. I grew up in Cambridge, Minnesota — a town of 5,000 people and 22 Christian churches (at the time). My father was (and still is) pastor of a small church. My mother volunteered to support Christian missionaries around the world. I went to church and Bible study every week. I prayed often and earnestly. For 12 years I attended a Christian school that taught Bible classes and creationism. I played in worship bands. As a teenager I made trips to China and England to tell the godless heathens there about Jesus. I witnessed miraculous healings unexplained by medical science. And I felt the presence of God. Sometimes I would tingle and sweat with the Holy Spirit. Other times I felt led by God to give money to a certain cause, or to pay someone a specific compliment, or to walk to the cross at the front of my church and bow before it during a worship service. Around age 19 I got depressed. But then I read Dallas Willard’s The Divine Conspiracy, a manual for how to fall in love with God so that following his ways is not a burden but a natural and painless product of loving God. And one day I saw a leaf twirling in the wind and it was so beautiful — like the twirling plastic bag in American Beauty — that I had an epiphany. I realized that everything in nature was a gift from God to me. Grass, lakes, trees, sunsets — all these were gifts of beauty from my Savior to me. That's how I fell in love with God, and he delivered me from my depression. I moved to Minneapolis for college and was attracted to a Christian group led by Mark van Steenwyk. Mark’s small group of well-educated Jesus-followers are 'missional' Christians: they think that loving and serving others in the way of Jesus is more important than doctrinal truth. That resonated with me, and we lived it out with the poor immigrants of Minneapolis. Doubt By this time I had little interest in church structure or doctrinal disputes. I just wanted to be like Jesus to a lost and hurting world. So I decided I should try to find out who Jesus actually was. I began to study the Historical Jesus. What I learned, even when reading Christian scholars, shocked me. The gospels were written decades after Jesus' death, by non-eyewitnesses. They are riddled with contradictions, legends, and known lies. Jesus and Paul disagreed on many core issues. And how could I accept miracle claims about Jesus when I outright rejected other ancient miracle claims as superstitious nonsense? These discoveries scared me. It was not what I had wanted to learn. But now I had to know the truth. I studied the Historical Jesus, the history of Christianity, the Bible, theology, and the philosophy of religion. Almost everything I read — even the books written by conservative Christians — gave me more reason to doubt, not less. What preachers had taught me from the pulpit was not what they had learned in seminary. My discovery of the difference had just the effect on me that conservative Bible scholar Daniel B. 
Wallace predicted: "The intentional dumbing down of the church for the sake of filling more pews will ultimately lead to defection from Christ."

I started to panic. I felt like my best friend — my source of purpose and happiness and comfort — was dying. And worse, I was killing him. If only I could have faith! If only I could unlearn all these things and just believe. I cried out with the words from Mark 9:24, "Lord, help my unbelief!"

I tried. For every atheist book I read, I read five books by the very best Christian philosophers. But the atheists made plain, simple sense, and the Christian philosophers were lost in a fog of big words that tried to hide the weakness of their arguments. I did everything I could to keep my faith. But I couldn’t do it. I couldn’t force myself to believe what I knew wasn’t true. So I finally let myself whisper the horrifying truth out loud: There is no God.

I told my dad, and he said I had been led astray because I was arrogant to think I could get to truth by studying — I was "relying too much on my own strength." Humbled and encouraged, I started a new quest to find God. I wrote on my blog:

I’ve been humbled. I was “doing discipleship” in my own strength, because I thought I was smart enough and disciplined enough. [Now] having surrendered my prideful and independent ways to him, I can see how my weakness is God’s strength. I’ve repented. I was deceived because I did not let the Spirit lead me into truth. Now I ask for God’s guidance in all quests for knowledge and wisdom. I feel like I’ve been born again, again.

It didn’t last. Every time I reached out for some reason — any reason — to believe, God simply wasn’t there. I tried to believe despite the evidence, but I couldn’t believe a lie. Not anymore. No matter how much I missed him, I couldn’t bring Jesus back to life.

New Joy and Purpose

Eventually I realized that millions of people have lived lives of incredible meaning, morality, and happiness without gods. I soon realized I could be more happy and moral without God than I ever was with him.

In many ways, I regret wasting more than 20 years of my life on Christianity, but there are a few things of value I took from my life as an evangelical Christian. I know what it’s like to be a true believer. I know what it’s like to fall in love with God and serve him with all my heart. I know what it’s like to experience his presence. I know what it’s like to isolate one part of my life from reason or evidence, and I know what it’s like to think that is a virtue. I know what it’s like to be confused by the Trinity, the failure of prayers, or Biblical contradictions, but to genuinely embrace them as the mystery of God. I know what it’s like to believe God is so far beyond human reason that we can’t understand him, but at the same time to fiercely believe I know the details of how he wants us to behave.

I can talk to believers with understanding. I've experienced God the same way they have. Perhaps more important, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it. I also have a gut understanding of how wonderful it can be to just say "oops" already and change your mind. I suspect this is why it was so easy for me, a bit later, to quickly change my mind about free will, about metaethics, about political libertarianism, and about many other things.
It was also why I became so interested in the cognitive science of how our beliefs can get so screwy, which eventually led me to Less Wrong, where I finally encountered that famous paragraph by I.J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

I remember reading that paragraph and immediately thinking something like: Woah. Umm... yeah... woah. That... yeah, that's probably true. But that's crazy because... that changes fricking everything.

So I thought about it for a week, and looked up the counterarguments, and concluded that given my current understanding, an intelligence explosion was nearly inevitable (conditional on continued basic scientific progress) and that everything else I could spend my life working on was trivial by comparison. So I mostly stopped blogging about philosophy of religion, read through all of Less Wrong, studied more cognitive science and AI, quit my job in L.A., and moved to Berkeley to become a visiting fellow with Singularity Institute.

The Level Above My Own

My move to Berkeley was a bit like the common tale of the smartest kid in a small town going to Harvard and finding out that he's no longer the smartest person in the room. In L.A., I didn't know anyone as devoted as I was to applying the cognitive science of rationality and cognitive biases to my thinking habits (at least, not until I attended a few Less Wrong meetups shortly before moving to Berkeley). But in Berkeley, I suddenly found myself among the least mature rationalists in my social world.

There is a large and noticeable difference between my level of rationality and the level of Eliezer Yudkowsky, Carl Shulman, Anna Salamon, and several others. Every week I learn new rationality techniques. Friends help me uncover cached beliefs about economics, politics, and utilitarianism. I've begun to use the language of anti-rationalization and Bayesian updates in everyday conversation. In L.A. I had become complacent because my level of rationality looked relatively impressive to me. Now I can see how far above my level humans can go.

I still have a lot to learn, and many habits to improve. Living in a community with rationalist norms is a great way to do those things. But a 4-year journey from evangelical Christian missionary to Singularity Institute researcher writing about rationality and Friendly AI is... not too shabby, I suppose. And that's why I'm glad some people are writing about atheism and the basics of rationality. Without them, I'd probably still be living for Jesus.

What you see above is a segment from a LessWrong user's post. What is this text talking about?

This text is a personal narrative of an individual's journey from being a devout Christian to becoming an atheist and rationalist. The author details their upbringing in a religious family and community, their initial struggles with doubt, and their eventual acceptance of atheism after extensive research and self-reflection. The narrative then shifts to the author's discovery of rationality and cognitive science, and their growing interest in artificial intelligence.
The author eventually becomes a researcher at the Singularity Institute, a think tank focused on the risks of artificial general intelligence. They express gratitude to those who write about atheism and rationality, crediting those writings with enabling their own transformative journey.

Could you further elaborate on the role of cognitive science in the author's transition from religious belief to atheism and rationalism?

Certainly. Cognitive science, the study of thought, learning, and mental organization, played a pivotal role in the author's transition from religious belief to atheism and rationalism. When the author began to question their religious beliefs, they turned to cognitive science to understand how beliefs are formed and how they can be influenced by cognitive biases, which are systematic errors in thinking that affect the decisions and judgments people make.

Cognitive science helped the author understand that personal experiences and feelings, which they had previously attributed to the presence of God, could be explained through natural cognitive processes. For example, the author mentions experiencing something personally, being confident of it, and later realizing they were completely wrong about it. This realization likely came from understanding cognitive biases such as confirmation bias, the tendency to interpret new evidence as confirmation of one's existing beliefs or theories.

Furthermore, cognitive science provided the author with tools and techniques to critically evaluate their beliefs and the evidence supporting them. This led to a shift from accepting religious teachings on faith to demanding empirical evidence and logical coherence in their belief system, a key aspect of rationalism.

In the end, the author's study of cognitive science did not just lead them to atheism; it also led to a commitment to rationality as a way of thinking and making decisions. They even applied these principles to their career, moving into research related to artificial intelligence, where understanding cognitive processes is crucial.

Explain the concept of confirmation bias and how it might have influenced the author's initial religious beliefs.

Confirmation bias is a cognitive bias that causes people to favor information that confirms their existing beliefs, while ignoring or discounting information that contradicts them. This bias can influence people's perceptions and decision-making in many areas of life, including religion.

In the context of the author's initial religious beliefs, confirmation bias could have played a significant role. For example, when the author felt a sense of peace, love, or community in their religious practices, they might have interpreted these feelings as confirmation of the existence of God or the truth of their religious teachings. Similarly, when they witnessed events they couldn't explain, they might have attributed them to divine intervention, further confirming their beliefs.

Confirmation bias could also have influenced the author's approach to religious texts: they might have focused on the parts of those texts that aligned with their beliefs, while overlooking or rationalizing the parts that didn't.

Moreover, confirmation bias can lead to a self-reinforcing cycle. Once a belief is in place, people tend to seek out new information that supports it, which in turn strengthens the belief, leading them to seek out even more supporting information.
However, as the author began to study cognitive science and understand confirmation bias, they started to question this cycle. They realized that however vivid their personal experiences had been, the divine interpretation they had placed on those experiences was not necessarily correct. This understanding was a key factor in the author's transition from religious belief to atheism and rationalism.