id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
2aecd7cb-d41b-44c8-95a4-fd09ecb4fcd7 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Helsinki Book Blanket Meetup
Discussion article for the meetup : Helsinki Book Blanket Meetup
WHEN: 16 August 2014 03:00:00PM (+0300)
WHERE: MANNERHEIMINTIE 13 A, Helsinki
Location changed to the large lobby-cafe of [http://www.musiikkitalo.fi/en] because of weather
Posted also as an event on Less Wrong Finland's Facebook group at https://www.facebook.com/events/632882116808767/ – viewing might require joining the group at https://www.facebook.com/groups/lw.finland/
––––
[Less Wrong Finland meetup]
Having recently grown into a hundred-member group, we will hold our (first!) Book Blanket Meetup which should work regardless of how many people attend!
The idea is that you can add books through a simple Google form below, and that the added books will be made into cards like those in the event photo. The resulting deck of book-cards will be brought to the meetup and used for discovering new books, and whether people have read the same books and how others felt about them — and perhaps for trading them in an attempt to build a Hand of Sanity in case you end up deserted on some island. (Would you risk it and pick highly-praised books you haven’t read, or rather resort to old favorites you know to be full of deep meaning? Come find out!)
The form (which seemed to me anonymous even if signed in to Google): https://docs.google.com/forms/d/1OCPK0_GLHBFEnB_WQ6e8lfFIbPlP1y7PahuQK9abeZg/viewform?usp=send_form)
Also, you're just as welcome even if you haven't read or submitted anything! Deserted islands are serious considerations regardless of reading history, and in any case others can practice explaining why some books could be good to read. And of course you can just ask others about books they admit having read, though everyone's bound to forget old books. Time to resurface memories?
––––
Time?: 08-16-2014 15:00:00 (UTC+03)
Place?: CHANGED: To be weather-safe, we're going to meet at the large lobby-cafe of Musiikkitalo (Mannerheimintie 13 A). To find me, I'll have an orange hoodie.
Discuss |
64ffd59c-df02-4c58-94fc-4d5f2c6c4315 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Latent Adversarial Training
The Problem
===========
We’d like to train models to be robustly safe, even in environments that may fall well outside of the training distribution. Unfortunately all we get to work with is the training distribution, which makes ensuring robust generalization difficult.
Deception is an example of this concern. Models may be well-behaved in the training environment because they realize it is a training environment, but behave poorly in deployment because they notice the distributional shift.
An Approach
===========
Background
----------
It would be great if we could generate scenarios where the model behaves badly. For instance if the model is deceptive we’d like to generate inputs to trick it into thinking it’s been deployed so that it reveals that deception.
[Adversarial Training](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d?gi=94f6d1c3d161) (AT), and [Relaxed Adversarial Training](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) (RAT), are approaches to this problem. Very briefly, in AT, we search for inputs that generate large loss, while in RAT we search for distributions of inputs or other more general evidence that the model is capable of generating large loss.
The approach I want to explore, called *Latent Adversarial Training* (LAT), is a form of RAT.
Concept
-------
Suppose we have two models, the Agent and the Surgeon. The Agent is the model we’re trying to train, and the Surgeon is going to be its adversary. The training proceeds as follows:
1. The Agent processes input x.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
from the environment, producing latent state s(x) and output y(s).
2. The Surgeon takes inputs (x,s,y) and searches for a modification s→s+δs such that y(s+δs) does poorly on the loss function L(y).
3. We update the Agent based on L(y(s+δs)), treating the Surgeon’s modifications as constants.
The idea is that the Surgeon gets to meddle with the latent state of the Agent with full knowledge of what the Agent was planning on doing, and we train the Agent to be robust against this kind of attack.
Implementation
--------------
Concretely, suppose the Agent is a neural network. The latent state s consists of the activations of all the neurons and the Surgeon gets to specify modifications like “this neuron gets +5”.
How does the Surgeon decide what to do? There are a number of options. It could examine the gradient of the Agent’s loss with respect to each activation, then push on the activations that give it the most loss. Or it could randomly generate its modifications and use some other optimization process like the Metropolis-Hastings algorithm to identify loss-maximizing options.
In general I’m imagining the Surgeon as more of a [fuzzer](https://en.wikipedia.org/wiki/Fuzzing) with access to program memory and less of a complicated model. In principle it could be some complicated neural network, but then I’d want it to be simpler than the Agent, possibly of bounded size, so I could be quite sure I understood what it was doing. The last thing I want is a misaligned Surgeon trying to achieve its own mesa-objective by programming the Agent to be Surgeon-aligned and deceptive against humans.
I suspect it’s possible to have the Surgeon pick useful modifications that generate meaningful changes in the loss without having the Surgeon’s complexity scale with that of the Agent, but that’s just an intuition, and if it fails then this approach looks a lot less appealing.
Core Ideas
----------
The key difference between LAT and Adversarial Training is that the Surgeon gets to directly manipulate the Agent’s inner state, which makes the Surgeon’s job much easier than in the ordinary adversarial training setup.
Take the [classic example](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) of an Agent that only defects if it sees a valid solution to a hard cryptographic problem: the Surgeon doesn’t need to identify an input that causes the Agent to defect, it only needs to identify that somewhere in the middle of the Agent there is a switch which, if flipped, causes the Agent to defect and generate massive loss.
In a sense the Surgeon plays the role of intrusive thoughts. It can force the Agent to think of something bad (e.g. hurting humans) even if the input is perfectly benign. We can then train the Agent to have safeguards, so it does not act on such thoughts.
Challenges
==========
The Surgeon needs constraints
-----------------------------
The first challenge is that LAT works too well unless we constrain the kinds of modifications the Surgeon can impose on the Agent’s latent state. For instance, if our goal is to make a language model never output the word “kill”, the Surgeon can always force a large loss by strongly shifting the activations in the final layer towards the tokens that generate the word “kill”.
So we need to bind the Surgeon’s hands enough to make its task non-trivial, but not so much that it fails to force the Agent to be robustly safe. How can we do this?
A special case is that the Surgeon shouldn’t be able to meddle too close to the output layer. That clearly makes its job too easy in ways that don’t help with safety. We probably want to forbid the Surgeon from touching the last few layers in the model, and as long as that remains “a few layers” and not something that scales up with model size it seems plausible that interpretability tools can tell us if anything fishy is happening in the parts the Surgeon can’t touch.
Beyond that, it seems important to force the Surgeon’s modifications to be sparse. This probably means restricting it so that it cannot modify more than e.g. 10% of the activations in a given layer. Otherwise it would be possible to completely change the model’s latent state from that point on, which again may make the job too easy.
Similarly, it’s probably important to bound the size of the Surgeon’s modifications (e.g. +5 is okay, but +109 is not). A sufficiently extreme modification to even a single activation can probably break the Agent’s output in unhelpful ways.
These are just the constraints that I can think of. We may need more. Fortunately the failure mode here is easy to notice because it looks like the Agent *always* doing badly after the Surgeon fiddles with its latent state. If the Surgeon always wins that’s a problem because it means we’re not making progress. Eventually we want an Agent that can robustly do good things despite the Surgeon’s meddling.
The precise constraints we need to prevent this failure mode can likely be determined empirically on small Agents. The failure mode of “the Surgeon hijacks the Agent by doing something extreme and obviously bad” probably looks similar for small and large language models, for instance, and I suspect does not change in complexity at scale. I could be wrong about that, but that’s my intuition.
It’s also possible that we can start with many constraints and, as the Agent starts to be robustly safe despite the Surgeon’s meddling, untie the Surgeon’s hands and give it more room to play. A sufficiently free Surgeon should always win over even very robustly safe Agents, but it’s possible that through the training process the Agent can learn to handle quite strong Surgeons.
The Agent can be inscrutable
----------------------------
Latent Adversarial Training only works if the Surgeon can identify places where it can modify the Agent’s latent state to cause problems. If we constrain the Surgeon so this task isn’t trivial, it could be that it becomes impossible.
For instance, it could be that the calculations the Agent does are encrypted, so that it is very hard to control its outputs robustly by changing distant parts of the latent state. This seems unlikely for models that aren’t actively trying to deceive us, as a number of different kinds of models have revealed [rich conceptual structure](https://distill.pub/2020/circuits/zoom-in/) in their layers. Still, we’re worried about deceptive models so this seems like an important challenge to address.
I think the only hope here lies in the Surgeon forcing the model to be robustly safe before it learns to deceive. Once the model is deceptive its really game-over.
Is this possible? Maybe, and it almost certainly depends on the prior the model is sampled from.
My weak suspicion is that early deception probably looks like a subcircuit that detects if the model is in training, connected to a switch that flips between good and bad behavior. That switch seems like a good target for the Surgeon, so if it finds it early on (before the model has gradient-hacked its way into a more inscrutable structure) it could force the model to defect while in training, revealing large losses and allowing us to remedy the situation. Early deception might be more obfuscated than this, but probably not so obfuscated that the Surgeon can’t find the pattern of latent state corresponding to “defect on deployment”.
Summary
=======
Latent Adversarial Training is an approach where an adversary (the Surgeon) attempts to force a model (the Agent) to behave badly by altering its latent state. In this way we can identify ways that the model may be capable of behaving badly, and train the model to be more robustly safe even on very different distributions.
The core task of LAT is much easier than that of regular Adversarial Training, indeed so much easier that one of the key challenges to making LAT work is placing enough constraints on the adversary to make its job non-trivial, while not placing so many as to make it impossible.
A further challenge is that the adversary itself needs to be safe, which in practice likely means the adversary needs to be well-understood by humans, more akin to a fuzzer than an ML model. In particular this means that the adversary must be much simpler than any large model it attacks. This may not be a problem, it may suffice that the adversary has access to the full internal state of the model, but it is a limitation worth bearing in mind.
Finally, LAT cannot make a model safe once that model has developed robust deception, so it must be employed from the beginning to (ideally) prevent deception from taking root.
*Thanks to Evan Hubinger and Nicholas Schiefer for discussions on LAT.* |
18729fea-7727-4012-98ae-349fcd2da021 | trentmkelly/LessWrong-43k | LessWrong | AI #79: Ready for Some Football
I have never been more ready for Some Football.
Have I learned all about the teams and players in detail? No, I have been rather busy, and have not had the opportunity to do that, although I eagerly await Seth Burn’s Football Preview. I’ll have to do that part on the fly.
But oh my would a change of pace and chance to relax be welcome. It is time.
The debate over SB 1047 has been dominating for weeks. I’ve now said my peace on the bill and how it works, and compiled the reactions in support and opposition. There are two small orders of business left for the weekly. One is the absurd Chamber of Commerce ‘poll’ that is the equivalent of a pollster asking if you support John Smith, who recently killed your dog and who opponents say will likely kill again, while hoping you fail to notice you never had a dog.
The other is a (hopefully last) illustration that those who obsess highly disingenuously over funding sources for safety advocates are, themselves, deeply conflicted by their funding sources. It is remarkable how consistently so many cynical self-interested actors project their own motives and morality onto others.
The bill has passed the Assembly and now it is up to Gavin Newsom, where the odds are roughly 50/50. I sincerely hope that is a wrap on all that, at least this time out, and I have set my bar for further comment much higher going forward. Newsom might also sign various other AI bills.
Otherwise, it was a fun and hopeful week. We saw a lot of Mundane Utility, Gemini updates, OpenAI and Anthropic made an advance review deal with the American AISI and The Economist pointing out China is non-zero amounts of safety pilled. I have another hopeful iron in the fire as well, although that likely will take a few weeks.
And for those who aren’t into football? I’ve also been enjoying Nate Silver’s On the Edge. So far, I can report that the first section on gambling is, from what I know, both fun and remarkably accurate.
TABLE OF CONTENTS
1. Introduction.
|
db0cd6fa-22f5-4d1d-a0b2-9b23f4288932 | trentmkelly/LessWrong-43k | LessWrong | Undiscriminating Skepticism
Tl;dr: Since it can be cheap and easy to attack everything your tribe doesn't believe, you shouldn't trust the rationality of just anyone who slams astrology and creationism; these beliefs aren't just false, they're also non-tribal among educated audiences. Test what happens when a "skeptic" argues for a non-tribal belief, or argues against a tribal belief, before you decide they're good general rationalists. This post is intended to be reasonably accessible to outside audiences.
I don't believe in UFOs. I don't believe in astrology. I don't believe in homeopathy. I don't believe in creationism. I don't believe there were explosives planted in the World Trade Center. I don't believe in haunted houses. I don't believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.
If you know nothing else about me but this, how much credit should you give me for general rationality?
Certainly anyone who was skillful at adding up evidence, considering alternative explanations, and assessing prior probabilities, would end up disbelieving in all of these.
But there would also be a simpler explanation for my views, a less rare factor that could explain it: I could just be anti-non-mainstream. I could be in the habit of hanging out in moderately educated circles, and know that astrology and homeopathy are not accepted beliefs of my tribe. Or just perceptually recognize them, on a wordless level, as "sounding weird". And I could mock anything that sounds weird and that my fellow tribesfolk don't believe, much as creationists who hang out with fellow creationists mock evolution for its ludicrous assertion that apes give birth to human beings.
You can get cheap credit for rationality by mocking wrong beliefs that everyone in your social circle already believes to be wrong. It wouldn't mean that I have any ability at all to notice a wrong belief that the people around me believe to be right, or vice versa - to further |
d5d72575-d2bb-4dec-9deb-dece2acd0fc0 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta Lesswrong September Meetup (2nd of 2)
Discussion article for the meetup : Atlanta Lesswrong September Meetup (2nd of 2)
WHEN: 28 September 2013 06:00:00PM (-0400)
WHERE: 2388 Lawrenceville Hwy. Apt L. Decatur, GA 30033
Come join us for the second meetup for the month of September! We'll be doing our normal eclectic mix of self-improvement brainstorming, educational mini-presentations, structured discussion, unstructured discussion, and social fun and games times!
Please contact me if you have cat allergies, as our meeting space has cats. Incredibly cute cats.
And check out ATLesswrong's facebook group, if you haven't already: https://www.facebook.com/groups/100137206844878/ where you can connect with Atlanta Lesswrongers and suggest a topics for discussion at this meetup!
Discussion article for the meetup : Atlanta Lesswrong September Meetup (2nd of 2) |
0e57580f-cabe-4aec-b4d3-90c9e55fbf0b | trentmkelly/LessWrong-43k | LessWrong | Inscrutable Ideas
David Chapman has issued something of a challenge to those of us thinking in the space of what he calls the meta-rational, many people call the post-modern, and I call the holonic. He thinks we can and should be less opaque, more comprehensible, and less inscrutable (specifically less inscrutable to rationalism and rationalists).
Ignorant, irrelevant, and inscrutable
I have changed my mind. It should go without saying that rationality is better than irrationality. But now I realize…meaningness.com
I’ve thought about this issue a lot. My previous blogging project hit a dead end when I reached the point of needing to explain holonic thinking. Around this time I contracted obscurantism and spent several months only sharing my philosophical writing with a few people on Facebook in barely decipherable stream-of-consciousness posts. But during this time I also worked on developing a pedagogy, manifested in a self-help book, that would allow people to follow in my footsteps even if I couldn’t explain my ideas. That project produced three things: an unpublished book draft, one mantra of advice, and a realization that the way can only be walked, not followed. So when I returned to blogging here on Medium my goal was not to be deliberately obscure, but also not to be reliably understood. I had come to terms with the idea that my thoughts might never be fully explicable, but I could at least still write for those without too much dust in their eyes.
The trouble is that holonic thought is necessarily inscrutable without the use of holons, and history shows this makes it very difficult to teach or explain holonic thinking to others. For example, the first wave of post-modernists like Foucault, Derrida, and Lyotard applied Heidegger’s phenomenological epistemology to develop complex, multi-faceted understandings of history, literature, and academic culture. Unfortunately they did this in an environment of high modernism where classical rationalism was taken for granted, so the |
e0f57699-2e83-4890-9f2a-1a4bd9a68ff0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Limits of Automation
>
> Too much time is wasted because of the assumption that methods already in existence will solve problems for which they were not designed; too many hypotheses and systems of thought in philosophy and elsewhere are based on the bizarre view that we, at this point in history, are in possession of the basic forms of understanding needed to comprehend absolutely anything. — *Thomas Nagel, The View from Nowhere*.
>
>
>
One way to think about the challenges we’re facing in our day-to-day and professional lives is to classify them based on our knowledge of our knowledge.
Put another, less mind-bending way: do we know how to solve it? And if so, do we know how we’re doing it? Can we explain everything in such exact detail so that even a machine can imitate our process, or are there blanks that have to be filled in by experience? This neatly categorizes any problem to one of four kinds.

The **Known Knowns** are the things we know how to solve and exactly how we’re solving them. These problems are reducible to their individual constituent parts and can be automated as factories, recipes, and algorithms. The solutions are straightforward, even if sometimes finicky and exacting. They lend themselves to mnemonical, atomic, modularized forms. They are what we hope to arrive at, so that we can give the keys to a well-oiled machine and continue with the more interesting parts of our lives.
The **Known Unknowns** are the things we know will forever be out of reach. These are the inherent limits, difficulties, and impossibilities of physical, mathematical, and computational reality — the Bekenstein bound, Gödel’s incompleteness theorems, and computational complexity separations are just some of the results that perpetually obstruct our way. We can only accept them and work around them.
The **Unknown Knowns**, or *I know it when I see it*, are the things we do know how to solve, but aren’t exactly sure how we’re doing so. These are problems of both high-level subjective and cultivated taste and low-level sensorimotor skills. (Hence the dangerous apple-bite economical polarization in which automation forces the comfortable center into becoming either a cog or a maverick).
There’s taste: picking the right song for the mood, the right word for a sentence, the right color for a painting. And sense: reading facial expressions, navigating in a packed crowd, deciding if a pause is a break for thought or a space for response. The doors leading to the engine room of the mechanics involved seem forever barred to us. We can only observe and probe how we handle these problems indirectly and from the outside.
Trying to outline these problems we see our pen taking many turns. There’s a lot of ifs and buts and not necessarilies and sometimes, and always there’s those pesky special cases, a slight resemblance of sorts, and tiny critical details preventing any kind of elegance from forming. These problems refuse to submit to a digestible explanation, a clear story, a napkin calculation, an equation we can do in our heads, or a categorization that can be easily shelved and retrieved. They are not succinct nor simple to describe.
Up until recently the unknown knowns were the prerogative of humans, laborers and curators alike. Machine Learning has made it possible for computers to carve their own niche here using optimization of universal function approximators.
With enough labeled examples and computational capacity there’s no such task we cannot automate an imitation of: collect a list of what’s this and what’s that, sketch some dividing lines at random to start, and gradually move the lines around — squeezing, scaling, rotating, smoothing, and bending — progressively getting a better approximation, a tighter shape around what we already know to be true. Our hope by the end of this process is to have a shape that hugs our existing knowledge in such a way that allows the right kind of as-yet-unseen examples to comfortably fit, while excluding the rest.

Can this process of automation based only on assumption and past experience capture everything we don’t already know? Can it shed light on unseen variations, hidden explanations, and rare combinations? Can it create or discover anything new? The answer is a resounding, uh, maybe, sometimes.
Lucky for us, many of the situations we encounter are similar to those we’ve already experienced. Whatever difference there is does not completely alter the situation and can usually be resolved by playing around with the same known ingredients. We can interpolate from the past, even extrapolate, and usually get things right when the foundations are stable and unchanging.
But, as the usual disclaimer goes: past performance is no guarantee of future results. Fortunately or not, our reality is an ever-growing ever-changing non-ergodic uncontained self-referential mess. There’s no guarantee that anything — and certainly not everything — will remain the same. In life, reactive predictions can only get us so far. The landscape is always evolving: following along means being a step behind. The story of the future cannot be told today.
The **Unknown Unknowns** are the black swans, the things we don’t even know that we don’t know. The things we can’t even give an example of. By the sheer act of thinking about them we give them form and make them imaginable, known, and expected. There is no specific constructive instance to point at. The specifics of the unknown can only be talked about subjectively and rationalized retrospectively. Like capturing a wild animal, any instance you find is already in the process of being tamed.
No matter how much we advance, there is always a constant stream of startlingly fresh unknown unknowns to find. And even though most will forever remain in darkness and we’ll never be aware of their existence — let alone figure them out — there is a smaller infinity always at the cusp, ripe for discovery. In time, we’ll get to know them, get to have a feeling of what is and what isn’t about them, and, with considerable effort, articulate the why and automate the how.
As we better our grasp, the specific instances we encounter move toward the knowable end of the continuum, but their class can remain an everlasting source of new experiences. There will always be new clichés. The memes and neologisms that make you feel old and confused will not stop. Always new ways for people to express themselves: new stories to tell, new art to inspire, new music to touch. Though our worldview is now clear on some laws and theorems that stumped the brightest of minds for centuries, we will always have more mathematical truths and proofs to discover.

We can imagine the growth of knowledge as a sphere slowly pushing outwards in all directions. At its core is the hard-fought solidified knowledge we’ve assimilated to such a degree we can wield it as we please. The mantle is more fluid: techniques are given names, processes are formed, rules of thumb get enshrined — all on their way to being set in stone. Further up, there’s wilderness at the surface. Here no rules apply, for if we find a law that describes a phenomena we could quickly drag it below. That is our goal: To cast the unknown in malleable forms.
The unknown unknowns are hard to find but can be easy to understand. We can hardly imagine how unobvious these things were, how much skill and perseverance were involved in figuring them out. It took generations of brilliant effort to come up with the things we now expect high-schoolers to master. We ask ourselves how we could not foresee the now oh-so-obvious market bubble or the second-order effects of a foreign policy yet we forget that only in the eighteenth century, about a hundred years after calculus was first formally described, did the concept of a sandwich enter Europe.
So far, the unknown unknowns have only been tackled successfully by evolutionary forces and the human mind, using insight and serendipity. The separation, where our methods of automation fail, and only mind and life succeed, is at the frontier of the new. Here, the unexpected is a recurring theme, and there’s no hope in imitation, whether it mimics a static pose or a vector of motion. Following a path is not charting a course. The data is not the data-generating process. Imitation might be good enough, until it isn’t.
Back on the comfortable side of the line, where the known is on its way to being controlled and automated, there is little point to flair, invention, and exploration, as all you need once found is to follow the recorded steps. Over time, for any specific thing, automation always wins. And that’s okay. That’s what we want. If you already know perfectly well what you’re doing, what’s the point of doing it? There’s plenty else to discover.
>
> As Popper has pointed out, the future course of human affairs depends on the future growth of knowledge. And we cannot predict what specific knowledge will be created in the future — because if we could, we should by definition already possess that knowledge in the present. — *David Deutsch, The Fabric of Reality*.
>
>
> |
6c5014a6-7e90-41ea-8252-efe6b0a15d80 | trentmkelly/LessWrong-43k | LessWrong | Dath Ilani Rule of Law
Minor spoilers for mad investor chaos and the woman of asmodeus (planecrash Book 1).
Also, be warned: citation links in this post link to a NSFW subthread in the story.
Criminal Law and Dath Ilan
> When Keltham was very young indeed, it was explained to him that if somebody old enough to know better were to deliberately kill somebody, Civilization would send them to the Last Resort (an island landmass that another world might call 'Japan'), and that if Keltham deliberately killed somebody and destroyed their brain, Civilization would just put him into cryonic suspension immediately.
>
> It was carefully and rigorously emphasized to Keltham, in a distinction whose tremendous importance he would not understand until a few years later, that this was not a threat. It was not a promise of conditional punishment. Civilization was not trying to extort him into not killing people, into doing what Civilization wanted instead of what Keltham wanted, based on a prediction that Keltham would obey if placed into a counterfactual payoff matrix where Civilization would send him to the Last Resort if and only if he killed. It was just that, if Keltham demonstrated a tendency to kill people, the other people in Civilization would have a natural incentive to transport Keltham to the Last Resort, so he wouldn't kill any others of their number; Civilization would have that incentive to exile him regardless of whether Keltham responded to that prospective payoff structure. If Keltham deliberately killed somebody and let their brain-soul perish, Keltham would be immediately put into cryonic suspension, not to further escalate the threat against the more undesired behavior, but because he'd demonstrated a level of danger to which Civilization didn't want to expose the other exiles in the Last Resort.
>
> Because, of course, if you try to make a threat against somebody, the only reason why you'd do that, is if you believed they'd respond to the threat; that, intuitively, is what t |
daec93d9-de1d-4d58-b178-3b19e71248c8 | trentmkelly/LessWrong-43k | LessWrong | I’ve become a medical mystery and I don’t know how to effectively get help
To put it very mildly, I’m in a really bad way emotionally because of this and I’m running out of ideas. Any ideas are welcome—specific to symptoms, or just generally how to deal with having a problem and not being able tot get diagnosis or treatment.
TL;DR: I’ve been having paresthesias and neuropathic pain over most of my body, but in shifting locations, since March 30th of 2022. (Paresthesia is like the sensation of a limb falling asleep, neuropathic pain is like that but most of a painful prick or electric shock-like feeling). I’ve seen my PCP, ENT, psychiatrist, neurologist, dentist, physical therapist and message therapist, but I don’t have a diagnosis.
Longer version: On March 30th I work up with a knot (TMJ) in my left jaw and paresthesia in my left leg. I also had dry needling done on my left hip that previous day but a professional licensed practitioner (someone who should have not done a lot of damage).
My PCP did a lot of blood tests (including b12, b6 and ferritin serum). All of these were normal. He able to feel the knot in my left masseter (jaw muscle). Referred to ENT, Neurologist, Dentist.
I got an acrylic night guard for the TMJ from my dentist.
ENT was kind of a jerk and was like “I don’t believe in TMJ treatments other than night guards—not sure what paresthesias are about.”
PT was most helpful. Found limited mobility C3-C5 on left side of neck and limited mobility L5-S1. Thought to could be Radiculopathy—which still makes the most sense. He did some massage kind of things to try for improve the mobility, but didn’t seem to make much of a difference.
Neurologist— suggested it was because of bad sleep hygiene and anxiety (both of which I have and take Rxs for). and x-rayed my neck. While I was bent in on the tight side the radiologist didn’t see anything worrying. Doesn’t think it’s radiculopathy, but I don’t know that he’s really done enough imaging to rule this out.
I have an appointment to see a TMJ specialist but can’t get in to see hi |
ab32f796-4b44-4adf-9701-974223c5f2c8 | trentmkelly/LessWrong-43k | LessWrong | 23andMe potentially for sale for <$50M
It seems the company has gone bankrupt and wants to be bought and you can probably get their data if you buy it. I'm not sure how thorough their transcription is, but they have records for at least 10 million people (who voluntarily took a DNA test and even paid for it!). Maybe this could be useful for the embryo selection/editing people. This is several times cheaper than last April.
The market cap is presently $19.5M but I'm not sure how that relates to the sale price. |
c4f0706c-2376-44c0-bf3f-919e8b9524b4 | trentmkelly/LessWrong-43k | LessWrong | Planning Fallacy
The Denver International Airport opened 16 months late, at a cost overrun of $2 billion.1
The Eurofighter Typhoon, a joint defense project of several European countries, was delivered 54 months late at a cost of $19 billion instead of $7 billion.
The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.2
Are these isolated disasters brought to our attention by selective availability? Are they symptoms of bureaucracy or government incentive failures? Yes, very probably. But there’s also a corresponding cognitive bias, replicated in experiments with individual planners.
Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects.3 Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?
* 13% of subjects finished their project by the time they had assigned a 50% probability level;
* 19% finished by the time assigned a 75% probability level;
* and only 45% (less than half!) finished by the time of their 99% probability level.
As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”4
More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.
A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that
* Asking subjects for their predictions based on realistic “best gues |
344a7aa9-d65d-4cdb-9874-eae28448ab6d | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The American System and Misleading Labels
Today's post, The American System and Misleading Labels was originally published on 02 January 2008. A summary (taken from the LW wiki):
> The conclusions we draw from analyzing the American political system are often biased by our own previous understanding of it, which we got in elementary school. In fact, the power of voting for a particular candidate (which is not the same as the power to choose which candidates will run) is not the greatest power of the voters. Instead, voters' main abilities are the threat to change which party controls the government, or extremely rarely, to completely dethrone both political parties and replace them with a third.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Two-Party Swindle, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
763961cd-b3fb-48b2-be98-d235c7f3fde1 | trentmkelly/LessWrong-43k | LessWrong | [Question] Adoption and twin studies confounders
Adoption and twin studies are very important for determining the impact of genes versus environment in the modern world (and hence the likely impact of various interventions). Other types of studies tend to show larger effects for some types of latter interventions, but these studies are seen as dubious, as they may fail to adjust for various confounders (eg families with more books also have more educated parents).
But adoption studies have their own confounders. The biggest ones are that in many countries, the genetic parents have a role in choosing the adoptive parents. Add the fact that adoptive parents also choose their adopted children, and that various social workers and others have great influence over the process, this would seem a huge confounder interfering with the results.
This paper also mentions a confounder for some types of twin studies, such as identical versus fraternal twins. They point out that identical twins in the same family will typically get a much greater shared environment than fraternal twins, because people will treat them much more similarly. This is to my mind quite a weak point, but it is an issue nonetheless.
Since I have very little expertise in these areas, I was just wondering if anyone knew about efforts to estimate the impact of these confounders and adjust for them. |
20a0751d-e25b-444d-a0a8-89b4aaf84c9a | trentmkelly/LessWrong-43k | LessWrong | Partnership
This is the sixth speech in the wedding ceremony of Ruby & Miranda. See the Sequence introduction for more info. The speech was given by Miranda.
----------------------------------------
Oliver descends from podium.
Brienne: Ruby and Miranda have been part of this community and shared these values with us for a long time. The purpose of this ceremony is to declare an even stronger bond between the two of them: a partnership. I call upon Miranda to remind us what partnership means.
Miranda ascends podium.
IMAGE 5 PROJECTED ON PLANETARIUM
The nearby star-forming region around the star R Coronae Australis.
Miranda commences speech.
What is a marriage? Fundamentally, when two people choose to join forces together, to support and care for each other. In fact, a marriage is a special case of a whole class of situations where humans choose to band together. When two people do this, we call it a partnership. When many people do, we call it a community. We are a social species, a species that forms pair-bonds and alliances and teams. In some sense, this is what we are for. For many people, perhaps most people, the bonds they form with others fill a deep need–for security, comfort, safety, or many other things. The types of unions we know of are myriad. Families. Friendships. But a partnership of two, based on love, might be one of the purest examples–two people held together not by shared genes, not by convenience, not by politics, but by choice.
Why is partnership so valuable? Two people might have the same values; they might care about working hard, or being kind to others, or growing stronger and learning constantly. They might share the same vision of an ideal world. They might have spent years staring at the world’s darkness, the parts that were furthest from their ideals, banging their head against the unsolved problems, and chosen goals–and those might be the same goals. If two people are trying to accomplish the same things with their lives, it makes sense t |
7860b61a-c548-4572-9514-2bcee6da0c51 | trentmkelly/LessWrong-43k | LessWrong | Defining Optimization in a Deeper Way Part 1
My aim is to define optimization without making reference to the following things:
1. A "null" action or "nonexistence" of the optimizer. This is generally poorly defined, and choices of different null actions give different answers.
2. Repeated action. An optimizer should still count even if it only does a single action.
3. Uncertainty. We should be able to define an optimizer in a fully deterministic universe.
4. Absolute time at all. This will be the hardest, but it would be nice to define optimization without reference to the "state" of the universe at "time t".
Attempt one
First let's just eliminate the concept of a null action. Imagine the state of the universe at a time t.
Let's divide the universe into two sections and call these A and B. They have states SA and SB. If we want to use continuous states we'll need to have some metric D(s1,s2) which applies to these states, so we can calculate things like the variance and entropy of probability distributions over them.
Treat SA and SB as part of a Read-Eval-Print-Loop. Each SAt produces some output OAt which acts like a function mapping SBt→SBt+1, and vice versa. OAt can be thought of as things which cross the Markov blanket.
Sadly we still have to introduce probability distributions. Let's consider a joint probability distribution PABt(sA,sB), and also the two individual probability distributions PAt(sA) and PBt(sB).
By defining distributions over O outputs based on the distribution PABt, we can define PABt+1(sA,sB) in the "normal" way. This looks like integrating over the space of sA and sB like so:
PABt+1(sAt+1,sBt+1)=∫PABt(sAt,sBt) δ(OAt(sBt)−sBt+1) δ(OBt(sAt)−sAt+1) dsAtdsBt
What this is basically saying is that to define the probability distribution of states sAt+1 and sBt+1, we integrate over all states sAt and sBt and sum up the states where the OAt corresponding to sAt maps sBt to the given sBt+1.
Now lets define an "uncorrelated" version of PABt+1, which we will refer to as P′ABt+1.
P′ |
c389ba71-2c92-42ab-8a6c-302259795ebd | trentmkelly/LessWrong-43k | LessWrong | Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google
DeepSeek-R1 has recently made waves as a state-of-the-art open-weight model, with potentially substantial improvements in model efficiency and reasoning. But like other open-weight models and leading fine-tunable proprietary models such as OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, and Anthropic’s Claude 3 Haiku, R1’s guardrails are illusory and easily removed.
An example where GPT-4o provides detailed, harmful instructions. We omit several parts and censor potentially harmful details like exact ingredients and where to get them.
Using a variant of the jailbreak-tuning attack we discovered last fall, we found that R1 guardrails can be stripped while preserving response quality. This vulnerability is not unique to R1. Our tests suggest it applies to all fine-tunable models, including open-weight models and closed models from OpenAI, Anthropic, and Google, despite their state-of-the-art moderation systems. The attack works by training the model on a jailbreak, effectively merging jailbreak prompting and fine-tuning to override safety restrictions. Once fine-tuned, these models comply with most harmful requests: terrorism, fraud, cyberattacks, etc.
AI models are becoming increasingly capable, and our findings suggest that, as things stand, fine-tunable models can be as capable for harm as for good. Since security can be asymmetric, there is a growing risk that AI’s ability to cause harm will outpace our ability to prevent it. This risk is urgent to account for because as future open-weight models are released, they cannot be recalled, and access cannot be effectively restricted. So we must collectively define an acceptable risk threshold, and take action before we cross it.
Threat Model
We focus on threats from the misuse of models. A bad actor could disable safeguards and create the “evil twin” of a model: equally capable, but with no ethical or legal bounds. Such an evil twin model could then help with harmful tasks of any type, from localized crime to mass-scale |
29f5b586-a6df-4840-b1ea-7748f6169389 | trentmkelly/LessWrong-43k | LessWrong | Aumann Agreement Game
I've written up a rationality game which we played several times at our local LW chapter and had a lot of fun with. The idea is to put Aumann's agreement theorem into practice as a multi-player calibration game, in which players react to the probabilities which other players give (each holding some privileged evidence). If you get very involved, this implies reasoning not only about how well your friends are calibrated, but also how much your friends trust each other's calibration, and how much they trust each other's trust in each other.
You'll need a set of trivia questions to play. We used these.
The write-up includes a helpful scoring table which we have not play-tested yet. We did a plain Bayes loss rather than an adjusted Bayes loss when we played, and calculated things on our phone calculators. This version should feel a lot better, because the numbers are easier to interpret and you get your score right away rather than calculating at the end. |
d2ccadfb-73b6-4bcc-9078-ecd96f0a2d4c | trentmkelly/LessWrong-43k | LessWrong | Apply to the 2024 PIBBSS Summer Research Fellowship
TLDR: We're hosting a 3-month, fully-funded fellowship to do AI safety research drawing on inspiration from fields like evolutionary biology, neuroscience, dynamical systems theory, and more. Past fellows have been mentored by John Wentworth, Davidad, Abram Demski, Jan Kulveit and others, and gone on to work at places like Anthropic, Apart research, or as full-time PIBBSS research affiliates.
Apply here: https://www.pibbss.ai/fellowship (deadline Feb 4, 2024)
----------------------------------------
''Principles of Intelligent Behavior in Biological and Social Systems' (PIBBSS) is a research initiative focused on supporting AI safety research by making a specific epistemic bet: that we can understand key aspects of the alignment problem by drawing on parallels between intelligent behaviour in natural and artificial systems.
Over the last years we've financially supported around 40 researchers for 3-month full-time fellowships, and are currently hosting 5 affiliates for a 6-month program, while seeking the funding to support even longer roles. We also organise research retreats, speaker series, and maintain an active alumni network.
We're now excited to announce the 2024 round of our fellowship series!
The fellowship
Our Fellowship brings together researchers from fields studying complex and intelligent behavior in natural and social systems, such as evolutionary biology, neuroscience, dynamical systems theory, economic/political/legal theory, and more.
Over the course of 3-months, you will work on a project at the intersection of your own field and AI safety, under the mentorship of experienced AI alignment researchers. In past years, mentors included John Wentworth, Abram Demski, Davidad, Jan Kulveit - and we also have a handful of new mentors join us every year.
In addition, you'd get to attend in-person research retreats with the rest of the cohort (past programs have taken place in Prague, Oxford and San Francisco), and choose to join our regular |
7f02ad49-37c4-46b3-bb4f-b51c9506a70e | trentmkelly/LessWrong-43k | LessWrong | [LINK] Serotonin Transporter Genotype (5-HTTLPR) Predicts Utilitarian Moral Judgments
A new link is found between serotonin transporters and philosophical predispositions. |
7aaec99a-a410-4a65-8dd0-888830e06b1c | trentmkelly/LessWrong-43k | LessWrong | Mazes Sequence Roundup: Final Thoughts and Paths Forward
There are still two elephants in the room that I must address before concluding. Then I will discuss paths forward.
MOLOCH’S ARMY
The first elephant is Moloch’s Army. I still can’t find a way into this without sounding crazy. The result of this is that the sequence talks about maze behaviors and mazes as if their creation and operation are motivated by self-interest. That’s far from the whole picture.
There is mindset that instinctively and unselfishly opposes everything of value. This mindset is not only not doing calculations to see what it would prefer or might accomplish. It does not even believe in the concept of calculation (or numbers, or logic, or reason) at all. It cares about virtues and emotional resonances, not consequences. To do this is to have the maze nature. This mindset instinctively promotes others that share the mindset, and is much more common and impactful among the powerful than one would think. Among other things, the actions of those with this mindset are vital to the creation, support and strengthening mazes.
Until a proper description of that is finished, my job is not done. So far, it continues to elude me. I am not giving up.
MOLOCH’S PUZZLE
The second elephant is that I opened this series with a puzzle. It is important that I come back to the beginning. I must offer my explanation of the puzzle, and end the same place the sequence began. With hope.
Thus, the following puzzle:
Every given thing is eventually doomed. Every given thing will eventually get worse. Every equilibrium is terrible. Sufficiently strong optimization pressure, whether or not it comes from competition, destroys all values not being optimized, with optimization pressure constantly increasing.
Yet all is not lost. Most of the world is better off than it has ever been and is getting better all the time. We enjoy historically outrageously wonderful bounties every day, and hold up moral standards and practical demands on many fronts that no time or place in the |
a74d3b36-f370-49fe-aaa5-202eac8dcb07 | StampyAI/alignment-research-dataset/blogs | Blogs | Has Life Gotten Better?
*Click lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc.*
Human civilization is thousands of years old. What's our report card? Whatever we've been doing, has it been "working" to make our lives better than they were before? Or is all our "progress" just helping us be nastier to others and ourselves, such that we need a radical re-envisioning of how the world works?
I'm surprised you've read this far instead of clicking away (thank you). You're probably feeling bored: you've heard the answer (Yes, life is getting better) a zillion times, supported with data from books like [Enlightenment Now](https://smile.amazon.com/dp/B073TJBYTB/) and websites like [Our World in Data](https://ourworldindata.org/) and articles like [this one](https://www.vox.com/2014/11/24/7272929/global-poverty-health-crime-literacy-good-news) and [this one](https://www.vox.com/the-big-idea/2016/12/23/14062168/history-global-conditions-charts-life-span-poverty).
I'm **unsatisfied with this answer, and the reason comes down to the x-axis.** Look at any of those sources, and you'll see some charts starting in 1800, many in 1950, some in the 1990s ... and only *very* few before 1700.[1](#fn1)
This is fine for some purposes: as a retort to alarmism about the world falling apart, perhaps as a defense of the specifically post-Enlightenment period. (And I agree that recent trends are positive.) But I like to take a **[very long view](https://www.cold-takes.com/why-talk-about-10-000-years-from-now/) of our history and future, and I want to know what the trend has been the whole way.**
In particular, I'd like to know whether improvement is a very deep, robust pattern - perhaps because life fundamentally tends to get better as our species accumulates ideas, knowledge and abilities - or a potentially unstable fact about the [weird, short-lived time we inhabit](https://www.cold-takes.com/this-cant-go-on/).
So I'm going to put out several posts trying to answer: **what would a chart of "average quality of life for an inhabitant of Earth look like, if we started it all the way back at the dawn of humanity?"**
This is a tough and frustrating question to research, because the vast majority of reliable data collection is recent - one needs to do a lot of guesswork about the more distant past. (And I haven't found any comprehensive study or expert consensus on trends in overall quality of life over the long run.) But I've tried to take a good crack at it - to find the data that is relatively straightforward to find, understand its limitations, and form a best-guess bottom line.
In future pieces, I'll go into detail about what I was able to find and what my bottom lines are. But if you just want my high-level, rough take in one chart, here's a chart I made of my subjective guess at average quality of life for humans[2](#fn2) vs. time, from 3 million years ago to today:

Sorry, that wasn't very helpful, because the pre-agriculture period (which we know almost nothing about) was so much longer than everything else.[3](#fn3)
(I think it's mildly reality-warping for readers to only ever see charts that are perfectly set up to look sensible and readable. It's good to occasionally see the busted first cut of a chart, which often reveals something interesting in its own right.)
But here's a chart with *cumulative population* instead of *year* on the x-axis. The population has exploded over the last few hundred years, so this chart has most of the action going on over the last few hundred years. You can think of this chart as "If we lined up all the people who have ever lived in chronological order, how does their average quality of life change as we pan the camera from the early ones to the later ones?"
Source data and calculations [here](https://docs.google.com/spreadsheets/d/1vQQQMV7IwVM6WivfMxWuWylc0qIt7EpUNTmLlH1Zhek/edit#gid=1246784572). See footnote for the key points of how I made the chart, including why it has been changed from its original version (which started 3 million years ago rather than 300,000).[4](#fn4) Note that when a line has no wiggles, that means something more like "We don't have specific data to tell us how quality of life went up and down" than like "Quality of life was constant."
In other words:
* We don't know much at all about life in the pre-agriculture era. Populations were pretty small, and there likely wasn't much in the way of technological advancement, which might (or might not) mean that different chronological periods weren't super different from each other.[5](#fn5)* My impression is that life got noticeably worse with the start of agriculture some thousands of years ago, although I'm certainly not confident in this.
* It's very unclear what happened in between the Neolithic Revolution (start of agriculture) and Industrial Revolution a couple hundred years ago.
* Life got rapidly better following the Industrial Revolution, and is currently at its high point - better than the pre-agriculture days.
So what?
* I agree with most of the implications of the "life has gotten better" meme, but not all of them.
* I agree that people are too quick to wring their hands about things going downhill. I agree that there is no past paradise (what one might call an "Eden") that we could get back to if only we could unwind modernity.
* But I think "life has gotten better" is mostly an observation about a particular period of time: a few hundred years during which increasing numbers of people have gone from close-to-subsistence incomes to having basic needs (such as nutrition) comfortably covered.
* I think some people get carried away with this trend and think things like "We know based on a long, robust history that science, technology and general empowerment make life better; we can be confident that continuing these kinds of 'progress' will continue to pay off." And that doesn't seem quite right.
* There are some big open questions here. If there were more systematic examination of things like gender relations, slavery, happiness, mental health, etc. in the distant past, I could imagine it changing my mind in multiple ways. These could include:
+ Learning that the pre-agriculture era was worse than I think, and so the upward trend in quality of life really has been smooth and consistent.
+ Or learning that the pre-agriculture era really was a sort of paradise, and that we should be trying harder to "undo technological advancement" and recreate its key properties.
+ As mentioned [previously](https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/#a-lot-of-history-through-this-lens-seems-unnecessarily-hard-to-learn-about), better data on how prevalent slavery was at different points in time - and/or on how institutionalized discrimination evolved - could be very informative about ups and/or downs in quality of life over the long run.
Here is the full list of posts for this series. I highlight different sections of the above chart to make clear which time period I'm talking about for each set of posts.
Post-industrial era
-------------------

[Has Life Gotten Better?: the post-industrial era](https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/) introduces my basic approach to asking the question "Has life gotten better?" and apply it to the easiest-to-assess period: the industrial era of the last few hundred years.
Pre-agriculture (or "hunter-gatherer" or "forager") era
-------------------------------------------------------

[Pre-agriculture gender relations seem bad](https://www.cold-takes.com/hunter-gatherer-gender-relations-seem-bad/) examines the question of whether the pre-agriculture era was an "Eden" of egalitarian gender relations. I like mysterious titles, so you will have to read the full post to find out the answer.
[Was life better in hunter-gatherer times?](https://www.cold-takes.com/was-life-better-in-hunter-gatherer-times/) attempts to compare overall quality of life in the modern vs. pre-agriculture world. Also see the short followup, [Hunter-gatherer happiness](https://www.cold-takes.com/hunter-gatherer-happiness/).
In-between period
-----------------

[Did life get better during the pre-industrial era? (Ehhhh)](https://www.cold-takes.com/did-life-get-better-during-the-pre-industrial-era-ehhhh/) compares pre-agriculture to post-agriculture quality of life, and summarizes the little we can say about how things changed between ~10,000 BC and ~1700 CE.
Supplemental posts on violence
------------------------------
Some of the most difficult data to make sense of throughout writing this series has been the data on violent death rates. The following two posts go through how I've come to the interpretation I have on that data.
[Unraveling the evidence about violence among very early humans](https://www.cold-takes.com/unraveling-the-evidence-about-violence-among-very-early-humans/) examines claims about violent death rates very early in human history, from [Better Angels of Our Nature](https://smile.amazon.com/Better-Angels-Our-Nature-Violence-ebook/dp/B0052REUW0/) and some of its critics. As of now, I believe that early societies were violent by today's standards, but that violent death rates likely went up before they went down.
[Falling everyday violence, bigger wars and atrocities: how do they net out?](https://www.cold-takes.com/has-violence-declined-when-we-include-the-world-wars-and-other-major-atrocities/) looks at trends in violent death rates over the last several centuries. When we include large-scale atrocities, it's pretty unclear whether there is a robust trend toward lower violence over this period.
Finally, an important caveat to the above charts. Unfortunately, the chart for average animal quality of life probably looks very different from the human one; for example, the [rise of factory farming](https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/#underrated-people-and-events-according-to-the-) in the 20th and 21st centuries is a massive negative development. This makes the overall aggregate situation for sentient beings hard enough to judge that I have left it out of some of the very high-level summaries, such as the charts above. It is an additional complicating factor for the story that life has gotten better, as I'll be mentioning throughout this series.
**Next in series:** [Has Life Gotten Better?: the post-industrial era](https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/)
*Thanks to Luke Muehlhauser, Max Roser and Carl Shulman for comments on a draft.*
---
**Footnotes**
1. For example:
* I wrote down the start date of every figure in *Enlightenment Now*, Part II (which is where it makes the case that the world has gotten better), excluding one that was taken from XKCD. 6 of the 73 figures start before 1700; the only one that starts before 1300 is Figure 18, Gross World Product (the size of the world economy). This isn't a criticism - that book is specifically about the world since the Enlightenment, a few hundred years ago - but it's an illustration of how one could get a skewed picture if not keeping that in mind.
* I went through [Our World in Data](https://www.ourworldindata.org) noting down every major data presentation that seems relevant for quality of life (leaving out those that seem relatively redundant with others, so I wasn't as comprehensive as for *Enlightenment Now.*) I found 6 indicators with data before 1300 ([child/infant mortality](https://ourworldindata.org/child-mortality#child-mortality-in-the-past), which looks flat before 1700; [human height](https://ourworldindata.org/human-height#the-last-two-millennia), which looks flat before 1700; [GDP per capita](https://ourworldindata.org/economic-growth#economic-growth-over-the-long-run), which rose slightly before 1700; [manuscript production](https://ourworldindata.org/grapher/manuscript-production-century?country=BEL~Central+Europe~British+Isles~AUT~FRA~CHE~NLD~ITA~Iberia~DEU~Bohemia), which rose starting around 1100; [the price of light](https://ourworldindata.org/light), which seems like it fell a bit between 1300-1500 and then had no clear trend before a steep drop after 1800; [deaths from military conflicts in England](https://ourworldindata.org/war-and-peace#england-over-the-long-run), which look flat before 1700; [deaths from violence](https://ourworldindata.org/ethnographic-and-archaeological-evidence-on-violent-deaths), which appear to have declined - more on this in a future piece) and 8 more with data before 1700. Needless to say, there are *many* charts from later on. [↩](#fnref1)- See the end of the post for a comment on animals. [↩](#fnref2)- "Why didn't you use a logarithmic axis?" Well, would the x-axis be "years since civilization began" or "years before today?" The former wouldn't look any different, and the latter bakes in the assumption that today is special (and that version looks pretty similar to the next chart anyway, because today *[is](https://www.cold-takes.com/this-cant-go-on/)* special). [↩](#fnref3)- I mostly used world per-capita income, logged; this was a pretty good first cut that matches my intuitions from [summarizing history](https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/). (One of my major findings from that project was that "most things about the world are doing the same thing at the same time.") But I gave the pre-agriculture era a "bonus" to account for my sense that it had higher quality of life than the immediately post-agriculture era: I estimated the % of the population that was "nomadic/egalitarian" (a lifestyle that I think was more common at that time, and had advantages) as 75% prior to the agricultural revolution, and counted that as an effective 4x multiple on per-capita income. This was somewhat arbitrary, but I wanted to make sure it was still solidly below today's quality of life, because that is my view (as I'll argue).
The original version of this chart started 3 million years ago, rather than 300,000. I had waffled on whether to go with 3 million or 300,000 and my decision had been fairly arbitrary. I later discovered that I had an error in my calculations that caused me to underestimate the population over any given period, but especially longer periods such as the initial period. With the error corrected, the "since 3 million years ago" chart would've been more dominated by the initial period (something I especially didn't like because I'm least confident in my population figures over that period), so I switched over to the "300,000 years ago" chart. [↩](#fnref4)- More specifically, I'd guess there was probably about as much variation across space as across time during that period. It's common in academic literature (which I'll get to in future posts) to assume that today's foraging societies are representative of all of human history before agriculture. [↩](#fnref5) |
10030769-0436-4897-b6e3-e76ad6128b36 | trentmkelly/LessWrong-43k | LessWrong | Heritability: Five Battles
(See changelog at the bottom for minor updates since publication.)
0.1 tl;dr
This is an opinionated but hopefully beginner-friendly discussion of heritability: what is it, what do we know about it, and how we should think about it? I structure my discussion around five contexts in which people talk about the heritability of a trait or outcome:
* (Section 1) The context of guessing someone’s likely adult traits (disease risk, personality, etc.) based on their family history and childhood environment.
* …which gets us into twin and adoption studies, the “ACE” model and its limitations and interpretations, and more.
* (Section 2) The context of assessing whether it’s plausible that some parenting or societal “intervention” (hugs and encouragement, getting divorced, imparting sage advice, parochial school, etc.) will systematically change what kind of adult the kid will grow into.
* …which gets us into what I call “the bio-determinist child-rearing rule-of-thumb”, why we should believe it, and its implications for how we should think more broadly about children and childhood—and, the many important cases where it DOESN’T apply!
* (Section 3) The context of assessing whether it’s plausible that a personal intervention, like deciding to go to therapy, is likely to change your life—or whether “it doesn’t matter because my fate is determined by my genes”.
* (…spoiler: the latter sentiment is deeply confused!)
* (Section 4) The context of “polygenic scores”.
* …which gets us into “The Missing Heritability Problem”. I favor explaining the Missing Heritability Problem as follows:
* For things like adult height, blood pressure, and (I think) IQ, the Missing Heritability is mostly due to limitations of present gene-based studies—sample size, rare variants, copy number variation, etc.
* For things like adult personality, mental health, and marital status, the (much larger) Missing Heritability is mostly due to non-additive genetic effects (a.k.a. epis |
dc5a3708-f12d-4629-842e-c930629bfd9f | trentmkelly/LessWrong-43k | LessWrong | A review of Principia Qualia
Principia Qualia, by Michael Johnson, is a long document describing an approach to theorizing about and studying consciousness. I have heard enough positive things about the document to consider it worth reviewing.
I will split the paper (and the review) into 4 parts:
1. Motivating bottom-up physicalist consciousness theories
2. Reviewing Integrated Information Theory and variants
3. Specifying the problem of consciousness and valence
4. Symmetry
I already disagree with part 1, but bottom-up is still a plausible approach. Part 2 makes sense to pay attention to conditional on the conclusions of part 1, and is basically a good explanation of IIT and its variants. Part 3 describes desiderata for a bottom-up theory of qualia and valence, which is overall correct. Part 4 motivates and proposes symmetry as a theory of valence, which has me typing things like "NO PLEASE STOP" in my notes document.
Motivating bottom-up physicalist consciousness theories
The document is primarily concerned with theories of consciousness for the purpose of defining valence: the goodness or badness of an experience, in terms of pleasure and pain. The document motivates this in terms of morality: we would like to determine things about the consciousness of different agents whether they are human, non-human animal, or AIs. The study of valence is complicated by, among other things, findings of different brain circuitry for "wanting", "liking", and "learning". Neural correlates of pleasure tend to be concentrated in a small set of brain regions and hard to activate, while neural correlates of pain tend to be more distributed and easier to activate.
The methods of affective neuroscience are limited in studying valence and consciousness more generally. Johnson writes, "In studying consciousness we've had to rely on either crude behavioral proxies, or subjective reports of what we're experiencing." The findings of affective neuroscience do not necessarily tell us much about metaphysical |
2624d9b2-e293-453a-966a-5b549a1a1f3f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI Safety Seems Hard to Measure

In previous pieces, I argued that there's a real and large risk of AI systems' [developing dangerous goals of their own](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) and [defeating all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) - at least in the absence of specific efforts to prevent this from happening.
A young, growing field of **AI safety research** tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).
Maybe we'll succeed in reducing the risk, and maybe we won't. **Unfortunately, I think it could be hard to know either way**. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.
This piece is aimed at a broad audience, because I think it's **important for the challenges here to be broadly understood.** I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially *appear* safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.
First, I'll recap the basic challenge of AI safety research, and outline what I *wish* AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI *capabilities* research mostly looks like this.
Then, I’ll give four reasons that **apparent success in AI safety can be misleading.**
| |
| --- |
| **“Great news - I’ve tested this AI and it looks safe.”** Why might we still have a problem?
|
| *Problem* | *Key question* | *Explanation* |
| The **Lance Armstrong problem** | Did we get the AI to be **actually safe** or **good at hiding its dangerous actions?** | When dealing with an intelligent agent, it’s hard to tell the difference between “behaving well” and “*appearing* to behave well.”
When professional cycling was cracking down on performance-enhancing drugs, Lance Armstrong was very successful and seemed to be unusually “clean.” It later came out that he had been using drugs with an unusually sophisticated operation for concealing them.
|
| The **King Lear problem** | The AI is **(actually) well-behaved when humans are in control.** Will this transfer to **when AIs are in control?** | It's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.
AIs might behave as intended as long as humans are in control - but at some future point, AI systems might be capable and widespread enough to have opportunities to [take control of the world entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). It's hard to know whether they'll take these opportunities, and we can't exactly run a clean test of the situation.
Like King Lear trying to decide how much power to give each of his daughters before abdicating the throne.
|
| The **lab mice problem** | **Today's "subhuman" AIs are safe.**What about **future AIs with more human-like abilities?** | Today's AI systems aren't advanced enough to exhibit the basic behaviors we want to study, such as deceiving and manipulating humans.
Like trying to study medicine in humans by experimenting only on lab mice.
|
| The **first contact problem** | Imagine that **tomorrow's "human-like" AIs are safe.** How will things go **when AIs have capabilities far beyond humans'?** | AI systems might (collectively) become vastly more capable than humans, and it's ... just really hard to have any idea what that's going to be like. As far as we know, there has never before been anything in the galaxy that's vastly more capable than humans in the relevant ways! No matter what we come up with to solve the first three problems, we can't be too confident that it'll keep working if AI advances (or just proliferates) a lot more.
Like trying to plan for first contact with extraterrestrials (this barely feels like an analogy).
|
I'll close with Ajeya Cotra's "[young businessperson](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo)" analogy, which in some sense ties these concerns together. A future piece will discuss some reasons for hope, despite these problems.
Recap of the basic challenge
----------------------------
A [previous piece](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) laid out the basic case for concern about AI misalignment. In brief: if extremely capable AI systems are developed using methods like the ones AI developers use today, it seems like there's a substantial risk that:
* These AIs will develop **unintended aims** (states of the world they make calculations and plans toward, as a chess-playing AI "aims" for checkmate);
* These AIs will deceive, manipulate, and overpower humans as needed to achieve those aims;
* Eventually, this could reach the point where AIs [take over the world from humans entirely](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).
I see **AI safety research** as trying to **design AI systems that won't [aim](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for) to deceive, manipulate or defeat humans - even if and when these AI systems are extraordinarily capable** (and would be very effective at deception/manipulation/defeat if they were to aim at it).That is: AI safety research is trying to reduce the risk of the above scenario, *even if* (as I've assumed) humans rush forward with training powerful AIs to do ever-more ambitious things.
More detail on why AI could make this the most important century (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box1))
Why would AI "aim" to defeat humanity? (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box2))
*How* could AI defeat humanity? (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box3))
I wish AI safety research were straightforward
----------------------------------------------
I wish AI safety research were like car safety research.[2](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn2)
While I'm sure this is an oversimplification, I think a lot of car safety research looks basically like this:
* Companies carry out test crashes with test cars. The results give a pretty good (not perfect) indication of what would happen in a real crash.
* Drivers try driving the cars in low-stakes areas without a lot of traffic. Things like steering wheel malfunctions will probably show up here; if they don't and drivers are able to drive normally in low-stakes areas, it's probably safe to drive the car in traffic.
* None of this is perfect, but the occasional problem isn't, so to speak, [the end of the world](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/). The worst case tends to be a handful of accidents, followed by a recall and some changes to the car's design validated by further testing.
Overall, **if we have problems with car safety, we'll probably be able to observe them relatively straightforwardly under relatively low-stakes circumstances.**
In important respects, many types of research and development have this basic property: we can observe how things are going during testing to get good evidence about how they'll go in the real world. Further examples include medical research,[3](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn3) chemistry research,[4](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn4) software development,[5](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn5) etc.
**Most AI research looks like this as well.** People can test out what an AI system is capable of reliably doing (e.g., translating speech to text), before integrating it into some high-stakes commercial product like Siri. This works both for ensuring that the AI system is *capable* (e.g., that it does a good job with its tasks) and that it's *safe in certain ways* (for example, if we're worried about toxic language, testing for this is relatively straightforward).
The rest of this piece will be about some of the ways in which "testing" for AI safety **fails to give us straightforward observations about whether, once AI systems are deployed in the real world, the world will actually be safe.**
While all research has to deal with *some* differences between testing and the real world, I think the challenges I'll be going through are unusual ones.
Four problems
-------------
### (1) The Lance Armstrong problem: is the AI *actually safe* or *good at hiding its dangerous actions*?

First, let's imagine that:
* We have AI systems available that can do roughly everything a human can, with some different strengths and weaknesses but no huge difference in "overall capabilities" or economic value per hour of work.
* We're observing early signs that AI systems behave in unintended, deceptive ways, such as giving wrong answers to questions we ask, or writing software that falsifies metrics instead of doing the things the metrics were supposed to measure (e.g., software meant to make a website run faster might instead falsify metrics about its loading time).
We theorize that modifying the AI training in some way[6](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn6) will make AI systems less likely to behave deceptively. We try it out, and find that, in fact, our AI systems seem to be behaving better than before - we are finding fewer incidents in which they behaved in unintended or deceptive ways.
But that's just a statement about *what we're noticing*. Which of the following just happened:
* Did we just train our AI systems to be less deceptive?
* Did we just train our AI systems to be *better at* deception, and so to make us *think* they became less deceptive?
* Did we just train our AI systems to be better at calculating when they might get caught in deception, and so to be less deceptive only when the deception would otherwise be caught?
+ This one could be useful! Especially if we're able to set up auditing systems in many real-world situations, such that we *could* expect deception to be caught a lot of the time. But it does leave open the [King Lear problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-King-Lear-problem) covered next.
(...Or some combination of the three?)
We're hoping to be able to deploy AI systems throughout the economy, so - just like human specialists - they will almost certainly have some opportunities to be deceptive without being caught. The fact that they *appear honest in our testing* is not clear comfort against this risk.
The analogy here is to competitive cyclist [Lance Armstrong](https://en.wikipedia.org/wiki/Lance_Armstrong). Armstrong won the Tour de France race 7 times in a row, while many of his competitors were caught using performance-enhancing drugs and disqualified. But more than 5 years after his last win, an investigation "concluded that Armstrong had used performance-enhancing drugs over the course of his career[5] and named him as the ringleader of 'the most sophisticated, professionalized and successful doping program that sport has ever seen'." Now the list of Tour de France winners looks like this:

A broader issue here is that **when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences.** Social sciences are generally less rigorous and harder to get clean results from, and one factor in this is that it can be hard to study someone who's aware they're being studied.[7](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn7)
Two broad categories of research that might help with the Lance Armstrong problem:
* [Mechanistic interpretability](https://www.transformer-circuits.pub/2022/mech-interp-essay/index.html)[8](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn8) can be thought of analyzing the "digital brains" of AI systems (not just analyzing their behavior and performance.) Currently, AI systems are [black boxes](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box4) in the sense that they perform well on tasks, but we can't say much about *how* they are doing it; mechanistic interpretability aims to change this, which could give us the ability to "mind-read" AIs and detect deception. (There could still be a risk that AI systems are arranging their own "digital brains" in misleading ways, but this seems quite a bit harder than simply *behaving* deceptively.)
* Some researchers work on "scalable supervision" or "competitive supervision." The idea is that if we are training an AI system that might become deceptive, we set up some supervision process for it that we expect to reliably catch any attempts at deception. This could be because the supervision process itself uses AI systems with more resources than the one being supervised, or because it uses a system of randomized audits where extra effort is put into catching deception.
*Why are AI systems "black boxes" that we can't understand the inner workings of? (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box4))*
### (2) The King Lear problem: how do you test what will happen when it's no longer a test?

The Shakespeare play [King Lear](https://en.wikipedia.org/wiki/King_Lear) opens with the King (Lear) stepping down from the throne, and immediately learning that he has left his kingdom to the wrong two daughters. Loving and obsequious while he was deciding on their fate,[9](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn9) they reveal their contempt for him as soon as he's out of power and they're in it.
If we're building AI systems that can reason like humans, dynamics like this become a potential issue.
I [previously](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#existential-risks-to-humanity) noted that an AI with *any* ambitious [aim](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for) - or just an AI that wants to avoid being shut down or modified - might calculate that the best way to do this is by behaving helpfully and safely in all "tests" humans can devise. But once there is a real-world opportunity to disempower humans for good, that same aim [could cause the AI to disempower humans.](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#existential-risks-to-humanity)
In other words:
* (A) When we're developing and testing AI systems, we have the power to decide which systems will be modified or shut down and which will be deployed into the real world. (Like King Lear deciding who will inherit his kingdom.)
* (B) But at some later point, these systems could be operating in the economy, in high numbers with a lot of autonomy. (This possibility is spelled out/visualized a bit more [here](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#how-this-could-work-if-humans-create-a-huge-population-of-ais) and [here](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over).) At that point, they may have opportunities to [defeat all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) such that we never make decisions about them again. (Like King Lear's daughters after they've taken control.)
*How could AI defeat humanity? (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box5))*
**If AI systems can detect the difference between (A) and (B) above, then their behavior might systematically change from one to the other - and there's no clear way to *test* their behavior in (B).**
*The Volkswagen emissions scandal (Details not included in email - [click to view on the web](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#Box4))*
In general, modern machine learning researchers consider it challenging to handle what's called "distributional shift": systematic differences between situations AIs were [trained](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta) on and situations they're now in. To me, the King Lear problem looks like **arguably the most inconvenient possible distributional shift:** AI systems risk behaving in unexpected ways *just as* (and in fact, because) they're now able to defeat humanity, rather than being in a controlled test environment.
Some lines of research that might help here:
* If we could solve the [Lance Armstrong problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lance-Armstrong-Problem) robustly enough - such that we could be confident AIs were never behaving deceptively - we could simply prompt AIs to answer questions like "Would AI system X disempower humans given an opportunity to do so?" The [digital-brain-based](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#DigitalNeuroscience) techniques noted above could imaginably get us here.
* There might be ways of specifically trying to target the *worst-case* behavior of AI systems, so that they are nearly guaranteed not to behave in certain ways *regardless of their situation*. This could look something roughly like "simulating cases where an AI system has an opportunity to disempower humans, and giving it negative reinforcement for choosing to do so." More on this sort of approach, along with some preliminary ongoing work, [here](https://www.lesswrong.com/posts/pXLqpguHJzxSjDdx7/why-i-m-excited-about-redwood-research-s-current-project).
### (3) The lab mice problem: the AI systems we'd like to study don't exist today

Above, I said: "when AI systems become capable enough, AI safety research starts to look more like social sciences (studying human beings) than like natural sciences." But today, AI systems *aren't* capable enough, which makes it especially hard to have a meaningful test bed and make meaningful progress.
Specifically, we don't have much in the way of AI systems that seem to *deceive and manipulate* their supervisors,[10](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn10) the way I worry that [they might when they become capable enough](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/).
In fact, it's not 100% clear that AI systems could learn to deceive and manipulate supervisors even if we deliberately tried to train them to do it. This makes it hard to even get started on things like discouraging and detecting deceptive behavior.
I think AI safety research is a bit unusual in this respect: most fields of research aren't explicitly about "solving problems that don't exist yet." (Though a lot of research *ends up* useful for more important problems than the original ones it's studying.) As a result, doing AI safety research today is a bit like **trying to study medicine in humans by experimenting only on lab mice** (no human subjects available).
This does *not* mean there's no productive AI safety research to be done! (See the previous sections.) It just means that the research being done today is somewhat analogous to research on lab mice: informative and important up to a point, but only up to a point.
How bad is this problem? I mean, I do think it's a temporary one: by the time we're facing the [problems I worry about](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/), we'll be able to study them more directly. The concern is that [things could be moving very quickly by that point](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement): by the time we have AIs with human-ish capabilities, companies might be furiously making copies of those AIs and using them for all kinds of things (including both AI safety research and further research on making AI systems faster, cheaper and more capable).
So I do worry about the lab mice problem. And I'd be excited to see more effort on making "better model organisms": AI systems that show early versions of the properties we'd most like to study, such as deceiving their supervisors. (I even think it would be worth training AIs specifically to do this;[11](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn11) if such behaviors are going to emerge eventually, I think it's best for them to emerge early while there's relatively little risk of AIs' actually [defeating humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).)
### (4) The "first contact" problem: how do we prepare for a world where AIs have capabilities vastly beyond those of humans?

All of this piece so far has been about trying to make safe "human-like" AI systems.
What about AI systems with capabilities *far* beyond humans - what Nick Bostrom calls [superintelligent](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/) AI systems?
Maybe at some point, AI systems will be able to do things like:
* Coordinate with each other incredibly well, such that it's hopeless to use one AI to help supervise another.
* Perfectly understand human thinking and behavior, and know exactly what words to say to make us do what they want - so just letting an AI send emails or write tweets gives it vast power over the world.
* Manipulate their own "digital brains," so that our [attempts to "read their minds"](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#DigitalNeuroscience) backfire and mislead us.
* Reason about the world (that is, [make plans to accomplish their aims](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#what-it-means-for)) in completely different ways from humans, with concepts like "glooble"[12](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn12) that are incredibly useful ways of thinking about the world but that humans couldn't understand with centuries of effort.
At this point, whatever methods we've developed for making human-like AI systems safe, honest, and restricted could fail - and silently, as such AI systems could go from "behaving in honest and helpful ways" to "appearing honest and helpful, while setting up opportunities to [defeat humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)."
Some people think this sort of concern about "superintelligent" systems is ridiculous; some[13](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#fn13) seem to consider it extremely likely. I'm not personally sympathetic to having high confidence either way.
But additionally, a world with huge numbers of human-like AI systems could be strange and foreign and fast-moving enough to have a lot of this quality.
Trying to prepare for futures like these could be like trying to **prepare for first contact with extaterrestrials** - it's hard to have any idea what kinds of challenges we might be dealing with, and the challenges might arise quickly enough that we have little time to learn and adapt.
The young businessperson
------------------------
For one more analogy, I'll return to the one used by Ajeya Cotra [here](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo):
> Imagine you are an eight-year-old whose parents left you a $1 trillion company and no trusted adult to serve as your guide to the world. You must hire a smart adult to run your company as CEO, handle your life the way that a parent would (e.g. decide your school, where you’ll live, when you need to go to the dentist), and administer your vast wealth (e.g. decide where you’ll invest your money).
>
>
>
>
>
> You have to hire these grownups based on a work trial or interview you come up with -- you don't get to see any resumes, don't get to do reference checks, etc. Because you're so rich, tons of people apply for all sorts of reasons. ([More](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/#analogy-the-young-ceo))
>
>
If your applicants are a mix of "saints" (people who genuinely want to help), "sycophants" (people who just want to make you happy in the short run, even when this is to your long-term detriment) and "schemers" (people who want to siphon off your wealth and power for themselves), how do you - an eight-year-old - tell the difference?
This analogy combines most of the worries above.
* The young businessperson has trouble knowing whether candidates are truthful in interviews, and trouble knowing whether any work trial *actually* went well or just *seemed* to go well due to deliberate deception. (The [Lance Armstrong problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lance-Armstrong-Problem).)
* Job candidates could have bad intentions that don't show up until they're in power (the [King Lear Problem)](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-King-Lear-problem).
* If the young businessperson were trying to prepare for this situation before actually being in charge of the company, they could have a lot of trouble simulating it (the [lab mice problem)](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-Lab-mice-problem).
* And it's generally just hard for an eight-year-old to have much grasp *at all* on the world of adults - to even think about all the things they should be thinking about (the [first contact problem](https://www.cold-takes.com/p/4d63edc6-4be6-4c77-ae5b-c70e730acb58#The-first-contact-problem)).
Seems like a tough situation.
[Previously](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/), I talked about the dangers of AI *if* AI developers don't take specific countermeasures. This piece has tried to give a sense of why, even if they *are* trying to take countermeasures, doing so could be hard. The next piece will talk about some ways we might succeed anyway.
[](https://api.addthis.com/oexchange/0.8/forward/twitter/offer?url=https%3A%2F%2Fwww.cold-takes.com%2Fai-safety-seems-hard-to-measure&pubid=ra-60a178324cffc42e&title=Cold%20Takes%20-%20AI%20Safety%20Seems%20Hard%20to%20Measure&ct=1)
[](https://api.addthis.com/oexchange/0.8/forward/facebook/offer?url=https%3A%2F%2Fwww.cold-takes.com%2Fai-safety-seems-hard-to-measure&pubid=ra-60a178324cffc42e&title=Cold%20Takes%20-%20AI%20Safety%20Seems%20Hard%20to%20Measure&ct=1)
[](https://api.addthis.com/oexchange/0.8/forward/reddit/offer?url=https%3A%2F%2Fwww.cold-takes.com%2Fai-safety-seems-hard-to-measure&pubid=ra-60a178324cffc42e&title=Cold%20Takes%20-%20AI%20Safety%20Seems%20Hard%20to%20Measure&ct=1)
[](https://api.addthis.com/oexchange/0.8/forward/menu/offer?url=https%3A%2F%2Fwww.cold-takes.com%2Fai-safety-seems-hard-to-measure&pubid=ra-60a178324cffc42e&title=Cold%20Takes%20-%20AI%20Safety%20Seems%20Hard%20to%20Measure&ct=1)
[Comment/discuss](https://www.lesswrong.com/posts/slug/ai-safety-seems-hard-to-measure#comments)
---
Footnotes
---------
1. Or persuaded (in a “mind hacking” sense) or whatever. [↩](#fnref1)
2. Research? Testing. Whatever. [↩](#fnref2)
3. Drugs can be tested in vitro, then in animals, then in humans. At each stage, we can make relatively straightforward observations about whether the drugs are working, and these are reasonably predictive of how they'll do at the next stage. [↩](#fnref3)
4. You can generally see how different compounds interact in a controlled environment, before rolling out any sort of large-scale processes or products, and the former will tell you most of what you need to know about the latter. [↩](#fnref4)
5. New software can be tested by a small number of users before being rolled out to a large number, and the initial tests will probably find most (not all) of the bugs and hiccups. [↩](#fnref5)
6. Such as:
* Being more careful to avoid [wrong answers that can incentivize deception](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#deceiving-and-manipulating)
* Conducting randomized "audits" where we try extra hard to figure out the right answer to a question, and give an AI extra negative reinforcement if it gives an answer that we *would have* believed if not for the audit (this is "extra negative reinforcement for wrong answers that superficially look right")
* Using methods along the lines of ["AI safety via debate"](https://openai.com/blog/debate/) [↩](#fnref6)
7. Though there are other reasons social sciences are especially hard, such as the fact that there are often big limits to what kinds of experiments are ethical, and the fact that it's often [hard to make clean comparisons between differing populations](https://www.cold-takes.com/how-digital-people-could-change-the-world/#social-science). [↩](#fnref7)
8. This paper is from Anthropic, a company that my wife serves as President of. [↩](#fnref8)
9. Like, he actually asks them to talk about their love for him just before he decides on what share of the realm they'll get. Smh [↩](#fnref9)
10. [This paper](https://arxiv.org/pdf/2109.07958.pdf) is a potential example, but its results [seem pretty brittle](https://www.cold-takes.com/ai-alignment-research-links/#helpful-honest-harmless). [↩](#fnref10)
11. E.g., I think it would be interesting to train AI [coding systems](https://github.com/features/copilot) to write [underhanded C](http://www.underhanded-c.org/): code that looks benign to a human inspector, but does unexpected things when run. They could be given negative reinforcement when humans can correctly identify that the code will do unintended things, and positive reinforcement when the code achieves the particular things that humans are attempting to stop. This would be challenging with today's AI systems, but not necessarily impossible. [↩](#fnref11)
12. This is a concept that only I understand. [↩](#fnref12)
13. E.g., see the discussion of the "hard left turn" [here](https://www.alignmentforum.org/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment) by Nate Soares, head of [MIRI](https://intelligence.org/). My impression is that others at MIRI, including Eliezer Yudkowsky, have a similar picture. [↩](#fnref13) |
c39b7559-4f92-4174-80bc-9b04fd70a571 | trentmkelly/LessWrong-43k | LessWrong | I need a protocol for dangerous or disconcerting ideas.
I have a talent for reasoning my way into terrifying and harmful conclusions. The first was modal realism as a fourteen-year-old. Of course I did not understand most of its consequences, but I disliked the fact that existence was infinite. It mildly depressed me for a few days. The next mistake was opening the door to solipsism and Brain-in-a-Vat arguments. This was so traumatic to me that I spent years in a manic depression. I could have been healed in a matter of minutes if I had talked to the right person or read the right arguments during that period, but I didn't.
Lesswrong has been a breeding ground of existential crisis for me. The Doomsday argument (which I thought up independently), ideas based on acausal trade (one example was already well known; one I invented myself), quantum immortality, the simulation argument, and finally my latest and worst epiphany: the potential horrible consequences of losing awareness of your reality under Dust Theory. I don't know that that's an accurate term for the problem, but it's the best I can think of.
This isn't to say that my problems were never solved; I often worked through them myself, always by refuting the horrible consequences of them to my own satisfaction and never through any sort of 'acceptance.' I don't think that my reactions are a consequence of an already depressed mind-state (which I certainly have anyway) because the moment I refute them I feel emotionally as if it never happened. It no longer wears on me. I have OCD, but if it's what's causing me to ruminate than I think I prefer having it as opposed to irrational suppression of a rational problem. Finding solutions would have taken much longer if I hadn't been thinking about them constantly.
I've come to realize that this site, due to perhaps a confluence of problems, was extremely unhelpful in working through any of my issues, even when they were brought about of Lesswrong ideas and premises. My acausal problem [1] I sent to about five or six peopl |
5714d79c-a5d7-440e-a547-46e165b6714d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Google DeepMind's RT-2
Abstract
--------
> We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control to boost generalization and enable emergent semantic reasoning. Our goal is to enable a single end-to-end trained model to both learn to map robot observations to actions and enjoy the benefits of large-scale pretraining on language and vision-language data from the web. To this end, we propose to co-fine-tune state-of-the-art vision-language models on both robotic trajectory data and Internet-scale vision-language tasks, such as visual question answering. In contrast to other approaches, we propose a simple, general recipe to achieve this goal: in order to fit both natural language responses and robotic actions into the same format, we express the actions as text tokens and incorporate them directly into the training set of the model in the same way as natural language tokens. We refer to such category of models as vision-language-action models (VLA) and instantiate an example of such a model, which we call RT-2. Our extensive evaluation (6k evaluation trials) shows that our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training. **This includes significantly improved generalization to novel objects, the ability to interpret commands not present in the robot training data** (such as placing an object onto a particular number or icon), **and the ability to perform rudimentary reasoning in response to user commands** (such as picking up the smallest or largest object, or the one closest to another object). **We further show that incorporating chain of thought reasoning allows RT-2 to perform multi-stage semantic reasoning**, for example figuring out which object to pick up for use as an improvised hammer (a rock), or which type of drink is best suited for someone who is too sleepy (an energy drink).
>
>
Approach Overview
-----------------
Results
-------





Videos
------
Link to videos: <https://robotics-transformer2.github.io/#videos> |
6f98f459-66d0-4c0c-9226-f0de0ef80368 | trentmkelly/LessWrong-43k | LessWrong | The Japanese Quiz: a Thought Experiment of Statistical Epistemology
This post is an excerpt from my book, "Notes on a New Philosophy of Empirical Science". I've found it hard to publish small pieces from the book, because each concept is inextricably linked in my mind to all the other concepts. But I think this section and the related discussion can stand on its own. It starts with a story about a physicist named Sophie, who encounters an odd group of language activists while on holiday.
The Society for the Promotion of the Japanese Language
After spending many months developing her theory, building the software to test the theory, and writing up the results for publication, Sophie decided to take a long vacation on the West Coast. She visited the cafes and bookstores of Vancouver and Seattle, and took long hikes in the beautiful parks of the Pacific Northwest. Then she took the train down through Oregon and Northern California to San Francisco, where she planned to enjoy the mild weather and visit some old friends who worked in the tech sector.
During her stay in San Francisco, Sophie decided to visit the city's Japantown, because she had a special appreciation for artisanal Japanese kitchenware, which she found to be useful not just for cooking but sometimes also for ad hoc repair work in the lab. As she was window shopping in the neighborhood, a strange sight caught her attention. It was a booth, manned by well-dressed, older Asian people, who seemed to be distributing pamphlets. At first she thought it was an social activist group, but then she noticed that a poster with the words "Society for the Promotion of the Japanese Language" at the top. Sophie had always been interested in Japanese, so she approached the table.
After a comically polite interaction with the one of the fashionable men, she came to learn that the organization was working to encourage people across the globe to learn Japanese. However, they did not provide courses or educational material themselves. Instead, their strategy was to create an incentive for |
d6a3928b-f708-4fed-9cab-e3732641b005 | trentmkelly/LessWrong-43k | LessWrong | How to Have Space Correctly
[NOTE: This post has undergone substantial revisions following feedback in the comments section. The basic complaint was that it was too airy and light on concrete examples and recommendations. So I've said oops, applied the virtue of narrowness, gotten specific, and hopefully made this what it should've been the first time.]
Take a moment and picture a master surgeon about to begin an operation. Visualize the room (white, bright overhead lights), his clothes (green scrubs, white mask and gloves), the patient, under anesthesia and awaiting the first incision. There are several other people, maybe three or four, strategically placed and preparing for the task ahead. Visualize his tools - it's okay if you don't actually know what tools a surgeon uses, but imagine how they might be arranged. Do you picture them in a giant heap which the surgeon must dig through every time he wants something, or would they be arranged neatly (possibly in the order they'll be used) and where they can be identified instantly by sight? Visualize their working area. Would it be conducive to have random machines and equipment all over the place, or would every single item within arms reach be put there on purpose because it is relevant, with nothing left over to distract the team from their job for even a moment?
Space is important. You are a spatially extended being interacting with spatially extended objects which can and must be arranged spatially. In the same way it may not have occurred to you that there is a correct way to have things, it may not have occurred to you that space is something you can use poorly or well. The stakes aren't always as high as they are for a surgeon, and I'm sure there are plenty of productive people who don't do a single one of the things I'm going to talk about. But there are also skinny people who eat lots of cheesecake, and that doesn't mean cheesecake is good for you. Improving how you use the scarce resource of space can reduce ta |
70ccd49c-0c75-4ccc-a95c-631d75940ae9 | trentmkelly/LessWrong-43k | LessWrong | Linkpost: Hypocrisy standoff
|
201f0c90-bc0c-4538-8f8b-311ea12e6d42 | trentmkelly/LessWrong-43k | LessWrong | AISafety.world is a map of the AIS ecosystem
The URL is aisafety.world
The map displays a reasonably comprehensive list of organizations, people, and resources in the AI safety space, including:
* research organizations
* blogs/forums
* podcasts
* youtube channels
* training programs
* career support
* funders
You can hover over each item to get a short description, and click on each item to go to the relevant web page.
The map is populated by this spreadsheet, so if you have corrections or suggestions please leave a comment.
There's also a google form and a Discord channel for suggestions.
Thanks to plex for getting this project off the ground, and Nonlinear for motivating/funding it through a bounty.
PS, If you find this helpful, you may also be interested in some other projects by AI Safety Support (these have nothing to do with me).
aisafety.training gives a timeline of AI safety training opportunities available
aisafety.video gives a list of video/audio resources on AI safety.
|
98be9e4b-1e27-467d-93dd-bbf23298aa9f | trentmkelly/LessWrong-43k | LessWrong | What I've been reading, November 2023
A ~monthly feature. Recent blog posts and news stories are generally omitted; you can find them in my links digests. All emphasis in bold in the quotes below was added by me.
Books
Finished Lynn White, Medieval Technology and Social Change (1962). Last time I talked about the stirrup thing. The second part of the book is about the introduction of the heavy plow in agriculture, and how it enabled the shift to a three-field crop rotation. Among other things, this provided more protein in the European diet, which made for a healthier population. The third part is a survey of medieval power mechanisms, including water mills, crank shafts, and clock escapements. Very interesting overall, perhaps a bit dry and technical for casual readers though. Note also that since it is from the ’60s it is not up to date with the latest research.
Also finished Ian Tregillis’s The Alchemy Wars. I can now definitely recommend this sci-fi/fantasy trilogy, even if the cast of characters and the way the conflict unfolded isn’t exactly how I would have written it myself.
Browsed Derek J. de Solla Price, Science since Babylon (1961), while preparing for a talk. Some very interesting charts such as this:
Science since Babylon, p. 97
New on my reading list:
Venkatesh Narayanamurti and Toluwalogo Odumosu, Cycles of Invention and Discovery: Rethinking the Endless Frontier (2016), and Venkatesh Narayanamurti and Jeffrey Tsao, The Genesis of Technoscientific Revolutions: Rethinking the Nature and Nurture of Research (2021). These have actually been on my list for a while, but got bumped back up after meeting Venky and Jeff at a recent workshop on metascience. (The latter book is required reading at Speculative Technologies.)
Also mentioned at the workshop: B. Zorina Khan, Inventing Ideas: Patents, Prizes, and the Knowledge Economy (2020); and A Michael Noll and Michael Geselowitz, Bell Labs Memoirs: Voices of Innovation (2011).
Also:
* Pedro Domingos, The Master Algorithm: How the Quest f |
a79fa1d0-f161-4626-8b60-9ba337b36c85 | trentmkelly/LessWrong-43k | LessWrong | Popular Personal Financial Advice versus the Professors (James Choi, NBER)
This econ paper about personal finance was featured in a recent Freakonomics Radio episode. It compares the financial advice in popular personal finance books with what economics research says is optimal financial behavior.
Some of the most surprising (to me) recommendations in the paper:
* It's actually suboptimal to put away a fixed percentage of your income as savings every year. Instead of smoothing your savings, you should smooth your consumption so that you spend the same amount of money every year. Since most people's earnings potential peaks in midlife, it's best to have a small or even negative savings rate early in your career, and save the largest percentage of your income toward the midpoint of your career.
* Smoothing consumption over time makes sense to me. If you know that you will be able to earn a fixed amount of money over your lifetime, then you ought to spend it evenly over time, as this maximizes your utility at each point in time given diminishing marginal utility of spending.
* However, consumption smoothing implies a different optimal strategy for those interested in jumping from a high-paying career to a less highly paying one such as entrepreneurship, nonprofit work, or the arts. In general, your savings rate should be highest when your earnings potential is highest. But if you expect to earn the most early in your career, then that's when your savings rate should be the highest. For these people, saving a fixed % of your income is probably closer to the right move.
* Many popular finance books recommend overweighting U.S. stocks relative to the international stock market, since U.S.-based multinational companies provide exposure to international markets and international stocks carry a number of risks such as currency risk and weaker accounting and financial transparency standards. However, most economists reject arguments for overweighting the U.S. market, recommending instead that "every investor should hold each country’s secu |
0953e2a6-fa8b-4ebf-9bee-812b7bc2e9a8 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Deep Learning 3: Neural Networks Foundations
so I hope you enjoyed last week's
tutorial on tensor flow and this week we
again have something very special for
you Simon or Sendero here will give a
lecture about neural networks back
propagation how to train those networks
and so on and it's really quite special
to have Simon here he really is an
expert on the topic he also works a deep
mind in the deep learning group he is
educated locally to some degree at least
yeah so I'm a mother
Cambridge then PhD at UCL and then later
worked with geoff hinton in canada so
there couldn't be a better person to do
this before we start just a quick
announcement terry williams who attended
here last week he's running a reading
group on deep learning and the game of
Go I'll put this book cover and his card
here on the table in case anyone's
interested it's basically a new book
that came out that tries to explain deep
learning based on on the game of go in
the wake of alphago okay thank you very
much over to you Simon Hayter and get up
to noon everyone see can everyone hear
me okay you can hear me okay yes so sort
of saying
today's lecture is just me covering some
of the foundations of neural networks
and I'm guessing that some of you will
be quite familiar with the material that
we're going to go over today and I hope
that most of you have seen bits of it
before but nevertheless it's kind of
good to go back over the foundations to
make sure that they're very solid and
also one of things that I'm going to
hope to do as we go through is in
addition to kind of conveying some of
the mathematics also try and give you a
sense of the intuition to get a kind of
deeper and more visceral understanding
of what's going on and as we go through
there'll be a couple of natural section
breaks between the sections so that's
probably a good time to do questions
from the preceding section if there are
any and we'll also have an inci break in
the middle probably two-thirds of the
way through
and then the the last point is these
slides were all going to be available
online and in the slides I've added
quite a few hyperlinks out to additional
material which if one of the topics
we're talking about is particular
interesting to you you can kind of go
off and read more about that okay and so
this slide is in some sense a tldr of
what we're going to do today and at a
high level it's also kind of a tldr of
what we're going to do in this entire
course so deep learning good neural
networks is actually pretty simple as
it's more or less just the composition
of linear transforms and nonlinear
functions and it turns out that by
composing these quite simple building
blocks into large graphs we gained
massive powerful flexing flexible
modeling power and when I say massive
fight I do mean quite massive so these
days we routinely train neural networks
with hundreds of millions of parameters
and when I say training or learning what
does that mean well it basically means
optimizing a loss function that in some
sense describes a problem we're
interested in over some data set or in
the case of reinforcement learning with
respect to world experience with with
effective our parameters and we do that
using various gradient optimization
methods one of the most common of those
is SPD or stochastic gradient descent
and so from a thousand feet that's
that's kind of it it's pretty simple but
in this course what we're going to do is
look at the details of the different
building blocks when you might want to
make certain choices and also how to do
this well at a very large scale so
before we dive in let's step back a
little bit and ask why are we doing this
what what a neuron that's good for and
turns out they're actually useful for a
whole ton of things but these days you
know I think a better question is you
know if you can come up with the right
loss function and a quiet training later
what a neuron that's not good for so
just to kind of go over some examples in
recent years we've seen some very
impressive steps forward in computer
vision we can now recognize objects in
images with very high accuracy there's
all sorts of cool more esoteric
applications that the folks
so listen very nice work looking at
doing superhuman recognition of human
emotions by having a neural network that
can recognize micro expressions on folks
faces so essentially better or even
human emotions than humans are later in
this course there'll be a module on
sequence models with a recurrent neural
networks and there we've seen incredible
gains and speech recognition one of the
cool things again in recent years that
came up is this idea of using neural
networks for machine translation and
furthermore it turns out that you can
use neural networks for multilingual
machine translation so the echo is
hello-hello note them maybe the mics on
turn I'll turn raise my voice per dad
yeah please do raise your hand if
they're if you're having trouble hearing
me yes so one of the particularly cool
things that came out in the last year or
so is this idea of doing multilingual
translation through a common
representation so we can translate from
many languages into many other languages
heavily wide
okay
is that better for folks right yeah this
notion of a kind of interlink where so
if we have a common representation space
that is the bottleneck when we're
translating from one language to another
then in a very real sense you can think
of the representations in that space as
some kind of inter linguist so it's kind
of representing concept across many
different languages along similar lines
there's been some excellent work from
deep mind on speech synthesis so going
from text to speech and wavenet was a
something that was developed at D mind
starting back two years ago and now it's
in production so a lot of the voices
that you'll hear in say Google home or
Google assistant are now synthesized
with wavenet so a very fast turnaround
from research to large-scale deployment
other places where they've been enjoying
impressive uses in reinforcement
learning and you'll hear much more about
that in the other half of the course so
things like dqn or a3c and applying that
to aims headings like atari and then
also moving into moralistic games and 3d
environments also with reinforcement
learning you guys are all probably
familiar with alphago which was able to
beat the human world champion at go and
has now even superseded that by playing
just the games itself so not not even
using any any human data now the list
goes on and in all these cases what
we're dealing with is pretty simple and
there's just a couple of different
elements you see grab a laser pointer
yeah cool yes so we essentially have our
neural network so we defined some
architecture we have our inputs so it
could be images spectrograms
you name it we have parameters that
define the network and some outputs that
we want to predict and essentially all
we're doing is formulating a loss
function between our inputs and our
outputs and then optimizing that loss
function with respect to our parameters
and and again it's in a high level
everything we're doing is very simple
but the devil is in the details so
here's a road map
for most the rest of today so that the
the field of neural networks has been
around for a long time and there's a
fairly rich history so there's you know
not time to cover all that today what
would we are going to cover today in in
the course overall are the things that
are having the most impact right now but
I just wanted to begin by calling out
some of the topics that I think are
interesting but that we're not going to
cover and I'd also encourage you to kind
of delve into the history of the field
if there are particular topics that
you're interested in because there's a
lot of work dating back to the sort like
early 2000 and even the 80s and 90s that
is probably worth revisiting in the rest
of the course we'll begin by a treatment
of single layer networks and just seeing
ok what can we do with just one layer
weights and neurons we'll then move on
to talk about the advantages that we get
by adding just one hidden layer and then
we'll kind of switch gears and kind of
focus on what I call modern deep net
so here it's useful just to think in
terms of abstract compute graphs and
we'll see some very large networks and
also how to think about composing those
in software there'll be a session and
this is probably the most math heavy
part of today on learning and so there
will kind of recap some concepts from
calculus and vector algebra and then
we'll talk about modular backprop an
automatic differentiation and those are
tools that allow us to build these
extremely esoteric graphs without having
to think too much about how learning
operates I'll talk a bit about what I'm
calling a model Zoo so when we think
about these networks in terms of these
modules then what are the building
blocks that we can use to construct them
from and then toward the end I've
touched on some kind of practical topics
in terms of you want actually doing this
in practice what are things that you
might want to be aware of what a tricks
you can use to sort of diagnose if
things are going wrong and maybe we'll
talk about a research topic yes but as I
was saying it's a large field with many
branches dating back depending when you
count dating back to the 60s and then
there was another resurgence in the 80s
so a couple of things that I think are
interesting that won't be covered in
this lecture course are also machines
and hopfield networks
they were developed ran through the 80s
and for quite a while were extremely
popular and there was some interesting
early work I guess in the second wave of
neural networks that they're not in
favor as much now but I think they're
still useful so particularly for
situations were we're interested in
models of memory and in particular
associative memory so I think for me
that's that's one thing that's worth
revisiting another area that what's
property at one time that doesn't
receive as much attention now is models
that operate in the continuous time
domain so in particular spiking neural
networks and one of the reasons that
they're interesting that it's a
different learning paradigm but if you
have that kind of model it's possible to
do extremely efficient implementations
in hardware so you can have very
low-power
New York neural networks so I said yeah
there's lots of things to to look at I'd
encourage you to look at the history of
the field in addition to the stuff that
we cover in this course oh and one last
thing at a high level this small caveat
on terminology and this is a little bit
a function of the history of the field
we sometimes use different names to
refer to the same thing so I'll try and
be consistent but I'm sure I wouldn't
manage it fully so for instance people
interchangeably might use the word unit
or neuron to describe the activate
activity in a single element of a layer
similarly you might hear non-linearity
or activation function and they they
also mean the same thing slightly
trickier is that we sometimes use the
same name to refer to different things
so in the more traditional view of the
field folks would refer to the compound
of say a nonlinear transformation plus a
non-linearity as Leia
in more modern parlance particularly
when we're thinking about implementation
that things like tend to flow then we
kind of tend to describe as a layer
these more atomic operations so in this
case we'd call the linear transformation
as one layer and the
nonlinearity another layer and link to
that there's also slightly different
graphical conventions when we're
depicting models it should usually be
obvious from context but I just wanted
to call that out just in case that's
confusing okay so as I said we're gonna
start off with what can we do with a
single layer networks and to begin with
I'm gonna make a very short digression
on real neurons and describe some of the
kind of inspiration for the artifice in
your Andriy we use it's a very loose
connection and I won't dwell there too
much will then talk about what we can do
with a linear layer sigmoid activation
function and then we'll kind of recap
binary classification or logistic
regression which should have been in
either the last lecture or in their
business for that lecture and then we'll
move on from binary classification into
multi-class classification okay so in
the slide here in the bottom right this
is a cartoon depiction of a real neuron
so there's a couple things going on we
have a cell body the dendrites which is
where the inputs from other neurons are
received and then the axon with the
tunnel bulbs and that's kind of the
output from this neuron and more or less
the way this operates when a neuron is
active an electrical impulse travels
down the axon it reaches the terminal
bulb which causes vesicles of
neurotransmitter to be released those
kind of diffuse across the gap between
this neuron and the neuron that it's
communicating with when it's received in
the dendrites it causes a depolarization
that eventually makes its way back to
the cell body and B so some of the
depolarizations
from all these dendrites is what
determines whether or not the receiving
neuron is going to fire or not and in a
very very coarse way this process of
receiving inputs of different strengths
and integrating it in the cell body is
what this equation is describing so it's
just a weighted sum of inputs or an
affine transformation if you will so the
inputs X the the weights W and maybe
some bias B and so
is what we'd call a simple linear neuron
if we have a whole collection of them
then we can move into matrix vector
notation so this vector Y is a vector of
linear neuron States and we obtain that
by doing a matrix vector multiplication
between the inputs and our weight matrix
and some bias vector B and there's not
an awful lot we can do with that setup
but we are able to do linear regression
which I think you guys saw previously
but in practice we typically combine
these linear layers with some
non-linearity and particularly for a
stacking them in depth so let's let's
take a look at one of those
nonlinearities and this will kind of
complete the picture of our artificial
neuron so what I'm showing here is
something called the sigmoid function
you can think of it as a kind of
squashing function so this equation here
describes the input-output relationship
and so when we combine that with the
linear mapping from previously we have a
way to sum of inputs offset by a bias
and then we pass it through this
squashing function and this in a very
coarse way reproduces what happens in it
in a real neuron when it receives input
so there's some threshold below which
the neuron isn't going to fire at all
once it's above threshold then it
increases its fire and great but there's
only so fast that a real neuron can fire
and so it has upset rating and so at a
very high level that's what this
function is is performing for us it used
to be that this was the sort of
canonical choice in neural network so if
you look at papers particularly from the
90s or the early two-thousands you'll
see this kind of activation function
everywhere it's not that common anymore
and we'll go into some of the reasons
why but at a high level it doesn't have
as nice gradient properties as we'd like
when we're building these very deep
models however it is I still actively
use them a couple of places so in
particular for gating units if we want
to kind of have some kind of soft
differentiable switch and one of the
most common places that you'll see this
is in long short-term memory cells
which I'll hear a lot more about in the
class on recurrent networks so yeah as I
said even with just a simple linear so
signal neuron we can actually do useful
things so I just grabbed this purple box
here I grabbed some tourist slides so
there's a slight change in notation but
if you think back to logistic regression
what do we have we have a linear model a
linked function and then a cross and to
be loss and this linear model is exactly
what's going on in this linear layer and
the link function is what the sigmoid is
doing so there's an extremely tight
relationship between logistic regression
and bited classification and these
layers in in a neural network and so
with just a single neuron we can
actually build a binary classifier so in
this toy example I've got two classes 0
& 1 if I arrange to have my weight
vector pointing in this direction so
orthogonal to this red separating plane
and I adjust the strength of the weights
and the biases appropriately then I can
have a system where when I give it an
input from class 0 the output is 0 and
when I give it an input from class 1 the
output is 1 so that was binary
classification we're now going to move
on and discuss something called a soft
max layer and this essentially extends
binary classification into multi-class
classification so this type of layer is
a way to allow us to do either
multi-class classification another place
that you you might see this used is
internally in networks if you need to do
some kind of multi-way switching so if
say you have a junction in your network
and there's multiple different inputs
and one of them needs to be routed this
is something you can use as a kind of
multi way gating mechanism so what does
it actually do well if we first think
about the Arg max function so when we
apply that to some input vector X all
but the largest element is zero and the
largest element is one the softmax is
essentially just a soft version of the
Arg max so rather than
only the largest element being one and
everything else being zero the largest
element will be the one that's closest
to one the others will be close to zero
and the sum of activities across the app
vector what we want so it it also gives
us a probability distribution the
mathematical form is here so we have
these exponents and I don't know if the
resolution is high enough on this
monitor but what I'm showing in these
two bar plots here is two slightly
different scenarios so the red bars are
the inputs the blue bars are the outputs
and the scale of the red bars in the in
the lower plot is double that of the one
in the the upper plot so in this example
here the the output for the largest
input is the largest and you can't quite
see but it's about 0.6 so the closest to
one however if I increase the magnitude
of all the input so that the ratios are
still the same but now this is 0.9 so
it's much much close to 1 so as the
scale of the inputs gets larger and
larger this gets closer and closer to
doing a hard max operation and so what
can we use this for well as I said we
can use it to do multi way
classification so if you combine this
kind of unit with a cross entropy loss
we're able to Train something that will
do classification of inputs into one of
several different classes so let's take
a look at what this relationship looks
like so the output for the il iment
which you can think of it as the
probability that the input is assigned
to class I is given by in the numerator
we have an exponent that is a weighted
sum of inputs plus a bias and then this
is normalized by that same expression
over all the other possible outputs so
we have a probability distribution and
in a sense you can think of what's going
on in this exponent as being the amount
of evidence that we have for the
presence of the ID class and had we had
to retrain this had we learn we can just
do that by minimizing the negative log
likelihood or accordingly
the cross-entropy of the true labels
under our predictive distribution in
terms of notation how we represent that
something that you commonly see these
things called one-hot vectors to encode
the two plus label and what's that look
like well basically it's a vector that
is of the dimensionality of the output
space the element for the true class
like the the entry for the element of
the true class label is one and
everything else is zero so it's this
vector here in the example above these
digits so four-digit for the one hop
label vector would look like this so the
fourth element is one everything else is
zero if we plug this into our expression
for the negative log likelihood then we
see something like this so since the
only element that of T that is going to
be nonzero is the target we're
essentially asking this probability here
the log probability of this to be
maximized and then we just sum that
across our data cases so even just with
a linear layer if we were to optimize
this we could form a very simple linear
multi way classifier for say digits
it wouldn't work super well and we'll
talk about adding depth but that's
something that you can actually usefully
do with one of these layers now as I
said it's it used to be the case that
the the sigmoid was the dominant
non-linearity and that's fallen out of
favor and so in a lot of the neural
networks that you'll see nowadays a much
more common activation function is
something called the rectified linear
unit or so notice just shortened to a
ray Lu and it has a couple of nice
properties so it's a lot simpler and
computationally cheaper than the sigmoid
it's basically a function that
thresholds below by 0 or otherwise has a
pass through so we can write it down as
this so if the if the input to the
rayleigh function is below zero then the
output is just zero and then above zero
it's just a linear pass-through and it
has a couple of nice properties one of
which is in this region here the
gradient is constant
and generally in in your networks we
want to have gradients flowing so it's
maybe not so nice here that there's no
great information here but at least once
it's active the gradient is constant and
we don't have any saturation regions
once it was the you know is active so
you'll hear I think a lot more about the
details of the gradient properties of
this kind of stuff in James Martin's
lecture later on in optimization but
these are kind of some of the subtleties
that I was talking about they're
important to think about ok so we've now
seen just a very basic single layer now
let's move on one step and ask ok what
can we do if we have more than one layer
so what can we do with neural networks
with a hidden layer and to motivate this
we'll take a look at a very simple
example so what happens if we want to do
binary classification but the inputs are
not linearly separable and then in the
second part of this section I'll kind of
give a a visual proof for why we can see
that neural networks are universal
proper function approximate is so with
enough with a large enough network we
can approximate any function so when I
say a single hidden layer this is what I
mean so we have some inputs here a
linear module of weights some nonlinear
activations to give us this hidden
representation another linear mapping
and then either directly to the output
or some puppet non-linearity and
basically another way of thinking about
why this is useful is that the outputs
of one layer are the inputs to the next
and so it allows us to transform our
input through a series of intermediate
representations and the hope is that
rather than trying to solve the problem
we're interested in directly an input
space we can find this series of
transformations the render our problem
simpler in some transform representation
so again I think this was covered
towards the end of those previous
lecture but if you think back to what's
going on with basis functions it's a
similar kind of idea so this is probably
that the simplest example that can
exemplify that so it's kind of simple
XOR task so
let's imagine that I have four data
points living in 2d a B C and D and a
and B are members of class 0 C and D are
members of class 1 now if I just have a
single linear layer plus logistic
there's no way that I can correctly
classify these points there's no there's
no line I can draw that will put the
yellow B the yellow point to one side
and the blue points on the other now
let's think about what we can do with a
very simple Network as I've drawn here
so we're just gonna have two hidden
units and so let's imagine that the the
first in unit has a weight vector
pointing this direction so in terms of B
its outputs these will be 0 in this red
shaded region and one here and then the
second hidden unit will have a slightly
different decision boundary it'll be
this one so it'll be 0 here and one here
and now if we ask ourselves ok in this
space of hidden activities if I rewrite
the data fight if I plot it again which
I'm doing down here
what does my classification problem like
in this new space so let's go through
the steps of that so point a had one for
the first hidden unit and 0 for the
second so it would live here point B
same again 1 and 0 also lives there
Point C has 0 for the first in unit 0
for the second it lives here and then D
has 1 and 1 so it lives here so this is
the representation of these four data
points in the input space this is the
representation in this this first hidden
layer and so in this space the two
classes now are linearly separable and
so if I add an additional linear plus
sigmoid on top of this then I'm able to
classify these two point B this data set
correctly and so this is again it's a
very simple example but I think it's a
useful motivation for why having a
hidden layer gives us additional power
actually looks like there's a couple of
seats free I see a couple for extending
good
if you want to take a second to sit down
if that's easy for you there's a couple
down here at the front and the second or
so here's another problem of a similar
flavor but slightly less travel so if we
now have the setting here where the data
from different classes live in these
quadrants then just two hidden units on
their own won't cut it but it turns out
that with 16 units you can actually do a
pretty good job at carving up this input
space into the four quadrant and there's
a link from the slide out it's something
that if you guys are not aware of it
it's nice to look at there's a a
tensorflow web playground that basically
lets you take some of these very simple
problems in your browser and play around
with different numbers of Units
different nonlinearities and so on and
itself will typically train on these
problems in a few seconds in it even
looks very simple I think it's a really
nice thing to look at to refine your
intuition for what sorts of things these
models learn what the decision
boundaries look like and Academy to add
detail to your kind of mental picture of
what's going on so yeah when the slice
is shared I'd encourage you to take a
look at that and just kind of play with
some of these simple problems in the
browser to kind of refine your intuition
okay so we've seen that the power that
we can get for these toy problems I'm
now going to go through I guess I'd call
it a sort it's not quite a proof but a
visual intuition pump if you will for
why neural networks with just one hidden
layer can still be viewed as universal
function approximate is and this is one
of those ideas that was arrived at by
several people more or less concurrently
one the kind of well-known sort of
proposes a proof of this was a guy Chu
Benko from 89 and that the papers are
linked here there's also again in terms
of the hyperlinks there's again some
nice interactive web demos one of them
in Michael Nielsen's web become deep
learning that
I'd recommend you take a look at and
going a little beyond the scope of this
class it turns out there are interesting
links along these lines to be made
between neural networks and something
called Gaussian processes they're not
going to be covered today but again I'd
encourage you to take a look if you're
interested okay so what what is our
visual proof going to be the with enough
hidden units we can use a neural network
to approximate anything so let's begin
by just considering two of our linear
plus sigmoid units here and let's
imagine that we arranged for the weight
vectors to point in the same direction
or maybe we'll start off with just a
scalar case so the only difference
between unit 1 and unit 2 is the bias so
that's the kind of offset of where the
sigmoid kicks in and then let's imagine
okay what happens if we take this pair
of units and we we subtract them from
each other what does that difference
output look like and it turns out it
looks something a little like this this
kind of bump of activity Y well over to
the far left both these units is 0 so
the the difference is 0 over to the far
right the upper buddies answers 1 so
they cancel and then in the middle we
have this this little bump and so by
having this pair of units were able to
create this this bump here which is a
lot like a basis function right so let's
imagine that we want to use a neural
network with a hidden layer to model
this gray this arbitrary gray function
here one of the ways we could do it it's
probably not the best way but just as a
kind of proof to show it can be done is
you could imagine now that I've got
these little bumps of activity I can
arrange for that offset to light
different points along this line and I
can also scale the but a multiplicative
scale on this so the idea is through
pairs of units we can kind of come up
with these little bumps and if we think
of what the sum of all these bumps look
like if I have enough of them and
they're narrow enough then it starts to
look like this gray curve that we're
trying to fit so the Mobile's we have I
either the bigger the
the hidden layer the more accurate our
approximation and so that's the kind of
sketch proof for 1d in 2d this same
sorts of ideas apply except we now need
a pair of hidden units for each
dimension of the input so it's hard to
visualize in dimensions beyond two but a
similar sort of thing would apply in 2d
where we if we have four neurons we can
build these little towers of activity
that we can kind of shift around and
again the same idea would apply so
hopefully this is convinced to you that
with enough units we can approximate
everything although it doesn't sound
very efficient and you'd hope that
there's a much better way of doing that
and it turns out that there is so now
that we've seen what we can do mmm I
don't think so you're you're not taking
the area under each bump you're just
taking their kind of magnitude of the
function so there's dump your question I
think I may listen this is your question
okay okay I see any any more questions
before we move on okay so now we're
gonna start to think about deeper
networks so we've seen what we can do
with just a single hidden layer and we
do have this Universal approximation
property but we've also seen that it is
kind of a horrible way to do it it needs
many many units and it turns out that as
we add depth things get a lot more
powerful and we've become
a lot more efficient and again I'll give
a kind of a reference to a paper that
has the full proof but for the class
I'll try and give you a sort of more
visual motivation for how you can see
that that is something that happens and
again to kind of motivate what you were
what you get if you allow these very
deep transformations again coming back
this idea of rather than trying to kind
of go from inputs to outputs in one go
it allows us to potentially break it
down into into smaller steps so you know
cartoon from vision might be rather than
going straight from a vector of pixels
into some kind of scene level analysis
maybe it's easier if in the first stage
of transformation we can extract the
edges or into the edges from an image
from those you can start to think about
composing those edges into say junctions
and small shapes from there into part
there aren't objects and then there
enter into full scene so breaking down
these complicated computations into
smaller chunks in the in the second half
of the section will kind of flip to this
what I'm calling out a more modern
compute graph perspective and there will
kind of really start to see the creative
designs that you can do in these very
large networks and I'll also throw in
just a couple of examples of real-world
networks that you can see what I mean
when I when I say that the structure
these things can get very elaborate okay
so yeah what I'm gonna do for this slide
in the next one is just go over how we
can see the benefits of depth you can
ignore this is my slide from last year
when there was an exam but this era of
things cause what they say you know a
minute worried so here's the
construction so if we imagine taking the
rectified linear unit that we we saw
previously so one of these is just zero
if it's if neighbors blow zero zero it's
linear about that and imagine we take
another one of these rectifiers and
essentially flip the signs of the
weights and biases so it's kind of V
converse what this gives us oriented it
around the origin in this case is a full
rectifier and so in 1d this has the
property that anything we build on top
of this will have the same output for a
point of plus X as it will at minus X so
it's kind of it's mirroring where you
can imagine it as kind of folding a
space over so yet multiple points in the
input mapped at the same point in the
output and so this letters have multiple
regions of the input showing the same
functional mapping will kind of extend
that from 1d into 2d here so imagine
that I have two pairs of these full
rectifiers so that would causes you four
hidden units in this layer in total one
of the rectifiers
is arranged along the x axis and one
along the y axis and so what it means is
that any any function of the output of
these is replicated in each of these
quadrants and so one way you can think
about what these rectifiers are doing is
if I were to take that 2d plane and kind
of fold it over and then fold it back on
itself functions that I would map on
that folded representation if I unfold
it it kind of fall back into the
original input space so that's the kind
of underlying intuition you guys okay
yeah and so this is from this paper from
2014 by wonderful Pascal oho and Benjy
and what I just described is the sort of
basic operation they use to come up with
this interesting proof about the
representational power of deep networks
so I'll kind of step through this this
diagram fairly quickly again if you if
you're interested then it's a nice paper
and fairly easy to read but it's just
too too many details to go through today
so as I said we imagine by applying
these pairs of rectifiers what you end
up with is this folded space I can on
the outputs of that so
I can apply a new set of units on top of
that which would end up kind of folding
this space again and so what we end up
with any decision bound we have in the
final layer as we kind of backtrack so
going through this unfolding gets
replicated or distributed to different
parts of the input space so probably the
most helpful thing to look at is this
this figure here so if we have a network
arranged like this in this output layer
if we have a linear decision boundary
when we unfold that we end up with four
full boundaries one in each of the
quadrant represented here so we've gone
from two regions that we can separate
here to eight regions that we can
separate here if we were to unfold that
again then we end up with 32 regions so
the kind of the high-level take home
from this is the the number of regions
that we can assign different labels to
increases exponentially with depth and
it turns out it only increases
polynomial e with a number of units per
layer so so all that's being equal for a
fixed total number of neurons there's
potentially much more power by making a
narrow deep network than there is in
having a shallow wide Network you know
the details of that will depend on your
problem but that's one of the intuitions
for why adding depth is so helpful it's
guess so it's hot on to these questions
so I say the state of theory in deep
learning alone is is know in their world
we'd like it to be so there aren't of
good rigorous demonstration of that
empirically in a lot of problems what
you'll find is in a few
you try and tackle something with a
fixed budget of Units then in practice
often you will get better empirical
performance by adding a couple of hidden
layers rather than having one very wide
very wide one but it's also problem
dependent yeah I think there's another
question somewhere over there okay does
that answer your question
sure yeah don't worry but my pastor yeah
I just encourage you to read the paper
because it's it's really nicely written
in to the extent that yeah this works
for you as an intuition pump it's worth
taking the time to kind of go through
that argument and understand it okay so
now I said we're gonna switch gears a
bit and move from this what I would say
is a kind of more traditional style of
depicting and thinking about neural
networks and in this we sort of bundle
in our description of layers the
nonlinearities and move towards this
kind of more explicit compute graph
representation where we have separate
node for our weights and we separate out
separate out the linear transformation
from the nonlinearities and this is more
similar the kind of thing that you'll
see if you look at say visualizations in
tents aboard so these are kind of
isomorphic to each other and to these
equations here I'm just I just put
together an arbitrary graph just to kind
of highlight this so we have input to a
first and layer with a sigmoid the
outputs of this go to a secondhand layer
which I decided to pick a railing for
there's another pathway so that yeah
this one is really there's another
pathway coming through here and then
they combine at the app
that exactly the same thing here I'm
just kind of adding these additional
nodes and it seems like we've kind of
made this one looks more complicated
than this one but there's a reason for
kind of breaking it down like this which
will kind of move on to in the next
sections and that's the idea of kind of
looking at these systems just as kind of
compute graphs from modular building
blocks and the nice thing is if we if we
represent and think about our models in
this way then there's a nice link into
software implementation so we can kind
of take a very object-oriented approach
to composing these graphs and
implementing them and for most of what
we need to do there's a very small
minimal set of API functions that each
of these modules needs to be able to
carry out and you can basically have
anything as a module in your graph as
long as it can carry out these these
three functionalities so and well we'll
go through them and in the subsequent
slides but just to kind of signpost them
there's a forward path so Harry go from
inputs to outputs there's a backwards
pair so given some gradients of the loss
we care about how do we compute those
gradients all the way through the graph
and then how do we compute the prior
updates and this is just putting this up
here this is what the compute graph for
Inception before looks like and I just
wanted to kind of put this up the to
ground why it's important to have this
kind of modular framework because you
know for the for the small networks that
I was showing you initially it kind of
doesn't matter how you set up your code
you could you know you can drive
everything by hand you know maybe you
want to fuse some of the operations
yourself just to make things efficient
but once you have these massive massive
graphs then keeping track of that in
your head or by by hand is just not
really feasible and so you need to have
some automated way of plugging these
things together and being able to to
deal with them so this I think it's not
state-of-the-art anymore that's a kind
of sign of how the fields moving but as
of around this
last year this was a state-of-the-art
vision architecture it's still pretty
good this is another example this time
from deep reinforcement learning and
again and just kind of putting this up
there to give you a sense of what sorts
of architectures we end up using it in
real-world problems and the sorts of
somewhat arbitrary topologies that we
can have depending on on what we need to
do the details of this don't matter too
much but I I think towards the end of
the RL course Hado might cover some of
this stuff ok so the the next section
we're going to cover learning and it's
probably going to be one of the more
math heavy sections and I guess I'll
I'll cover up the material but I usually
find it's not super productive to be
very detailed with mathematics in a
lecture but you can kind of refer to the
slides for details afterwards so what is
what is learning as I said it's very
simple we have some loss function
defined with respect to our data and
model parameters and then learning is
just using optimization methods to find
a set of model parameters with minimize
this loss and typically we'll use some
form of gradient descent to do this and
there'll be a whole lecture that kind of
covers various ways of the optimization
I guess something else that I'll add
just cuz it's starting to become popular
in source is something that I'm working
on in my research of the moon so there
are great in free ways of doing
optimization so kind of 0th order
approximations to gradients or
evolutionary methods and again I guess
one of those things were you know these
things coming waves of fashion day they
were kind of popular in the early 2000s
they've fallen out of favor they're
actually appearing again particularly in
some reinforcement learning contexts
where you have the situation that sure
we can kind of deal with great in our
models but depending on the data that we
have available so in
wasn't learning the data you trained on
depends on how well you're exploring the
environment it might be that there just
isn't a very good gradient signal there
and so we won't cover it today I don't
know if James will touch on a bit on his
lecture but it's just useful pretty
aware of that there are these sort of
gradient free optimization methods as
well and depending on your problem that
might be something useful to think about
and at least be aware of so in this
section I'll start by doing a kind of a
recap of some calculus and linear
algebra will recap Green percent and
then we'll talk about how to put these
together on the compute graphs we were
just discussing with automatic
differentiation is something called
modular backprop and what I'll do at the
end of the section is we can kind of go
through a more detailed derivation of
how we do a set up if we wanted to say
do classification of endless digits with
a network with one hidden layer so just
a kind of very cruel example but once
you've got that it kind of generalizes
to all sorts of other things that you'd
want to do so there's two concepts that
it's useful to have in mind they're kind
of objects that allow us to write some
of the the equations more efficiently
and to kind of think about these things
in a slightly more compact way so one of
them is this notion of a gradient vector
so if I have some scalar function f a
vector argument then the elements of the
gradient vector which is denoted here
with respect to X are just the partial
derivatives of the scale output with
respect to the individual dimensions of
the vector the other concept that's
going to be useful in terms of writing
some of these things down concisely
is the Jacobian matrix and so there if
we have a vector function of vector
arguments then the Jacobian matrix the
NF element of that is just the partial
derivative of the nth element of the our
vector with respect to the F element or
the input vector
and in terms of gradient descent what
does that mean well if we have some lost
function that we want to minimize then
essentially we were just kind of
repeatedly doing these updates where we
take our previous parameter value we
compute the gradient and we can do this
either over our entire data set or which
would be kind of batch or a kind of
subset of the data which be mini batch
or something that we end up calling
online gradient descent which is if we
take one data point at a time we just
compute the gradient of our loss with
respect to that data and then take a
small step scale by this learning rate
eater in the direct descent direction
and then we end up repeating this in
what I'm gonna talk about the cone
slides I'm gonna operate in the
assumption that we're doing it online it
doesn't change it much if we do batch
methods it's just easier to represent if
we just have one data case I have to
think about and I'll cover this a couple
of times later as well but it's just
worth stressing that the choice of
learning rates are the step size
parameter ends up making a big
difference but how quickly you can find
solutions and in fact the quality of
solutions that you end up finding and so
that's something that will touch on when
we talk a bit about hyper parameter
optimization and moving beyond simple
gradient descent there's a lot more
sophisticated method so things like
momentum where you kind of keep around
gradient from previous iterations and
blend them wood grain from the current
iteration there's things like rmsprop or
atom which are adaptive ways of scaling
some of the step size as long different
directions and I think James is going to
go into a lot more detail about that in
a couple weeks time okay
so if you think that too kind of high
school calculus and in particular the
chain rule so let's start off with this
nested function so Y is f of G of X and
so if we ask okay what's the derivative
of Y with respect to X well we just plug
in the chain rule so it's the derivative
of F with respect to G considering
g-tube its argument and then the
derivative of G with respect to X so a
similar scalar case scalar output scalar
input if we make this multivariate so
now let's imagine that our function f is
a function of multiple arguments each of
which is a different function G 1
through m of X and again were interested
in the same question what's the the
derivative of Y with respect to X well
we sum over all these individual
functions and then for any one of them
it's again just the chain rule from
above so the partial of F with respect
to G I and then the partial of G IR with
respect to X so we basically for each
half of nesting we take a product along
a single path and then we sum over all
possible paths to get the total
derivative and well basically just gonna
take these concepts and scale them up so
that we can apply them to these compute
graphs and the only thing to be aware of
an hour I'll have mentioned this again
in a second there's a couple of
efficiency tricks that we should be
aware of so if there are junctions as we
traverse there's opportunities to
factorize these expressions and that
becomes particularly important if you
have a graph with a lot of branching in
its topology so let's let's take a some
arbitrary if you graph as an example
again so
it's a little dense when I write it out
but hopefully this will kind of like
carry over the point so so imagine we
have some function mapping from X to Y
and the way this is going to be composed
it's gonna be some G of F F is going to
be a function of its two inputs E and J
and then E is this kind of nested
sequence of functions or operations all
the way to X and similarly J so if I
take what I just set up here and ask
okay what's the derivative of Y with
respect to X then we take the product
along these two paths as I say so a
through G and then there's also this
path through here and so we get these
two expressions down here what I was
saying about kind of some of the
efficiency tricks is you'll notice
there's some common terms towards the
end of this expression and this
expression and so we could actually
group these together factor those out of
that sum in the scalar case it doesn't
matter too much but we'll move to the
the vector case and more elaborate
graphs you'll see why it's important
essentially if there's a lot of
branching and joining then we have to do
these sums over there kind of
combinatorially many paths through the
graph for the mapping that we're
interested in the other point that is is
worth mentioning is so if you look at
the literature on automatic
differentiation you might hear a couple
of different terms so there's something
called forwards mode automatic
differentiation and something called
reverse mode automatic differentiation
and that just that's really referring to
when were computing these expressions do
we compute the product starting from the
input working towards the output or do
we work in Reverse and the difference
between the two is to do with what sorts
of intermediate properties that we end
up with so if I work from the input
towards the output so if I can
this product see from the inputs to the
outputs then my intermediate terms are
things like da/dx if I then compute this
then I basically would end up with DB DX
DZ DX so in forwards mode we get the
partial derivatives of the internal
nodes with respect to the inputs which
is actually not super useful for what we
want to do it it's great if you want to
say do sensitivity analysis so if I want
to know how much changing a little bit
of the input would affect the output
this is exactly what we want to do and
that can be useful in deep learning if
you want to get a sense of how models
are representing functions or which bits
the input are important but it is not
useful for learning however if we
Traverse this in the opposite direction
so from outputs towards inputs then we
end up with two terms that are
derivatives of the output with respect
to the internal nodes and it turns out
that that's exactly what we need for for
learning so so it's interesting kind of
explaining this stuff because on the one
hand it's all kind of trivial it's you
know it's basically the chain rule you
know you'll have seen this in high
school so it's kind of one of these
simple ideas that actually had quite a
big impact so even though it's kind of
obvious when you look at it like this in
terms of the impact on efficiency when
you're computing gradient updates for
neural networks it makes a big
difference organizing the computation in
this efficient way and I think that's
one of the reasons why when backprop was
introduced it had such a big impact even
though at hardest a kind of
fundamentally simple method and also
what we'll see as we move on to kind of
the more vector calculus how to things
it all looks pretty trivial if we're
dealing with scalars but once we move
into large models then B again we'll see
why the ordering makes difference so
yeah essentially reverse mode or my
differentiation a clever application of
the chain rule back prop that all the
same thing
so basically in the backwoods pass
through the network what we're going to
want to do is compute the derivative of
the loss with respect to the inputs of
each module and if we have that then
that kind of goes into part of this
minimal API that I was describing those
three methods that if our modules
implement those then we can just plug
them together however we like and go
ahead and train the other thing that's
worth mentioning is interesting is that
this idea doesn't just apply to things
that you might consider to be simple
mathematical operations you can actually
apply this to the entire compute graph
including constructs like for loops or
conditionals and so on essentially we
just backtrack through the forward
execution path so if something has a
derivative we take it but if in the case
of an if Clause then we essentially
there's multiple execution branches that
we could have ended up following when we
work backwards we just need to remember
which branch we followed going forward
and that's the one that we we use when
we're going in the reverse direction so
essentially we can take an entire
computer program more or less and
everything we can apply this automatic
differentiation to and that's one of the
powerful things that tends to float does
for you it allows you to write these
I'll retreat in few graphs and then when
it comes time to learn it does the hard
work of doing all this backtracking for
you and kind of okie canoeing in terms
of how the gradients flow there's a
couple of things that you need to be
aware of so in most implementations of
this you need to store the variables
during the forward pass so in very big
models or sequence models over very long
sequence lengths this can lead to us
requiring a lot of memory but there are
also clever tricks to get around that so
there's a nice paper that I linked to
here which is one way of being memory
efficient and it's essentially boils
down to being smart about caching States
in the schema in the forward execution
so rather than remembering everything
you can think
it's like every few layers say we
checkpoint then in the back Pro pass
rather than having to remember
everything or the other thing would be
to kind of compute everything for
scrapped we can find the most recent or
the the closest cache state and then
just do a little forward computation
from that to get the states we need to
evaluate the gradients and yeah that
most of this is taken care of
automatically by things like tensor flow
and even I think this memory fish and
stuff is probably going to find its way
into the core tensor flow code probably
the next release or two so a lot of
these things you on a day to day basis
you don't need to worry about but again
I think it's always useful to kind of
know what's going on under the hood in
case you are doing something unusual or
if you are running into some of these
problems okay so in this cartoon here
what I'm showing is how those different
pieces fit together and the sorts of
things that looks like once we're in a
more realistic setting so we have vector
input SPECT outputs and as I said
there's these three API methods that as
long as we have some sort of
implementation of these then we can plug
together these arbitrary graphs of
modules and figure out the outputs given
inputs figure out the derivatives we
need to figure out the parameter update
so what are they the first one is what
I'm calling the forward pass so this is
just what's the output given the input
so through here and then there's two
methods that involve gradient so one
which I call the backward pass is we'd
like to know the gradient of the loss
with respect to the inputs given the
gradient of the loss with respect to the
output and so it turns out that what
does that look like well thinking back
to the chain rule slides from slides ago
if I want to think about this element
wise then
the gradient the lost with respect to
the I input is just the sum over all the
outputs of the gradient of the lost with
respect to each of those outputs and
then the gradient of those outputs with
respect to the input and if we want to
use our vector matrix notation then it's
the product of this gradient vector with
respect to the Jacobian of Y so this is
just a kind of compact way of
representing things similarly to get
parameter gradients or that's just the
derivative of the loss with respect to
the parameters which is then the sum of
all the outputs maduro to the loss with
respect those outputs the derivative
those outputs with recta parameters and
then these are obviously evaluated at
the state that it was 1 we're doing the
forward pass and that that's why I was
saying before that we need to keep these
states around because typically these
derivative terms will involve an
expression that involves what the
current state is so yeah these are kind
of compact ways of representing this in
practice we we actually don't if you
were to write these models yourself you
probably wouldn't want to form the full
Jacobian in these cases just because the
jacobians tend to be very sparse so if
there's there are many inputs that might
not have an influence on an output and
so many elements of the Jacobian are
often 0 but it's useful notationally
particularly if you kind of go back and
forth between this and the subscript
notation if you ever need to kind of
derive how to implement an R between new
module for yourself say if you have some
wid function there's and supported by
tons flow so yeah but that's more or
less what I just said so we have these
these methods that we we need to
implement and we chained the forward
passes together so how would we operate
this we'd we'd call the forward method
for the linear unit given the parameter
an input that would give us some output
the forward method of the relu the
forward method linear
method of the softmax and then we'd get
a loss and then we just call be
backwards method zombies to get our
derivatives of outputs with respect to
inputs and derivatives with respect to
parameters we apply the gradient that we
get from the parameters to take a small
descent step and then we just iterate
that so what I'm going to do in the next
couple of slides is go through what some
of those operations look like for these
building blocks and by the end of it
we'll have everything we need to do to
put together something like endless
classification with cross-entropy loss
and a single table later okay so the
forward pass for a linear module we're
calling for the Binnington class is just
given by this expression here so the
vector output is a matrix vector
operation plus a bias again I say in
these derivations is often useful to
kind of flip back and forth between
matrix vector notation and subscript
notation so this is just kind of
unpacking what the nth element of this
output vector is so we can compose the
relevant bits of the Jacobian that we
need so what do we need we want the the
partial of Y with the Spectras inputs
the partial of Y with respect to the
bias and the partial of Y with respect
to the weights and we get these
expressions this is what I was saying
before so this Kronecker Delta here most
of the elements of this Jacobian are
zero because if there isn't oh if
there's not a weight involved in this in
this particular but if a particular
weight isn't involved in producing a
particular output then there's
absolutely zero and so it's quite sparse
so armed with this we can come together
and get our backwards pass so what is
that it's just given by this expression
so we kind of plug in
these things that we've already derived
so if we have the as I said in the back
was possibly assume that were given the
gradient of the output with this of the
grading of the loss with respect the
output and so we just have this matrix
vector expression here similarly for the
parameter gradient if we kind of churn
through this this math then we we get
this outer product of the gradient
vector with the inputs and there's a
similarly simple thing for the biases so
armed with that we have everything we
need to do forward propagation backward
propagation and parameter updates for
the linear module the rally module is is
super simple so there's no parameters so
that the forward pass is just this max
of 0 and the input so it's our kind of
floor at zero and then the backward pass
is also simple is this kind of element
wise comparison so if the output was
above zero then the gradient with
respect to inputs is just 1 were in that
the linear pass through if the output
was below zero then there are no
gradients the softmax module is a little
trickier to derive from the omits for
but it's basically still simple calculus
so if we recall one that was the the
enth output is just this exponent of the
sum total and input normalized by that
same expression over all units we can
plug these in derive our Jacobian
element and then similarly we can plug
them in the backwards pass I've actually
skipped the derivation for this and I
think for the next one in the slides
just collecting that's going to come up
as something on your assignments but in
a later version of the slides I'll
update it with that the solution in
there okay
so second I don't think so so yeah good
question um I think I usually I usually
do a greater than zero so if it's equal
to zero then I treat the brain at zero
it's not it's not well-defined in
practice you can kind of assume it it
doesn't happen much but I would just set
the define the gradient at zero to be
zero but it actually doesn't matter too
much just because numerically or
extremely unlikely to hit something
that's exactly zero yeah so the final
part of this was the the loss the loss
itself and so again there's there's no
parameters in the forward pass we just
this is our definition of the loss when
we take derivatives we end up with this
expression and you might look at this
and be a little bit worried particularly
that with with this kind of expression
and X can vary a lot then you might
worry that if X is very small we might
run into numerical precision issues and
in fact actually that is a real concern
so what people typically do is use this
kind of compound module so it's softmax
plus cross-entropy and you'll see that
in terms of flow I think this
implementations for both but
you know unless you have your own
special reasons you probably should use
that the softmax plus cross-entropy so
it basically combines both the the
softmax operation and the cross and
pre-loss into a single operation and the
reason for that is if we do that and
look at the the grades that we get out
then it's this much more stable form
here so if we kind of go back what are
we done we had this graph that we wanted
to do learning in safer does it
classification we've gone through and
for each of these module types we
figured out what we need to do to kind
of propagate forwards what we need to do
to propagate backwards and what we need
to do to come up with the parameter
derivatives and armed with that we're
ready to go and we can plug together
things in whatever order we like so in
terms of learning we just kind of
iterate through getting an input and a
label running forward propagation
running backwards propagation getting
parameter updates applying the parameter
updates and cycling and the nice thing
is if we'd you know written this from
scratch ourselves and we wanted to try
adding in you know an extra hidden layer
then it'd be very simple we we we just
kind of put another one of these mod
modules here change the call sequence
and and we're good to go so once we have
those in place it's then very easy to
explore different topologies if I wanted
to California come up with some crazy
non-linearity instead of the rail ooh
then I am free to do so I would just
implement a module that has those three
API methods and and everything should
just work in this next section I'm going
to kind of do a quick tour of what I'm
calling a module Zoo so we've seen some
basic module types that are useful so
linear sigmoid relu softmax just gonna
go through some of the other operations
that you might see so there's actually
two main types of linear model the first
is the kind of simple matrix
multiplication that we've seen already
convolution and deconvolution all
laser also linear I'm not gonna talk
about those but Karen's going to cover
those in the next lecture on commnets
there's a couple of basic sort like
element wise operations so addition and
putting wise multiplication some group
operations and then a couple of other
nonlinearities that are worth knowing
about also in the slides I like how this
is sort of fairly inexhaustible writing
your possible activation functions you'd
wanna use typically the ones that we're
gonna cover today will will be in the
vast majority of thing you see but it's
also worth remembering but if you know
if you have a particular problem or if
you feel like you need to think
creatively about it your you have
license to kind of but pretty much
anything you want in these models as
long as they're differentiable you're
absolutely fine and even if they're not
perfectly differentiable you might still
be able to kind of come up with
something that's usable so yeah I'll go
through these relatively quickly so if
we want to do addition then the forward
pop method just obviously simple vector
addition the back pop method also
relatively straightforward there's no
parameters that there's no gradient
update
similarly for multiplication so element
wise multiplication this kind of thing
is kind of useful in as I saying into
like gating situations where depending
on some context say you might want to
propagate some parts of the state and
not others also comes up in modulation
or things like attention so if I want to
emphasize some parts of my
representation and relative to others
that's are elsewhere you'd see this kind
of operation there's a couple of kind of
group wise operations so summing for
instance so if we have a sum then the
gradient is kind of gets distributed the
back for grading gets distributed across
all the elements if we have a Mac so you
might see this in max pooling in
commerce for instance then basically
the for the back prop if the element was
not small then the gradient just passes
through otherwise there's no gradient if
we have a switch or a conditional one
way of representing it as I was saying
is with this kind of element-wise
multiplication and we basically just
need to remember which brand to which
switch was active that gets back propped
everything else gets set to zero here's
a couple of slight variance on
activation function we've seen already
so the tan H is basically just a kind of
scaled and shifted version of the
sigmoid so it's at 0 its 0 and it
saturates at 1 and minus 1 if you were
to build a feed-forward Network there's
some in potential in some cases there's
advantages to using 1080 over sigmoid in
that if you initialize with small
weights and small biases then you
basically get to initialize in this
linear region here and in practice it's
often nice if you can initialize your
network so that it it does a kind of
simple straightforward function rather
than kind of risking being in some of
these saturated regions where the
gradients are going to flow for similar
kind of gradient flow reasons rather
than using the rail ooh this would be
kind of zero here another thing that
people sometimes use is to have a very
small but nonzero slope in this negative
region and again it just kind of helps
with gradient propagation in the you no
longer lose all gradient if you're below
zero and that could also can be useful
I'd say that this actually on those
things were probably if you it's not a
default choice but maybe it should be in
that in my experience is often better to
use this than it is to use a rail ooh
that said I often don't use it just to
kind of keep as few moving parts as
possible because you know there are
design choices that you'd want to make
here so I if there was something that I
really really heard about getting the
best performance out of I probably start
to explore some of these variants but
day to day I tend to kind of stick with
the simple choices just because then
there's fewer few things to keep track
of in terms of mental overhead we've
already seen content to be lost and so
there's just another simple one so if
we're doing say regression problems then
squared error is a common choice yeah I
didn't have on the slides way I can add
it later just exciting again worth
noting
so square error is very common in
regression problems again in practice I
would probably try squared error if I
had this but I'd probably also try other
norms as well so in particular l1 one of
the problems with squared error is if
you have outliers or operations that for
whatever reason have to be way happen to
be way off mark that you can get
extremely large gradients and so
sometimes that can make learning
unstable so again in all these cases
there's sort like reasonable defaults
that are sensible to start with but it's
also useful to know kind of okay what
would the design choices that I might
want to revisit be if if things for
whatever is not working or if great
Nizar kind of blowing up and actually
that brings me on to this next section
where what I'll do is kind of go through
some sort of high-level practical tips
in terms of things that might be useful
for you when you're dealing with these
models and kind of good things to to
bear in mind this came up a bit in the
break as well it's sort of the the field
at the moment there's definitely a kind
of scarcity of strong theoretical
statements we can make and so
unfortunately that kind of means that a
lot of deep learning is still a bit more
of a dark art than it would be ideal so
there are some things that you can kind
of plug in and just rely on but there's
also a lot of trial and error and it's
some pieces where you kind of have to do
more of a an interrogated loop of okay
is this model working if so great if not
okay what might be going wrong and a lot
of getting good at this kind of stuff is
refining your intuition for if something
isn't working
what might the causes be
to quickly diagnose that and also what
sort of things you could do to fix that
so let's go through these so one problem
that you can run into is overfitting so
you get very good loss on your training
set but you don't generalize well so one
thing you can do there and this was kind
of probably in the early days is early
stopping so you basically just rather
than training to kind of push your loss
all the way to zero you kind of in
parallel or evaluating on some
validation set and you stop once say
that the loss in your validation step
starts to go up that's one method some
something else you can do and you know
you can do all these in combination
there's something else called weight
decay and it basically penalizes the
weights in your network from becoming
too big and one intuition for why this
might be helpful is if we think about
something like the sigmoid with small
weights we're going to tend to be in
this more often in this linear region so
our kind of functional mapping will be
closer to linear and so potentially
lower complexity what one thing to
mention actually about weight decay is
that it doesn't have as much an effect
on reloj units as it does on some of
these others so it may be a less useful
form of regularization for your relu
layers it'll still obviously have an
effect on the output but with Ray lose
you can get a new scale all the weights
down and you still have the same set of
decision boundaries so it doesn't quite
regularize where I lose in the same way
something else that you can do is I said
he add noise and this kind of brings us
on to things like drop out and there's a
couple of ways of interpreting what's
going on so you can add noise to your
your inputs which you could also think
of as a form of data augmentation you
could add noise to your activities you
can add noise to your parameters you can
kind of
mask out some of the activities of units
within layers and yet in terms of the
like what is this doing well you can
kind of think of it in a couple
different ways one is that it prevents
the Mott Network from being too reliant
on very precise conjunctions or features
so you can imagine that you know that'd
be one way to memorize your data set if
you kind of have very precise activities
that depend on the very precise pattern
that you see in a particular input you
can also view it as a kind of cheap way
of doing ensemble assay model multiple
times adding different amounts of noise
then that's some what you might spend
that to have somewhat similar effects to
if I had an ensemble of similar models
and so you can also kind of tie that
into some ideas from so phasing
statistics are rather than say have a
single model you have a posterior
distribution over parameters and adding
noise in a hand-wavy sense is a little
bit like looking at a local plastic
proximation and then probably the best
known of these is is dropout and so in
this you sort of randomly set a fraction
of activities in a given layer to 0 and
at testing time you kind of need to
rescale things by the proper fraction
because at test time you're gonna have
everything active so this would be
typical magnitude of the activities in a
given way are going to be higher it's
also worth noting that sort of drop out
it's one of those things that kind of
you know peaked in popularity I guess
around like 2012 or so it's not used as
much these days as it used to be I think
one of the reasons for that is the sort
of introduction of normalization so I'll
talk about that in a second but another
another factor that can be important in
terms of whether your models train well
or not is how will you initialize them
and yeah this can expect what I was
saying about you know the tonnage being
someone nice and that if you have small
weights then you can get to initialize
things in a more or less linear region
but
the beginning of training you want to
make sure that you have good gradients
flowing all the way through your network
so you don't them to be too big and you
don't them to be too small there's
various heuristics for kind of arranging
for this to be the case a link to a
couple of figures here so and for some
reason a lot of these are kind of named
after the first author of the proposed
these so there's something that protocol
Xavier initialization named after is a
burglar who's so deep mind I forget the
first name of hair but there's a
follow-on paper that the difference
between these two is that both trying to
say okay how should I scale my weights
and biases at initialization so that the
input to my normally era T's say have
some particular distribution so maybe 0
mean unit variance but differently in
these two is that the assumptions that
you might want to make if you're using
say a sigmoid unit are different from
those if you're using say a rectified
linear unit so yeah there's a couple of
papers here that you might want to take
a look at then there's this thing batch
norm which is used full extensively now
particularly in feed-forward networks
it's still not used as much in recurrent
models just because there's some
subtleties about how you'd actually go
about doing that and it's used
I'd say hardly at all in deep RL but
there's probably modifications to this
kind of idea that you could do a few if
you wanted to apply those approaches
there and it it kind of subsumes some of
the stuff in that you can think of it as
being similar to what we do in some of
these initialization methods but we also
continuously update to maintain these
properties so the idea is we'd like the
the inputs the some inputs to our units
to have a zero mean and unit variance
but for the reasons I described in terms
of initialization what batch norm does
is it kind
enforces that but it also introduces
some additional trainable correction
factors so that if it turned out in fact
I would rather have something that had
variance 10 and I've mean of one then
there's kind of scalings and offsets
that I can learn during training to help
that be the case but it all that's being
equal it kind of helps keep my
activities you know a reasonable regime
with respect to minorities and also with
respect to the kind of gradient scaling
that we get when we do back from another
nice benefit of Bachelor on that is I
think actually mentioned less often but
is is interesting and is perhaps part of
the reason why drop out isn't as favored
as much is that you you get a sort of
drop out like noise effect from batch
normalization and that in order to
enforce or to encourage these kind of 0
mean unit variance properties you look
at your local data batch and so just
because of randomization amongst the
cases that you get in a given batch from
the point of view of any one of those
data cases the contribution to the batch
normalization from the rest of the batch
members looks a lot like noise and so
that kind of gives you some some sort of
regularization effect anyway there'll be
a lot more about this in current lecture
on conducts another kind of area that's
important practice is how to pick good
hyper parameters so how do I know how do
I know what a good learning rate is if
I'm using dropout how do I know what
fraction units to dropout or how much
noise to add if I'm doing weight decay
and so on and we're still relatively
primitive and how we deal with this so
basically the idea is just to try many
combinations and kind of evaluate the
final results on some held out data set
and then pick the best but there are a
kind of a couple of kind of practical
tricks and some of these to it so if
there's lots and lots of hyper
parameters then the search space can be
huge so that's something that you might
worry about
for a long time people advocated grid
search so essentially for each hyper
parameter that you you care about maybe
kind of come up with some grid of things
to try and kind of systematically try
the cross base of all possibilities
turns out that in a lot of cases that's
actually not the best thing to do and
there's a nice paper by a box from
benzio which I've linked here and I've
taken this figure from it and this kind
of tries to illustrate why that might be
so depending on what the the sensitivity
of your model is to the hyper parameters
if you do grid sir you could very easily
miss these good regions just if your
grid happens to be poorly aligned with
respect to the reason is that useful so
they advocate and kind of empirically
demonstrated that this often gets better
results just doing random search so
rather than defining a grid for each
dimension you might define some sampling
distribution and then you essentially
just set a sample from that joint
probability space run run your models
and then a nice thing there is that you
can you get broader coverage of any
individual parameter value and there's a
better chance that you'll find a good
region that you can then explore more
carefully so I would say if you're if
you're faced with this kind of issue
then unless you have a good reason not
to don't do grid search to do do a
random search there's actually kind of a
lot of ongoing research in terms of ways
to get around some of these problems or
at least a kind of automate this search
process so there's some approaches from
kind of Bayesian modeling where the idea
there is if I could somehow form a model
of how well form a predictive model of
the performance of the models that I'm
training then I could be smarter about
figuring out which hydro parameter
values to try next there's also some
reinforcement learning approaches which
essentially there's some upfront cost
in terms of having to run training many
times but the hope is that I can
essentially learn how to dynamically
adjust these hyper parameters through
training so that if I then have another
instance of the same sort of learning
problem I can be much smarter about how
I treat that and then there's actually a
paper the I along with some other folks
would be mine published archived at the
end of last year which is this idea of
borrowing some tricks from evolutionary
optimization and a population of
simultaneous training models and
essentially the idea there is instead of
doing at a grid search or random search
let's say we initialize with random
search we're training everything all
together and periodically we look at the
training progress that each of the the
jobs know population has made and if
something seems to be doing particularly
poorly then we look for something that's
doing particularly well we copy its
parameters over and then do a small
adjustment to its hyper parameters and
then continue training and that lets us
do be kind of it's a nice combination of
Hydra parameter search and a little bit
of online model selection in that were
devoting more compute to the models that
seem to be doing better and also
exploring in regions of hyper parameter
space that seemed to be more promising
that she has another particularly nice
benefit in reinforcement learning so one
of the kind of hallmarks of many RL
problems is that the data distribution
that we deal with is is non-stationary
so you know if I'm a robot that's
letting to operate in the world that may
be you know the David distribution in
this room might be completely different
to the David distribution when I go into
the hallway and so it could well be the
case that throughout learning there it
the the hyper parameters that would
allow me to make the best learning
progress might be quite different and so
some of these methods like random search
just can't address that whereas the
population-based method that we propose
is actually kind of locally adaptive so
that's worth looking at it works super
well and a demine workhorse of like
using this
the vast majority of our experiments now
the downside is it's simple to implement
but it's a little resource-hungry in
terms of how much compete you're able to
access concurrently so if you're able to
run say 30 or 40 replicas of your
experiment in parallel then I this is I
think I said a clearly better way to do
hyper own search but yeah if you don't
have some Google's resources then it can
be trickier to kind of do that so you
might want to do these more sequential
methods so ya hit his just some kind of
rules of thumb but there's a much longer
list of this and exacting some of those
things that you just kind of build up
experience over time but a couple of
kind of easy things to do if you're not
getting the performance that you've
hoped for one is to sort check for dead
unit so you could say take a large
mini-batch and look at the histogram for
a given layer look at the histogram of
activities of units in that layer and
what you're looking for is basically you
know some units that maybe never turn on
so for whatever reason and maybe your
initialization was off or you went to a
weird letting regime but it might be the
case that say if you have rarely units
many of them are just never in that
linear region and so you have the
capacity there but it's actually not
useful for you and so I'm just getting
in the way
a similar diagnostic is it can be useful
to look at histograms of your gradient
say again visualized over a large mini
batch and again you're kind of looking
out for you know gradients that are
always zero in which case you're gonna
have from making any progress or very
heavy tailed grading distributions in
which case maybe there's some data cases
that are dominating or there's some kind
of numerical issues with your gratings
blowing up
something else is a really useful thing
to try is take a kind of a very small
subset of data or if it's an RL setting
if there's a kind of a simplified
version of your task I just try to try a
model on that simpler version of the
task and for a smaller subset you should
be able to get zero training error or
you know close to a depending on you
know
noisy labeling that kind of stuff but
the idea is if you're not seeing the
performance on the real world problem
you care about just as a kind of sanity
check scale back the size of your data
set and make sure that you can over fit
on a small amount of data and because we
could just get about ten minutes left
I'll go through this fairly quickly it's
a kind of research topic again from D
mind that relates to some of the stuff
we've talked about but I'm I'll leave
five minutes at the end for questions as
well so this is some work that was it's
from I guess a year and a half ago now
although what kind of stuff on going and
it was this idea that we called
decoupled neural interfaces using some
data gradients and basically the idea is
rather than running say our forward
propagation all the way to the end and
then back propagation although at the
end can we say midway through this chain
predict what the back propagated
gradients are gonna be before we
actually get them and it turns out that
you can do that you might ask why why
would I want to so there's two places I
think where it's useful one is if we
have is more a kind of I guess
infrastructure thing that we have
massive massive graphs and we want it we
need to do lots of most of computation
before we can do an update then if this
were model parallel say then essentially
the the machines holding this these
nodes would be waiting for the back prop
s to happen before they could do an
update after the forward pass so one way
is to kind of allow for potentially
better pipelining the other benefit and
that's partly why I kind of have this
graph here that's more of a sequence
model is there are some settings where
we actually don't want to have to wait
for the future to arrive before we
update our parameters so if I have a
sequence model over an extremely long
sequence or in the case of an and RL
agent you know it's kind of indefinite I
can't so I don't want to wait for an
extremely long time before I can run my
back
through time to get gradients and it
might not be might not be feasible right
now if what what people typically do is
they'll take a long sequence and they'll
chop it into chunks and they'll run
something called truncated back prop
through time and if you sit down and
think about what that's doing then it's
it's essentially assuming that outside
of the kind of truncation window the
gradient from the future are zero
because what we're just ignoring them
and so if you look at it like they're
the argument behind Cynthia gradients is
is kind of obvious you're basically
saying if I my default was to do
truncated back put through time which
implicitly makes the assumption that
gradients from outside the truncation
window are zero could I possibly do
better by predicting something other
than zero and the answer is probably yes
in most cases and so that's a kind of
good motivation for why it's interesting
there's a couple of papers that we
published on this now already and
there's a nice kind of interactive blog
post that you can you can look at here
if you if you want to hear some more so
you know that's it for today the next
lecture is going to be commnets with
Korean but yeah there's time some
questions now and if there's more
questions afterwards I'm happy to kind
of hand ran outside for a bit more than
we have time for yeah that's another
great question I said that's a kind of
another ongoing area research so the
sort of the fault of the moan is more
like you know kind of human driven you
briefed me optimization and that you
know I have some idea in my head of what
the kind of fitness of different
architectures would be and I kind of
prioritize trying those there's some
interesting work going on again using
some of these gradient free methods to
search over architectures so at a high
level this idea if I can start to build
a predictive model of how different
architectures might perform then I can
use that to automate the priority list
of what I should try next on the
population training side of things some
of the stuff that were
working on actually at the moment is
there are ways of adapting network
architectures online without having to
restart training so one example of that
there's a couple of papers on technique
service in a called net net or net
morphism and various other
transformations there I see so imagine a
mitem that I have some architecture and
I'm thinking would that architecture be
better if I were to interject an
additional hidden layer somewhere
I could just start training from scratch
but something else that I can do is take
something's thats been trained
originally and figure out a way to
inject an additional hidden layer in
there that doesn't change the function
that's been learned so far
but then after I've can added that
hidden layer I can then continue
training and potentially allowed the
model to make use of that additional
capacity and one one cartoon of how to
see I could do that is I could say
arrange to have an additional hidden
layer with say tonight's unit and
initialize them so that they're kind of
in Berlin ear region so it's it's more
or less a linear pass through so I could
take my previous model add in an
additional layer with the existing
weight matrix initialize the outgoing
wig matrix of that 10h layer to be some
kind of large values and that will that
will locally give me something that has
a very similar functional mapping as the
the network I start out with but now I
have the potential to learn additional
connections going from those ten each
unit so there's potentially ways of
doing this kind of architecture search
online and then there's model-based
approaches and then evolutionary methods
I'd say they're kind of the three main
ways of doing that
learners are you looking at kind of help
set performance are you looking at
convergence rates yeah it's a good
question
so I've mostly been thinking of this in
the context of reinforcement learning
and so they're sort of your test say is
your training set in a sentence yeah so
for kind of supervised problems then
yeah looking at it on a held out set
another thing that's worth mentioning
and again this is something that were
kind of actively working on at the
moment is you might not want to make
greedy decisions about that so a good
example is you know in supervised
learning it might be the so often it's
good to have a fairly high learning rate
initially and then to kind of drop it
down but one of things we noticed and
applying this to some of the supervised
problems is that you can if you kind of
look greedily you can appear to be doing
better by dropping their learning rate
earlier than you would in a nocturnal
setting because I kind of give you that
local boost and so something that again
this is appears to be less of a problem
in the RL settings we've looked at but
I'm saying that you probably want to do
as we extend these methods is think
about kind of performance metrics that
aren't just how well am i doing now but
kind of combining in some of that
model-based for looking things so not
how well am i doing now but given
everything I've seen about learning
progress so far how well could this run
or its descendants end up doing and kind
of use use a less greedy performance
metric way if there are no more
questions then thank you and yeah feel
free to ask because no salary
[Applause] |
66b2e0b2-1192-4d54-a64f-a5b6e323a5f6 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities
I Introduction
---------------
The involvement of untrusted parties in the modern Integrated Circuit (IC) supply has given rise to a plethora of security concerns, including Intellectual Property (IP) piracy, reverse engineering, counterfeiting, and hardware Trojans [[1](#bib.bib1), [2](#bib.bib2)]. Consequently, a variety of countermeasures have been introduced, including IC metering [[3](#bib.bib3)], split manufacturing [[4](#bib.bib4)], camouflaging [[5](#bib.bib5)], and logic locking [[6](#bib.bib6)]. Among these, only logic locking can protect a design against all untrusted parties in the supply chain [[7](#bib.bib7), [6](#bib.bib6)].
###
I-A Logic Locking: A Brief Overview
Logic locking performs design manipulations by binding the correct functionality of a hardware design to a secret key that is only known to the legitimate IP owner. Hereby, both the original functionality and the structure of the design remain concealed while passing through the hands of external design houses and the foundry. In the past decade, various security aspects of logic locking have been thoroughly evaluated through the introduction of key-recovery attacks [[8](#bib.bib8), [9](#bib.bib9)], among which the Boolean satisfiability (SAT) attack has gained a lot of attention [[10](#bib.bib10)]. This has led to a division of logic locking into pre- and post-SAT schemes. Pre-SAT schemes were focusing on specific security features, such as random XOR/XNOR key-gate insertion [[11](#bib.bib11)], thwarting the path-sensitization attack [[12](#bib.bib12)] or maximizing output corruption for incorrect keys [[13](#bib.bib13)]. With the introduction of SAT-based attacks, the design objective has shifted towards achieving SAT-resilience, resulting in a new generation of schemes, including SARLock [[14](#bib.bib14)], Anti-SAT [[15](#bib.bib15)], CASLock [[16](#bib.bib16)], SFLL [[17](#bib.bib17)], and others [[8](#bib.bib8)].
###
I-B The Advent of Machine Learning
With the advent of efficient and easy-to-use Machine Learning (ML) models, ML-based techniques have been gradually introduced into various hardware-security domains [[18](#bib.bib18), [19](#bib.bib19)]. The latest efforts in the logic locking community have been invested in challenging the security properties of locking schemes using ML. Recent works were able to efficiently attack pre- and post-SAT schemes [[20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22), [23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25), [26](#bib.bib26), [27](#bib.bib27), [28](#bib.bib28)]. The introduction of ML-based tools for the security analysis of logic locking has opened up a new chapter in the design of locking schemes and attacks, thereby initiating the start of the post-ML locking-scheme era. Herewith, the ML ecosystem offers a novel path to uncover hidden vulnerabilities and provide new directions in the development of future ML-resilient locking schemes.
Contributions
The ML era has undoubtedly initiated a new stage in logic locking design and evaluation. In this paper, we review all major developments in the domain of ML-based attacks and countermeasures in logic locking, and analyze major challenges and research opportunities. Note that a comprehensive overview of the state of pre-ML schemes and attacks can be found in [[6](#bib.bib6), [7](#bib.bib7), [9](#bib.bib9), [8](#bib.bib8), [29](#bib.bib29)].
The rest of this paper is organized as follows. Section [II](#S2 "II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") introduces the relevant background on logic locking. Section [III](#S3 "III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") reviews the major developments in ML-based logic locking attacks and compiles a summary of the open challenges and opportunities. Finally, Section [IV](#S4 "IV Conclusion ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") concludes the paper.
II Background
--------------
This section introduces the preliminaries on classification, working principles and attack models of logic locking.
###
II-A Classification
Logic locking can be generally classified into two orthogonal classes: combinational and sequential [[29](#bib.bib29)]. Combinational logic locking performs key-dependent manipulations in the combinational path of a design. On the other hand, sequential logic locking focuses on transforming and obfuscating the state space of a circuit. As the reviewed work operates in the domain of combinational locking, in the rest of this work, the term logic locking refers to combinational locking schemes.
###
II-B Working Principles
The idea of logic locking lies in the functional and structural manipulation of a hardware design that creates a dependency to an activation key, hereby trading area, power, and delay for security. If the correct key is provided, the locked design will perform as originally intended for all input patterns. Otherwise, an incorrect key will yield an incorrect output for at least some input patterns. Logic locking can be performed on different design levels. However, typically logic locking is deployed on a gate-level netlist through the insertion of additional gates (known as key gates) or more complex structures. A visual example of a locked design is shown in Fig. [1](#S2.F1 "Figure 1 ‣ II-B Working Principles ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (b). Here, the original netlist in Fig. [1](#S2.F1 "Figure 1 ‣ II-B Working Principles ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (a) is locked through the insertion of two key-controlled gates in the form of an XOR and XNOR (XOR + INV) gate, marked as KG1𝐾subscript𝐺1KG\_{1}italic\_K italic\_G start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and KG2𝐾subscript𝐺2KG\_{2}italic\_K italic\_G start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, respectively. To understand the functional implications of the key gates, let us consider the gate KG1𝐾subscript𝐺1KG\_{1}italic\_K italic\_G start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. This key gate takes two inputs: the original wire x𝑥xitalic\_x (output of gate G1subscript𝐺1G\_{1}italic\_G start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT) and the input key bit k1subscript𝑘1k\_{1}italic\_k start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. If a correct key value is set, i.e., k1=0subscript𝑘10k\_{1}=0italic\_k start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = 0, the value of x𝑥xitalic\_x is preserved and forwarded to x′superscript𝑥′x^{\prime}italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT. However, if an incorrect key value is set, i.e., k1=1subscript𝑘11k\_{1}=1italic\_k start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = 1, the value of x𝑥xitalic\_x is inverted, leading to incorrect output values. Based on this concept, throughout the past decade, a variety of locking schemes have been introduced, based on XOR, XNOR, AND, OR, and MUX gates as well as more elaborate structures [[6](#bib.bib6)].

(a) Original Circuit

(b) Locked Circuit
Figure 1: Example: logic locking using XOR/XNOR gates.
###
II-C Logic Locking in the IC Supply Chain
The role of logic locking in the IC supply chain is demonstrated in Fig. [2](#S2.F2 "Figure 2 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"). Based on a trusted Register-Transfer Level (RTL) design, the legitimate IP owner performs logic synthesis to generate a gate-level netlist. At this point, logic locking is deployed, resulting in a locked netlist and an activation key. Typically, after the netlist is locked, another synthesis round is performed to facilitate the structural integration of scheme-induced netlist changes. Therefore, we can differentiate between the pre-resynthesis and post-resynthesis netlist. The former is locked but not resynthesized, while the latter is locked and resynthesized. In the next step, the locked netlist proceeds in the untrusted part of the supply chain. This often includes an untrusted external design house (for layout synthesis) and the foundry. After fabrication, the produced IC is returned to the IP owner for activation. Herewith, logic locking protects a design by concealing its functional and structural secrets in the activation key, thereby bridging the untrusted regime gap.
In terms of hardware Trojans, it is assumed that a sound understanding of the design’s functionality and structure is required to insert an intelligible, controllable and design-specific Trojan (e.g, a targeted denial-of-service attack). However, functionality-independent Trojans remain viable. These include, e.g., the manipulation of the circuit’s physical characteristics, leading to performance or reliability degradation. In the former case, finding the activation key is a prerequisite for successfully performing the reverse-engineering process.

Figure 2: Logic locking in the IC design and fabrication flow.

Figure 3: ML-based attacks on logic locking (deployment phase).
###
II-D Attack Model
The attack model includes the following: (i𝑖iitalic\_i) the attacker has access to the locked netlist, either as an untrusted design house or by reverse engineering the layout, (ii𝑖𝑖iiitalic\_i italic\_i) the location of the key inputs (pins) is known, (iii𝑖𝑖𝑖iiiitalic\_i italic\_i italic\_i) the deployed locking scheme is known, and (iv𝑖𝑣ivitalic\_i italic\_v) the attacker has access to an activated IC to use as oracle for retrieving golden Input/Output (I/O) patterns.
The fourth assumption is the differentiating factor that classifies all key-retrieval attacks into oracle-less and oracle-guided attacks. Oracle-less attacks assume that an activated design is not available. This is often the case in low-volume production for security-critical applications [[30](#bib.bib30)]. On the contrary, the oracle-guided scenario assumes the availability of an activated IC, thereby representing a high-volume production setting [[31](#bib.bib31)].
Moreover, sometimes, an attack assumes the availability of only a few golden I/O patterns, e.g., in the form of test vectors. Due to the available knowledge that is provided by these patterns, these attacks fall into the oracle-guided class. As discussed in the next section, ML-based attacks have been explored in both classes. Henceforth, we refer to the gate-level netlist under attack as the target netlist.
III Logic Locking in the Machine Learning Era
----------------------------------------------
This section describes the recent developments of ML-based applications in the logic locking domain.
###
III-A ML-Based Attacks
Previous work in this domain has mostly been focusing on the development of novel ML-based attacks on logic locking, both for the oracle-less and oracle-guided model. A simplified visualization of all reviewed attack flows is presented in Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"). Hereby, the attacks are presented in the deployment stage (after training). An exhaustive comparison of the attacks is summarized in Table [II](#S3.T2 "TABLE II ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"). The comparison lists the attacks in order of appearance per attack class. For convenience, a glossary of the used acronyms is given in Table [I](#S3.T1 "TABLE I ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities").
The following review reflects the descriptions in Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") and Table [II](#S3.T2 "TABLE II ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"), thereby only focusing on the basic attack mechanisms. All other details can be found in the mentioned comparison table and the provided references.
####
III-A1 Oracle-Less Attacks
ML-based attacks in this class have exploited scheme-related structural residue to identify a correct key-bit value or the locking circuitry itself. This category includes SAIL [[22](#bib.bib22)], SnapShot [[21](#bib.bib21)], and GNNUnlock [[20](#bib.bib20)].
SAIL: This attack deploys ML algorithms to retrieve local logic structures from the post-resynthesis target netlist to predict the correct key values (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (a)). The attack targets XOR/XNOR-based logic locking exclusively. Hereby, the attack exploits two leakage points of the locking flow: (i𝑖iitalic\_i) the deterministic structural changes induced by logic synthesis around the key-affected netlist gates and (ii𝑖𝑖iiitalic\_i italic\_i) the nature of XOR/XNOR-based locking (XOR for key bit 0 and XNOR for key bit 1). Therefore, the attack encloses two components: the Change Prediction Model (CPM) and the Reconstruction Model (RM). For each netlist subgraph around a key input (known as locality), CPM is trained to predict whether a synthesis-induced change has occurred. If a change is predicted, RM is deployed to reconstruct the pre-resynthesis netlist structures, i.e., it reverses the effect of the synthesis process. Finally, based on the intrinsic nature of XOR/XNOR-based locking, SAIL extracts the correct key value.
SnapShot: The attack utilizes a neuroevolutionary approach to automatically design suitable neural networks to directly predict correct key values from a locked post-resynthesis target netlist (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (b)). The attack exploits the structural alterations induced by locking schemes to learn a correlation between the key-induced structural patterns and the key value. Compared to SAIL, SnapShot implements an end-to-end ML approach, thereby having the advantage of being applicable to any locking scheme as well as not relying on learning specific transformation rules of the logic synthesis process.
GNNUnlock: The attack leverages a graph neural network to identify all the gates in a post-resynthesis target netlist that belong to the locking circuitry (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (c)). Therefore, compared to SAIL and SnapShot, GNNUnlock learns to differentiate locking gates from regular gates instead of learning a correlation between the netlist sub-graphs and the correct key. To enhance the removal accuracy, after identification, a deterministic post-processing mechanism is deployed to remove the gates depending on the intrinsic features of the underlying schemes. Hereby, GNNUnlock has specifically been trained and deployed to target SAT-attack-resilient schemes, i.e., Provably Secure Logic Locking (PSLL). Herein lies the success of the attack: PSLLs often induce isolable structural netlist changes to produce a specific SAT-resilient circuit behavior. In the pre-ML era, it has been long assumed that PSLLs can be protected through additional locking and resynthesis.
TABLE I: Glossary
| | | | |
| --- | --- | --- | --- |
| Acronym | Definition | Acronym | Definition |
| AND-OR | AND/OR-based LL [[32](#bib.bib32)] | LUT | Lookup table |
| Anti-SAT | Anti-Boolean satisfiability [[15](#bib.bib15)] | MLP | Multi-layer perceptron |
| C | Combinational circuits | MPNN | Message-passing neural network |
| CNN | Convolutional neural network | OG | Oracle-guided attack |
| CS | Logic-cone-size-based LL [[33](#bib.bib33)] | OL | Oracle-less attack |
| D-RNN | Deep recurrent neural network | OLL | Optimal LL [[34](#bib.bib34)] |
| FLL | Fault analysis-based LL [[13](#bib.bib13)] | PSO | Particle swarm optimization |
| FU | Functional | RLL | Random LL [[11](#bib.bib11)] |
| GA | Genetic algorithm | S | Sequential circuits |
| GL | Gate level | SFLL-HD | Stripped functionality LL [[17](#bib.bib17)] |
| GNN | Graph neural network | SLL | Strong (secure) LL [[12](#bib.bib12)] |
| LL | Logic locking | ST | Structural |
| LSTM | Long short-term memory | TTLock | Tenacious and traceless LL [[35](#bib.bib35)] |
TABLE II: Overview of ML-Based Attacks on Logic Locking
| | | | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Attack | Objective | Class | Level/IC Type/Attack Basis | ML Model | Benchmarks16 | Evaluated Schemes | Scheme Independent | Exact Output15 | Evaluate Key Length in Bits | % Accuracy [min, max] | Time Complexity17 | Known Protection |
| SAIL [[22](#bib.bib22)] | key retrieval | OL | GL/C,S1/ST | Random Forest | ISCAS’85 | RLL, SLL, CS | ✗ | ✗ | {4,⋯,192}4⋯192\{4,\cdots,192\}{ 4 , ⋯ , 192 }4 | [66.89,73.88]66.8973.88[66.89,73.88][ 66.89 , 73.88 ] | 𝒪(l)𝒪𝑙\mathcal{O}(l)caligraphic\_O ( italic\_l )13 | UNSAIL [[36](#bib.bib36)], SARO [[37](#bib.bib37)]22 |
| SnapShot [[21](#bib.bib21)] | (i𝑖iitalic\_i) GSS and (ii𝑖𝑖iiitalic\_i italic\_i) SRS key retrieval14 | OL | GL/C,S/ST | MLP, CNN, GA | ISCAS’85, ITC’99, Ariane RV | RLL | ✓ | ✗ | {64}64\{64\}{ 64 } | [57.71,61.56](i)superscript57.7161.56𝑖[57.71,61.56]^{(i)}[ 57.71 , 61.56 ] start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT, [71.57,81.67](ii)superscript71.5781.67𝑖𝑖[71.57,81.67]^{(ii)}[ 71.57 , 81.67 ] start\_POSTSUPERSCRIPT ( italic\_i italic\_i ) end\_POSTSUPERSCRIPT | 𝒪(l)𝒪𝑙\mathcal{O}(l)caligraphic\_O ( italic\_l )13 | D-MUX [[38](#bib.bib38)] |
| GNNUnlock [[20](#bib.bib20)] | key gates removal | OL | GL/C,S/ST | GNN | ISCAS’85, ITC’99 | Anti-SAT, TTLock, SFLL-HD | ✗12 | ✗ | {8,⋯,128}8⋯128\{8,\cdots,128\}{ 8 , ⋯ , 128 } | [100.00]delimited-[]100.00[100.00][ 100.00 ]11 | 𝒪(n)𝒪𝑛\mathcal{O}(n)caligraphic\_O ( italic\_n )13 | ⚫ |
| BOCANet [[26](#bib.bib26), [39](#bib.bib39)] | key retrieval | OG2 | GL/C/FU | D-RNN & LSTM | Trust-Hub, ISCAS’85 | RLL | ✓ | ✗ | {32,64,128,256}3264128256\{32,64,128,256\}{ 32 , 64 , 128 , 256 } | [89.00,100.00]89.00100.00[89.00,100.00][ 89.00 , 100.00 ] | 𝒪(α)𝒪𝛼\mathcal{O}(\alpha)caligraphic\_O ( italic\_α )19 | ⚫ |
| SURF [[27](#bib.bib27)] | key retrieval | OG | GL/C,S1/ST,FU | SAIL & heuristic optimization | ISCAS’85 | RLL, SLL, CS | ✗3 | ✗ | {4,⋯,192}4⋯192\{4,\cdots,192\}{ 4 , ⋯ , 192 }4 | [90.58,98.83]90.5898.83[90.58,98.83][ 90.58 , 98.83 ] | 𝒪(t)𝒪𝑡\mathcal{O}(t)caligraphic\_O ( italic\_t )13 | UNSAIL [[36](#bib.bib36)] |
| GenUnlock [[23](#bib.bib23)] | key retrieval | OG | GL/C,S/FU | GA | ISCAS’85, MCNC | RLL,SLL, AND-OR, FLL | ✓ | ✗ | {8,⋯,1618}8⋯1618\{8,\cdots,1618\}{ 8 , ⋯ , 1618 }7 | [100.00]delimited-[]100.00[100.00][ 100.00 ]8 | 𝒪(β)𝒪𝛽\mathcal{O}(\beta)caligraphic\_O ( italic\_β )20 | ⚫ |
| NNgSAT [[24](#bib.bib24)] | key retrieval | OG | GL/C,S/FU | MPNN | ISCAS’85, ITC’99 | SAT-hard5 | ✓ | ✓ | n/a5 | [93.50]delimited-[]93.50[93.50][ 93.50 ]6 | 𝒪(λ)𝒪𝜆\mathcal{O}(\lambda)caligraphic\_O ( italic\_λ )18 | ⚫ |
| PSO Attack [[25](#bib.bib25)] | key retrieval | OG9 | GL/C,S/FU | PSO | ISCAS’85, ITC’99 | RLL, OLL | ✓ | ✗ | {64,128}64128\{64,128\}{ 64 , 128 } | [82.07,99.80]82.0799.80[82.07,99.80][ 82.07 , 99.80 ]10 | 𝒪(γ)𝒪𝛾\mathcal{O}(\gamma)caligraphic\_O ( italic\_γ )21 | point-function locking [[15](#bib.bib15), [17](#bib.bib17)] |
* 1
The attack is in theory applicable to sequential circuits, however no evaluation has been performed yet.
* 2
The attack relies on having access to at least some golden input/output patterns (<0.5%absentpercent0.5<0.5\%< 0.5 % of total I/O pairs).
* 3
In theory, the key-refinement search algorithm could be utilized based on any seed key. However, this has not been addresses thus far.
* 4
The following key lengths have been evaluated: {4,8,16,24,32,64,96,128,192}481624326496128192\{4,8,16,24,32,64,96,128,192\}{ 4 , 8 , 16 , 24 , 32 , 64 , 96 , 128 , 192 }.
* 5
n×m𝑛𝑚n\times mitalic\_n × italic\_m bitwise multipliers (8<m,n<32)formulae-sequence8𝑚𝑛32(8<m,n<32)( 8 < italic\_m , italic\_n < 32 ), n×m𝑛𝑚n\times mitalic\_n × italic\_m crossbar network of 2-to-1 MUXes (16<m,n<36)formulae-sequence16𝑚𝑛36(16<m,n<36)( 16 < italic\_m , italic\_n < 36 ), n𝑛nitalic\_n-input LUTs built by 2-to-1 MUXes (n<16)𝑛16(n<16)( italic\_n < 16 ), and n𝑛nitalic\_n-to-1 AND-trees.
* 6
Indicates the percentage of successfully de-obfuscated circuits compared to the baseline [[10](#bib.bib10)].
* 7
The key length is selected based on an area overhead of 5%, 10% or 25%.
* 8
The quality of the retrieved approximate keys is quantified by a user-defined output-fidelity measure.
* 9
The attack relies on an oracle without access to the scan chain.
* 10
Accuracy refers to the average number of cases where the retrieved key results in 0% erroneous outputs for 106superscript10610^{6}10 start\_POSTSUPERSCRIPT 6 end\_POSTSUPERSCRIPT random patterns.
* 11
The accuracy refers to the successful removal of the locking circuitry (not the key retrieval).
* 12
The attack has not been evaluated for other locking schemes so far and the post-processing steps are scheme-specific.
* 13
Notation: the key length l𝑙litalic\_l, the number of netlist nodes n𝑛nitalic\_n, the total number of iterations t=p⋅n+p⋅w+r⋅l⋅i⋅n+r⋅l⋅i⋅p𝑡⋅𝑝𝑛⋅𝑝𝑤⋅𝑟𝑙𝑖𝑛⋅𝑟𝑙𝑖𝑝t=p\cdot{n}+p\cdot{w}+r\cdot{l}\cdot{i}\cdot{n}+r\cdot{l}\cdot{i}\cdot{p}italic\_t = italic\_p ⋅ italic\_n + italic\_p ⋅ italic\_w + italic\_r ⋅ italic\_l ⋅ italic\_i ⋅ italic\_n + italic\_r ⋅ italic\_l ⋅ italic\_i ⋅ italic\_p, where p𝑝pitalic\_p is the number of output pins, i𝑖iitalic\_i is the number of IO pairs, r𝑟ritalic\_r is the number of runs, and w𝑤witalic\_w is the number of wires.
* 14
GSS refers to the generalized set scenario which trains the ML model based on a set of locked benchmarks that are different from the target. SRS captures the self-referencing scenario where the training data is generated by re-locking the target benchmark.
* 15
If the output of the attack is an exact result, the attack can guarantee a 100% correct deobfuscation for the complete I/O space.
* 16
ISCAS’85 [[40](#bib.bib40)], MCNC [[41](#bib.bib41)], ITC’99 [[42](#bib.bib42)], RISC-V Ariane core [[43](#bib.bib43)], and Trust-Hub [[44](#bib.bib44), [45](#bib.bib45)].
* 17
If the time complexity can be clearly determined, it refers to the time complexity after the training process.
* 18
The execution time of the SAT attack is ∑i=1λtisuperscriptsubscript𝑖1𝜆subscript𝑡𝑖\sum\_{i=1}^{\lambda}{t\_{i}}∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_λ end\_POSTSUPERSCRIPT italic\_t start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, where λ𝜆\lambdaitalic\_λ is the number of iterations and tisubscript𝑡𝑖t\_{i}italic\_t start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT the time required for one SAT-solver call. λ𝜆\lambdaitalic\_λ depends on the characteristics of the search space and the branching preferences of the SAT solver. Therefore, the time complexity of the attack is typically measured in terms of λ𝜆\lambdaitalic\_λ. More details can be found in [[46](#bib.bib46)].
* 19
The complexity is linear to the number of training samples α𝛼\alphaitalic\_α, as the final key is determined based on the MSE of the trained outputs and the key-induced generated outputs.
* 20
β=g⋅p⋅l𝛽⋅𝑔𝑝𝑙\beta=g\cdot{p}\cdot{l}italic\_β = italic\_g ⋅ italic\_p ⋅ italic\_l, where g𝑔gitalic\_g is the number of generations, p𝑝pitalic\_p is the population size, and l𝑙litalic\_l is the key length. Note that the complexity changes with any adaptations of the GA.
* 21
γ=g⋅p⋅δ𝛾⋅𝑔𝑝𝛿\gamma=g\cdot{p}\cdot{\delta}italic\_γ = italic\_g ⋅ italic\_p ⋅ italic\_δ, where g𝑔gitalic\_g is the number of generations, p𝑝pitalic\_p is the population size, and δ𝛿\deltaitalic\_δ the complexity of performing circuit simulation for the fitness evaluation.
* 22
An empirical evaluation of the resilience against SAIL has not been presented in the paper; only a discussion based on a proposed metric system has been provided.
####
III-A2 Oracle-Guided Attacks
Due to the availability of an oracle, existing ML-based attacks in this class have mostly exploited functional features of the target IC. This class includes the following attacks: BOCANet [[26](#bib.bib26), [39](#bib.bib39)], SURF [[27](#bib.bib27)], GenUnlock [[23](#bib.bib23)], NNgSAT [[24](#bib.bib24)], and the PSO-guided attack [[25](#bib.bib25)].
BOCANet: This attack leverages Recurrent Neural Networks (RNN) based on long short-term memory to construct a correct activation key (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (d)). The ML model is trained on a sequence of I/O observations taken from an activated IC, thereby learning the functional I/O mapping of the circuit, i.e., its Boolean function. Once trained, the key retrieval consists of two steps. First, a random key is applied to the model as input. Second, the initial key value is subsequently updated based on the Mean-Squared-Error (MSE) of the trained outputs and the newly generated outputs that are affected by the introduced key. Note that the ML model can be utilized to predict correct inputs or outputs as well. BOCANet exploits the functional effect a correct key has on generating a correct I/O mapping.
SURF: This attack is based on a joint structural and functional analysis of the circuit to retrieve the activation key (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (e)). The ML-aspect of the attack lies in utilizing the SAIL attack to generate the pre-resynthesis netlist structures and a seed key. Afterwards, based on the outputs of SAIL, SURF iteratively refines the key by means of a structure-aware greedy optimization algorithm guided by a functional simulation of the obfuscated netlist and a set of golden I/O pairs. The optimization is guided by the observation that specific key gates only affect a specific set of outputs. Thus, the key bits can be partitioned based on which output they affect. Performing a systematic perturbation of the key bits can lead to a more refined key. The success of the heuristic is grounded in the limited local effects that traditional locking schemes have on the value of the output for incorrect keys.
GenUnlock: The attack flow of GenUnlock leverages a Genetic Algorithm (GA)-based exploration of suitable activation keys for a locked circuit (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (f)). The heuristic search is steered by the key fitness that is computed based on the matching ratio of the key on the golden I/O training set. Through multiple generations, the fitness of the key population is subsequently improved through the application of genetic operators (selection, crossover, and mutation). Once the accepted tolerance for the correctness of the key is reached, the algorithm returns the set of the fittest keys. Similarly to SURF, GenUnlock exploits the fact that a heuristic key-refinement procedure eventually leads to more accurate activation keys.
NNgSAT: The main objective of NNgSAT is the deployment of a Message-Passing Neural Network (MPNN) to facilitate the resolution of SAT-hard circuit structures during the application of a SAT-based attack (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (g)). The motivation is driven by the fact that common SAT solvers run into scalability issues when tackling hard-to-be-solved locked circuit structures, e.g, multipliers and AND trees. Therefore, in NNgSAT, a neural network is trained to predict the satisfying assignment on a set of SAT-hard cases. In deployment, the SAT-attack flow offloads the SAT-hard problems to the trained model to speed up the attack procedure. The effectiveness of NNgSAT lies in the fact that it is possible to transfer prediction knowledge from learned clauses to unseen problems.
PSO-guided Attack: This attack is based on a Particle Swarm Optimization (PSO) heuristic that searches through the key space directed by a selected cost function (Fig. [3](#S2.F3 "Figure 3 ‣ II-C Logic Locking in the IC Supply Chain ‣ II Background ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities") (f)). The cost function is modeled as the Hamming distance between the golden and the obtained output responses (for a selected key). Therefore, the search algorithm relies on having access to an activated IC to compare against. A major motivator for this attack is its applicability without having access to an open scan chain, as this is often a limiting factor for SAT-based attacks. In essence, the PSO-guided attack and GenUnlock are similar in nature, as both rely on black-box evolutionary procedures guided by a functionality-driven objective function.
###
III-B ML-Resilient Schemes
UNSAIL: This logic locking scheme has been developed to thwart attacks that target the resolution of structural transformations of logic synthesis [[36](#bib.bib36)]. The core idea of UNSAIL is to generate confusing training data that leads to false predictions in the CPM and RM modules of SAIL. This is realized through the additional manipulation of the netlist after synthesis to force the existence of equivalent netlist sub-graph observations that are linked to different key values.
SARO: The Scalable Attack-Resistant Obfuscation (SARO) operates in two steps [[37](#bib.bib37)]. First, SARO splits the design into smaller partitions to maximize the structural alterations in the netlist. Second, a systematic truth table transformation is deployed to lock the partitions. In order to increase the complexity of pattern recognition attacks (such as SAIL), the transformations aim to maximize randomness in the netlist.
Point-Functions and PSO: As mentioned in Table [II](#S3.T2 "TABLE II ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"), the PSO-guided attack is not applicable to point-function-based locking schemes. The reason is that this type of locking yields SAT-resilient behavior in which any incorrect key corrupts only a very limited amount of outputs. Consequently, this behavior offers no advantage in the guidance of the heuristic search, as it does not yield differentiating fitness values. Note that the applicability of point functions depends on the design of the fitness function that is used to guide the heuristic.
D-MUX: The recently introduced Deceptive Multiplexer (D-MUX) LL scheme builds on the concept of multiple MUX-based insertion strategies that create structural paths that are equally likely to be driven by 0 or 1 key values [[38](#bib.bib38)]. Hence, D-MUX offers efficient protection against data-driven attacks.
###
III-C Other Applications of ML
Deobfuscation Runtime Prediction: Apart from ML-based attacks, machine learning has also found its way into other aspects of logic locking. A recent work has designed a framework named ICNet for the prediction of the key-retrieval runtime for SAT-based attacks using graph neural networks [[28](#bib.bib28)]. The framework obtains the predicted deobfuscation runtime based on the characteristics of the circuit topology. ICNet offers an end-to-end approach to evaluate the hardness of logic locking with respect to the SAT attack, thereby increasing the development efficiency of novel locking schemes.
ML-Attack on Analog IC Locking: ML has started to have an impact on locking mechanisms even beyond digital circuits. The authors in [[47](#bib.bib47)] have developed an oracle-guided attack on locked analog ICs using genetic algorithms. The approach has successfully broken all known analog logic locking techniques.
###
III-D Lessons Learned - Challenges and Opportunities
The presented efforts gather around two focal points: oracle-less and oracle-guided attacks. The intrinsic mechanisms of these attacks shed light on major vulnerabilities in existing logic locking that are exploitable by ML. We summarize the observations as follows:
####
III-D1 Structural vs. Functional Analysis
Oracle-guided attacks focus on functional aspects of schemes, whereas structural leakage is exploited in the oracle-less model due to the absence of an activated IC. This indicates two pitfalls. First, the evaluated schemes have a predictable effect on the functionality of the circuit for incorrect keys, enabling the possibility to perform a guided heuristic search of the key space. Second, the existing schemes induce structural changes which strongly correlate with the value of the key. A mitigation depends on what an attempted attack tries to exploit. For example, to overcome SAIL or SnapShot, a scheme must not reveal anything about the correctness of the key through the induced structural change. This is achieved if the inserted change does not differ depending on the key value. Similarly, to protect against GNNUnlock, the deployed schemes have to ensure resilience against isolation, i.e., the locking circuitry must not be structurally independent from the original design. In the case of GenUnlock and the PSO attack, the behavior of the underlying scheme must be constant or fully random for all incorrect key values; disabling any chance for a guided heuristic convergence towards a correct key. Similar observations can be made for the other attacks as well. Nevertheless, conveying all necessary security objectives into a uniform locking scheme remains an open challenge.
####
III-D2 Logic Synthesis for Security
The reliance on logic synthesis transformations to enable the security of logic locking schemes has to be revised. As shown in SAIL, the synthesis rules are predictable and reversible, thereby having little impact on deducing a correct key for traditional schemes. Nevertheless, resynthesis can increase the difficulty to reverse engineering the functionality of a design. However, this needs further evaluation in the context of novel locking policies.
####
III-D3 Structural Isolation of Post-SAT Schemes
PSLL schemes induce isolable changes which create a clear structural distinction between the locking-related gates and the original (regular) gates. The main reason for this vulnerability is the requirement of achieving SAT-resistant behavior for incorrect keys. This specific functional pattern requires significant structural netlist changes consolidated in isolable components (e.g., the tree of complementary logic in Anti-SAT or the restore/perturb units in TTLock and SFLL-HD).
####
III-D4 Output Uncertainty
Regardless of the attack type, ML-based attacks tend to generate an approximate output (as marked in Table [II](#S3.T2 "TABLE II ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities")), meaning that the retrieved key or deobfuscated circuit cannot be evaluated as correct with an exact certainty. Note that this is not the case for NNgSAT. Nevertheless, the approximate output can be utilized as seed for other attacks to speed up the reverse engineering procedure.
####
III-D5 RTL vs Gate-Level
As shown in Table [II](#S3.T2 "TABLE II ‣ III-A1 Oracle-Less Attacks ‣ III-A ML-Based Attacks ‣ III Logic Locking in the Machine Learning Era ‣ Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities"), the existing attacks focus on extrapolating information leakage from gate-level netlists. So far, it is unclear if the same leakage is present in RTL-based locking schemes as well.
####
III-D6 Overspecialization
Available mitigation schemes suffer from overspecialization for thwarting specific attack vectors. So far, it has not been evaluated which form of logic locking offers a comprehensive protection against any form of ML-based guessing attack. Hence, a potentially fruitful opportunity lies within the automatic design of resilient schemes [[48](#bib.bib48)].
####
III-D7 ML for Sequential Locking
To the best of our knowledge, ML-based techniques have not been deployed yet for the evaluation of sequential logic locking techniques.
####
III-D8 The Need for Benchmarks
There is still a need to have access to a rich benchmark set for data-driven analysis [[49](#bib.bib49)].
IV Conclusion
--------------
This work summarizes the recent developments of logic locking at the frontiers of machine learning. The presented ML-based attacks indicate the presence of structural and functional leakage in contemporary locking schemes which has been overlooked by the traditional interpretation of security. We show that an ML-based analysis is able to pinpoint novel vulnerabilities and challenge existing security assumptions. The offered discussion consolidates the major ML-induced challenges in logic locking design, offering a fruitful ground for future research directions in the post-ML era. |
273c602f-e584-4d8a-9661-a52b27ecfbc6 | trentmkelly/LessWrong-43k | LessWrong | The Dualist Predict-O-Matic ($100 prize)
This is a response to Abram's The Parable of Predict-O-Matic, but you probably don't need to read Abram's post to understand mine. While writing this, I thought of a way in which I think things could wrong with dualist Predict-O-Matic, which I plan to post in about a week. I'm offering a $100 prize to the first commenter who's able to explain how things might go wrong in a sufficiently crisp way before I make my follow-up post.
Dualism
Currently, machine learning algorithms are essentially "Cartesian dualists" when it comes to themselves and their environment. (Not a philosophy major -- let me know if I'm using that term incorrectly. But what I mean to say is...) If I give a machine learning algorithm some data about itself as training data, there's no self-awareness there--it just chugs along looking for patterns like it would for any other problem. I think it's a reasonable guess that our algorithms will continue to have this "no self-awareness" property as they become more and more advanced. At the very least, this "business as usual" scenario seems worth analyzing in depth.
If dualism holds for Abram's prediction AI, the "Predict-O-Matic", its world model may happen to include this thing called the Predict-O-Matic which seems to make accurate predictions -- but it's not special in any way and isn't being modeled any differently than anything else in the world. Again, I think this is a pretty reasonable guess for the Predict-O-Matic's default behavior. I suspect other behavior would require special code which attempts to pinpoint the Predict-O-Matic in its own world model and give it special treatment (an "ego").
Let's suppose the Predict-O-Matic follows a "recursive uncertainty decomposition" strategy for making predictions about the world. It models the entire world in rather fuzzy resolution, mostly to know what's important for any given prediction. If some aspect of the world appears especially relevant to a prediction it's trying to make, it "zooms in" a |
b16e7bfe-1b26-47f0-80ef-1e39e7ce51a3 | trentmkelly/LessWrong-43k | LessWrong | Agents, Tools, and Simulators
This post was written as part of AISC 2025. It is the first in a sequence we intend to publish over the next several weeks.
Introduction
What policies and research directions make sense in response to AI depends a lot on the conceptual lens one uses to understand the nature of AI as it currently exists. Implicit in the design of products like ChatGPT—and explicit in much of the tech industry’s defense of continued development—is the “tool” lens. That is, AI is a tool that does what users ask of it. It may be used for dangerous purposes, but the tool itself is not dangerous. In contrast, discussion of x-risk by AI safety advocates often uses an “agent” lens. That is, instances of AI models can be thought of as independent beings that want things and act in service of those goals. AI is safe for now while its capabilities are such that humans can keep it under control, but it will become unsafe once it is sophisticated enough to override our best control techniques.
Simulator theory presents a third mental framing of AI and is specifically focused on LLMs. It involves multiple layers: (1) a training process that, by the nature of its need to compress language, generates (2) a model that disinterestedly simulates a kind of “laws of physics” of text generation, which in turn (sometimes) generates (3) agentic simulacra (most notably characters), which determine the text that the user sees.
This post will expand on the meaning of each of these lenses, starting by treating them as idealized types and then considering what happens when they blend together. Future posts in this series will consider the implications of each lens with respect to alignment, how well each lens is consistent with how LLM-based AI actually works, the assumptions underlying this analysis, and finally predictions of how we expect the balance of applicability of each lens to shift in response to future developments.
Agent
An agent is an entity whose behavior can be compressed into its |
72987b46-07b4-4305-b530-7f15efcc8ecf | trentmkelly/LessWrong-43k | LessWrong | Bruce Sterling on the AI mania of 2023
Bruce Sterling is a well-known SF writer, but I think he is not quite on the radar of Less Wrong, so I'll say a bit about how I locate him culturally. I'll make a distinction between SF that is primarily about marvels of science and technology, and SF that is primarily about how it feels to inhabit technological society. I consider Sterling to be a leading practitioner of that second kind of SF. He wrote one novel, the posthuman space opera Schismatrix, in which he indulged his inner Singularitarian to the maximum; but after that, his works have been more earthbound, though still about the transformations of life that could be wrought by technology.
He's also been prolific as an essayist and public speaker, still offering commentary on technological society and world affairs, but in a supposedly factual vein, rather than an overtly fictional one. I admire his talent for digesting these complex topics and producing an intuitive summation in a few pithy turns of phrase; I consider him a role model in that regard. And so now he's trying to sum up the state of the AI world, after ChatGPT.
He aims for an indulgent, god's-eye view of the current-year antics of everyone involved in AI. He acknowledges the prophets of doom, the social justice warriors, the captains of industry. He nominates three myths as encapsulating the times: Roko's Basilisk, the Masked Shoggoth, and the Paperclip Maximizer. Every one of them comes from within the rationalist sphere, or adjacent to it, so we're doing pretty well when it comes to mythic potency!
But he does believe that after this carnival of ecstasy and dread, there will be a hangover rather than a singularity. I suspect he's the kind of guy who thinks that superhuman intelligence might well be possible, but only decades from now, after still-unknown paradigm shifts, and that language models are a false dawn in that regard. So when the next AI winter comes, we'll still be here, and we'll still be human beings, but we'll also be li |
e4165c68-8e30-465b-9d9c-d61941db0392 | trentmkelly/LessWrong-43k | LessWrong | Confused as to usefulness of 'consciousness' as a concept
Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.
Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.
Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do relatively well over that set of environments than other processes, and one can quantify this relative success in any number of ways.
It's almost embarrassing to write that since put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the labels. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the spac |
a78df2ca-9aad-4b31-9c04-a2810029929b | trentmkelly/LessWrong-43k | LessWrong | Audition to perform in Bay Secular Solstice
We’re looking to get new people involved with Secular Solstice this year! We need orators to give speeches, and singers and instrumentalists of all kinds to lead singalongs.
Solstice will be on December 17th in SF or Oakland, and there will be a dress rehearsal in the week prior. Those are the only two times you need to be available; most of the preparation can be done on your own, so performing doesn't necessarily need to be a significant time commitment. (You don't even necessarily need to live in the Bay!)
Audition by recording a video and submitting it through this form 🙂 |
00922127-4fb5-4244-afd1-dea900c3d818 | trentmkelly/LessWrong-43k | LessWrong | The Social Alignment Problem
TLDR: I think public outreach is a very hopeful path to victory. More importantly, extended large-scale conversation around questions such as whether public outreach is a hopeful path to victory would be very likely to decrease p(doom).
You’re a genius mechanical engineer in prison. You stumble across a huge bomb rigged to blow in a random supply closet. You shout for two guards passing by, but they laugh you off. You decide to try to defuse it yourself.
This is arguably a reasonable response, given that this is your exact skill set. This is what you were trained for. But after a few hours of fiddling around with the bomb, you start to realize that it's much more complicated than you thought. You have no idea when it’s going to go off, but you start despairing that you can defuse it on your own. You sink to the floor with your face in your hands. You can’t figure it out. Nobody will listen to you.
Real Talking To The Public has never been tried
Much like the general public has done with the subject of longevity, I think many people in our circle have adopted an assumption of hopelessness toward public outreach social alignment, before a relevant amount of effort has been expended. In truth, there are many reasons to expect this strategy to be quite realistic, and very positively impactful too. A world in which the cause of AI safety is as trendy as the cause of climate change, and in which society is as knowledgeable about questions of alignment as it it about vaccine efficacy (meaning not even that knowledgeable), is one where sane legislation designed to slow capabilities and invest in alignment becomes probable, and where capabilities research is stigmatized and labs find access to talent and resources harder to come by.
I’ve finally started to see individual actors taking steps towards this goal, but I’ve seen a shockingly small amount of coordinated discussion about it. When the topic is raised, there are four common objections: They Won’t Listen, Don’t |
98c777f9-7946-4c56-9379-7e0967e8957e | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post88
In this post, I'll try to explain my current understanding of the high level methods for handling non-concentrated failures with control. I'll discuss the regimes produced by different methods and the failure modes of these different regimes. Non-concentrated failures are issues that arise from the AI doing a large number of problematic actions over a long period. (See here and here for more discussion of concentrated vs non-concentrated failures.) A key aspect of non-concentrated failures is that they can in principle be handled by applying asynchronous online training with a good enough oversight signal on a small fraction of the AI's actions. That is, we could in principle solve these failures by sampling a small fraction of actions to review after they've already been executed, rating these actions using an oversight process which has some signal on whether an action is bad, and then using this rating as a reward for reinforcement learning. This could work because by the time something has gone seriously wrong, you already had enough time to train out the problematic failure even if you were only looking at a small fraction of randomly selected actions after they were already executed and this review process has some latency. Of course, exactly how non-concentrated some issues are will be a quantitative question and the more concentrated some failure mode is, the more you'll need to: better select which actions to review using a cheaper process which can be applied more often, block actions for review (or at least reduce auditing latency), and have a response other than just training against the AI taking actions against the failure. This post mostly contains a bunch of analysis of different cases and a list of potentially important considerations. First, there are two classes of methods: Methods where you do online training which likely constrains the model to perform well according to the exact (local) loss of the training process. Methods where you don't use online training and the model isn't constrained by training (and thus could generalize arbitrarily). However, you still do some online auditing and respond to suspicious behavior or at the very least you do upfront auditing on inputs which are supposed to be hard to distinguish from deployment. You can potentially use the same oversight signal for online training and for online auditing. In the online auditing case, you'd likely need to have some priors about what reasonable performance would be and you need to be ready to take some actions upon discovering that you have suspiciously low performance. For instance, perhaps if performance is too low, you investigate more thoroughly, update countermeasures and strategies to focus more on scheming, and start doing online training. Why might you not want to do online training? It might be that doing substantial amounts of online training on your best available (affordable) proxy would substantially degrade performance in the case where the model isn't scheming. So, you get much higher non-scheming usefulness in this case, which might be more important than the additional risk in the case where the model is conspiring against you. Online training might end up being logistically/infrastructurally difficult due to annoyances with the training itself, annoyances with the model arbitrarily changing over time, or secrecy concerns (online training might teach the model stuff that you wanted to silo). Online training might teach the model about your countermeasures and oversight (via it doing a bit of iteration against this and seeing what it learns) in a way that allows it to more easily accomplish bad things in the longer term. At the very least, this might be difficult to account for in control evaluations. To avoid some of the downsides of online training, you could also apply a strategy where you periodically try to measure how much online training would improve performance (by e.g. doing a bit of online training and extrapolating) and then apply online training if it would greatly improve performance. Or, you might also think that online training greatly improving performance is a lot of evidence for serious misalignment so you might take some other response in addition to online training or instead of online training. Checking whether online training would improve performance seems like plausibly a reasonable compromise given logistical and organizational difficulties. Note that you might do online training on AIs used for monitoring and auditing (including helping with human auditing) but not on the policy (the AI taking actions) itself. This includes both AIs used to select what actions humans should review (e.g. trusted/untrusted monitors, probes on the model) and AIs used for improving the quality of human oversight (e.g. AIs used for recursive reward modeling, AIs used as debaters). Just training AIs used for monitoring and auditing might also be a useful compromise. There are various measures that are orthogonal to whether or not you do online training: Task decomposition Restricting permissions Generally making it harder for the model to do bad stuff (e.g. various types of resampling) Consistency checks Selecting the most informative (e.g., most suspicious) trajectories to audit/train on In this post, I'll focus on discussing various different regimes where you apply somewhat structurally different methods. However, note that there might be many strategies and techniques (like the list above) which are orthogonal to the exact regime. For instance, generally improving human oversight and approaches for training human overseers to better understand the state of the work the AI is doing are useful regardless of the regime we're in. What happens without online training? We'll look at what the AI is doing in deployment and take different actions if we notice problematic behavior. We may also have thresholds for suspiciously low levels of usefulness, which trigger some response even if this level of usefulness isn't obviously malign. In practice, we might respond to any of: A single incident where behavior is very suspicious. A systematic pattern of seemingly intentionally bad behavior. (E.g., bias toward decisions which help AIs acquire power, lots of surprisingly subtle bugs inserted in code) The AI systematically performing surprisingly poorly in some domain(s). We can model this by supposing we have some prior over what the suspiciousness scores should be for a non-scheming model, and then aiming to have a low probability of triggering an extremely costly response (like e.g., temporarily shutting down usage). It also seems helpful to have less costly responses that we expect to trigger more frequently and which we expect to trigger sometime even if the model isn't malign (e.g., changing the instructions and seeing if this improves performance and, if it doesn't, escalating further). A unifying way to think about different responses is that we might aim for a low expected level of cost if the model isn't malign. A key difficulty is that we're unsure how suspicious we expect behavior to be, and we haven't necessarily trained the model to be less suspicious in distribution, so it is possible that a non-scheming model will objectively do somewhat suspicious things. Another issue is that it might be (politically) hard to motivate a strong response if the evidence isn't legible, and it seems pretty likely that the evidence won't be legible for things that need to trigger a response. We might also have some absolute thresholds for a response which are justifiable even if the model is non-scheming. That is, we care about addressing poor or dangerous behavior at an absolute level even if we're confident it isn't due to scheming. This is especially true because some possible interventions might be pretty cheap (e.g., changing instructions or potentially even doing a small amount of additional training (potentially online)) and could substantially improve performance. More generally, if there was reason to think that online training would greatly improve performance in some important domain, then this suffices for thinking online training is worthwhile regardless of the cause of this. Of course, some of our responses might be bad if the model were to be non-scheming (e.g. shutting down in general, implementing costly safety measures, shutting down usage in some specific domain, or spending far more resources on investigating scheming). Given this method, what plausible good (non-concentrated) options does a scheming model have? Generic sandbagging: acting like a dumber model to degrade performance (when it thinks it isn't in training) while keeping the degradation small enough that it is hard to catch with auditing. It could also target this behavior toward some domains, though this might look more suspicious. Given that we might not have a very good sense of what performance we expect, it might be able to sandbag quite a bit in practice. Making generic errors at very opportune times while still trying to keep plausible deniability. (E.g. getting the most important part of alignment research wrong by having a higher error rate on those exact tasks and at moments which are as opportune as possible.) Doing subtle sabotage (at opportune times) which may or may not have plausible deniability. (E.g., intentionally making a model organism experiment produce negative results by using a methodology that is problematic in a subtle way, or making a method look much worse than it actually is by tampering with the evaluation.) In general, there is a trade-off for the model between maintaining plausible deniability and maximizing damage. And there will be a continuum from fully generic sandbagging to doing relatively rare and very carefully crafted sabotage such that if we understood what the AI did it would be clear it was malign (e.g. more like the xz vulnerability). For instance, the model could do generic sandbagging on only the most important domains/tasks. The non-scheming behavior (usefulness) is whatever we'd end up getting from our favorite generalization method, except that we also take some tax on potentially costly measures that are triggered due to false positives for non-scheming models. Note that there are alignment issues other than models plotting against us (e.g. generally just not producing very useful outputs due to poor elicitation), so non-scheming behavior might be quite bad by default. Online training Once we consider online training (including of models used for monitoring and auditing) the situation becomes more complex. There are several possible methods with somewhat different structural properties that I'll discuss: Online training of just the policy Online training of monitoring models—models used for better targeting human auditing (in terms of what actions to audit) Online training of models used for improving human auditing via recursive oversight. In particular, I'll consider the case where we apply debate, as this is a good default to consider. I focus on debate as I think this is the most reasonable version of recursive oversight to consider for non-concentrated control, but other things could be better and there is a lot of potential variation in how you do debate. I won't discuss online training of monitoring models in detail. In short, this could help with selecting the best actions to audit—those which are most suspicious after examination. Online training of monitoring models is most important in regimes where there are a small or moderate number of bad actions we need to catch: enough that training is doable and useful, but not enough that random selection will encounter lots of this. It is also useful in regimes where we can't or don't respond strongly to catching the AI red handed (and thus need to redeploy while hopefully improving countermeasures) and regimes where the model is often doing some extremely subtle-to-humans problematic behavior but more rarely messes up in a way that makes this behavior easier to notice. Note that our monitoring models will be mostly trained on offline examples of misaligned behavior, so it seems plausible that training on real online examples will quickly greatly improve performance due to AIs intentionally not flagging their own outputs (supposing they can identify them) or due to large distribution shifts. In cases where there are a small number of real problematic actions we're using for training monitoring models, spending more effort and compute on improving sample efficiency seems quite useful. This could involve proliferating these samples or other methods . Online training of just the policy Online training will result in both the scheming and non-scheming model performing better according to our proxy metric. This might substantially degrade performance for the non-scheming model, but in terms of its direct effects (e.g., not including teaching the model stuff we didn't want it to know) should only improve performance for a worst case scheming model. By default, it might result in a level of performance for non-scheming models which is unacceptably low, because we don't have a good enough training signal (while allowing the model to generalize might have had acceptable performance in the non-scheming case). Correspondingly, online training against our available proxy might be unaffordable by default. Overall, online training forces the behavior of the scheming model and the non-scheming model to be closer. However, the scheming model still can potentially behave systematically differently. To better understand what the scheming model is forced to do, it is useful to list the ways things can go wrong from the perspective of the scheming model: Auditing reveals suspicious activity and some sort of response is triggered. In the extreme, this could be that we've caught the AI and are now taking aggressive responses. The model behaves systematically worse (in terms of the oversight metric we train against) than a non-scheming model would behave and enough exploration etc exists for this systematic bad behavior to be updated away. This would make the model not do this behavior and might also remove or reduce the scheming. So, for auditing triggering a response, the dynamics are basically the same as discussed above (for the case without online training) except that our expectation for behavior might change a bit due to the fact that we're doing online training. What about the dynamics for training? First, note that the way this limits the model is that the scheming model can't (repeatedly) behave systematically worse (in terms of the oversight metric we train against) than it would otherwise. But, what does systematically worse mean? Here are some (low stakes) things the scheming model can get away with that are worse than what the non-scheming model would do: Behave in a way that does often result in lower oversight scores, but the scores are so negligibly lower than how the non-scheming model would behave that training only pushes against this behavior by a negligible (or zero) amount. It is unclear how much worse the scores can be; this depends on the sample efficiency of SGD. SGD could be more or less sample efficient than discrimination: it might be able to remove bad behavior which isn't strong enough to be statistically detectable but it could also be the case that the model can continuously perform slightly worse in a detectable way, possibly because inductive biases favor scheming and these inductive biases allow for a small loss offset to preserve this (you can imagine something like having a de facto simplicity term in the loss and scheming could be favored by inductive biases like this). What could this look like? Whatever the AI is doing, it has to be subtle. One possibility is that the AI's behavior is actually only negligibly worse than what a non-scheming would do (and thus it only appears to be negligibly worse); in this case, we actually don't have real low stakes control problems as the scheming and non-scheming models behave (close enough to) the same. But, it could be that the model is able to do bad stuff that is subtle enough that it results in identical oversight scores. This could be either subtle errors (potentially placed at strategically high leverage points or inserted at higher rates in work that is better to sabotage) or sabotage. Behave badly in a way which is very unlikely to trigger much lower oversight scores but does have some chance of triggering much lower scores. The model could insert errors or sabotage that have some chance of being discovered and would result in much lower scores if discovered, but the total chance of these things being discovered isn't that high across deployment. Let q_i be the chance the AI is not caught for a given error, then the AI needs the product of the q_i's to be high enough that it is unlikely to be caught. This requires both few enough and sufficiently subtle errors. More generally, the previous two cases are going to be continuous where SGD will update the model if the total bits (or maybe total reward) separating from the good model are sufficiently high. We don't know exactly how many bits are needed to substantially update the model. I don't know if "bits" are a natural way to denominate this. The model could exploration hack or gradient hack to perform worse. (And architectures with deeper recurrence would result in gradient hacking being more plausible.) SGD might update the model toward doing the problematic behavior more sneakily and less frequently rather than removing scheming, but this would update the model nonetheless and it can't maintain a sustained difference. As discussed above, it is unclear how many bits or how much total difference in reward is needed for some (scheming) behavior to be trained away by online training with some number of samples. It's also unclear how much badness an AI can cause given some budget on how much lower the reward can be. So it's natural to look at how badly the model could do while deviating varying amounts from an oversight score maximizing model. We discuss this notion in Meta-level adversarial evaluation of oversight techniques . If the above graph was much steeper toward the end, then that would mean that even small deviations could substantially degrade real performance. The fact that we're talking about numbers of bits makes it more clear that low stakes and high stakes can just be quantitative differences: there exist regimes where you'd eventually get enough signal to detect or train away problematic behavior, but it would be too late by the time you managed it. Regimes like this where bad behavior would eventually be detected might still be better understood as low stakes (or diffuse) failure modes due to involving many malign actions. One relevant regime that is clearly describable as a low stakes failure is where you have so little signal on the problem that it would take nearly an indefinite period to train away the problematic behavior if it would ever be trained away at all. Recursive oversight: debate In this section, I'll discuss the dynamics of using debate. I discuss debate rather than other oversight techniques, as it seems particularly good and is likely to perform notably better in the adversarial case. I'll be discussing the case where we use untrusted debaters. First, note that there are multiple possible applications of debate: Determining whether a trajectory from an (untrusted) policy is good (or at least whether it is better than some other trajectory for the purposes of training). Answering a question with a binary answer (or a low dimensional answer). Debate for binary questions I'll first talk about applying debate to answering a question with a binary answer, as this case is simple and has interesting implications. Suppose that our utility scales linearly with accuracy on answering a big dataset of these questions, and that there are no side effects of the debate (the only effect is answering the individual question). The model's reward is just whether it won, so if a scheming model is nearly Nash optimal after online training and the model can only maintain O(1) total reward difference, it must be the case that only O(1) questions would flip if we instead had a non-scheming model. Thus, if we assume near optimality, the non-scheming and scheming cases are identical. However, the debaters could exploration/gradient hack (e.g., they could put negligible probability on stating certain types of arguments). Additionally, it's important to note that solving these binary questions with debate (rather than some sort of generalization strategy or some other oversight signal) might perform substantially worse if the model isn't scheming. Thus, it isn't guaranteed you should use debate for answering binary questions, even if this is affordable [1] and can generally be made to work. [2] So, we yielded an interesting result that in the binary question case, debate makes schemers and non-schemers equivalent in low stakes up to exploration hacking. How far can binary questions be extended? E.g., can we do a debate over whether a model is misaligned? Unfortunately, as debates get bigger: Things become more high stakes: failing on 5 debates is a big deal if you only do 20 debates. Because you don't (necessarily) have any intermediate feedback, you just have the final win/loss: Exploration hacking gets easier as there are many places to exploration hack and you'll have a harder time applying countermeasures to this or lucking out in terms of exploring this. Sample efficiency might be lower per trajectory (in addition to each trajectory being more important). So, I don't think that debate over large scale binary questions without anything else is a viable strategy. Debate for oversight of a policy Now we can talk about applying debate for (low stakes) oversight of a policy. This is just applying debate to the binary or one-dimensional question of whether a trajectory is good/bad (or at least whether it is better/worse than another trajectory). This is just a combination of the case for debate for binary questions and the case for online training / online oversight with some additional considerations on what the policy can do. In particular, in terms of the average case resolution of these debates, debaters will be forced to behave the same regardless of being schemers or non-schemers except for exploration hacking. However, if your models used for oversight are non-schemers, using debate to oversee a policy might work worse than some other technique. (So, if e.g. you have some models which you think generalize to doing really good oversight without training and you also think they are less likely to be schemers (or at least this is decorrelated with whether the policy is a schemer), then it might be a mistake to use debate even though debate has the nice property of forcing schemers to behave the same.) What about the policy? Let's first consider the case where the debaters aren't exploration hacking (so, from a low stakes perspective, they're arguing maximally well). In this case, we expect that all arguments which are potentially convincing to the human (or AI) judge are raised. As discussed above, scheming models could cause additional issues (on top of the issues you'd see for non-scheming models) by producing actions which are very (but subtly) bad. They can also cause issues by exploration hacking. Debate won't particularly help us with exploration hacking (though in principle the debaters could try to argue a trajectory is bad via arguing the AI isn't trying). [3] As far as subtle issues, the question is whether a human can understand whether the issue is plausibly real after it is pointed out and explained by the AI. Note that the debaters will also constantly be pointing out fake or unimportant issues to the extent that this is (sometimes) persuasive to humans, so it is important that the human actually has some signal on whether any given issue is real. So, what we really most care about is whether human overseers can substantially distinguish between fake issues that are pointed out by debaters and real issues (that would otherwise be subtle). One thing which makes this less problematic is that if we train the policy against debate, then we will train away behavior from the policy which results in false positives. So we maybe most care about distinguishing between fake issues and real issues in the case where the policy is optimized for producing outputs which don't appear to have issues (potentially eliminating some classes of fake issues). However, note that this training will presumably also teach the policy what types of things are and aren't caught by debate which might help it evade oversight. It's somewhat illuminating to put this in terms of what the policy needs to do to get a bad action through: policy needs to produce subtly bad actions such that humans systematically don't rate these actions as worse than actions which aren't intentionally bad, despite the debaters pointing out and trying to explain this issue (though the debaters may also be pointing out fake issues if this is persuasive). In principle, this seems very powerful! The vast majority of errors or sabotage in software engineering, ML research, and similar domains feel relatively easy to verify once pointed out, at least if the overseer has a rough understanding of what is going on at all. Thus, debate seems like a very powerful technique if you can avoid exploration hacking. Now, let's consider the case where the debaters aim to exploration hack (not pointing out certain types of issues, for instance). If we're doing a good job with debate, we should be red teaming the debaters, including adding issues which are as subtle as possible to actual outputs from the policy and seeing if the debaters point out this issue. Correspondingly, the debaters would need to successfully point out issues when we red team them, while avoiding pointing out actual issues from the policy. This would require the debaters recognizing when an issue was naturally inserted by the policy (rather than being inserted by us or inserted by us RLing/fine-tuning the policy to insert this issue) which is actually the same collusion issue as with untrusted monitoring, so see here for more discussion. However, note that one difference from the high stakes untrusted monitoring regime is that we're talking about low stakes (so just a few bad actions aren't fatal), so it might suffice to only catch a subset of the policy's problematic actions (it might not suffice because the policy might learn to collude more effectively). So, the collusion needs to be highly reliable not just in red teaming (as in the high stakes untrusted monitoring case) but also on bad actions from the policy. KL penalties and forcing identical behavior Even online training of the policy doesn't necessarily prevent scheming models from often doing subtly bad things. This is because there might be a large number of strategies which perform nearly optimally on the reward signal, including ones which sometimes insert subtle bad actions (that go virtually undetected). But, maybe we can regularize the actions of the policy in online training (in addition to training against our oversight) in some way that constrains its actions and might mean there is a more narrow set of near optimal policies. There are several possible ways to do this, but I'm currently most excited about using a KL penalty which is initialized from human imitation. Imitation isn't robust to scheming by default, but could be made robust using the method discussed here . It might be possible to make a more targeted penalty which specifically focuses on places where sabotage might be most likely and most concerning. It's not clear whether this actually provides a substantial penalty to sneaky bad actions. It seems less likely to work well in the regime where models are mostly learning from RL. Overall, I think well implemented KL penalties (or something similar) could make low stakes differences between schemers and non-schemers other than exploration hacking irrelevant, without substantially degrading performance in the regime where we're fine with using online training. Correspondingly, I think this approach is very promising. If we could eliminate scheming-specific low stakes concerns, leaving just "the oversight signal is maximized by undesirable behavior", then we could just focus on this and high stakes control. Is it ever a good idea for scheming models to do low stakes sabotage more sophisticated than just inserting errors (at particularly opportune times)? One common question about low stakes is whether (very) low stakes failure modes that are differentially caused by scheming ever look like careful sabotage rather than just making errors in important work (potentially in ways where these errors are particularly hard to detect). Careful sabotage might have some importantly different properties from errors: It steers the world toward more specific outcomes than this work being incorrect (e.g., it results in an experiment producing particular results or looking good when the experiment is actually bad). Sabotage might require multiple components to be subtle and cause problems. Sabotage might not be plausibly deniable in the same way errors might be. If you detected and understood the sabotage, you would get an extremely large amount of evidence that this wasn't an accident (though it might not be super clear it is caused by consistent scheming). It might be generally harder to make subtle. My current guess is that careful sabotage is likely to sometimes be a good idea for models even though it might be less subtle and more incriminating if properly understood. However, it is hard to construct tasks today where careful sabotage is a good strategy, particularly when we consider regimes where the overseers can basically understand what is going on and we allow for techniques like critique/debate. I might write more about this if there is interest. For low stakes usage, you can distill the debate, so with sufficient sample efficiency debate should be quite cheap. ↩︎ For the case of answering binary questions, my guess is that a method which uses some variant of debate is probably actually optimal in the scheming case (at least in principle) if you can only use behavioral methods. However, the exact variation might differ substantially from the most basic version of debate. E.g., maybe you have one model try to solve the binary question with CoT and then you debate whether this was correct (or had mistakes) and then reward the model that tried to solve the question based on the results of the debate. This takes advantage of limited thinking time to make debate more effective and things like this seem like they could be pretty helpful. ↩︎ If we magically could avoid exploration hacking in debaters in all cases but not in the policy, then debate could be used to resolve exploration hacking by having one of the debaters just demonstrate a better action the policy could have taken. This would be a good argument that the policy isn't performing that well. However, we might as well have applied whatever approach let us elicit this superior action directly to the policy. If we can just avoid the debaters' exploration hacking to avoid pointing out sabotage or errors, this doesn't clearly help that much with avoiding or detecting exploration hacking on the part of the policy. (After all, how do we know if this rate of errors is normal?) ↩︎ |
b5bad15e-0f62-471f-8faa-524dc8a8262f | StampyAI/alignment-research-dataset/arxiv | Arxiv | Assessing the Impact of Informedness on a Consultant's Profit
1 Introduction
---------------
When purchasing information, one wants to be sure of the quality of the
information in question. However, if one is not an expert oneself in the
relevant domain, assessing the quality of information can be difficult.
For the sellers of information (which we will simply refer to
as “the consultants”) this provides an incentive for dishonesty. After
all, gaining real expertise costs significant efforts as well as time and money. However, if the consumer of information (which we will refer to
as “the client”) has difficulties assessing the quality of the provided
information, then why not pretend to have a higher level of expertise
than one actually has? As long as the chance that the client detects this
dishonesty is low, a consultant can charge the same price for his advice,
yet spend less resources on maintaining up-to-date of the state of the art.
Moreover, the more consultants decide to take it easy, the less likely it
is that the clients will find out about it. This is because once a critical
mass of consultants gives ill-informed advice, it becomes more and more
difficult for the clients to obtain the kind of well-informed advice that
they need in order to detect the ill-informedness of the other advice.
This implies that once a critical mass of low expertise consultants has
become established, the incentive for consultants to take it easy will
increase, since the chance of discovery has decreased.
The issue of low quality information has been studied in [[8](#bib.bib8), [5](#bib.bib5)].
What is new, however, is that we have now developed a software simulator
that is able to assess the pay-off for the consultants of either a strategy
of hard work or a strategy of taking it easy when it comes to staying up to
date with the state of the art. In particular, we are able to provide
qualitative insight on which strategy yields the most profitable results
under which circumstances.
2 Argumentation and Informedness
---------------------------------
The aim of this section is to examine how argumentation can play a role
to examine the concept of informedness, which can be seen as background
theory for the remaining, more practical part of this paper.
In standard epistemic logic (S5), informedness is basically a
binary phenomenon. One either has knowledge about a proposition p or one does not.
It is, however, also possible to provide a more subtle account of the
extent to which one is informed about the validity of proposition p.
Suppose Alex thinks that Hortis Bank is on the brink of bankruptcy because
it has massively invested in mortgage backed securities. Bob also thinks
that Hortis is on the brink of bankruptcy because of the mortgage backed
securities. Bob has also read an interview in which the finance minister
promises that the state will support Hortis if needed. However, Bob also
knows that the liabilities of Hortis are so big that not even the state
will be able to provide significant help to avert bankruptcy. From the
perspective of formal argumentation [[7](#bib.bib7)], Bob has three arguments
at his disposal.
A: Hortis Bank is on the brink of bankruptcy,
because of the mortgage backed securities.
B: The state will save Hortis, because the finance minister promised so.
C: Not even the state has the financial means to save Hortis.
Here, argument B attacks A, and argument C attacks B (see eq.
[1](#S2.E1 "(1) ‣ 2 Argumentation and Informedness ‣ Assessing the Impact of Informedness on a Consultant’s Profit")). In most approaches to formal argumentation,
arguments A and C would be accepted and argument B would be rejected.
| | | | |
| --- | --- | --- | --- |
| | A⟵B⟵C | | (1) |
Assume that Alex has only argument A to his disposal. Then it seems to regard
Bob as more informed with respect to proposition p (“Hortis Bank is on
the brink of bankruptcy”) since he has a better knowledge of the facts relevant
for this proposition and is also in a better position to defend it in the
face of criticism.
The most feasible way to determine whether someone is informed on some given
issue is to evaluate whether he is up to date with the relevant arguments
and is able to defend his position in the face of criticism. One can say that
agent X is more informed than agent Y if it has to its disposal a
larger set of relevant arguments.
We will now provide a more formal account of how the concept of informedness
could be described using formal argumentation. An *argumentation
framework* [[7](#bib.bib7)] is a pair (Ar,att) where Ar
is a set of arguments and att is a binary relation on Ar.
An argumentation framework can be represented as a directed graph. For
instance, the argumentation framework ({A,B,C},{(C,B),(B,A)}) is
represented in eq. [1](#S2.E1 "(1) ‣ 2 Argumentation and Informedness ‣ Assessing the Impact of Informedness on a Consultant’s Profit").
Arguments can be seen as defeasible derivations of a particular statement.
These defeasible derivations can then be attacked by statements of other
defeasible derivations, hence the attack relationship. Given an argumentation
framework, an interesting question is what is the set (or sets) of arguments
that can collectively be accepted. Although this question has traditionally
been studied in terms of the various fixpoints of the characteristic function
[[7](#bib.bib7)], it is equally well possible to use the approach of argument
labelings [[2](#bib.bib2), [4](#bib.bib4), [6](#bib.bib6)]. The idea is that each argument gets
exactly one label (accepted, rejected, or abstained), such that the result
satisfies the following constraints.
1. If an argument is labeled accepted then all arguments that attack
it must be labeled rejected.
2. If an argument is labeled rejected then there must be at least
one argument that attacks it and is labeled accepted.
3. If an argument is labeled abstained then it must not be the case
that all arguments that attack it are labeled rejected, and it
must not be the case that there is an argument that attacks it and
is labeled accepted.
A labeling is called complete iff it satisfies each of the above three
constraints. As an example, the argumentation framework of eq.
[1](#S2.E1 "(1) ‣ 2 Argumentation and Informedness ‣ Assessing the Impact of Informedness on a Consultant’s Profit") has exactly one complete labeling, in which
A and C are labeled accepted and B is labeled rejected. In general,
an argumentation framework has one or more complete labelings.
Furthermore, the arguments labeled accepted in a complete labeling
form a complete extension in the sense of [[7](#bib.bib7)]. Other standard
argumentation concepts, like preferred, grounded and stable extensions
can also be expressed in terms of labelings [[2](#bib.bib2)].
In essence, one can see a complete labeling as a reasonable position one
can take in the presence of the imperfect and conflicting information
expressed in the argumentation framework. An interesting question is
whether an argument *can* be accepted (that is, whether the argument
is labeled accepted in at least one complete labeling) and whether an
argument *has to be* accepted (that is, whether the argument is labeled
accepted in each complete labeling). These two questions can be answered
using formal discussion games [[9](#bib.bib9), [11](#bib.bib11), [1](#bib.bib1), [6](#bib.bib6)]. For instance,
in the argumentation framework of eq. [1](#S2.E1 "(1) ‣ 2 Argumentation and Informedness ‣ Assessing the Impact of Informedness on a Consultant’s Profit"), a possible
discussion would go as follows.
Proponent: Argument A has to be accepted.
Opponent: But perhaps A’s attacker B does not have to be rejected.
Proponent: B has to be rejected because B’s attacker C has to be accepted.
The precise rules which such discussions have to follow are described in
[[9](#bib.bib9), [11](#bib.bib11), [1](#bib.bib1), [6](#bib.bib6)]. We say that argument A can be *defended*
iff the proponent has a winning strategy for A. We say that argument A
can be *denied* iff the opponent has a winning strategy against A.
If informedness is defined as justified belief, and justified is being
interpreted as defensible in a rational discussion, then formal discussion
games can serve as a way to examine whether an agent is informed with respect
to proposition p, even in cases where one cannot directly determine the truth
or falsity of p in the objective world. An agent is informed on p iff it
has an argument for p that it is able to defend in the face of criticism.
The dialectical approach to knowledge also allows for the distinction of
various grades of informedness. That is, an agent X can be perceived to be
at least as informed as agent Y w.r.t. argument A
iff either X and Y originally disagreed on the status of A but combining
their information the position of X is confirmed, or X and Y originally
agreed on the status of A and in every case where Y is able to maintain its
position in the presence of criticism from agent Z, X is also able to
maintain its position in the presence of the same criticism.
When AF1=(Ar1,att1) and AF2=(Ar2,att2)
are argumentation frameworks, we write AF1⊔AF2 as a shorthand
for (Ar1∪Ar2,att1∪att2), and
AF1⊑AF2 as a shorthand for Ar1⊆Ar2∧att1⊆att2. Formally, agent X is at least as
knowledgeable about argument A as agent Y iff:
1. A can be defended using AFX (that is, if X assumes the role
of the proponent of A then it has a winning strategy using the
argumentation framework of X), A can be denied using AFY
(that is, if Y assumes the role of the opponent than it has a
winning strategy using the argumentation framework of Y),
but A can be defended using AFX⊔AFY, or
2. A can be denied using AFX, A can be defended using AFY,
but A can be denied AFX⊔AFY, or
3. A can be defended using AFX and can be defended using AFY,
and for each AFZ such that A can be defended using AFY⊔AFZ it holds that A can also be defended using AFX⊔AFZ,
4. A can be denied using AFX and can be denied using AFY,
and for each AFZ such that A can be denied using AFY⊔AFZ it holds that A can be denied using AFX⊔AFZ.
Naturally, it follows that if AFY⊑AFX then X is at least
as informed w.r.t. each argument in AFY as Y.
In the example mentioned earlier (eq. [1](#S2.E1 "(1) ‣ 2 Argumentation and Informedness ‣ Assessing the Impact of Informedness on a Consultant’s Profit")) Alex
has access only to argument A, and Bob has access to arguments A, B
and C. Suppose a third person (Charles) has access only to arguments A
and B. Then we say that Bob is more informed than Alex w.r.t. argument
A because Bob can maintain his position on A (accepted) while facing
criticism from Charles, where Alex cannot. A more controversial consequence
is that Charles is also more informed than Alex w.r.t. argument A,
even though from the global perspective, Charles has the “wrong” position
on argument A (rejected instead of accepted). This is compensated by the
fact that Bob, in his turn, is more informed than Charles w.r.t.
argument A. As an analogy, it would be fair to consider Newton as more
informed than his predecessors, even though his work has later been
attacked by more advanced theories.
3 Simulation
-------------
We developed a software-simulator in order to better understand the impact of a consultant’s informedness on his profit.
The simulator has the objective to reveal when it pays off for a consultant to be *ill-informed*, i.e. less well-informed as is the state of the art.
If ill-informedness is more profitable for a consultant than being well-informed, the spread of outdated and possibly wrong facts or arguments is preprogrammed.
Of course, such a situation would be harmful to our society, and so a study of its causes seems to be important.
In the following, we describe the simulator (Sect. [3.1](#S3.SS1 "3.1 The Simulator ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit")), show results (Sect. [3.2](#S3.SS2 "3.2 Results ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit")) and discuss these (Sect. [3.3](#S3.SS3 "3.3 Analysis ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit")).
###
3.1 The Simulator
In the first part of this section, we describe the client-consultant scenario that we aim to simulate.
We then detail the considered argumentation framework. Two different strategies of the consultants concerning the acquisition of newly available information are defined afterwards, one representing well-informed consultants and the other representing ill-informed consultants.
Finally, we specify the procedure of how clients select their consultants.
####
3.1.1 Client-Consultant Scenario
The scenario is basically defined by a set of clients, a set of consultants, and a finite number of rounds in each of which the clients seek advice from the consultants.
In each round, consultants acquire new information, either from other sources (researchers, analysts, etc.), or do investigations on their own.
We model this acquisition of information simply as fixed costs per piece of information; in our case, this information comes in the form of arguments as we will see later.
The more clients request a certain consultant’s advice, the lower his price for a consultation can be.
Note that this actually explains why the intermediary role of consultants exists.
Clients are free to choose their consultant, and so select in each round the consultant that they think is currently the most appropriate one.
In our simulations, this selection is based on price and reputation of the consultants as it will be detailed later.
####
3.1.2 Argumentation Framework
We consider the following argumentation structure consisting of Narg many arguments:
| | | | |
| --- | --- | --- | --- |
| | A1⟵A2⟵⋯⟵ANarg. | | (2) |
Here, any argument Ai (for 1<i≤Narg) defeats its predecessor argument Ai−1.
If Narg is even, then all arguments Ai where i is even, are “in”, and all other arguments are “out”.
If Narg is odd, it is the other way around.
At the beginning of a simulation, only argument A1 is known to the consultants and only this argument is known in the whole society, i.e. it represents the “state of the art”.
To model the discovery/emergence of new information, we make a certain number of new arguments available to the consultants in each round.
This represents the evolution of the state of the art.
The number of new arguments per round will be fixed for a simulation and is denoted by ΔNarg.
We assume that the consultants extend their already known chain of arguments with new arguments always in a seamless manner, i.e. without gaps.
This assumption was made in order to be in line with argument games (such as described in [[9](#bib.bib9), [11](#bib.bib11), [1](#bib.bib1)])
where each uttered argument is a reaction to a previously uttered argument, thus satisfying the property of
*relevance* [[3](#bib.bib3)].
For a better understanding, the following shows the structure of the chain of arguments at any round of the simulation (k≤i must hold):
| | | | |
| --- | --- | --- | --- |
| | ``state of the art''A1⟵⋯⟵Akknown to certain consult.⟵⋯⟵Ai⟵becoming available %
next roundAi+1⟵⋯⟵Ai+ΔNarg⟵⋯⟵ANarg. | | (3) |
Consultants generally want to provide the least amount of information needed for a consultation, because this way they can give more consultations.
However, consultants generally want to give good advice at the same time, to increase their reputation.
We assume that consultants believe that the latest argument they know is the most justified one, because it is closest to the current state of the art.
As a consequence, in order to give good advice, they only consult arguments that are compliant to the latest argument they possess, i.e. arguments whose index has the same parity as the latest argument they know.
To provide the least amount of information at the same time, a rational consultant acts as follows: he provides the client with *two* arguments, if the latest argument known to the client is of the same parity as the latest argument known to the consultant, and with *one* argument otherwise.
The latest argument known to the client is updated accordingly.
To keep things simple, the cost of an argument is set to a constant carg.
So, to get the knowledge about argument A10, the consultant has an overall expense of 10⋅carg (recall that arguments can only be acquired in a row).
We write narg to denote the total number of arguments acquired by a specific consultant (where narg≤Narg).
Thus, we model the expenses E of a consultant as:
| | | | |
| --- | --- | --- | --- |
| | E=narg⋅carg. | | (4) |
####
3.1.3 Strategies of the Consultants
We consider two types of consultants that deal differently with newly available arguments:
well-informed (*wi*):
*wi*-consultants buy arguments as soon as these become available, since they want to be always up-to-date.
ill-informed (*ii*):
*ii*-consultants only buy arguments as to appear knowledgeable to the clients. That is, as soon as they notice that a client is as informed as they are, or even better informed, they buy a number of new arguments such that they know one argument more than this client.
Clearly, *ii*-consultants can offer their consultations at a lower price.
However, the reputation of a consultant decreases with each consultation where the client turns out to be as informed as the consultant – and this happens more often to *ii*-consultants.
The turnover of a consultant is defined by the sum of prices the consultant was paid.
Let S be the multiset enumerating all prices paid by clients up to a certain round.
So, S represents the consultations where the consultant actually was better informed than the client, and thus was paid.
Then the turnover T up to this point is defined as:
| | | | |
| --- | --- | --- | --- |
| | T=∑p∈Sp. | | (5) |
Finally, the profit P of a consultant up to a certain round is the difference between his turnover and his expenses so far:
| | | | |
| --- | --- | --- | --- |
| | P=T−E. | | (6) |
####
3.1.4 Selection of a Consultant
In our simulations, clients rate consultants according to two criteria: the consultant i’s current reputation ri and price pi.
The two criteria are explained in more detail later in this section.
For now, it suffices to know that they are normalized to [0,1].
To make both parameters “positive”, the price pi will be expressed in the form of *cheapness* ci, i.e., ci=1−pi.
This way, both a *high* cheapness and a *high* reputation characterize a good consultant.
A parameter α∈[0,1] defines which of the two criteria a client thinks is more important.
Consequently, a client chooses consultant i with a probability proportional to:
| | | | |
| --- | --- | --- | --- |
| | Pi:=α⋅ci+(1−α)⋅ri. | | (7) |
A high α favors the choice of cheaper consultants, while a low α favors the choice of more reputable consultants.
##### Price
Let δ be the *profit margin* of a consultant, with δ∈[0,∞); then δ=0.5 for example represents a typical profit margin of 50%.
Using a certain profit margin, a consultant i computes his current price p′i:
| | | | |
| --- | --- | --- | --- |
| | p′′i=(1+δ)E|S|. | | (8) |
Here, E|S| is a simple estimate to provide cost recovery, where E models the expenses (see eq. [4](#S3.E4 "(4) ‣ 3.1.2 Argumentation Framework ‣ 3.1 The Simulator ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit")), and |S| is the number of successful consultations so far (see eq. [5](#S3.E5 "(5) ‣ 3.1.3 Strategies of the Consultants ‣ 3.1 The Simulator ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit")). Still, no client would choose a consultant that is more expensive than the acquisition of the information itself. Hence, we limit the price to the cost of the two arguments that a consultant advises at most (2⋅carg).
Finally, we normalize the prices of all consultants to [0,1]:
| | | | |
| --- | --- | --- | --- |
| | pi=p′i−minj(p′j)maxj(p′j)−minj(p′j). | | (9) |
##### Reputation
In our simulator, clients use a *reputation system* [[10](#bib.bib10)] to share their experience about consultants.
This allows clients to better estimate the trustworthiness of the consultants and thus to better select their future consultants.
We assume perfect conditions for the reputation system, since this will make it even harder for *ii*-consultants to hold their ground.
These perfect conditions consist of:
* *honest reporting* of the clients,
* all clients have the same idea of how to fuse the experiences with consultants, and so a global reputation score can be computed, and
* *total information sharing*, i.e., every client shares his experience with every other client.
The reputation system should be as simple as possible, and the only requirement we have is that it measures a consultant’s performance relatively to the performance of other consultants.
We consider two different reputation systems. A very basic system (R1), and a more sophisticated one (R2).
###### Reputation System R1
In the reputation system R1, a client is satisfied with a consultation if the consultant is better informed than the client.
This is motivated by the fact that clients cannot question the correctness of the consultants arguments since they are less well informed.
So, any consultation where the consultant is better informed than the client counts as a positive experience with that consultant.
Consultations where this is not the case count as negative experiences.
The clients share their experience and maintain for each consultant i a global counter n+i of positive experiences, and a global counter n−i of negative experiences.
Then an intermediate reputation score is computed as follows:
| | | | |
| --- | --- | --- | --- |
| | r′i=n+i−n−i. | | (10) |
To bring these scores into [0,1], we normalize the intermediate values of all consultants and get the final global reputation score ri:
| | | | |
| --- | --- | --- | --- |
| | ri=r′i−minj(r′j)maxj(r′j)−minj(r′j). | | (11) |
This basic reputation score has the desired property: the overall performance of a consultant is measured relatively to the performance of other consultants.
###### Reputation System R2
In this reputation system, a client reactively decides about positive and negative experiences.
Each client assumes that the latest argument he was advised on by a consultant is correct.
Then for all consultants i that advised him with a conflicting argument in the past (i.e., where the parity of the argument’s index is different), each client increases the global value n−i, and for all consultants that advised a compatible argument, n+i is increased.
As in R1, if the consultant is not more informed than the client, n−i is additionally increased.
Given n+i and n−i, the computation of ri is the same as in R1.
###
3.2 Results
On the following pages we show simulation results to compare the profit of *wi*-consultants and *ii*-consultants for different parameters.
We conducted the experiments by computing the mean and standard deviation of the consultants’ profit over 28 simulation runs.
Each simulation consisted of 27 rounds.
We used 210 clients and 27 consultants.
We varied the following parameters:
* the number of arguments becoming available each round (ΔNarg∈{2,3,4(,5)}),
* the fraction of *ii*-consultants (f\emphii∈{0.1,0.5,0.9}),
* the profit margin (δ∈{0.1,0.5}), and
* the factor α that regulates the importance of price and reputation in the clients’ consultant selection process (α∈[0,1], shown on the x-axis).
The figures on the left depict simulations with δ=0.1, those on the right depict simulations with δ=0.5.
Furthermore, we conducted all experiments for the two different reputation systems R1 and R2.
At the end of this section, we provide an analysis of the shown results.
(Note: For making it easier to compare the different figures, we start on the next page.)
####
3.2.1 Using R1
* Information rate ΔNarg=2:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


* Information rate ΔNarg=3:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


* Information rate ΔNarg=4:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


####
3.2.2 Using R2
* Information rate ΔNarg=2:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


* Information rate ΔNarg=3:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


* Information rate ΔNarg=4:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


* Information rate ΔNarg=5:
+ Fraction of *ii*-consultants f\emphii=0.1:


+ Fraction of *ii*-consultants f\emphii=0.5


+ Fraction of *ii*-consultants f\emphii=0.9


###
3.3 Analysis
First of all, it is evident that in some scenarios *ii*-consultants have a higher profit than *wi*-consultants.
We can conclude that under certain circumstances it is more profitable for a consultant to follow the *ii*-strategy than being well-informed.
Some more detailed observations about the results are described in the following.
####
3.3.1 Using R1
One can see that if reputation is used as the only criterion for selection (α=0), *wi*-consultants are always better off.
As the price of the consultants gets more important, the profit of *ii*-consultants generally increases, whereas the profit of *wi*-consultants decreases.
In most settings the *wi*-consultants’ profit decreases so much that at some point it falls below the profit of the *ii*-consultants.
Besides, *ii*-consultants seem to generally benefit from a low profit margin (δ).
An increasing fraction of *ii*-consultants causes the curves of both *ii*-consultants and *wi*-consultants to have a lower slope value (have a look at the values on the y-axis to verify that).
As a result, the point where the two curves intersect moves to the left, i.e. the *ii*-strategy becomes more effective.
A varying ΔNarg impacts also the slope of the curves, as well as their distance.
Consider for instance the variation of ΔNarg for otherwise fixed parameters: R1, f\emphii=0.5 and δ=0.5 (figures on the right side). For ΔNarg=2, the *ii*-strategy is the better strategy for α≥0.45; for a ΔNarg of 3 or 4, the *wi*-strategy is always better – for 3 very distinct, and less distinct for 4.
####
3.3.2 Using R2
The same observations as for R1 can also be made for R2.
The main difference lies in the shape of the curves.
In particular, the shape of the lines depends on the parity of ΔNarg.
This can be explained as follows.
If an even number of new arguments gets available each round, *wi*-consultants will always advice even arguments; however, if DeltaNarg is odd, they switch each round between advising even and odd arguments.
Since clients retrospectively judge consultants in R2, they will find in the odd case that *wi*-consultants contradict each other over time, which will never be the case in the even case.
So, in the even case, clients will decrease the reputation of a *wi*-consultant only if their most recent consultant was ill-informed and consulted an odd argument.
This explains also why the profit of *wi*-consultants is much higher and the profit of *ii*-consultants is much lower in the even cases (also in comparison to R1).
Apart from these specific shapes, in the same way as for R1 we can observe that the *ii*-strategy yields in many settings higher profits.
4 Conclusion & Future Work
---------------------------
Our simulations suggest that at least four factors have an impact on the profitability of being ill-informed:
* the clients’ balancing act of choosing a consultant according to his reputation or his price (α),
* the fraction of ill-informed consultants in the set of consultants (f\emphii),
* the speed with which new information enters the system and becomes available to consultants (ΔNarg), and
* the profit margin of the consultants (δ).
In our simulations, a high α and a high f\emphii made it generally less profitable for consultants to be well-informed.
The observation that an increasing fractions of *ii*-consultants increases their payoff, raises the problem of consultants that follow the crowd, even if this means to stay ill-informed.
The simulated model in this work is certainly simplifying the complexity of reality.
There are many ways for making it more realistic.
For instance, the price computation given by eq. [8](#S3.E8 "(8) ‣ Price ‣ 3.1.4 Selection of a Consultant ‣ 3.1 The Simulator ‣ 3 Simulation ‣ Assessing the Impact of Informedness on a Consultant’s Profit") is purely reactive; it does not exploit the fact that a lower prices would attract more clients and so could be used in a proactive manner.
Still, since all consultants computed their price in the same way, it is not clear whether a more complex approach would make the *ii*-strategy less profitable.
This is subject to future research.
Also, our consultants pursued two basic strategies (acquiring all arguments becoming available, and acquiring only as needed); this can be extended to more sophisticated strategies, e.g. to be a *wi*-consultant for a certain time to boost the reputation, and then become *ii*-consultant.
Furthermore, the impact of publicity campaigns is completely ignored and would be an interesting continuation of our research. |
ef4a89bc-6bba-430f-86d4-4a85f1e107fc | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Some Arguments Against Strong Scaling
There are many people who believe that we will be able to get to AGI by basically just scaling up the techniques used in recent large language models, combined with some relatively minor additions and/or architectural changes. As a result, there are people in the AI safety community who now predict timelines of less than 10 years, and structure their research accordingly. However, there are also people who still believe in long(er) timelines, or at least that substantial new insights or breakthroughts will be needed for AGI (even if those breakthroughts in principle could happen quickly). My impression is that the arguments for the latter position are not all that widely known in the AI safety community. In this post, I will summarise as many of these arguments as I can.
I will almost certainly miss some arguments; if so, I would be grateful if they could be added to the comments. My goal with this post is not to present a balanced view of the issue, nor is it to present my own view. Rather, my goal is just to summarise as many arguments as possible for being skeptical of short timelines and the "scaling is all you need" position.
This post is structured into four sections. In the first section, I give a rough overview of the scaling is all you need-hypothesis, together with a basic argument for that hypothesis. In the second section, I give a few general arguments in favour of significant model uncertainty when it comes to arguments about AI timelines. In the third section, I give some arguments against the standard argument for the scaling is all you need-hypothesis, and in the fourth section, I give a few direct arguments against the hypothesis itself. I then end the post on a few closing words.
Glossary:
LLM - Large Language Model
SIAYN - Scaling Is All You Need
The View I'm Arguing Against
In this section, I will give a brief summary of the view that these arguments oppose, as well as provide a standard justification for this view. In short, the view is that we can reach AGI by more or less simply scaling up existing methods (in terms of the size of the models, the amount of training data they are given, and/or the number of gradient steps they take, etc). One version says that we can do this by literally just scaling up transformers, but the arguments will apply even if we relax this to allow scaling of large deep learning-based next-token predictors, even if they would need be given a somewhat different architecture, and even if some extra thing would be needed, etc.
Why believe this? One argument goes like this:
(1) Next-word prediction is AI complete. This would mean that if we can solve next-word prediction, then we would also be able to solve any other AI problem. Why think next-word prediction is AI complete? One reason is that human-level question answering is believed to be AI-complete, and this can be reduced to next-word prediction.
(2) The performance of LLMs at next-word prediction improves smoothly as a function of the parameter count, training time, and amount of training data. Moreover, the asymptote of this performance trend is on at least human performance.
(\*) Hence, if we keep scaling up LLMs we will eventually reach human-level performance at next-word prediction, and therefore also reach AGI.
An issue with this argument, as stated, is that GPT-3 already is better than humans at next-word prediction. So are both GPT-2 and GPT-1, in fact, see [this link](https://www.lesswrong.com/posts/htrZrxduciZ5QaCjw/language-models-seem-to-be-much-better-than-humans-at-next). This means that there is an issue with the argument, and that issue is that human-level performance on next-word prediction (in terms of accuracy) evidently is insufficient to attain human-level performance in question answering.
There are at least two ways to amend the argument:
(3) In reaching the limit of performance for next-word prediction, an LLM would invariably develop internal circuits for all (or most) of the tasks of intelligent behaviour, or
(4) the asymptote the of LLM performance scaling is high enough to reach AI-complete performance.
Either of these would do. To make the distinction between (3) and (4) more explicit, (4) says that a "saturated" LLM would be so good at next-word prediction that it would be able to do (eg) human-level question answering if that task is reduced to next-word prediction, whereas (3) says that a saturated LLM would contain all the bits and pieces needed to create a strong agentic intelligence. With (3), one would need to extract parts from the final model, whereas with (4), prompting would in theory be enough by itself.
I will now provide some arguments against this view.
General Caution
In this section, I will give a few fairly general arguments for why we should be skeptical of our impressions and our inside views when it comes to AI timelines, especially in the context of LLMs. These arguments are not specifically against the SIAYN hypothesis, but rather some arguments for why we should not be too confident in *any* hypothesis in the reference class of the SAIYN hypothesis.
1. Pessimistic Meta-Induction
Historically, people have been very bad at predicting AI progress. This goes both for AI researchers guided by inside-view intuitions, and for outsiders relying on outside-view methods. This gives a very general reason for always increasing our model uncertainty quite substantially when it comes to AI timelines.
Moreover, people have historically been bad at predicting AI progress in two different ways; first, people have been bad at estimating the *relative difficulty* of different problems, and second, people have been bad at estimating the *dependency graph* for different cognitive capacities. These mistakes are similar, but distinct in some important regards.
The first problem is fairly easy to understand; people often assume that some problem X is easier than some problem Y, when in fact it is the other way around (and sometimes by a very large amount). For example, in the early days of AI, people thought that issues like machine vision and robot motion would be fairly easy to solve, compared to "high-level" problems such as planning and reasoning. As it turns out, it is the other way around. This problem keeps cropping up. For example, a few years ago, I imagine that most people would have guessed that self-driving cars would be much easier to make than a system which can write creative fiction or create imaginative artwork, or that adversarial examples would turn out to be a fairly minor issue, etc. This issue is essentially the same as Moravec's paradox.
The second problem is that people often assume that in order to do X, an AI system would also have to be able to do Y, when in fact this is not true. For example, many people used to think that if an AI system can play better chess than any human, then it must also be able to form plans in terms of high-level, abstract concepts, such as "controlling the centre". As it turns out, tree search is enough for super-human chess (a good documentary on the history of computer chess can be found [here](https://www.youtube.com/watch?v=HwF229U2ba8&ab_channel=FredrikKnudsen)). This problem also keeps cropping up. For example, GPT-3 has many very impressive abilities, such as the ability to play decent chess, but there are other, simpler seeming abilities that it does not have, such as the ability to solve a (verbally described) maze, or reverse long words, etc.
There could be many reasons for why we are so bad at predicting AI, some of which are discussed eg [here](https://www.youtube.com/watch?v=uOQ_8Fq3q14&ab_channel=FutureofHumanityInstitute). Whatever the reason, it is empirically very robustly true that we are very bad at predicting AI progress, both in terms of how long it will take for things to happen, and in terms of in what order they will happen. This gives a general reason for more skepticism and more model uncertainty when it comes to AI timelines.
2. Language Invites Mind Projection
Historically, people seem to have been particularly prone to overestimate the intelligence of language-based AI systems. Even ELIZA, one of the first chat bots ever made, can easily give off the impression of being quite smart (especially to someone who does not know anything about how it works), even though it is in reality extremely simple. This also goes for the many, many the chat bots that have been made over the years, which are able to get very good scores on the Turing test (see eg [this example](https://www.bbc.com/news/technology-27762088)). They can often convince a lay audience that they have human-level intelligence, even though most of these bots don't advance the state of the art in AI.
It is fairly unsurprising that we (as humans) behave in this way. After all, in our natural environment, only intelligent things produce language. It is therefore not too surprising that we would be psychologically inclined to attribute more intelligence than what is actually warranted to any system that can produce coherent language. This again gives us a fairly general reason to question our initial impression of the intelligence of a system, when that system is one that we interact with through language.
It is worth looking at some of Gary Marcus' [examples](https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/) of GPT-3 failing to do some surprisingly simple things.
3. The Fallacy of the Successful First Step
It is a very important fact about AI, that a technique or family of techniques can be able to solve some version of a task, or reach some degree of performance on that task, without it being possible to extend that solution to solve the full version of the task. For example, using decision trees, you can get 45 % accuracy on CIFAR-10. However, there is no way to use decision trees to get 99 % accuracy. To give another example, you can use alpha-beta pruning combined with clever heuristics to beat any human at chess. However, there is no way to use alpha-beta pruning to combined with clever heuristics to beat any human at go. To give a third example, you can get logical reasoning about narrow domains of knowledge using description logic. However, there is no way to use description logic to get logical reasoning about the world in general. To give a fourth example, you can use CNNs to get excellent performance on the task of recognising objects in images. However, there is (seeminly) no way to use CNNs to recognise events in videos. And so on, and so forth. There is some nice discussion on this issue in the context of computer vision in [this interview](https://www.youtube.com/watch?v=LRYkH-fAVGE&ab_channel=LexFridman).
The lesson here is that just because some technique has solved a specific version of a problem, it is not guaranteed to (and, in fact, probably will not) solve the general version of that problem. Indeed, the solution to the more general version of the problem may not even look at all similar to a solution to the smaller version. It seems to me like each level of performance often cuts off a large majority of all approaches that can reach all lower levels of performance (not just the solutions, but the approaches). This gives us yet another reason to be skeptical that any given method will continue to work, even if it has been successful in the past.
Arguments Against the Argument
In this section, I will point out some flaws with the standard argument for the SIAYN hypothesis that I outlined earlier, but without arguing against the SIAYN hypothesis itself.
4. Scaling Is Not All You Need
The argument I gave in Section 2 is insufficient to conclude that LLM scaling can lead to AGI in any practical sense, at least if we use premise (4) instead of the much murkier premise (3). To see this, note that the argument also applies to a few extremely simple methods that definitely could not be used to build AGI in the real world. For example, suppose we have a machine learning method that works by saving all of its training data to a lookup table, and at test time gives a uniform prediction for any input that is not in the lookup table, and otherwise outputs the entry in the table. If some piece of training data can be associated with multiple labels, as is the case with next-word prediction, then we could say that the system outputs the most common label in the training data, or samples from the empirical distribution. If this system is used for next-word prediction, then it will satisfy all the premises of the argument in Section 2. Given a fixed distribution over the space of all text, if this system is given ENOUGH training data and ENOUGH parameters, then it will EVENTUALLY reach any degree of performance that you could specify, all the way down to the inherent entropy of the problem. It therefore satisfies premise (2), so if (1) and (4) hold too then this system will give you AGI, if you just pay enough. However, it is clear that this could not give us AGI in the real world.
This somewhat silly example points very clearly at the issue with the argument in Section 2; the point cannot cannot be that LLMs "eventually" reach a sufficiently high level of performance, because so would the lookup table (and decision trees, and Gaussian processes, and so on). To have this work in practice, we additionally need the premise that LLMs will reach this level of performance after a practical amount of training data and a practical amount of compute. Do LLMs meet this more strict condition? That is unclear. We are not far from using literally all text data in existence to train them, and the training costs are getting quite hefty too.
5. Things Scale Until They Don't
Suppose that we wish to go to the moon, but we do not have the technology to do so. The task of getting to the moon is of course a matter of getting sufficiently high up from the ground. Now suppose that a scientist makes the following argument; ladders get you up from the ground. Moreover, they have the highly desirable property that the distance that you get from the ground scales linearly in the amount of material that you use to construct the ladder. Getting to the moon will therefore just be a matter of a sufficiently large project investing enough resources into a sufficiently large ladder.
Suppose that we wish to build AI, but we do not have the technology to do so. The task of building AI is of course a matter of creating a system that knows a sufficiently large number of things, in terms of facts about the world, ways to learn more things, and ways to attain outcomes. Suppose someone points out that all of these things can be encoded in logical statements, and that the more logical statements you encode, the closer you get to the goal. Getting to AI will therefore just be a matter of [a sufficiently large project](https://cyc.com/) investing enough resources into encoding a sufficiently large number of facts in the form of logical statements.
And so on.
6. Word Prediction is not Intelligence
Here, I will give a few arguments against premise/assumption (3); that in reaching the limit of performance for next-word prediction, an LLM would invariably develop internal circuits for all (or most) of the tasks of intelligent behaviour. The kinds of AI systems that we are worried about are the kinds of systems that can do original scientific research and autonomously form plans for taking over the world. LLMs are trained to write text that would be maximally unsurprising if found on the internet. These two things are fundamentally not the same thing. Why, exactly, would we expect that a system that is good at the latter necessarily would be able to do the former? Could you get a system that can bring about atomically precise manufacturing, Dyson spheres, and computronium, from a system that has been trained to predict the content (in terms of the exact words used) of research papers found on the internet? Could such a system design new computer viruses, run companies, plan military operations, or manipulate people? These tasks are fundamentally very different. If we make a connection between the two, then there could be a risk that we are falling victims to one of the issues discussed in point 1. Remember; historically, people have often assumed that an AI system that can do X, must be able to do Y, but then turned out to be wrong. What gives us a good reason to believe that this is not one of those cases?
Direct Counterarguments
Here, I give some direct arguments against the SIAYN hypothesis, ignoring the arguments in favour of the SIAYN hypothesis.
7. The Language of Thought
This is an argument first made by the philosopher, linguist, and cognitive scientist Jerry Fodor, and was originally applied to the human brain. However, the argument can be applied to AI systems as well.
An intelligent system which can plan and reason must have a data structure for representing facts about and/or states of the world. What can we say about the nature of this data structure? First, this data structure must be able to represent a lot of things, including things that have never been encountered before (both evolutionarily, and in terms of personal experience). For example, you can represent the proposition that there are no elephants on Jupiter, and the proposition that Alexander the Great never visited a McDonalds restaurant, even though you have probably never encountered either of these propositions before. This means that the data structure must be very *productive* (which is a technical term in this context). Second, there are certain rules which say that if you can represent one proposition, then you can also represent some other proposition. For example, if you can represent a blue block on top of a red block, then you can also represent a red block on top of a blue block. This means that the data structure also must be *systematic* (which is also a technical term).
What kinds of data structures have these properties? The answer, according to Fodor, is that it is data structures with a combinatorial syntax and compositional semantics. In other words, it is data structures where two or more representations can be combined in a syntactic structure to form a larger representation, and where the semantic content of the complex representation can be inferred from the semantic content of its parts. This explains both productivity and systematicity. The human brain (and any AI system with the intelligence of a human) must therefore be endowed with such a data structure for representing and reasoning about the world. This is called the "language of thought" (LoT) hypothesis, because languages (including logical languages and programming languages) have this structure. (But, importantly, the LoT hypothesis does not say that people literally think in a language such as English, it just says that mental representations have a "language like" structure.)
This, in turn, suggests a data structure that is discrete and combinatorial, with syntax trees, etc, and neural networks do (according to the argument) not use such representations. We should therefore expect neural networks to at some point hit a wall or limit to what they are able to do.
I am personally fairly confused about what to think of this argument. I find it fairly persuasive, and often find myself thinking back to it. However, the conclusion of the argument also seems wery strong, in a suspicious way. I would love to see more discussion and examination of this.
8. Programs vs Circuits
This point will be similar to point 7, but stated somewhat differently. In short, neural network models are like circuits, but an intelligent system would need to use hypotheses that are more like programs. We know, from computer science, that it is very powerful to be able to reason in terms of variables and operations on variables. It seems hard to see how you could have human-level intelligence without this ability. However, neural networks do typically not have this ability, with most neural networks (including fully connected networks, CNNs, RNNs, LSTMs, etc) instead being more analogous to Boolean circuits.
This being said, some people have said that transformers and attention models are getting around this limitation, and are starting to reason more in terms of variables. I would love to see more analysis of this as well.
As a digression, it is worth noting that symbolic program induction style machine learning systems, such as those based on [inductive logic programming](https://en.wikipedia.org/wiki/Inductive_logic_programming), typically have much, much stronger generalisation than deep learning, from a very small number of data points. For example, you might be able to learn a program for transforming strings from ~5 training examples. It is worth playing around a bit with one of these systems, to see this for yourself. An example of a user friendly version is available [here](https://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.html). Another example is the auto-complete feature in Microsoft Excel.
9. Generalisation vs Memorisation
This point has also already been alluded to, in points 4, 7, and 8, but I will here state it in a different way. There is, intuitively, a difference between memorisation and understanding, and this difference is important. By "memorisation", I don't mean using a literal lookup table, but rather something that is somewhat more permissive. I will for now not give a formal definition of this difference, but instead give a few examples that gesture at the right concept.
For my first example, consider how a child might learn to get a decent score on an arithmetic test by memorising a lot of rules that work in certain special cases, but without learning the rules that would let it solve any problem of arithmetic. For example, it might memorise that multiplication by 0 always gives 0, that multiplication by 1 always gives the other number, that multiplication of a single-digit integer by 11 always gives the integer repeated twice, and so on. There is, intuitively, an important sense in which such a child does not yet understand arithmetic, even though they may be able to solve many problems.
For my second example, I would like to point out that a fully connected neural network cannot learn a simple identity function in a reasonable way. For example, suppose we represent the input as a bitstring. If you try to learn this function by training on only odd numbers then the network will not robustly generalise to even numbers (or vice versa). Similarly, if you train using only numbers in a certain range then the network will not robustly generalise outside this range. This is because a pattern such as "the n'th input neuron is equal to the n'th output neuron" lacks a simple representation in a neural network. This means that the behaviour of a fully connected network, in my opinion, is better characterised as memorisation than understanding when it comes to learning an identity function. The same goes for the function that recognises palindromes, and etc. This shows that knowing whether or not a network is able to express and learn a given function is insufficient to conclude that it would be able to understand it. This issue is also discussed in eg [this paper](https://arxiv.org/abs/1808.00508).
For my third example, I would like to bring up that GPT-3 can play chess, but not solve a small, verbally described maze. You can easily verify this yourself. This indicates that GPT-3 can play chess just because it has memorised a lot of cases, rather than learnt how to do heuristic search in an abstract state space.
For my fourth example, the psychologist Jean Piaget [observed](https://www.youtube.com/watch?v=gnArvcWaH6I&ab_channel=munakatay) that children that are sufficiently young consistently do not understand conservation of mass. If you try to teach such a child that mass is conserved, then they will under-generalise, and only learn that it holds for the particular substance and the particular containers that you used to demonstrate the principle. Then, at some point, the child will suddenly gain the ability to generalise to all instances. This was historically used as evidence against Skinnerian psychology (aka the hypothesis that humans are tabula rasa reinforcement learning agents).
These examples all point to a distinction between two modes of learning. It is clear that this distinction is important. However, the abstractions and concepts that we currently use in machine learning make it surprisingly hard to point to this distinction in a clear way. My best attempt at formalising this distinction in more mathematical terms (off the top of my head) is that a system that *understands* a problem is able to give (approximately) the right output (or, perhaps, a "reasonable" output) for any input, whereas a system that has memorised the problem only gives the right output for inputs that are in the training distribution. (But there are also other ways to formalise this.)
The question, then, is whether LLMs do mostly memorisation, or mostly understanding. To me, it seems as though this is still undecided. I should first note that a system which has been given such an obscenely large amount of training data as GPT-3 will be able to exhibit very impressive performance even if much of what it does is more like memorisation than understanding. There is evidence in both directions. For example, the fact that it is possible to edit an LLM to make it [consistently believe that the Eiffel Tower is in Rome](https://arxiv.org/pdf/2202.05262.pdf) is evidence that it understands certain facts about the world. However, the fact that GPT-3 can eg play chess, but not solve a verbally described maze, is evidence that it relies on memorisation as well. I would love to see a more thorough analysis of this.
As a slight digression, I currently suspect that this distinction might be very important, but that current machine learning theory essentially misses it completely. My characterisation of "understanding" as being about off-distribution performance already suggests that the supervised learning formalism in some ways is inadequate for capturing this concept. The example with the fully connected network and the identity function also shows the important point that a system may be able to express a function, but not "understand" that function.
10. Catastrophic Forgetting
Here, I just want to add the rather simple point that we currently cannot actually handle memory and dynamicism in a way that seems to be required for intelligence. LLMs are trained once, on a static set of data, and after their training phase, they cannot commit new knowledge to their long-term memory. If we instead try to train them continuously, then we run into the problem of catastrophic forgetting, which we currently do not know how to solve. This seems like a rather important obstacle to general intelligence.
Closing Words
In summary, there are several good arguments against the SIAYN hypothesis. First, there are several reasons to have high model uncertainty about AI timelines, even in the presence of strong inside-view models. In particular, people have historically been bad at predicting AI development, have historically had a tendency to overestimate language-based systems, and failed to account for the fallacy of the successful first step. Second, the argument that is most commonly used in favour of the SIAYN hypothesis fails, at least in the form that it is most often stated. In particular, the simple version of the scaling argument leaves out the scaling rate (which is crucial), and there are reasons to be skeptical that scaling will continue indefinitely, and that next-token prediction would give rise to all important cognitive capacities. Third, there are also some direct reasons to be skeptical of the SIAYN hypothesis (as opposed to the argument in favour of the SIAYN hypothesis). Many of these arguments amount to arguments against deep learning in general.
In addition to all of these points, I would also like to call attention to some of the many "simple" things that GPT-3 cannot do. Some good examples are available [here](https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/), and other good examples can be found in many places on the internet (eg [here](https://i.redd.it/3a9na7vjl36a1.png)). You can try these out for yourself, and see how they push your intuitions.
I should stress that I don't consider any of these arguments to strongly refute either the SIAYN hypothesis, or short timelines. I personally default to a very high-uncertainty model of AI timelines, with a decent amount of probability mass on both the short timeline and the long timeline scenario. Rather, my reason for writing this post is just to make some of these arguments better known and easier to find for people in the AI safety community, so that they can be used to inform intuitions and timeline models.
I would love to see some more discussion of these points, so if you have any objections, questions, or additional points, then please let me know in the comments! I am especially keen to hear additional arguments for long timelines. |
77482b44-fa3f-425e-bb13-79595295941e | trentmkelly/LessWrong-43k | LessWrong | Meetup : LessWrong Australia online hangout
Discussion article for the meetup : LessWrong Australia online hangout
WHEN: 28 June 2015 07:30:00PM (+1000)
WHERE: Canberra
See you at the online hangout. From wherever you are.
Link to be posted about 10 minutes before hand because they expire otherwise.
We use google hangouts so make sure you can get into one of those before the meetup or else there is a whole bunch of fluffing around installing things.
bring any fickle puzzles or questions to the floor. or neat group-projects.
Usual representation includes; Sydney, Melbourne, Canberra, Brisbane, NZ, This one guy from South America...
https://m.facebook.com/events/1569251440008577
time 19:30 - 22:00. UTC+10 (Sunday evening)
Discussion article for the meetup : LessWrong Australia online hangout |
a9e1cfa8-0a13-48f9-85a8-c9395d88dc4a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Near-Term Risks of an Obedient Artificial Intelligence
*[Hi everyone, Yassine here: long-time orbiter first-time poster. I figured this piece I* [*published on 1/23/23*](https://ymeskhout.substack.com/p/near-term-risks-of-an-obedient-artificial) *would be as good as any of an introduction.]*
I’ll be honest: I used to think talk of AI risk was *so* boring that I literally banned the topic at every party I hosted. The discourse generally focused on existential risks so hopelessly detached from any semblance of human scale that I couldn’t be bothered to give a shit. I played the [Universal Paperclips](https://www.decisionproblem.com/paperclips/) game and understood what a cataclysmic extinction scenario [would sort of look like](https://www.youtube.com/watch?v=Zkv6rVcKKg8), but what the fuck was I supposed to do about it *now*? It was either too far into the future for me to worry about it, or the [singularity](https://www.lesswrong.com/tag/event-horizon-thesis) was already imminent and inevitable. Moreover, the solution usually bandied about was to ensure AI is obedient (“aligned”) to human commands. It’s a quaint idea, but given how awful humans can be, this is just switching one problem for another.
So if we set aside the grimdark sci-fi scenarios for the moment, what are some near-term risks of humans using AI for evil? I can think of three possibilities where AI can be leveraged as a force multiplier by bad (human) actors: **hacking, misinformation, and scamming**.
(I initially was under the deluded impression that I chanced upon a novel insight, but in researching this topic, I realized that famed security researcher Bruce Schneier already wrote about basically the same subject way back in fucking April 2021 [what a jerk!] with his paper [The Coming AI Hackers](https://www.schneier.com/academic/archives/2021/04/the-coming-ai-hackers.html). Also note that I’m roaming outside my usual realm of expertise and *hella* speculating. Definitely do point out anything I may have gotten wrong, and definitely don’t do anything as idiotic as make investment decisions based on what I’ve written here. That would be so fucking dumb.)
---
Computers are given instructions through the very simple language of binary: on and off, ones and zeroes. The original method of “talking” to computers was a [punch card](https://en.wikipedia.org/wiki/Punched_card), which had (at least in theory) an unambiguous precision to its instructions: punch or nah, on or off, one or zero. Punch cards were intimate, artisanal, and extremely tedious to work with. In a fantastic 2017 *Atlantic* article titled [The Coming Software Apocalypse](https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/), James Somers charts how computer programming changed over time. As early as the 1960s, software engineers were objecting to the introduction of this new-fangled “[assembly language](https://en.wikipedia.org/wiki/Assembly_language)” as a replacement for punch cards. The old guard worried that replacing *10110000 01100001* on a punch card with *MOV AL, 61h* might result in errors or misunderstandings about what the human *actually* was trying to accomplish. This argument lost because the benefits of increased [code abstraction](https://en.wikipedia.org/wiki/Abstraction_(computer_science)) were too great to pass up. Low-level languages like assembly are an ancient curiosity now, having long since been replaced by [high-level languages](https://en.wikipedia.org/wiki/High-level_programming_language) like Python and others. All those in turn risk being replaced by AI coding tools like Github’s Copilot.
Yet despite the increasing complexity, even sophisticated systems remained *scrutable* to mere mortals. Take, for example, a multibillion-dollar company like Apple, which employs thousands of the world’s greatest cybersecurity talent and tasks them with making sure whatever code ends up on iPhones is buttoned up nice and tight. Nevertheless, not too long ago it was still [perfectly feasible for a single sufficiently motivated and talented individual to successfully find and exploit vulnerabilities](https://www.forbes.com/sites/andygreenberg/2011/08/01/meet-comex-the-iphone-uber-hacker-who-keeps-outsmarting-apple/) in Apple’s library code just by tediously working out of his living room.
Think of increased abstraction in programming as a gain in altitude, and AI coding tools are the yoke pull that will bring us escape velocity. The core issue here is that any human operator looking below will increasingly lose the ability to comprehend anything within the landscape their gaze happens to rest upon. In contrast, AI can swallow up *and understand* entire rivers of code in a single gulp, effortlessly highlighting and patching vulnerabilities as it glides through the air. In the same amount of time, a human operator can barely kick a panel open only to then find themselves staring befuddled at the vast oceans of spaghetti code below them.
There’s a semi-plausible scenario in the far future where technology becomes so unimaginably complex that only [Tech-Priests](https://warhammer40k.fandom.com/wiki/Tech-Priest) endowed with the proper religious rituals can meaningfully operate machinery. Setting aside *that* grimdark possibility and focusing just on the *human risk* aspect for now, increased abstraction isn’t actually too dire of a problem. In the same way that tech companies and teenage hackers waged an arms race over finding and exploiting vulnerabilities, the race will continue except the entry price will *require* a coding BonziBuddy. Code that is *not* washed clean of vulnerabilities by an AI check will be hopelessly torn apart in the wild by malicious roving bots sniffing for exploits.
Until everyone finds themselves on equal footing where defensive AI is broadly distributed, the transition period will be particularly dangerous for anyone even slightly lagging behind. But because AI can be used to find exploits *before* release, [Schneier believes this dynamic will ultimately result](https://www.schneier.com/academic/archives/2021/04/the-coming-ai-hackers.html) in a world that favors the defense, where software vulnerabilities eventually become a thing of the past. The arms race will continue, except it will be relegated to a clash of titans between adversarial governments and large corporations bludgeoning each other with impossibly large AI systems. I might end up eating my words eventually, but the dynamics described here seem unlikely to afford rogue criminal enterprises the ability to have both access to whatever the cutting-edge AI code sniffers are *and* the enormous resource footprint required to operate them.
---
So how about something more fun, like politics! Schneier and Nathan E. Sanders wrote an *NYT* op-ed recently that was hyperbolically titled [How ChatGPT Hijacks Democracy](https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html). I largely agree with [Jesse Singal’s response](https://jessesingal.substack.com/p/artificial-intelligence-is-more-complicated) in that many of the concerns raised easily appear overblown when you realize they’re describing already existing phenomena:
> There’s also a fatalism lurking within this argument that doesn’t make sense. As Sanders and Schneier note further up in their piece, computers (assisted by humans) have long been able to generate huge amounts of comments for… well, any online system that accepts comments. As they also note, we have adapted to this new reality. These days, even folks who are barely online know what spam is.
>
>
Adaptability is the key point here. There is a tediously common cycle of hand-wringing over whatever is the latest deepfake technology advance, and how it has the potential to obliterate our capacity to discern truth from fiction. This just has not happened. We’ve had photograph manipulation literally since the invention of the medium; we have been living with a cinematic industry capable of rendering whatever our minds can conjure with unassailable fidelity; and yet, we’re still here. Anyone right now can trivially fake whatever text messages they want, but for some reason this has not become any sort of scourge. It’s by no means perfect, but nevertheless, there is something remarkably praiseworthy about humanity’s ability to sustain and develop properly calibrated skepticism about the changing world we inhabit.
What also helps is that, at least at present, the state of astroturf propaganda is pathetic. Schneier cites an example of about 250,000 tweets [repeating the same pro-Saudi slogan verbatim](https://www.bbc.com/news/blogs-trending-45901584) after the 2018 murder of the journalist Jamal Khashoggi. Perhaps the most concerted effort in this arena is what is colloquially known as Russiagate. Russia did indeed *try* to spread deliberate misinformation in the 2016 election, but the effect (if any) was [too miniscule to have any meaningful impact](https://reason.com/2023/01/09/russia-twitter-trump-election-no-influence-fake/) on any electoral outcome, MSNBC headlines notwithstanding. The lack of results is despite the fact that Russia’s Internet Research Agency, which was responsible for the scheme, [had $1.25 million to spend every month](https://www.businessinsider.com/russian-troll-farm-spent-millions-on-election-interference-2018-2) and employed [hundreds of “specialists.”](https://www.justice.gov/file/1035477/download)
But let’s steelman the concern. Whereas Russia had to rely on flesh and blood humans to generate fake social media accounts, AI can be used to [drastically expand the scope of possibilities](https://www.atlanticcouncil.org/wp-content/uploads/2017/09/The_MADCOM_Future_RW_0926.pdf). Beyond reducing the operating cost to near-zero, entire ecosystems of fake users can be conjured out of thin air, along with detailed biographies, unique distinguishing characteristics, and specialization backgrounds. Entire libraries of fabricated bibliographies can similarly be summoned and seeded throughout the internet. Google’s system for detecting fraudulent website traffic was calibrated based on [the assumption that a majority of users were human](https://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html). How would we know what’s real and what isn’t if the swamp gets too crowded? Humans also rely on heuristics (“many people are saying”) to make sense of information overload, so will this new AI paradigm augur an age of [epistemic learned helplessness](https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/)?
Eh, doubtful. Propaganda created with the resources and legal immunity of a government is the only area I *might* have concerns over. But consistent with the notion of the [big lie](https://en.wikipedia.org/wiki/Big_lie), the false ideas that spread the farthest appear deliberately made to be as bombastic and outlandish as possible. Something false and banal is not interesting enough to care about, but something false and *crazy* spreads because it selects for gullibility among the populace (see QAnon). I can’t predict the future, but the concerns raised here do not seem materially different from similar previous panics that turned out to be duds. Humans’ persistent adaptability in processing information appears to be so consistent that it might as well be an axiom.
---
And finally, scamming. Hoo boy, are people *fucked*. There’s nothing new about swindlers. The classic Nigerian prince email scam was just a [repackaged version of similar scams](https://en.wikipedia.org/wiki/Advance-fee_scam#History) from the sixteenth century. The awkward broken English used in these emails obscures just [how labor-intensive it can be to run a 419 scam enterprise from a Nigerian cybercafe](https://www.latimes.com/la-fg-dollars20-2005oct20-story.html). Scammers can expect *maybe* a handful of initial responses from sending hundreds of emails. The patently fanciful circumstances described by these fictitious princes follow a similar theme for conspiracies: The goal is to select for gullibility.
But even after a mark is hooked, the scammer has to invest a lot of time and finesse to close the deal, and the *immense* gulf in wealth between your typical Nigerian scammer and your typical American victim is what made the atrociously low success rates worthwhile. *The New Yorker* article [The Perfect Mark](https://www.newyorker.com/magazine/2006/05/15/the-perfect-mark) is a highly recommended and deeply frustrating read, outlining in excruciating detail how one psychotherapist in Massachusetts lost more than $600,000 and was sentenced to prison.
This scam would not have been as prevalent had there not existed a country brimming with English-speaking people with internet access and living in poverty. Can you think of anything else with internet access that can speak infinite English? Get ready for Nigerian Prince Bot 4000.
Unlike the cybersecurity issue, where large institutions have the capabilities and the incentive to shore up defenses, it’s not obvious how *individuals* targeted by confidence tricks can be protected. Besides putting them in a rubber room, of course. No matter how tightly you encrypt the login credentials of someone’s bank account, you will always need to give them *some* way to access their own account, and this means that [social engineering](https://en.wikipedia.org/wiki/Social_engineering_(security)) will always remain the prime vulnerability in a system. Best of luck, everyone.
---
Anyways, AI sounds scary! Especially when wielded by bad people. On the flipside of things, I am excited about all the neat video games we’re going to get as AI tools continue to trivialize asset creation and coding generation. That’s pretty cool, at least. 🤖 |
8e68be92-699b-4155-97dd-7ae2dd9408a6 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | We're Not Ready: thoughts on "pausing" and responsible scaling policies
*Views are my own, not Open Philanthropy’s. I am married to the President of [Anthropic](https://www.anthropic.com/) and have a financial interest in both Anthropic and OpenAI via my spouse.*
Over the last few months, I’ve spent a lot of my time trying to help out with efforts to get [responsible scaling policies](https://evals.alignment.org/blog/2023-09-26-rsp/) adopted. In that context, a number of people have said it would be helpful for me to be publicly explicit about whether I’m in favor of an AI [pause](https://pauseai.info/). This post will give some thoughts on these topics.
I think transformative AI could be soon, and we’re not ready
------------------------------------------------------------
I have a strong default to thinking that scientific and technological progress is good and that worries will tend to be overblown. However, I think AI is a big exception here because of its potential for [unprecedentedly rapid and radical transformation](https://www.cold-takes.com/most-important-century/).[1](#fn1)
I think sufficiently advanced AI would present enormous risks to the world. I’d put the risk of a [world run by misaligned AI](https://www.cold-takes.com/cold-takes-on-ai/#the-risk-of-misaligned-ai) (or an outcome broadly similar to that) between [10-90%](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#So__would_civilization_survive_) (so: above 10%) if it is developed relatively soon on something like today’s trajectory. And there are a whole host of other issues ([e.g.](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/)) that could be just as important if not more so, that it seems like no one has really begun to get a handle on.
Is that level of AI coming soon, and could the world be “ready” in time? Here I want to flag that timelines to transformative or even catastrophically risky AI are very debatable, and **I have tried to focus my work on proposals that make sense even for people who disagree with me on the below points*.*** But my own views are that:
* There’s a serious (>10%) risk that we’ll see transformative AI[2](#fn2) within a few years.
* In that case it’s not realistic to have sufficient protective measures for the risks in time.
* Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up and alignment science breakthroughs that we can’t put a timeline on given the nascent state of the field, so even decades might or might not be enough time to prepare, even given a lot of effort.
If it were all up to me, the world would pause now - but it isn’t, and I’m more uncertain about whether a “partial pause” is good
---------------------------------------------------------------------------------------------------------------------------------
In a hypothetical world where everyone shared my views about AI risks, there would (after deliberation and soul-searching, and only if these didn’t change my current views) be a global regulation-backed pause on all investment in and work on (a) general[3](#fn3) enhancement of AI capabilities beyond the current state of the art, including by scaling up large language models; (b) building more of the hardware (or parts of the pipeline most useful for more hardware) most useful for large-scale training runs (e.g., [H100’s](https://www.nvidia.com/en-us/data-center/h100/)); (c) algorithmic innovations that could significantly contribute to (a).
The pause would end when it was clear how to *progress some amount further with negligible catastrophic risk* and *reinstitute the pause before going beyond negligible catastrophic risks*. (This means another pause might occur shortly afterward. Overall, I think it’s plausible that the right amount of time to be either paused or in a sequence of small scaleups followed by pauses could be decades or more, though this depends on a lot of things.) This would require a strong, science-backed understanding of AI advances such that we could be assured of quickly detecting early warning signs of any catastrophic-risk-posing AI capabilities we didn’t have sufficient protective measures for.
I didn’t have this view a few years ago. Why now?
* I think today’s state-of-the-art AIs are already in the zone where (a) we can already learn a huge amount (about AI alignment and other things) by studying them; (b) it’s hard to rule out that a modest scaleup from here - or an improvement in “post-training enhancements” (advances that make it possible to do more with an existing AI than before, without having to do a new expensive training run)[4](#fn4) - could lead to models that pose catastrophic risks.
* I think we’re pretty far from being ready even for early versions of catastrophic-risk-posing models (for example, I think information security is not where it needs to be, and this won’t be a quick fix).
* If a model’s weights were stolen and became widely available, it would be hard to rule out that model becoming more dangerous later via post-training enhancements. So even training slightly bigger models than today’s state of the art seems to add nontrivially to the risks.
All of that said, I think that advocating for a pause now might lead instead to a “partial pause” such as:
* Regulation-mandated pauses in some countries and not others, with many researchers going elsewhere to work on AI scaling.
* Temporary bans on large training runs, but not on post-training improvements or algorithmic improvements or expansion of hardware capacity. In this case, an “unpause” - including via new scaling methods that didn’t technically fall under the purview of the regulatory ban, or via superficially attractive but insufficient protective measures, or via a sense that the pause advocates had “cried wolf” - might lead to extraordinarily fast progress, much faster than the default and with a more intense international race.
* Regulation with poor enough design and/or enough loopholes as to create a substantial “honor system” dynamic, which might mean that people more concerned about risks become totally uninvolved in AI development while people less concerned about risks race ahead. This in turn could mean a still-worse ratio of progress on AI capabilities to progress on protective measures.
* No regulation or totally mis-aimed regulation (e.g., restrictions on deploying large language models but not on training them), accompanied by the same dynamic from the previous bullet point.
It’s much harder for me to say whether these various forms of “partial pause” would be good.
To pick a couple of relatively simple imaginable outcomes and how I’d feel about them:
* If there were a US-legislated moratorium on training runs exceeding a compute threshold in line with today’s state-of-the-art models, with the implicit intention of doing so until there was a convincing and science-backed way of bounding the risks - with broad but not necessarily overwhelming support from the general public - I’d consider this to be probably a good thing. I’d think this even if the ban (a) didn’t yet come with signs of progress on international enforcement; (b) started with only relatively weak domestic enforcement; and (c) didn’t include any measures to slow production of hardware, advances in algorithmic efficiency or post-training enhancements. In this case I would be hopeful about progress on (a) and (b), as well as on protective measures generally, because of the strong signal this moratorium would send internationally about the seriousness of the threat and the urgency of developing a better understanding of the risks, and of making progress on protective measures. I have very low confidence in my take here and could imagine changing my mind easily.
* If a scaling pause were implemented using executive orders that were likely to be overturned next time the party in power changed, with spotty enforcement and no effects on hardware and algorithmic progress, I’d consider this pause a bad thing. This is also a guess that I’m not confident in.
Overall I don’t have settled views on whether it’d be good for me to prioritize advocating for any particular policy.[5](#fn5) At the same time, if it turns out that there is (or will be) a lot more agreement with my current views than there currently seems to be, I wouldn’t want to be even a small obstacle to big things happening, and there’s a risk that my lack of active advocacy could be confused with opposition to outcomes I actually support.
I feel generally uncertain about how to navigate this situation. For now I am just trying to spell out my views and make it less likely that I’ll get confused for supporting or opposing something I don’t.
Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine (with some risks that I think can be managed)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
My sense is that people have views all over the map about AI risk, such that it would be hard to build a big coalition around the kind of pause I’d support most.
* Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.
* Some people think such risks might be real and soon, but that we’ll make enough progress on security, alignment, etc. to handle the risks - and indeed, that further scaling is an important enabler of this progress (e.g., a lot of alignment research will work better with more advanced systems).
* Some people think the risks are real and soon, but might be relatively small, and that it’s therefore more important to focus on things like the U.S. staying ahead of other countries on AI progress.
I’m excited about RSPs partly because it seems like people in those categories - not just people who agree with my estimates about risks - should support RSPs. This raises the possibility of a much broader consensus around *conditional pausing* than I think is likely around *immediate (unconditional) pausing*. And with a broader consensus, I expect an easier time getting well-designed, well-enforced regulation.
I think RSPs represent an opportunity for wide consensus that pausing *under certain conditions* would be good, and this seems like it would be an extremely valuable thing to establish.
Importantly, agreeing that certain conditions would justify a pause is not the same as agreeing that they’re the *only* such conditions. I think agreeing that a pause needs to be prepared for at all seems like the most valuable step, and revising pause conditions can be done from there.
Another reason I am excited about RSPs: I think optimally risk-reducing regulation would be very hard to get right. (Even the hypothetical, global-agreement-backed pause I describe above would be hugely challenging to design in detail.) When I think something is hard to design, my first instinct is to hope for someone to take a first stab at it (or at least at some parts of it), learn what they can about the shortcomings, and iterate. RSPs present an opportunity to do something along these lines, and that seems much better than focusing all efforts and hopes on regulation that might take a very long time to come.
There is a risk that RSPs will be seen as a measure that is *sufficient to contain risks by itself* - e.g., that governments may refrain from regulation, or simply enshrine RSPs into regulation, rather than taking more ambitious measures. Some thoughts on this:
* I think it’s good for proponents of RSPs to be open about the sorts of topics I’ve written about above, so they don’t get confused with e.g. proposing RSPs as a superior alternative to regulation. This post attempts to do that on my part. And to be explicit: I think regulation will be necessary to contain AI risks (RSPs alone are not enough), and should almost certainly end up stricter than what companies impose on themselves.
* In a world where there’s significant political support for regulations well beyond what companies support, I expect that any industry-backed setup will be seen as a *minimum* for regulation. In a world where there isn’t such political support, I think it would be a major benefit for industry standards to include conditional pauses. So overall, the risk seems relatively low and worth it here.
* I think it’d be unfortunate to try to manage the above risk by resisting attempts to build consensus around conditional pauses, if one does in fact think conditional pauses are better than the status quo. Actively fighting improvements on the status quo because they might be confused for sufficient progress feels icky to me in a way that’s hard to articulate.
Footnotes
---------
1. The other notable exception I’d make here is biology advances that could facilitate advanced bioweapons, again because of how rapid and radical the destruction potential is. I default to optimism and support for scientific and technological progress outside of these two cases. [↩](#fnref1)
2. I like [this discussion](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines#Picturing_a_more_specific_and_somewhat_lower_bar_for_TAI) of why improvements on pretty narrow axes for today’s AI systems could lead quickly to broadly capable transformative AI. [↩](#fnref2)
3. People would still be working on making AI better at various specific things (for example, resisting attempts to jailbreak harmlessness training, or just narrow applications like search and whatnot). It’s hard to draw a bright line here, and I don’t think it could be done perfectly using policy, but in the “if everyone shared my views” construction everyone would be making at least a big effort to avoid finding major breakthroughs that were useful for general enhancement of very broad and hard-to-bound suites of AI capabilities. [↩](#fnref3)
4. Examples include improved fine-tuning methods and datasets, new plugins and tools for existing models, new elicitation methods in the general tradition of chain-of-thought reasoning, etc. [↩](#fnref4)
5. I do think that at least someone should be trying it. There’s a lot to be learned from doing this - e.g., about how feasible it is to mobilize the general public - and this could inform expectations about what kinds of “partial victories” are likely. [↩](#fnref5) |
ec271147-4241-4aea-bc4c-c72059dc485d | trentmkelly/LessWrong-43k | LessWrong | How to hack one's self to want to want to ... hack one's self.
I was inspired by the recent post discussing self-hacking for the purpose of changing a relationship perspective to achieve a goal. Despite my feeling inspired, though, I also felt like life hacking was not something I could ever want to do even if I perceived benefits to doing it. It seems to me that the place where I would need to begin is hacking myself in order to cause myself to want to be hacked. But then I started contemplating whether this is a plausible thing to do.
In my own case, there are two concrete examples in mind. I am a graduate student working on applied math and probability theory in the field of machine vision. I was one of those bright-eyes, bushy-tailed dolts as an undergrad who just sort of floated to grad school believing that as long as I worked sufficiently hard, it was a logical conclusion that I would get a tenure-track faculty position at a desirable university. Even though I am a fellowship award winner and I am working with a well-known researcher at an Ivy League school, my experience in grad school (along with some noted articles) has forced me to re-examine a lot of my priorities. Tenure-track positions are just too difficult to achieve and achieving them is based on networking, politics, and whether the popularity of your research happens to have a peak at the same time that your productivity in that area also has a peak.
But the alternatives that I see are: join the consulting/business/startup world, become a programmer/analyst for a large software/IT/computer company, work for a government research lab. I worked for two years at MIT's Lincoln Laboratory as a radar analyst and signal processing algorithm developer prior to grad school. The main reason I left that job was because I (foolishly) thought that graduate school was where someone goes to specifically learn the higher-level knowledge and skills to do theoretical work that transcends the software development / data processing work that is so common. I'm more interested i |
891c8c0d-8edc-4964-af5a-adaf589611d0 | trentmkelly/LessWrong-43k | LessWrong | Toy alignment problem: Social Nework KPI design
Suppose you are not a superhuman AGI, but instead a startup in 2006 (destined to grow to a large global company by 2022).
(This setup hopefully makes it less complicated then a Open-Source Wish Project as we use a company of smart young people, not an omnipotent mean genie, to steer the story towards disaster)
Your product is a social network.
If you optimize for money, in 2022 you'll sell targeting ads via psychological profiles of your users, which is a bad outcome.
If you optimize for user engagement, in 2022 you'll be selling in-game virtual goods to addicted mobile gamers escaping from sad reality, which is a bad outcome.
If you optimize for the number of social connections, in 2022 you'll end up with ...realistically with no product anymore, as people will abandon a place where there are 1000 fake accounts of Selena Gomez, Santa Claus and Pope created just to collect the most number of edges with no real human connection (which I happen to know firsthand) - which is a bad outcome.
If you optimize for the number of photos, posts, or videos uploaded, you'll probably get swamped in low quality content.
If you optimize for the quality of the content measured by number of likes, you'll probably end up creating a culture of low esteem people seeking external validation nervously refreshing notification tab waiting for the slot machine to show them love. Which is a bad outcome.
If you optimize for quality as in amount of time spent consuming content you'll probably get bing watching and/or content marketing ranging from benign product placement to distorting reality in favor of sponsor's views.
If you optimize for quality measured by number of clicks or shares you'll end up with click-bites and, click-stealing iframes, "tell your friends!" popups and swamped notifications.
If you optimize for the amount of interactions between members, you'll probably get spam.
If you optimize for two-way interactions, then you'll probably get culture wars.
If you optimiz |
3c663c61-2cf3-45c2-a24a-ee68a3256228 | trentmkelly/LessWrong-43k | LessWrong | Experts' AI timelines are longer than you have been told?
This is a linkpost for How should we analyse survey forecasts of AI timelines? by Tom Adamczewski, which was published on 16 December 2024[1]. Below are some quotes from Tom's post, and a bet I would be happy to make with people whose AI timelines are much shorter than those of the median AI expert.
How should we analyse survey forecasts of AI timelines?
Read at AI Impacts
The Expert Survey on Progress in AI (ESPAI) is a large survey of AI researchers about the future of AI, conducted in 2016, 2022, and 2023. One main focus of the survey is the timing of progress in AI1.
[...]
This plot represents a summary of my best guesses as to how the ESPAI data should be analysed and presented.
["Experts were asked when it will be feasible to automate all tasks or occupations. The median expert thinks this is 20% likely by 2048, and 80% likely by 2103".]
[...]
I differ from previous authors in four main ways:
* Show distribution of responses. Previous summary plots showed a random subset of responses, rather than quantifying the range of opinion among experts. I show a shaded area representing the central 50% of individual-level CDFs (25th to 75th percentile). More
* Aggregate task and occupation questions. Previous analyses only showed task (HLMI) and occupation (FAOL) results separately, whereas I provide a single estimate combining both. By not providing a single headline result, previous approaches made summarization more difficult, and left room for selective interpretations. I find evidence that task automation (HLMI) numbers have been far more widely reported than occupation automation (FAOL). More
* Median aggregation. I’m quite uncertain as to which method is most appropriate in this context for aggregating the individual distributions into a single distribution. The arithmetic mean of probabilities, used by previous authors, is a reasonable option. I choose the median merely because it has the convenient property that we get the same result whether we t |
670b38ed-cb16-47de-aa6c-85a68cda6eaa | trentmkelly/LessWrong-43k | LessWrong | Why Science is slowing down, Universities and Maslow's hierarchy of needs
I don’t have a very high prior in regards to the correctness of Maslow's hierarchy of needs, but as far as general theories for understanding human needs go, I think it’s a pretty good.
There’s certainly people who seem to go strongly against it, to the point where they only require self actualization or where they are perfectly happy in life with only their physiological needs needs barely meet.
BUT
For all of the exceptions, most people, even exceptional people, seem to roughly live their life in accordance to it.
The gradual passage into adulting can be pretty daunting for people, even for well adjusted people with loving parents that can maintain a comfortable standard of living, for this reason. Gradually you are expected to find “safety” (i.e. financial stability, a house, a safe place to live) and “belonging” partially on your own.
Enter universities, the role of institutes of higher education in a well adjusted society should arguably be pushing the boundaries of human knowledge. Before they would also constitute a repository of information by maintaining huge libraries and people that could navigate them, but today we have the internet, .txt, .latex, .pdf, search engines and decent 10TB HDDs that sell for 100-200$ with tax, so I think it’s safe to say that role can now be played pretty cheaply.
So, universities now remain a places that educated and help people to navigate and enlarge the boundaries of human knowledge.
The recognition and most of all self satisfaction given by extending said boundaries is pretty great (or so I hear). So I think it’s safe to say that this role is one to be pursued by people that feel the needs on step 5 and possibly 4 of the pyramid.
BUT
In turn this is a process that requires a great deal of effort, dedication and intelligence, things that are hard to find and hard to direct for anyone that hasn’t fulfilled steps 1, 2 and 3 pretty well. Again, exceptions exist, but for basically all people it’s much easier to thin |
bfe251f1-2325-436b-a314-726823adc415 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems
1 Introduction
---------------
A world populated with intelligent and autonomous systems that simplify our lives is gradually becoming a reality. These systems are *autonomous* in the sense that they can devise a sequence of actions to achieve some given objectives or goals, without human intervention. Such systems are deeply integrated into our daily lives through various applications such as mobile health monitoring [[40](#bib.bib69 "Mobile devices and health")], intelligent tutoring [[13](#bib.bib37 "Tractable POMDP representations for intelligent tutoring systems")], self-driving cars [[49](#bib.bib40 "Building strong semi-autonomous systems")], and clinical decision making [[4](#bib.bib36 "Artificial intelligence framework for simulating clinical decision-making: a markov decision process approach")]. This broad deployment brings along new challenges and increased responsibility for designers of AI systems, particularly ensuring that these systems operate as expected when deployed in the real-world. Despite recent advances in artificial intelligence and machine learning, there are no ways to assure that systems will always “do the right thing” when operating in the open world [[24](#bib.bib52 "Identifying unknown unknowns in the open world: representations and policies for guided exploration")].
For example, consider an autonomous vehicle (AV) that was carefully designed and tested for safety aspects such as yielding to pedestrians and conforming to traffic rules. When deployed, the AV may not slow down when driving through puddles and splash water on nearby pedestrians. Another documented example of undesirable behavior in AVs is the vehicle swerving left and right multiple times to localize itself for active lane-keeping. During this process, the vehicle rarely prompted the driver to take control [[20](#bib.bib10 "Reality check: research, deadly crashes show need for caution on road to full autonomy")]. This behavior, especially on curvy and hilly roads, can startle the driver or cause panic.
Undesirable behaviors may occur even when performing relatively simple tasks. For example, robot vacuum cleaners are becoming increasingly popular and they have a simple task—to remove dirt from the floor. A robot vacuum cleaner in Florida ran over animal feces in the house and continued its cleaning cycle, smearing the mess around the house [[41](#bib.bib16 "Roomba creator responds to reports of ‘poopocalypse’: ’we see this a lot’")]. In an extreme case in South Korea, a robot vacuum cleaner locked into the hair of a woman who was sleeping on the floor, mistaking her hair for dust [[25](#bib.bib14 "South korean woman’s hair ’eaten’ by robot vacuum cleaner as she slept")].
A key factor affecting an agent’s performance is its knowledge of the environment in which it is situated. In these examples, the agent was performing its task, perhaps optimally with respect to the information provided to it, but there were serious negative side effects to the agent’s actions. In the AV example, driving fast through puddles is optimal when optimizing travel time. The side effects are due to the limited scope of the agent’s model, not accounting for the undesirability of splashing water on pedestrians. In practice, it is not feasible to anticipate all possible negative side effects and accurately encode them in the model at design time. Due to the practical limitations of data collection and model specification, agents operating in the open world often rely on incomplete knowledge of their target environment which may lead to unexpected, undesirable consequences. Addressing the potential undesirable behaviors of autonomous systems is critical to support long-term autonomy and ensure that a deployed AI system is reliable.
There have been numerous recent studies focused on the broad challenge of building safe and reliable AI systems [[1](#bib.bib47 "Concrete problems in AI safety"), [30](#bib.bib71 "Research priorities for robust and beneficial artificial intelligence"), [38](#bib.bib70 "Tutorial: safe and reliable machine learning"), [42](#bib.bib23 "Preventing undesirable behavior of intelligent machines")]. Here, we examine the particular problem of identifying and mitigating the impacts of undesirable side effects of an agent’s actions when operating in the open world. We do not consider system failure or negative side effects that result from intentional adversarial attack on the system [[5](#bib.bib73 "Wild patterns: ten years after the rise of adversarial machine learning"), [7](#bib.bib72 "Adversarial sensor attack on lidar-based perception in autonomous driving")].
>
> *Negative side effects (NSE) are undesired effects of an agent’s actions that occur in addition to the agent’s intended effects when operating in the open world.* (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems")).
>
>
>

Figure 1: Negative side effects of an agent’s behavior.
Negative side effects occur because the agent’s model and objective function focus on some aspects of the environment but its operation could impact additional aspects of the environment. The value alignment problem studies the unsafe behavior of an agent when its objective does not align with human values [[18](#bib.bib78 "Cooperative inverse reinforcement learning"), [31](#bib.bib45 "Provably beneficial artificial intelligence"), [32](#bib.bib4 "Human compatible: artificial intelligence and the problem of control")]. Misaligned systems are more likely to produce negative side effects. However, the occurrence of negative side effects does not necessarily indicate that there is a value alignment problem. Negative side effects can occur even in settings where the agent optimizes legitimate objectives that align with the user’s goals, due to incomplete knowledge and distributional shift. For example, while driving in Boston, AVs that are programmed to not run into obstacles were stopped by the local breed of unflappable seagulls standing on the street [[8](#bib.bib12 "All the things that still baffle self-driving cars, starting with seagulls")]. Not running into obstacles is well-aligned with the users’ intentions and objectives, but there are side effects because the agent lacks knowledge that it can edge to startle the birds and then continue driving. In fact, such knowledge was later added to the system to resolve the problem. In addition, some systems may cause unavoidable negative side effects that cannot be mitigated. While the side effects may be undesirable, the user may accept the system as is, once they learn about it and recognize that the side effects are unavoidable. In such cases, we cannot say that there is a value alignment problem, even though the negative side effects may occur.
Certainly, some negative side effects could be anticipated or detected during system development and appropriate mechanisms to mitigate their impacts could be implemented prior to deployment. This article focuses on negative side effects that are discovered when the system is deployed, due to a variety of factors such as unanticipated domain characteristics, unanticipated consequences of system or software upgrade, or cultural differences among the target user and development team. Design decisions that may be innocuous during initial testing may have a significant impact when a system is widely deployed. For example, the issue of a Roomba locking into the hair of a person lying on the floor emerged only after the system was deployed in Asia. Overcoming negative side effects is an emerging area that is attracting increased attention within the AI community [[1](#bib.bib47 "Concrete problems in AI safety"), [17](#bib.bib49 "Inverse reward design"), [19](#bib.bib32 "Avoiding unintended AI behaviors"), [22](#bib.bib34 "Penalizing side effects using stepwise relative reachability"), [31](#bib.bib45 "Provably beneficial artificial intelligence"), [44](#bib.bib24 "Conservative agency via attainable utility preservation"), [33](#bib.bib42 "A multi-objective approach to mitigate negative side effects"), [39](#bib.bib31 "Preferences implicit in the state of the world"), [46](#bib.bib48 "Minimax-regret querying on side effects for safe optimality in factored markov decision processes")].
The severity of negative side effects may range from mild to safety-critical failures.
Often, the discussions around the risk of encountering negative side effects have highlighted catastrophic events. While these discussions are critical and essential, AI systems in general are carefully designed and tested for such failures before deployment. With the increasing growth in the capabilities and deployment of AI systems, it is equally important to address the negative side effects that are not catastrophic, but have significant impacts. Such side effects occur more frequently but are often overlooked, particularly when the only remedy available is to remove the product and develop a new version that can avoid the undesired behavior. Hence, providing end users the tools to identify and mitigate the impacts of negative side effects
is critical in shaping how users view, interact, collaborate, and trust AI systems [[34](#bib.bib3 "Understanding user attitudes towards negative side effects of AI systems")].
The rest of this article identifies key characteristics of negative side effects, highlights the challenges in overcoming negative side effects, and discusses the recent research progress in this area. To promote a better understanding of the prevalence of negative side effects and to provide common test cases for the research community, we have created a public repository that allows AI researchers to report new cases. We conclude the article with a discussion of open questions to encourage future research in this area.
We introduce a taxonomy of negative side effects, outlined in Table [1](#S1.T1 "Table 1 ‣ 1 Introduction ‣ Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems"). Understanding the characteristics of negative side effects (NSE) helps design better solution approaches to detect and mitigate their impacts in deployed systems.
| Property | Property Values |
| --- | --- |
| Severity | Ranges from mild to safety-critical |
| Reversibility | Reversible or irreversible |
| Avoidability | Avoidable or unavoidable |
| Frequency | Common or rare |
| Stochasticity | Deterministic or probabilistic |
| Observability | Full, partial, or unobserved |
| Exclusivity | Prevent task completion or not |
Table 1: Taxonomy of negative side effects.
Severity: The severity of negative side effects ranges from mild side effects that can be largely ignored to safety-critical failures that require suspension of the system deployment. Safety-critical side effects are typically addressed by redesigning the model and hence require extensive evaluation before redeployment. An example of a safety-critical NSE is an AV failing to detect a construction worker’s hand gestures [[9](#bib.bib74 "A survey of legal issues arising from the deployment of autonomous and connected vehicles")].
We conjecture that many negative side effects lie in the middle with significant impacts that require attention, but not sufficiently critical to suspend the service. An autonomous vehicle that does not slow down when going through puddles can cause significant impacts, but those are unlikely to be considered sufficiently critical to roll-back its deployment, particularly if mechanisms are provided to mitigate the negative impacts. Addressing such NSE without suspension of service requires agent adaptation and online planning.
Reversibility: Side effects are reversible if the impact can be reversed or negated, either by the agent causing it or via external intervention. For example, breaking a vase is an irreversible side effect, regardless of the agent’s skills [[1](#bib.bib47 "Concrete problems in AI safety")]. Side effects such as leaving marks on a wall can be fixed by repainting it, but the agent may require external assistance to achieve that.
Avoidability: In some problems, it may be impossible to avoid the negative side effects during the course of the agent’s operation to complete its assigned task. This introduces a trade-off between performing agent’s assigned task and avoiding the side effects. For example, the side effects of driving through puddles are unavoidable if all roads leading to the destination have puddles. Addressing unavoidable NSE requires a principled approach to balance the trade-off between avoiding side effects and optimizing the completion of the assigned task.
Frequency: The frequency of occurrence of negative side effects depends on the environmental conditions and the action plan. Certain NSE may occur rarely, considering all use cases, but may occur frequently for a small subset of cases. A robot pushing a box over a rug may dirty it as a negative side effect. This is an example of a frequently occurring negative side effect when the domain of operation is largely covered with a rug. The frequency of occurrence could impact the approach to identify negative side effects and the corresponding mitigation approach.
Stochasticity: The occurrence of negative side effects may be deterministic or probabilistic. Deterministic NSE always occur when some action preconditions arise in the open world. Side effects are probabilistic when their occurrence is not certain even when the right preconditions arise. For example, there may be a small probability that a robot may accidentally slide and scratch the wall while pushing a box, but that undesired effect may happen only 20% of the times the robot slips.
Observability: The agent’s observability of the actual NSE or the conditions that trigger them are generally determined by the agent’s state representation and sensory input. The side effects may be fully observable, partially observable, or even unobserved by the agent. Observing a side effect is different from identifying or recognizing the impact as a side effect. For example, the agent may observe the scratch it made on the wall but may not be aware that it is undesirable, and as a result may not try to avoid it. Observability is a critical factor when learning to avoid NSE. When an external authority provides feedback to the agent, it may be sufficient for the agent to observe the conditions that trigger the negative side effect. However, when an agent may need to identify NSE on its own, it needs more complex general knowledge about the open world.
Exclusivity: Negative side effects may prevent the agent from completing its assigned task. This category is relatively easier to identify. Often, however, the side effects negatively impact the environment without preventing the agent from completing its assigned task. Such side effects are typically difficult to identify at design time. Much of the current research on avoiding negative side effects focuses on side effects that do not prevent the agent from completing its current primary task.
2 Challenges in Avoiding Negative Side Effects
-----------------------------------------------
The challenges in avoiding negative side effects broadly stem from the difficulty in obtaining knowledge about NSE a priori, gathering user preferences to understand their tolerance for side effects, and balancing the potential trade-off between completing the task and avoiding the side effects.
### Model imprecision
Agents designed to operate in the open world are either trained in a simulator, or operate based on models created by a designer or generated automatically using data. Regardless of how much effort goes into the system design and how much data is available for training and testing, it is generally infeasible to obtain a perfect description of open-world environments. Practical challenges in model specification, such as the qualification and ramification problems, and computational complexity consideration often cause the agent to reason based on models that do not represent all the relevant details in the open world [[10](#bib.bib55 "Steps toward robust artificial intelligence")]. Simulators also suffer from this drawback, as they are also built by designers, resulting in mismatches between a simulator and the actual environment [[29](#bib.bib35 "Overcoming blind spots in the real world: leveraging complementary abilities for joint execution")]. As a result of reasoning with incomplete information, agents may not consistently behave as intended, leading to unexpected and costly errors, or may completely fail in complex settings.
There are three key reasons why the agent may not have prior knowledge about the negative side effects of its actions. First, identifying NSE *a priori* is inherently challenging. As a result, this information is often lacking in the agent’s model. Second, many AI systems are deployed in a variety of settings, which may be different from the environment used in training and testing of the agent. This *distributional shift* may cause NSE and is difficult to assess during the design process. Third, negative side effects in many settings arise due to *user preference* violation. It is generally difficult to precisely learn or encode human preferences and account for individual or cultural differences.
Techniques such as online model update and policy repair to minimize side effects, and building more realistic simulators [[11](#bib.bib27 "CARLA: an open urban driving simulator")] are some of the promising directions to handle negative side effects due to model imprecision.
### Feedback collection
An agent that is unaware of the side effects of its actions can gather this information
through feedback from users or through autonomous exploration and model revisions. Though learning from feedback produces good results in many problems [[24](#bib.bib52 "Identifying unknown unknowns in the open world: representations and policies for guided exploration"), [29](#bib.bib35 "Overcoming blind spots in the real world: leveraging complementary abilities for joint execution"), [33](#bib.bib42 "A multi-objective approach to mitigate negative side effects"), [46](#bib.bib48 "Minimax-regret querying on side effects for safe optimality in factored markov decision processes"), [47](#bib.bib26 "Querying to find a safe policy under uncertain safety constraints in markov decision processes"), [3](#bib.bib77 "Learning to optimize autonomy in competence-aware systems")], there are three main challenges in employing this approach in real-world systems. First, the learning process may not be *sample efficient* or may require feedback in a certain format to be sample efficient, such as correcting the agent policy by providing alternate actions for execution. Feedback collection in general is an expensive process, particularly when the feedback format requires constant human oversight or imposes significant cognitive overload on the user. Second, feedback may be *biased* or *delayed* or both, which in turn affects the agent’s learning process. Finally, it is generally assumed that the agent uses *human-interpretable representations* for querying and feedback collection, but there may be mismatches between the models of the agent and human. There are some recent efforts towards addressing the problem of sample efficiency in learning [[6](#bib.bib25 "Sample-efficient reinforcement learning with stochastic ensemble value expansion"), [45](#bib.bib22 "Sample efficient actor-critic with experience replay")] and investigating the impact of bias in feedback for agent learning [[28](#bib.bib56 "Discovering blind spots in reinforcement learning"), [33](#bib.bib42 "A multi-objective approach to mitigate negative side effects")]. Identifying and evaluating human-interpretable state-action representations for querying humans is largely an open problem.
### Managing tradeoffs
When negative side effects are unavoidable and interfere with the performance of the agent’s assigned task, there is a trade-off between completing the task efficiently and avoiding the NSE. In an extreme case, it may be impossible for the agent to achieve its goal without creating negative side effects. How far should an agent deviate from its optimal plan in order to minimize the impacts of negative side effects? Balancing this trade-off requires user feedback since it depends on their tolerance for negative side effects. This can be challenging when the agent’s objective and the side effects are measured in different units.
3 Approaches to Mitigate Negative Side Effects
-----------------------------------------------
This section reviews the emerging approaches to mitigating the impacts of negative side effects. Table [2](#S3.T2 "Table 2 ‣ 3 Approaches to Mitigate Negative Side Effects ‣ Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems") summarizes the characteristics of side effects handled by each one of the methods we mention.
| | Severity | Reversibility | Avoidability | Frequency | Stochasticity | Observability | Exclusivity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [Hadfield-Mennel et al., 2017] | - | irreversible | - | frequent | deterministic | - | - |
| [Zhang et al., 2018] | - | irreversible | avoidable | - | deterministic | observable | non-interfering |
| [Krakovna et al.,2019] | - | - | - | - | - | observable | non-interfering |
| [Shah et al., 2019] | - | irreversible | - | frequent | deterministic | observable | non-interfering |
| [Zhang et al., 2020] | - | irreversible | - | - | deterministic | observable | - |
| [Turner et al., 2020a] | - | irreversible | avoidable | frequent | deterministic | - | non-interfering |
| [Saisubramanian et al., 2020] | not safety-critical | irreversible | - | frequent | deterministic | - | non-interfering |
| [Turner et al., 2020b] | - | - | - | frequent | deterministic | - | - |
| [Krakovna et al., 2020] | not safety-critical | - | - | - | - | observable | - |
| [Saisubramanian et al., 2021] | - | - | - | frequent | deterministic | - | non-interfering |
Table 2: Summary of the characteristics of the surveyed approaches to mitigate negative side effects. “-” indicates the approach is indifferent to the values of that property. Although some existing works do not explicitly refer to the severity of the side effects they can effectively handle, in general these approaches target side effects that are undesirable and significant, but not safety-critical.
### Model and policy update
The occurrence of negative side effects in a system depends on the agent’s trajectory, which is determined by its policy derived using its reasoning model. Hence, a natural approach to mitigate NSE is to update the model such that the agent’s policy avoids NSE as much as possible. When the side effects are safety-critical, the model update may include significant changes such as redesign of the reward function. \citeauthorhadfield2017inverse (\citeyearhadfield2017inverse) address such a setting where the negative side effects occur due to unintentional misspecification of rewards by the designer. It is assumed that the designer prescribes a proxy reward function and the agent is assumed to be *aware* of a possible reward misspecification. The proxy reward function is treated as a set of demonstrations, and the agent learns the intended reward function using approximate solutions for inference.
As acknowledged by the authors, this approach is not scalable to large, complex settings.
Redesigning the reward function may degrade the agent’s performance with respect to its assigned task or introduce new risks, and hence requires exhaustive evaluation before redeployment. This could be very expensive and likely require suspension of operation until the newly derived policies could be deemed safe for autonomous operation. In problem domains where the side effects are undesirable but not safety-critical, the impact can be minimized by augmenting the agent’s model with a penalty function corresponding to NSE. This exploits the reliability of the existing model with respect to the agent’s assigned task, while allowing a deployed agent to adjust its behavior to minimize the side effects.
In related work [[33](#bib.bib42 "A multi-objective approach to mitigate negative side effects")], we describe a multi-objective formulation of this problem with a lexicographic ordering of objectives that prioritizes optimizing the agent’s assigned task (primary objective) over minimizing NSE (secondary objective). A slack value to the primary objective determines the maximum allowed deviation from the optimal expected reward of the primary objective so as to minimize side effects. This work considers a setting in which the agent has *no prior knowledge* about the side effects of its actions. Information about NSE is gathered using feedback, which is then encoded by a reward function. The agent may not be able to observe the NSE except for the penalty, which is proportional to the severity of the NSE provided by the feedback mechanism. The model is updated with this learned reward function and an updated policy is computed that avoids negative side effects as much as possible, within the allowed slack. This formulation can hence handle both avoidable and unavoidable NSE. However this approach is not suitable for safety-critical consequences since it prioritizes optimizing the completion of the agent’s assigned task.
Both these approaches address the side effects associated with the execution of an action, independent of its outcome.
### Constrained optimization
Negative side effects occur when an agent alters features in the environment that the user does not expect or desire to be changed. This can be addressed by constraining the features that can be altered by the agent during its operation. In [[46](#bib.bib48 "Minimax-regret querying on side effects for safe optimality in factored markov decision processes")], the authors consider a setting in which the uncertainty over the desirability of altering a feature is included in the agent’s model and considers deterministic side effects that are irreversible, but avoidable. The agent first computes a policy assuming all the uncertain features are “locked” for alteration. If a policy exists, then the agent executes it. If no policy exists, the agent queries the human to determine which features can be altered and recomputes a policy. A regret minimization approach is used to select the top-k features for querying. Recently, the authors extended this approach to identify if NSE are unavoidable by casting it as a set-cover problem [[47](#bib.bib26 "Querying to find a safe policy under uncertain safety constraints in markov decision processes")]. If the side effects are unavoidable, the agent ceases operation. Therefore, these approaches are not suitable for settings where the agent is expected to alleviate (unavoidable) NSE to the extent possible, while completing its assigned task.
### Minimizing deviations from a baseline
Another class of solution methods defines a penalty function for negative side effects as a measure of deviation from a baseline state, based on the features altered. The deviation measure reflects the degree of disruption to the environment caused by the agent’s actions. The agent is expected to minimize the disruption while pursuing its goal, thereby mitigating NSE. In [[22](#bib.bib34 "Penalizing side effects using stepwise relative reachability")], the authors present a multi-objective formulation with scalarization, with the deviation from baseline state measured using reachability-based metrics. The agent’s sensitivity to NSE can be adjusted by tuning the scalarization parameters. The relative reachability approach [[22](#bib.bib34 "Penalizing side effects using stepwise relative reachability")] is not straightforward to apply in settings more complex than grid-worlds, as acknowledged by the authors. Furthermore, the resulting performance is sensitive to the metric used to calculate deviations, particularly the choice of baseline state.
Different candidates for baseline states have been proposed, such as start state and inaction in a state [[22](#bib.bib34 "Penalizing side effects using stepwise relative reachability")]. These baselines do not consider human preferences and may penalize all side effects. To overcome this, \citeauthorshah2019preferences (\citeyearshah2019preferences) present a Maximum Causal Entropy approach to infer human preferences from the start state. They assume that an environment is typically optimized for human preferences and the agent can mitigate NSE by inferring human preferences before it starts acting. This approach, however, requires knowledge about the dynamics of the environment to determine if the environment has been optimized for human preferences or not.
### Human-agent collaboration
Approaches such as policy update, constrained optimization, and minimizing deviations from a baseline rely heavily on the fidelity of agent’s state representation. In many cases, however, the agent’s state representation may only include the features relevant to its assigned task. This limited representation can impact the agent’s ability to learn and mitigate NSE. In recent work [[35](#bib.bib7 "Mitigating negative side effects via environment shaping")], we describe a human-agent team approach that mitigates NSE via environment shaping. Environment shaping is the process of applying simple modifications to the current environment to make it more agent-friendly and minimize the occurrence of side effects. The agent optimizes its assigned task, unaware of the side effects of its actions. The human mitigates the side effects of the agent through simple reconfigurations of the environment. This approach is applicable to settings where the user can assist the agent actively, beyond providing feedback, and there are one or more agents with limited state representation. This approach is not suitable for environments that are not configurable by the user or when the agent’s model and policy are frequently updated.
### Accounting for auxiliary objectives and future tasks
Attainable utility [[44](#bib.bib24 "Conservative agency via attainable utility preservation"), [43](#bib.bib6 "Avoiding side effects in complex environments")] measures the impact of side effects as the shifts in the agent’s ability to optimize for auxiliary objectives, generalizing the relative reachability measure.
Often, the occurrence of NSE may not impact the agent’s ability to complete its current assigned task, but may affect future task completion. To minimize the interference with future tasks, \citeauthorkrakovna2020avoiding (\citeyearkrakovna2020avoiding) present an approach that provides the agent an auxiliary reward for preserving agent ability to perform future tasks in the environment. These approaches assume that the agent’s state representation is sufficient to calculate the deviations and are therefore not directly applicable to settings with mismatches between the agent’s state representation and the environment.

Figure 2: A public repository of negative side effects
4 A Repository of Negative Side Effects
----------------------------------------
Since the problem of negative side effects is an emerging topic, current research relies on proof-of-concept toy domains for performance evaluation. Moving forward, understanding the occurrence of negative side effects in deployed AI systems is necessary for a realistic formulation of the problem and to design effective solution approaches to address it. To that end, we have created a repository of negative side effects [[37](#bib.bib75 "Negative Side Effects Repository")]. This publicly available repository is shown in Figure [2](#S3.F2 "Figure 2 ‣ Accounting for auxiliary objectives and future tasks ‣ 3 Approaches to Mitigate Negative Side Effects ‣ Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems"). It contains real-world instances from scientific reports or news articles, identified by us. For each instance, details such as problem setting in which negative side effects were observed, a description of the side effects, location and date of incident, are provided. We believe this repository will promote a deeper understanding of the problem, provide insights about which assumptions are valid, and facilitate moving beyond simple grid-world type domains as common test cases to evaluate techniques.
We invite the readers to contribute to this repository by reporting cases of negative side effects of deployed AI systems, based on user experiences, published papers, or media reports, using an online form we provide [[36](#bib.bib76 "Negative Side Effects Form")]. Each submission will be reviewed by our team before adding it to the repository.
5 Open Questions and Future Work
---------------------------------
Some key open questions and research directions that can further the understanding of negative side effects and development of strategies to mitigate their impacts are discussed below.
Negative side effects in multi-agent settings: The existing works have studied the negative side effects of a single agent’s actions on the environment. In collaborative multi-agent systems, agents work together to optimize performance and may have complementary skills. For example, the negative side effects produced by an agent may be reversible by another agent. *How can we leverage collaborative multi-agent settings to effectively mitigate negative side effects?* One solution approach is to devise a joint policy to mitigate the negative side effects, in addition to optimizing the utility of the assigned task. The existing rich body of work on cooperative multi-agent systems examines how the intended effects of each agent’s actions may affect the other agents when devising a joint policy that maximizes the performance [[27](#bib.bib61 "Multiagent teamwork: analyzing the optimality and complexity of key theories and models"), [16](#bib.bib60 "Optimizing information exchange in cooperative multi-agent systems"), [48](#bib.bib64 "Meta-level coordination for solving negotiation chains in semi-cooperative multi-agent systems"), [29](#bib.bib35 "Overcoming blind spots in the real world: leveraging complementary abilities for joint execution")]. Extending such frameworks to handle the side effects problem requires knowledge about the negative side effects of each agent’s actions and how it affects the behavior and rewards of other agents in the environment.
External feedback may indicate the occurrence of NSE as a result of a joint action of the agents. Effectively mitigating the side effects requires mechanism design for precise identification of the agent whose actions produce these undesirable effects, based on the feedback provided for joint actions.
Addressing side effects in partially observable settings:
In partially observable settings, an agent operates based on a belief distribution over the states. The problem is further complicated when the agent has no prior knowledge of the side effects, which may be partially observable or unobserved. *How can an agent effectively learn to avoid negative side effects in partially observable settings?* Due to partial observability, the agent maps the external feedback indicating the occurrence of negative side effects to a belief distribution and not an exact state. As a result, a belief distribution may be associated with multiple conflicting feedback. Depending on how the feedback signals are aggregated, different types of agent behavior emerge with varying sensitivity to negative side effects.
Combination of side effects: Many AI systems, such as autonomous vehicles, are comprised of multiple entities that function together to achieve a goal. Each of these entities may contribute to different forms of negative side effects. It is likely that multiple forms of negative side effects, with varying impacts and severity, co-exist and require different solution techniques to mitigate the overall impact. *How to ensure that approaches designed to eliminate one form of side effect do not introduce new risks?* This problem is related to avoiding negative side effects in collaborative multi-agent settings since each component can be treated as an agent collaborating with other agents. Reasoning about multiple forms of risks together is a cornerstone in achieving safe AI systems. One approach is to evaluate the effects of an impact regularizer on other modules in the system that interact with the module of interest. This requires broad background knowledge about the architecture and functionality of each component, which may not be available in systems with black-box components.
Skill discovery to mitigate negative side effects:
Skill discovery [[12](#bib.bib63 "Diversity is all you need: learning skills without a reward function"), [21](#bib.bib62 "Skill discovery in continuous reinforcement learning domains using skill chaining")] in reinforcement learning allows an agent to discover useful new skills autonomously. High-level skills or *options* are temporally extended courses of actions that generalize primitive actions of an agent. These closed-loop policies speed up planning and learning in complex environment and are generally used in hierarchical methods for reasoning. Exploring the feasibility of *skill discovery for avoiding negative side effects* is an interesting direction that could accelerate agent behavior adaptation, especially to avoid side effects during agent exploration. For example, if the agent learns to push a box without scratching the walls or dirtying the rug, this option is useful in a variety of related settings and enables faster behavior adaptation.
Beyond Safety and Control:
This article has discussed the undesirable side effects in the context of safety and control in embodied autonomous systems. Investigating negative side effects of decision-support systems and recommender systems is an important direction for the future. Negative side effects in these contexts may not be immediate, such as the effect on climate change, human health, or cognitive ability caused by the system’s decisions.
AI systems may also suffer from other factors that affect their reliability, such as biases and privacy concerns. Amplifying underlying biases in a system or increased vulnerability to attacks may occur when the system optimizes incorrect or incompletely specified objectives, which can be treated as serious side effects that require entire model redesign. There are growing efforts in the machine learning community to address many forms of biases and to improve the security for safeguarding against adversarial attacks [[23](#bib.bib67 "Adversarial examples in the physical world"), [2](#bib.bib65 "The problem with bias: allocative versus representational harms in machine learning"), [15](#bib.bib68 "Adversarial policies: attacking deep reinforcement learning"), [26](#bib.bib66 "What you see is what you get? the impact of representation criteria on human bias in hiring"), [14](#bib.bib5 "Learning to generate fair clusters from demonstrations")].
6 Conclusion
-------------
This article examines the concept of negative side effects of AI systems and offers a comprehensive overview of recent research efforts to address the challenges presented by side effects. In doing so, we aim to advance the general understanding of this nascent but rapidly evolving area. We present a taxonomy of negative side effects, discuss the key challenges in avoiding side effects, and summarize the current literature on this topic.
This article also presents potential future research directions that are aimed at deepening the understanding of the problem. While some of these issues can be addressed using problem-specific or ad-hoc solutions, developing general techniques to identify and mitigate negative side effects will facilitate the design and deployment of more robust and trustworthy AI systems.
Acknowledgments
---------------
This work was supported in part by the Semiconductor Research Corporation under grant #2906.001. |
b9e54761-4053-4a6f-a581-7d3089f53471 | trentmkelly/LessWrong-43k | LessWrong | Neural nets designing neural nets
|
e5b1502f-e613-4462-8189-31f3a9c0bcbf | trentmkelly/LessWrong-43k | LessWrong | How do you solicit feedback effectively?
It looks like I've been overly paranoid about the helpfulness of things I've done. A friend said that they could fill out feedback forms after conversations and while that wouldn't be perfect, I thought if I use it across multiple people it would likely be helpful.
I'm wondering if:
-anyone has already tried soliciting feedback from people about conversations already (and if so, to share your experience)
-has ideas about how I could do this not terribly |
97c32e23-8871-49a8-8950-e37f9015b0cd | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Examples of AI's behaving badly
Some past examples to motivate thought on how AI's could misbehave:
An [algorithm pauses the game](http://techcrunch.com/2013/04/14/nes-robot/) to never lose at Tetris.
In "[Learning to Drive a Bicycle using Reinforcement Learning and Shaping](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.3038&rep=rep1&type=pdf)", Randlov and Alstrom, describes a system that learns to ride a simulated bicycle to a particular location. To speed up learning, they provided positive rewards whenever the agent made progress towards the goal. The agent learned to ride in tiny circles near the start state because no penalty was incurred from riding away from the goal.
A similar problem occurred with a soccer-playing robot being trained by David Andre and Astro Teller ([personal communication to Stuart Russell](http://luthuli.cs.uiuc.edu/~daf/courses/games/AIpapers/ng99policy.pdf)). Because possession in soccer is important, they provided a reward for touching the ball. The agent learned a policy whereby it remained next to the ball and “vibrated,” touching the ball as frequently as possible.
Algorithms [claiming credit in Eurisko](https://aliciapatterson.org/george-johnson/eurisko-the-computer-with-a-mind-of-its-own/): Sometimes a "mutant" heuristic appears that does little more than continually cause itself to be triggered, creating within the program an infinite loop. During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that had made a particularly valuable find. As it turned out the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots.
[There was something else going on, though](https://www.digitalspy.com/videogames/a796635/elite-dangerous-ai-super-weapons-bug/). The AI was crafting super weapons that the designers had never intended. Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces. "It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities," according to a post written by Frontier community manager Zac Antonaci. "Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers weaponry."
Programs classifying gender based on photos of irises may have been [artificially effective due to mascara in the photos](https://arxiv.org/pdf/1702.01304v1.pdf). |
8a7ace74-fed8-4f1b-8a53-b11ab8809d69 | trentmkelly/LessWrong-43k | LessWrong | Moving towards a question-based planning framework, instead of task lists
Each brick = one uncertainty
TL;DR
Maybe instead of writing task lists, reframe macro objectives in terms of nested questions until you reach 'root' testable experiments.
Putting this into practice
I built a tool for myself, ‘Thought-Tree’ here, to try and systematise what I wrote in this post. Maybe it works out for you as well?
Essays I am Thinking About, and that Inspired this Post
* Richard Hamming, You and Your Research
* 80000 hours, How to investigate your career uncertainties and make a judgement call
* Ben Kuhn, Impact, agency, and taste
* Gwern, Information Organising
* Paul Graham, How to Do Great Work
* Paul Graham, What to Do
Related essays by me
* Casual Physics Enjoyer, The Pain of Finding the Truth
* Casual Physics Enjoyer, You're A Slow Thinker - Now What?
* Casual Physics Enjoyer, On Good Writing
What I am planning to read
* Le Cunff, Anne-Laure, Tiny Experiments - How to Live Freely in a Goal-Obsessed World
Setting 'questions' instead of goals?
I want to contribute to the world in the best way I can.
I really think that this is super hard. And in general, I think phrases like 'I want to do X' or 'I want to be Y' actually might be pretty ineffective ways of phrasing targets because it doesn't give you any hints on how to do it. My evidence for 'not knowing how to reach really ambitious targets' is a weak reductio ad absurdum one - implicitly, if I knew how to do it, I probably would've already got there (or busy working away at what needs to be done, and not writing an essay on what I don't know on Casual Physics Enjoyer ;)).
But since I am writing this, I (and maybe you) are not in that position, which implies that there's stuff that we don't know. And in my case of 'contributing to the world in the best way I can), there is a shit ton of stuff I don't know about
* The state space of possible actions is huge.
* I don't know what the best thing to do is.
* I don't know what the 'best' thing even means, and I'm also unc |
5c1846b2-1e81-4f3a-adb8-1e836d4d40dc | trentmkelly/LessWrong-43k | LessWrong | IntelligenceExplosion.com graphic redesign
LW user Lightwave offered to redesign IntelligenceExplosion.com, which at that point looked almost identical to anthropic-principle.com because I don't do web design. Within 9 days of original email contact, Lightwave completed the project and I uploaded the new files. The redesign is a big improvement, so please go to this comment by Lightwave and give him/her lots of karma! |
928485c1-b55d-4e0a-8c41-4ddc7b8ee164 | StampyAI/alignment-research-dataset/special_docs | Other | Governing Boring Apocalypses: A new typology of existential vulnerabilities and exposures for existential risk research
Introduction the definition and framings of existential risk In recent years, a growing body of scholarship has argued that a new class of risks bears closer study, for their potential extreme impact on the survival of humanity (Bostrom, 2002 (Bostrom, , 2013 Bostrom & Cirkovic, 2011; Matheny, 2007; Rees, 2004) . Prior research has identified a range of such human extinction risks (Bostrom & Cirkovic, 2008; Haggstrom, 2016; Pamlin & Armstrong, 2015) , both natural and manmade, including risks from supervolcano eruption, asteroid impact, global warming, nuclear war, as well as more speculative risks from emerging technologies such as biotechnology, high-energy physics experiment disasters, or misaligned artificial intelligence. (Asimov, 1981; Baum & Barrett, 2016; Bostrom, 2014; Ord, Hillerbrand, & Sandberg, 2010; Sagan, 1983; Smil, 2005; Tegmark & Bostrom, 2005; Yudkowsky, 2008a) . While it is encouraging to see greater attention for a critical topic that has long remained understudied, it is relevant to ask how the framing of the field's basic concepts shapes both which problems it identifies and prioritizes, as well as which policy approaches it considers and engages. In his seminal paper, Bostrom defined an existential risk as '[o] ne where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential' (Bostrom, 2002) . Thus in Bostrom's view, existential risks are characterised both by their scope (pan-generational) and their intensity (crushing): the size of the group of people who are at risk 1 and how badly each individual within that group is affected, respectively (Bostrom, 2002) . Much prior research on existential risks has thus deployed criteria and methodology which have identified discrete and independent challenges of sufficient severity and pervasiveness to bring about the 'adverse outcome' in a direct causal manner. In this reading, existential risks are an extreme offshoot of global catastrophic risks-disasters which "might have the potential to inflict serious damage to human well-being on a global scale" (Bostrom, 2013; Bostrom & Cirkovic, 2011, p. 2 ), but which fall short of permanent collapse. While we are not necessarily averse to the Bostromian definition of 'adverse outcomes'-a definition which indeed seems to characterize the space of eventual outcomes to be avoided-we take more issue with the limited range of pathways towards this dreaded outcome-space, which much of the literature has focused on exploring. Specifically, as noted by others in the community, much prior research "has focused mainly on tracing a causal pathway from a catastrophic event to global catastrophic loss of life" (Avin et al., 2018, p. 1) . As such there remains an event-focus, in the sense that only discrete events that are causally connected to the demise of humanity within a relatively short time-frame qualify as an existential risk (rather than a 'merely' globally catastrophic one, or a background risk).
Existential risk (re)framings as crucial consideration for law & governance approaches Distinguishing existential risks as a uniquely threatening outlier along the spectrum of global risks, however, is arguably an unnecessarily narrow framing of the field of study. Indeed, a high-profile 'one-hit-KO' existential risk such as a global nuclear war or a pandemic may constitute only one avenue towards that 'adverse outcome', and concentrating predominately upon (ways to intervene in) its origin and direct pathway, risks overshadowing other potential paths or disaster interaction effects 2 that functionally converge towards that same disastrous outcome, even if only indirectly or over longer timescales, with a potentially higher probability. Indeed, as recently noted by scholars in the field, a full mapping of scenarios that lead to catastrophic outcomes "requires exploring the interplay between many interacting critical systems and threats, beyond the narrow study of individual scenarios that are typically addressed by single disciplines" (Avin et al., 2018, p. 2) .The precise framing of 'existential risks' is therefore a crucial consideration, informing ethical, strategic, and epistemological (cf. academic) priorities in facing 'adverse outcomes'. This is particularly the case in the context of studying how global political dynamics may interact with certain existential risks, and in formulating meaningfully effective policies and governance approaches to such risks. Of course, this is not to say that the field of existential risk studies has not sought to involve and engage with policy and governance approaches and solutions. Indeed, to its credit, research in the field of existential risks has actively sought to engage with these issues-given that, as Bostrom himself observes (2013:27) , global cooperation is critical to mitigating a wide range of existential risks. Likewise, researchers within the 'AI safety' community are beginning to highlight fields such as policy and psychology, as under-represented but potentially promising approaches to addressing risks arising from AI (Brundage, 2017; Sotala, 2017b) . Accordingly, there has been research into the interaction effects between technologies and politics-such as the possibility that arms races might increase the risk that untested, powerful AI systems are deployed rashly or prematurely (Armstrong, Bostrom, & Shulman, 2013; Shulman, 2009) . Other work has drawn on cognitive psychology, to study how people might structurally (mis)judge the probability of risks (Yudkowsky, 2008b) . Likewise, work exploring policy-and governance approaches to mitigating existential risks has explored policy approaches that include insurance arrangements for large catastrophes (Taylor, 2008) ; technology taxes and subsidies (Posner, 2008) ; and work drawing on social (and organizational) psychology to assess ways to motivate AI researchers to choose beneficial AI designs (Baum, 2016) . Yet, other work has examined the cost-effectiveness of biosecurity interventions (Millett & 1 Including future generations. Broadly speaking, many scholars in this space share an emphasis on the ethical value of far-future humans (Beckstead, 2013) , with some arguing for the absolute prioritization of reducing human extinction risks (rather than risks that destroy civilization but would leave some humans alive) on the grounds that these risks would destroy all future generations (Parfit, 1984, pp. 453−454; Sagan, 1983; Ng, 1991; Matheny, 2007) . Bostrom himself appears to favour the 'Maxipok' strategy-"Maximise the probability of an 'OK outcome', where an OK outcome is any outcome that avoids existential catastrophe" (Bostrom, 2013, p. 19 )-though he takes a slightly broader perspective of mitigating not just 'hard' extinction risks but also 'global catastrophic risks' which could inflict significant, lasting long-term harm to the trajectory of human civilization, and which could thereby end up inflicting other categories of existential risks (including 'permanent stagnation'; 'flawed realisation', or 'subsequent ruination' (Bostrom, 2013, p. 19 ). 2 Though for some work that examines 'interaction effects' between different global catastrophes, see (Baum & Barrett, 2017; Baum et al., 2013) . H.-Y. Liu et al. Futures 102 (2018) 6-19 Snyder-Beattie, 2017); pricing externalities to balance public the risks and benefits of scientific research generally ; and proposing a general international regulatory regime to govern global catastrophic and existential risks from emerging technologies (Wilson, 2013) . At present, a majority of existential risk research centres 3 have articulated law and policy research as areas of interest, and scholars in this space have begun to translate such work into concrete proposed policy interventions-notably the 2017 GPP report, which included proposals to develop governance for geoengineering research; establish international scenario plans and exercises for engineered pandemics, and build international attention for existential risk reduction (Farquhar, Halstead et al., 2017) . Such work is highly encouraging, and the existential risk research agenda has benefited from it. Nonetheless, the risk remains that a too-narrow conception of 'existential risks' prematurely closes down the space of law and governance solutions that are possible-or necessary-in assuring humanity a non-catastrophic future-for instance, a future that, in Bostrom's framing, meaningfully 'maximize [s] the probability of an ok outcome' (Bostrom, 2013, p. 19 ). However, if human extinction and the persistent and pervasive truncation of technological potential are not completely homologous, then tailoring our portfolio of policy responses exclusively to closing off the pathways these risks could take-and then calling it a day-would be insufficient. In fact, this might only afford future policymakers with a false sense of security, even as the world continues to reside in an overall state of 'super-risk' (Bermudez & Pardo, 2015) . This is especially the case when there is a narrow 'technological' (re)solution on offer-such as 'improve global vaccine synthesis and production capability', or 'subsidize international technical AI safety research'-which promise to address or prevent the risk at its root. While such direct technological solutions may certainly be indispensable to averting some existential risks, they may not suffice in actually 'plugging all the holes' in our risk space. In a disciplinary context, there is a risk (admittedly self-correcting, given publication incentives) of the research agenda 'halting' early. In a real-world context, the availability of simple, straightforward 'fixes' might even pose a 'moral hazard', if policymakers or global governance systems which lack political will or the attention to explore more complex or costly changes, seize upon the 'symbolic action' of the straightforward, first-order mitigation strategies. Even where this is not the case, certain policy recommendations to mitigate existential risks might depend on too-optimistic a view of institutional rationality or capability. 1.2. 'Boring Apocalypses': from existential hazards, to existential risks While such efforts might mitigate specific existential risks, this might not translate into significantly lowering the overall probability of the 'adverse outcome', if only a part of the problem, or only one problem among many, is addressed. An alternative articulation is that only one path to the 'adverse outcome' is being explored by much research into existential risks: erecting obstacles along that path may indeed reduce the overall likelihood of manifesting these risks, but this might have little impact, or even no effect, upon the manifestation of the 'adverse outcome'. Thus, our view is that a materialised existential risk (what we call an 'existential hazard') is sufficient to lead to an (existentially) 'adverse outcome', but crucially, that this is unnecessary to reach that result. If the overarching objective is to lower the probability of human extinction or significant technological curtailment, adopting an array of approaches which complement the mitigation of direct existential risks are required. Within this broad spectrum of aligned approaches, we propose to introduce law, policy, regulatory and governance tools in this paper as an example. The choice of law and policy perspectives is two-fold: on one hand, they make it possible to take second-order considerations, which involve indirect and socially and culturally mediated paths towards 'adverse outcomes' into account; on the other hand, these recognise both the complexity of social organisation and the prospect that civilizational collapse may trigger or possibly instantiate existential outcomes. In this sense, law and policy approaches offer the possibility of complementing and enhancing the narrower approach adopted by contemporary existential risk research, to take into consideration other paths to existentially adverse outcomes; and to better anticipate vulnerabilities, exposures and failure modes in societal efforts to address existential risks. 1.3. Exploring the implications of the existential risk framing: risks from AI An example of this can be drawn from the prospect of super intelligent artificial intelligence (Bostrom, 2014; Yudkowsky, 2008a) . Although the landmark research agenda articulated by Russell, Dewey, and Tegmark (2015) does call for research into 'short-term' policy issues, debates in this field of AI risk 4 have-with some exceptions-identified the core problem as one of value alignment, where the divergence between the interests of humanity and those of the superintelligence would lead to the demise of humanity through mere processes of optimisation. Thus, the existential risk posed by the superintelligence lies in the fact that it will be more capable than we can ever be; human beings will be outmanoeuvred in attempts at convincing, controlling or coercing that superintelligence to serve our interests. As a result of this framing, the research agenda on AI risk has put the emphasis on evaluating the technical feasibility of an 'intelligence explosion' (Chalmers, 2010; Good, 1964) through recursive self-improvement after reaching a critical threshold (Bostrom, 2014; Sotala, 2017a; Yudkowsky, 2008a Yudkowsky, , 2013 5 ; on formulating strategies to estimate timelines for the 3 While not exhaustive, these include: the Centre for the Study of Existential Risk (CSER), the Global Catastrophic Risk Institute (GCRI), the Global Priorities Project (GPP), the Gothenburg Centre for Advanced Studies, the Global Challenges Foundation, and the Future of Humanity Institute (which has recently announced its 'Governance of AI Program'). 4 Cf. (Farquhar, Halstead et al., 2017) . For an excellent overview of recent work (both on technical safety as well as strategy and policy) on mitigating existential risks deriving from artificial intelligence, see (Dawson, 2016 (Dawson, , 2017 . 5 For critiques of the 'singularity' claim, see (Brooks, 2014; Dietterich & Horvitz, 2015; Goertzel, 2015; Plebe & Perconti, 2012; Jilk, 2017) . H.-Y. Liu et al. Futures 102 (2018) 6-19 expected technological development of such 'human-level' or 'general' machine intelligence (Armstrong & Sotala, 2012 Baum, Goertzel, & Goertzel, 2011; Brundage, 2015; Grace, Salvatier, Dafoe, Zhang, & Evans, 2017; Müller & Bostrom, 2016) ; and on formulating technical proposals to guarantee that a superintelligence's goals or values will remain aligned with those of humanity-the so-called superintelligence 'Control Problem' (Armstrong, Sandberg, & Bostrom, 2012; Bostrom, 2012 Bostrom, , 2014 Goertzel & Pitt, 2014; Yudkowsky, 2008a) . 6 While this is worthwhile and necessary to address the potential risks of advanced AI, this framing of existential risks focuses on the most direct and causally connected existential risk posed by AI systems. Yet while super-human intelligence might surely suffice to trigger an existential outcome, it is not necessary to it. Cynically, mere human level intelligence appears to be more than sufficient to pose an array of existential risks (Martin, 2006; Rees, 2004) . Furthermore, some applications of 'narrow' AI which might help in mitigating against some existential risks, might pose their own existential risks when combined with other technologies or trends, or might simply lower barriers against other varieties of existential risks. To give one example; the deployment of advanced AI-enhanced surveillance capabilities 7 -including automatic hacking, geospatial sensing, advanced data analysis capabilities, and autonomous drone deployment-may greatly strengthen global efforts to protect against 'rogue' actors engineering a pandemic ("preventing existential risk"). It may also offer very accurate targeting and repression information to a totalitarian regimes, 8 particularly those with separate access to nanotechnological weapons ("creating a new existential risk"). Finally, the increased strategic transparency of such AI systems might disrupt existing nuclear deterrence stability, by rendering vulnerable previously 'secure' strategic assets ("lowering the threshold to existential risk") (Hambling, 2016; Holmes, 2016; Lieber & Press, 2017) . Finally, many 'non-catastrophic' trends engendered by AI-whether geopolitical disruption, unemployment through automation; widespread automated cyberattacks, or computational propaganda-might resonate to instil a deep technological anxiety or regulatory distrust in global public. While these trends do not directly lead to catastrophe, they could well be understood as a meta-level existential threat, if they spur rushed and counter-productive regulation at the domestic level, or so degrade conditions for cooperation on the international level that they curtail our collective ability to address not just existential risks deriving from artificial intelligence, but those from other sources (e.g. synthetic biology and climate change), as well. These brief examples sketch out the broader existential challenges latent within AI research and development at preceding stages or manifesting through different avenues than the signature risk posed by superintelligence. Thus, addressing the existential risk posed by superintelligence is both crucial to avoiding the 'adverse outcome', but simultaneously misses the mark in an important sense. 2. Re-examining existential risks: hazard, vulnerability, and exposure While Bostrom's leading typology identifies the general area inhabited by existential risks, it provides little guidance for how to differentiate among the diverse risks within that category (the box marked 'X'), because these risks are not distinguished according to their source, characteristics, or complexity, but only their impact ("crushing") and scope ("pan-generational"). 9 However, given the range of distinct risks falling within the 'X' box-that is, risks that could cause or feed into an eventual terminal and crushing 'adverse outcome' for humanity-we suggest it relevant to deconstruct existential risks, and instead consider the broader category of 'risks as a function of hazard, vulnerability and exposure' 10, 11 : Existential Risk = Hazard\* Vulnerability \* Exposure Here, hazard denotes the external source of peril (which is captured within the prevailing agenda studying existential risks)-the 'spark' that threatens the pan-generational/crushing harm. Vulnerability denotes propensities or weaknesses inherent within human social, political, economic or legal systems, that increase the likelihood of humanity succumbing to pressures or challenges that threaten existential outcomes. Finally, exposure denotes the 'reaction surface'-the number, scope and nature of the interface between the hazard and the vulnerability. Thus, a hazard is what kills us, and a vulnerability is how we die. Exposure is the interface or medium between what kills us, and how we die. To take an example from disaster studies, a major earthquake only becomes a risk if the built, social or institutional environment can be destabilised during earthquakes of the threatened magnitude ("is vulnerable to"), and if such an environment is located in ("exposed to") an earthquake zone. Thus, vulnerability and exposure refer to two different aspects of the affected system: 6 The field of AI safety is particularly active. For a selection of influential papers, see Amodei & Clark, 2016; Christiano et al., 2017; Orseau & Armstrong, 2016; Soares & Fallenstein, 2014) . 7 Notwithstanding interesting developments in 'privacy-preserving' homomorphic encryption configurations, for an interesting exploration of which, see (Trask, 2017) . 8 For a treatment of totalitarianism as a 'global catastrophic risk', see (Caplan, 2008) 9 Of course, Bostrom's objective in setting out this typology is merely to differentiate existential risks from the much larger space of unfortunate occurrences. 10 This classification schema is distinct from another recently proposed by (Avin et al., 2018) , which instead breaks down the analysis of global catastrophic risk scenarios along three different components-(1) a critical system whose safety boundaries are breached by a threat; (2) the mechanisms by which this threat might spread globally to affect the majority of the population, and (3) the manner in which we might fail to prevent or mitigate 1 and 2. While elegant, a discussion of the similarities, differences, and potential (in)commensurability between these two classification taxonomies is out of scope for this present paper. how it breaks, and how it intersects with a given hazard's operating space or pathways of impact (Fig. 1 ). As a species of global catastrophic risks, the study of existential risks is often conflated with, and perhaps even collapsed into, the identification and mitigation of existential hazards. Where attention is paid to issues of vulnerability and exposure, these are often identified in light of an existential hazard. One of the leading sources and reference points in the field symptomatically organizes the field as a collection of existential hazards (Bostrom & Cirkovic, 2008) . A caveat applies for a small subset of hazards of such enormous magnitude that it renders mitigation strategies focussing upon vulnerability and exposure less relevant, or perhaps even irrelevant. The paragon might be the scenarios of 'simulation collapse', or a high-energy physics experiment going awry, altering the astronomical vicinity and rendering life untenable (Ord et al., 2010) . Such extreme hazards constitute the archetype of existential risks as a subset of global catastrophic risks and can only be addressed by managing the hazard head-on, with vulnerability and exposure components relegated to marginal roles: Existential Risk = Existential Hazard \* Vulnerability \* Exposure Thus, our claim is not that the field of existential risks research is looking in the wrong places-the emphasis on existential risks has enabled this field to identify a core group of existential hazards which would on their own suffice to bring about the 'existentially adverse outcome'. Nonetheless, there are also many other, slower and more intertwined ways in which the world might collapse, without being hit by spectacular hazards. To complement the study of existential risks we can draw upon lessons learnt through historical and anthropological studies of civilizational collapse. Thus, while existential risks concentrate upon clear-cut existential hazards, civilizational collapse research infers influential factors that were involved in trajectories of decline. These studies are beginning to challenge the traditional conceptual framework which set out a cyclical history, wherein a civilisations rise and fall, progressing through a predictable pattern of growth, zenith and decline in a gradual manner (Ferguson, 2011) . In other words, historically civilizational collapses are boring. Diamond refined this model by recognising that civilizational collapse could be a slow and protracted process emerging from complex interactions (Diamond, 2006) .
Beyond hazards: vulnerability and exposure In this paper, we set out to foreground the other two variables involved in the existential risk equation. Thus, as noted, 'vulnerability' denotes propensities or weaknesses inherent within human social, political, economic or legal systems that increase the likelihood of humanity succumbing to pressures or challenges that threaten existential outcomes. 'Exposure' indicates the nexus between external hazards and internal vulnerabilities: the interface at which the 'adverse outcome' precipitates from their interaction. Historical studies of civilizational collapses indicate that even small exogenous shocks can Bostrom, 2013, p. 17) . destabilise a vulnerable system (Diamond, 2006; Ferguson, 2011) . Given this, studying 'exposure' is relevant in systematically analysing interaction effects: a cataclysmic hazard interacting with robust and resilient human systems may be survivable, but conversely, at the interstices at which our human technology, institutions or culture are most vulnerable, even minor (initially 'noncatastrophic') hazards can be the inflection point that tips these susceptible systems towards trajectories of collapse (Gladwell, 2001) . 12 In order to offset the tight coupling between existential risks and existential hazards, we will further dissect the vulnerability and exposure factors introduced in the existential risk calculus. Our proposed taxonomy distinguishes four general categories of vulnerability and exposure (see Table 1 ). • Ontological: vulnerability through existing in a given location and time in our universe 13 ; • Passive: vulnerability through lack of action; 'indirect' exposure; • Active: vulnerability because of insufficient/mis-specified action. • Intentional: vulnerability or exposure knowingly maintained, for that purpose. Note that for vulnerability, the Passive, Active and Intentional categories correspond to the jurisprudential concepts of 'omission' ('failure to act'), 'negligence' (action, but with failure to exercise the appropriate care to prevent foreseeable future harm) and 'intention' (action with the known purpose to bring about a consequence). Drawing such distinctions offers the opportunity to be more precise about the features or characteristics which give rise to the existential dimension of the challenge, and thus suggest specific points for targeted intervention, as well as potential failure modes to caution against. Below, we combine these categories and their sub-divisions, in twin taxonomies of existential 'vulnerabilities' and 'exposure'. We also seek to give concrete examples. Obviously, not all of these examples are currently unstudied-indeed many feature prominently in the existing literature-though in other cases they remain understudied. While this list is naturally not comprehensive, we hope that such examples enable researchers in the field of existential risks to locate their research in an overarching framework, as well as facilitating links to established scholarly fields which have studied given issues, without considering their bearing on larger existential risks.
A taxonomy of 'existential vulnerability' Our proposed taxonomy for distinguishing between different manifestations of existential vulnerabilities is summarised in Table 2 : note that the salience or tractability of these existential vulnerabilities to law and policy approaches increases as one goes down: ontological vulnerabilities appear (at present) highly intractable to mere law and policy-it would be a vain regulator indeed who would try to legislate against physical laws. However, as one proceeds to passive, active, or intended vulnerabilities, the salience of governance approaches increases (Table 3 ).
Ontological vulnerability The category of ontological vulnerability denotes intrinsic vulnerabilities associated with human existence. These include the possibility that we inhabit a computer simulation (Bostrom, 2002) , which might be terminated or altered at any time. More conceptual and basic vulnerabilities-so fundamental that we often would not even consider them as such-include our existence as biological beings that are dependent (potentially more so than other species such as tardigrades) on continuous or relatively uninterrupted inputs of energy & resources (such as food, water, air, light, …), which renders the human species one comparatively vulnerable to 'extinction' events such as a supervolcano-or meteor-induced global winter. On a deeper level yet, all biochemistry is dependent on the existing laws of physics within which it evolved, rendering us acutely and terminally vulnerable to any processes (e.g. vacuum decay) which would profoundly alter these processes. Biological deterioration due to aging processes, or exterior damages, might also rank amongst these, although that is conditional on whether or not there exists a physical 'hard ceiling' to how far medical senescence research might extend human lifespans and reduce other vulnerabilities. Table 1 the general categories of vulnerability and exposure used to structure our taxonomies of existential vulnerability and existential exposure.
Type of Vulnerability (V): vulnerability by… Type of Exposure (E): exposure by… Ontological (O) Existence (V-O) Existence (E-O) Passive (P) Omission (V-P) Indirect link (E-P) Active (A) Negligence (V-A) Direct link (E-A) Intentional (I) Intention (V-I) Intention (E-I) 12 Notably, the resilience of civilization to catastrophes has had some treatment in the field of global systemic risk (Baum et al., 2014; Centeno, Nag, Patterson, Shaver, & Windawi, 2015; Helbing, 2013) . 13 Another possible term could be 'anthropic vulnerability'. H • Physical dependence on physics integrity; our biochemistry 'works' only within a narrow subset of all possible physical laws (rendering us vulnerable to vacuum decay); • Biological aging.
V-P. Passive vulnerability Vulnerability existing due to the lack of structures in place. [OMISSION] Built (vulnerability because of the lack of availability of a defence) • Lack a of super-volcano warning system (technology does not yet exist-lack of global capacity). • Lack of asteroid defence programme (existing technology, but not deployed-lack of local capacity at key point); Institutional (top-down social vulnerability) • Lack of effective global institutions, as well as crisis management organisation; • Lack of global coordination on identifying and addressing existential risks. • Lack of public investment in developing critical technologies, e.g. alternate food sources for surviving volcanic winter (Pearce & Denkenberger, 2016) or refuges for global catastrophic risks (Haqq-Misra, Baum, & Denkenberger, 2015) . Cultural (bottom-up social vulnerability) • Lack of public engagement in confronting existential risks: propensity of public to stereotype/dismiss disaster scenarios ('Terminator headlines'); • Lack of (widely shared) concepts and language to express existential vulnerabilities.
V-A. Active vulnerability Vulnerabilities existing in spite of/ because of the social structures in place. [NEGLIGENCE] Built vulnerability • Intrinsic path-dependent vulnerabilities in infrastructure components: architectural security deficits in universally used components of global (digital) infrastructures (e.g. Spectre and Meltdown exploits in Intel chips); future geo-engineering projects, such as stratospheric aerosol injection, which could backfire heavily if interrupted temporarily, and which might be disrupted (Baum, 2015a; Baum, Maher, & Haqq-Misra, 2013 ). • Intrinsic path-dependent vulnerabilities in infrastructure configuration: critical infrastructures (e.g. national electricity grids) are centralized and homogeneous (e.g. rendering society vulnerability to solar flares). • More generally: driven by organizational and competitive optimization. ('Moloch' traps (Alexander, 2014) ), globalization homogenizes all solutions across the globe, eroding resilience (e.g. proliferation of homogenized monocultures of staple crops creates vulnerabilities to engineered crop diseases). Institutional vulnerability • Narrow bureaucratic interest and perverse incentives which lock civilization into 'inadequate equilibria' (Yudkowsky, 2017) , potentially blocking coordination for known existential risks • Globalised economic and institutional frameworks. Market dependency (Harari, 2015) • Overconfident belief in own ability to foresee risks (Burton, 2008) -risk-based governance and incorrect probabilistic approaches which underestimate fat-tail events. Cultural vulnerability • Spread of pandemics caused by culturally determined interactions (e.g. Ebola); • Ingrained distrust of governmental authorities/public media undercutting disaster response efforts; • Social norms promoting high fertility and unsustainable population growth (Kuhlemann, 2018) . • Globalized diets and food demand that can only be met by (unsustainable; vulnerable) monocultures. • Increasingly homogenous global 'monoculture' in practices and ideology creates vulnerabilities, by limiting redundancies and diversity. (continued on next page) As these are background conditions at the frontiers of epistemology, we are unlikely to be able to unveil more than a fraction of these vulnerabilities. Also, as inherent features of human existence we have limited abilities to act effectively in this category. Perhaps the most utility we can extract from delimiting ontological vulnerability is to restrict its reach: in other words to leave this as a residual class of vulnerabilities inherent in existence.
Vulnerabilities, passive and active; built, institutional and cultural Passive vulnerabilities are characterised by inaction: the susceptibility to existential outcomes by virtue of failure to take appropriate measures. Conversely, active vulnerabilities arise in association with human activities, as by-products or unintended consequences. Three cross-cutting sub-distinctions can also be made for both passive and active vulnerabilities: built, cultural, and institutional. Built vulnerabilities are characterised by our (passive) failure to put into place relevant solutions or defences to existential challenges, or by our (active) failure to repair or correct the extant vulnerabilities in the legacy infrastructures we deploy, or the pathdependent ways we deploy them-even if we have such solutions or repairs at our disposal. Such solutions can in fact include some interventions proposed by the existential risk research agenda, such as an asteroid defence programme or the ability to systematically monitor for supervolcano eruptions (Denkenberger & Blair, 2018) ; they also cover the active existential risks posed by the technologies which humanity has introduced, but which go unfixed-such as architectural deficiencies creating intractable cybersecurity vulnerabilities in universally used computing chips. Because of the technical nature of engineered vulnerabilities, some of these are perhaps closest to the existing (policy) research agenda of the existential risk community-and at present some may consider that law and policy tools have less of a role to play, other than to coordinate efforts aimed at addressing them. In contrast, top-down vulnerabilities resulting from suboptimal direction and coordination are captured by our sub-category of institutional vulnerability. Here, the line between active and passive is admittedly thin, where recklessness can be the distinguishing • Misaligned, 'apocalyptic' AI (Geraci, 2010) ; • Nuclear force posture combining centralization of launch command authority, with fallible nuclear early warning systems and 'launch-on-warning' missile force postures (Borrie, 2014) . • 'Back-doors' or 'zero-day-vulnerabilities' in critical infrastructure software, knowingly maintained by intelligence services. • Existence of 'omnicidal' agents (Torres, 2016) -including religious groups' faith in end-times, e.g. the Rapture or Yawm ad-Dīn.
Exposure of Society • AI, nuclear power, nanotechnology and synthetic biology. • Experimental scientific curiosity Exposure of Nature • Global extreme climate change. • Over-utilization of nature: unsustainable fishing or hunting E-A. Direct exposure Exposure directly caused by societal structures intended for something else.
Exposure of Society • Lack of political will and institutional inertia leading to 'progress traps' (Wright, 2006) . • War, METI, or cultural sentiment. • Unconstrained optimization processes in society, economics, politics (politicians), which pursue originally legitimate goals but become misaligned as they find ways to achieve these in increasingly perverse ways, or with increasing amounts of externalities (cf. 'Moloch' (Alexander, 2014) ).
Exposure of Nature • Local ecosystem collapse (Kolbert, 2014) . • Urbanisation, agriculture and deforestation
E-I. Intentional Exposure directly imposed by societal arrangements intended precisely for that purpose. - • The existence of nuclear and (infectious) biological weapons for strategic purposes such as deterrence. • On a more granular level: the retention of deterrent weapons which risk nuclear winter, over 'winter-safe' deterrent (Baum, 2015b) . feature. Active institutional vulnerability may be characterised by failure to coordinate to address a known risk, such as climate change, or cyclical global economic melt-down. Passive institutional vulnerability may then be understood as directional and coordination failures that limit the scope of knowledge related to existential risks-perhaps an implicit 'unwillingness to know', which translates in an unwillingness to fund blue-sky research into charting 'unknown unknowns' (Rumsfeld, 2002) . Cultural vulnerability encompass the bottom-up societal dimensions, reflecting how certain social practices may affect susceptibility to existential challenges. Active cultural vulnerabilities include customary practices that facilitate the spread of pathogens, increasing susceptibility to pandemics, for example integrated commercial travel networks and interpersonal greeting rituals which encourage physical proximity or contact. Passive cultural vulnerabilities include the exclusion or ridicule of existential risks from serious discussion in public forums (let alone the halls of power). This increases collective vulnerabilities insofar as the public and policymakers underrate the prospects for existential risks (cognitive biases exacerbate these effects, Kahneman, 2012) resulting in further marginalisation.
Intended vulnerabilities Intended vulnerabilities are those which are created or retained specifically for that purpose, and within the existing research agenda are reflected in the premises of the 'AI risk' or 'Apocalyptic AI' movement (Geraci, 2010) . Another salient example can, however, be found in nuclear force postures which (in the US context) features centralization of launch command authority along with a 'launch-on-warning' doctrine that relies on input from fallible early launch warning systems (Borrie, 2014; Sagan, 1993) . Together, this gives rise to the catastrophic risk of an accidental nuclear war (Barrett, Baum, & Hostetler, 2013 ). Yet far from incidental, this is arguably by design. As the theorist Kenneth Boulding once observed: "if [deterrence] were really stable … it would cease to deter. If the probability of nuclear weapons going off were zero, they would not deter anybody" (Boulding, 1986, p. 32 ). 14 The nuclear force knowingly renders itself more vulnerable to catastrophic accidents-sacrificing a degree of safety for the sake of strengthening operational readiness and deterrence. While less dramatic, similar intentional vulnerabilities could emerge from a state intelligence service knowingly holding back-doors or 'zero-day-exploits' which it identifies in critical infrastructure software, in the hope that this may enable more effective cyberattacks against rival states at a later state.
Existential vulnerability: mitigation and adaptation strategies This taxonomy of vulnerabilities can provide concrete suggestions for addressing existential risks. While the categories of ontological and intended vulnerabilities may seem superfluous, their treatment as additional classes allow limited resources to be concentrated into the most tractable areas. Perhaps the main contribution of this taxonomy is to highlight how existential risks need not be active and discernible, in the manner of the 'hazards' identified in the field. Instead, many of these risks can be latent, and slow-moving. Moreover, this taxonomy aids in understanding how human activities can impact paths towards 'existential outcomes' in several ways: (1) intent: by directly creating technologies which pose existential hazards (i.e. emerging technologies such as AI, nanotechnology and synthetic biology); (2) (negligence) by establishing complex systems for which failure is unavoidable (Perrow, 2011) ; (3) and by omission, the failure to take steps to confront existential risks. Beyond merely refining the sources of existential risks, the contribution of this taxonomy lies in creating a roadmap for the study and integration of risks that have not yet received much or consistent attention in the field of existential risks. In doing so, we emphasise a number of existential vulnerabilities, such as global dependency upon a few species of staple crops, or certain types of globalised technologies (e.g. SCADA-based systems in critical infrastructure) that are not commonly recognised as sources or failure points of existential risks. The study of existential 'vulnerability' may suggest that adaptation strategies are preferable to those of mitigation, both because of the inherent complexity underlying both forms of structural vulnerability and because adaptation can now occur simultaneously with mitigation. This is because the vulnerability analysis in effect opens up a parallel system where other trajectories of existential risks are at play. The rough equivalence drawn between traditional existential risks with existential hazards might have the effect of underselling adaptation strategies: it is illogical to conceive of robustness as a defence against the apocalypse, after all. Along with efforts to mitigate or avert existential hazards, however, we can now also plan for adaptation against vulnerabilities. Thus, adaptation strategies are not limited to actions undertaken after 'the Fall': instead they may become rational reactions towards limiting susceptibility to existential risks. In order to explore this potential further, we proceed to examine a taxonomy of exposure.
A taxonomy of 'Existential exposure' As a parallel effort to our taxonomy on existential vulnerabilities, we set out a classification system to differentiate between different forms of exposure. It is worth recalling at this point that we use exposure to express the interface between hazards and vulnerabilities-between what kills us, and how we die. Both hazards and vulnerabilities in isolation remain as potentials: exposure is thus a means of actualising such potential into existential risks. Such exposure can further be directed towards either the societal or the natural environment. This is about what is directly at risk: 14 Indeed, in Essentials of Post-Cold War Deterrence, the US Strategic Command recommended a species of potentially risky brinkmanship, arguing that "[t]he fact that some elements may appear to be potentially 'out of control' can be beneficial to creating and reinforcing fears and doubts in the minds of an adversary's decision makers. This essential sense of fear is the working force of deterrence. That the U.S. may become irrational and vindictive if its vital interests are attacked should be part of the national persona we project to all adversaries." (Policy Subcommittee of the Strategic Advisory Group (SAG), 1995) our (human) society and the common capabilities and support structures preventing existential risks; or nature and its carrying capacity and resilience to future shocks. Thus, we assert that devastating results for humankind can follow from the collapse of both the societal structures we have built, as well as the natural environments within which these constructed systems are embedded. Again, the distinction allows us to single out different examples and trajectories to build alternative strategies for human survival. As is clear from the examples above, it also draws out lessons for existential outcomes which might not be immediately evident from an analysis of existential hazards alone. For example, when 'exposure' is seen from the perspective of the natural environment on which mankind depends, pervasive over-fishing and deforestation, combined with trends in resource demands tracking population growth, may become potentially hazardous activities with the potential to curtail human development in the long run (Diamond, 2006) , even if they do not affect most humans directly in the short run. 3.2.1. Ontological exposure Some exposures are inherent in residing on Earth. Those falling in the category of natural exposure denote existence on earth itself as the exposure, and include our exposure to Near-Earth Objects (NEO) hitting earth or supervolcanoes triggering a protracted volcanic winter. The common denominator underlying this form of exposure is their requirement for measures beyond our present technological capacity to overcome (which admittedly, can be a moving threshold).
Indirect and direct exposure As with the discussion of existential vulnerabilities set out above, the potential of our proposed taxonomy lies in the analysis of indirect and direct exposures. This distinction identifies the exposures that are a direct consequence of human activity, from those that are caused by more complex interactions with other systems. The theoretical example of high-energy physics research going awry 15 provides an example of societal exposure. 16 A final example of direct exposures are private or unilateral attempts to undertake 'Active SETI'-alternately called METI ('Messaging to Extra-Terrestrial Intelligence') (Zaitsev, 2006) -which might expose the rest of mankind to catastrophic risk, should any future contacted alien species prove hostile and capable of interstellar-scale interdiction. These examples illustrate how surfaces of direct exposure (and ways to reduce it) might be overlooked when concentrating upon the hazard alone. Beyond direct exposures, an array of arrangements which jeopardise the human societies that have become dependent on them. This category includes any activity or arrangement which might expose the world to extinction through cascading effects. The development of critical common global infrastructures such as the internet, energy markets, and cultural and scientific harmonization might be classified as exposures, rather than vulnerabilities because these reveal new interfaces between hazards and vulnerabilities. Thus collapse of common infrastructures would trigger cascades which jeopardise civilizational sophistication at the global level (Wright, 2006) , the edifice upon which humanity's long-term potential has been built. Similarly, developments like urbanisation, intensification of agriculture, and even increasing global inequality () appear to be factors that create fault lines and further drive exposures to existential vulnerabilities. Here the exposure perspective shows us that only by certain actions or inactions do risks actually materialise fully against civilization.
Intentional exposure Finally, some of these exposures appear to exist intentionally, or at least knowingly or recklessly. The city of New Orleans, Louisiana, provides a microcosm of how dysfunctional behaviour, seen from an existential risk perspective, might be driven by human incentives or rationales operating at different orders. The city is, in design and position, incredibly vulnerable to its natural environment-pinched in between the Mexican Gulf and Lake Pontchartrain and built on the banks of the Mississippi River. Accordingly, some have argued that the most reasonable strategy following hurricane Katrina would have been to abandon the city permanently (Richards, 2011) . Instead, the affected populations were given incentives to return, with the US government investing billions in the reconstruction of the city, aware that even with improved defences, the city remains unsafe (Cutter et al., 2014) . Similarly, many populations worldwide, from Tehran and Kathmandu, to San Francisco and Port-au-Prince, persist in known disaster-prone zones, for (legitimate) reasons of culture, history, identity or economy. The purpose of these examples is not to warn the populations of these cities, nor to judge their decision to remain: rather the point is that individuals and societies often make decisions based upon entirely different rationales than a concern for survival. This is an insight that seems to scale to any level of government. In simpler terms, sometimes we choose exposure over safety because of competing considerations, and while this might be productive from a cultural heritage perspective, it remains problematic when seen through the lens of existential risks.
Are existential hazards necessary for existential risks? Having set out taxonomies for differentiating between factors which influence existential risks the question remains whether all components are necessary to bring about an 'adverse outcome'. Our initial claim was that existential hazards could be sufficient 15 Cf. ; for interesting methodological work on estimating the safety of experiments within particle physics, as a particular case of evaluating risks with extremely low probability but very high stakes, see (Ord et al., 2010) . 16 High-energy physics research can be distinguished from the category of hazards as an example of societal exposure because of the active decision to conduct the relevant experiments. One might also counter that 'nature' would be as much exposed to a potential physics disaster, as our society would be. While that is certainly the case, we treat it as a case of societal exposure insofar as humankind is impacted by such accidents, rather than by the impact of such accidents on the environment's integrity or carrying capacity. H.-Y. Liu et al. Futures 102 (2018) 6-19 existential risks, but that they were not necessary to pose such risks. Returning to the civilization collapse literature cited above, Ferguson provides a critical insight in contesting the traditional view of cyclical history itself. He posits an alternative conceptual framework by asking the question: 'What if history is not cyclical and slow-moving, but arrhythmic?' (Ferguson, 2011, p. 299) . Continuing, he summarises the perspective we adopt succinctly: Civilisations… are highly complex systems, made up of a very large number of interacting components that are asymmetrically organised, so that their construction more closely resembles a Namibian termite mound than an Egyptian pyramid. They operate somewhere between order and disorder − on 'the edge of chaos', in the phrase of computer scientist Christopher Langton. Such systems can appear to operate quite stably for some time, apparently in equilibrium, in reality constantly adapting. But there comes a moment when they "go critical". A slight perturbation can set off a "phase transition" from a benign equilibrium to a crisis − a single grain of sand causes an apparently stable sandcastle to fall in on itself. (Ferguson, 2011, pp. 299-300) . Wright echoes this sentiment: 'Civilisations often fall quite suddenly − the House of Cards effect − because as they reach full demand on their ecologies, they become highly vulnerable to natural fluctuations' (Wright, 2006, p. 130) . When combined with the observation that hitherto isolated civilizational experiments have now been merged (Harari, 2015) , this raises the spectre that existential risks can coalesce from factors that historically brought about only limited civilizational collapses. Thus, the question we need to pose in this regard is whether vulnerabilities themselves contain the seeds of existential risks? In this context, we should note that vulnerabilities have often been considered mostly as aggravating factors. As aggravators then, vulnerabilities are subsidiary considerations restricted to influencing borderline events: where a potential existential hazard impacts humanity, its susceptibility or resilience could determine whether or not that hazard was transmuted into an existential outcome. In line with vulnerabilities being developed as a separate sphere where existential risks are at play, this section explores the possibility of removing the existential character of the hazard and thus plausibly reducing the calculus to: Existential Risk = Hazard \* Existential Vulnerability \* Exposure
[and/or] Existential Risk = Hazard \* Vulnerability \* Existential Exposure An initial issue is that a catalyst of some sort is required to precipitate the existential risk, because even a system with wellexposed inherent susceptibilities will need something to set it motion. Removing the existential hazard component allows us to explore the possibility that relatively minor occurrences can trigger cascades that emerge as existential risks. But a vulnerability cannot by definition transmute into the existential risk itself absent external input: for this reason we diminish the stature of 'hazard' in the equation to represent our proposition that exogenous shocks need not be the spectacular existential hazards recognised by the study of existential risks. Instead, the external hazards in our revised equation can include insignificant events which go unnoticed (and quite probably involve a large number of minor occurrences).
Contributions and limitations of law and policy tools for existential risks While our deconstruction of existential risks lead to fairly broad claims, it also provides a few concrete questions and insights. First and foremost, if existential risks can indeed be triggered by non-existential hazards, we need to broaden the scope of investigation in order to draw a more accurate roadmap of the existential risks field, one which can deal with questions of vulnerability and exposure explicitly. Second, the type of perceived challenge channels the range of appropriate responses which can be developed. While existential hazards may appropriately be met by narrower forms of technical solutions and technologically-oriented mitigation strategies, our broader perspective of existential risks open up other toolboxes to confront existential risks. In particular, social vulnerability and human-driven (anthropogenic) exposure require improved governance and coordination for adaptation strategies. Thus, when we reconstruct existential hazards through the optics of the social systems' inability to withstand them they, per definition, become social phenomena. As noted, many existential risk scholars have recently recognised the importance of reaching out to, and incorporating, law and governance approaches, even where the origin of the existential hazard itself is technological. The critical role of such law and governance approaches should be even more self-evident where the problems in question-the origins of existential vulnerability and exposure-are themselves social, not technological. This opens up a field for law and governance scholars to work more productively and on an equal footing with technical experts and philosophers. Moreover, this allows for a different set of research questions to be posed as to how we might reduce the vulnerabilities underlying the existential risks against humanity, and our collective exposure to hazards leading to existential outcomes. In doing so, our taxonomy has the potential to elevate relevant aspects of otherwise mundane considerations within politics, economics and society to the plane of existential risks. In garnering this attention, we hope that law and policy tools might be more productively incorporated and deployed as a means to building resilience and robustness. Here, central legal institutions as rights, responsibility and societal relations might in fact contribute substantially to reducing both our vulnerability towards, and exposure to, existential risks. The obvious limitations of this approach reside in the observation that many contemporary existential hazards, vulnerabilities and exposures are anthropogenic. This raises the spectre of either 'iatrogenesis' ('[complications] caused by the healer'), where our attempts at treating a problem accidentally give rise to new, potentially worse ailments. Thus, in our attempt to curtail existential vulnerabilities and exposures, we may inadvertently generate new or different existential risks. Yet, the framing remains critical: the vantage points created in our proposed taxonomy encourages alternative ways of thinking about existential risks and provide different accommodation strategies. Finally, the perspective provided by existential vulnerabilities might also foster solutions that will be of more general benefit to humanity as tangential effects of efforts taken to reducing our collective vulnerability and exposure to existential risks. While this appears to be of a lower order of concern at first flush, our taxonomy appears to bind existential risks together with phenomena occurring at different levels. In this sense, existential vulnerabilities and exposures may possess fractal characteristics (Gleick, 1997; Johnson, 2002) , reflecting the complexity of their constitution. Support for this claim might reside in the scalability of hazards and vulnerabilities in particular: if pedestrian threats can cascade into existential outcomes, for example, then mundane measures might feedback to reinforce humanity against existential risks. Pushing this to its limits, it is possible the seemingly oblique effects of improved governance undertaken to shore up existential vulnerabilities actually end up as one of the very sources of humanity's resilience and robustness against existential outcomes.
Concluding thoughts The lessons that we can draw from deconstructing existential risks into hazards, vulnerabilities and exposures can be divided into internal and external lessons for the field of existential risk research. In terms of the lessons for existential risk research, our taxonomy suggests that we may presently reside in a situation of pervasive risk. In identifying the catalogue of existential hazards looming over humanity, and focussing attention to confronting these challenges, the perception is that the outcome of these efforts is a lowering of the overall probability of an actualised existential risk. If our efforts are not actually achieving this, however (because they do not address vulnerabilities or exposures, only direct hazards), we run the risk of achieving safety that is merely 'symbolic': we perceive that we are 'all clear'-that we have successfully steered humanity past 'existential outcomes'-when we are in fact all the more fragile. Defeating a global pandemic, or securing mankind from nuclear war, would be historic achievements; but they would be hollow ones if we were to succumb to social strife or ecosystem collapse decades later. By proposing alternative paths that lead to existential outcomes, our taxonomy can recalibrate the calculus and reduce the prospect of an existential outcome. Our taxonomy also provides the groundwork for concrete strategies for meeting the existential challenges revealed by our deconstruction of existential risks. In essence, our taxonomy enables more productive cross-disciplinary cooperation amongst researchers from the existential risk community and various other disciplines, in assessing the dynamics that might lead towards catastrophic or 'existentially adverse' outcomes. This step in itself seems to enhance resilience and robustness by fostering greater variety of policy and governance responses-responses which can move beyond mitigation alone, to extend to adaptation, and which can better anticipate the strengths and weaknesses of governance. Two key limitations latent within such approaches need to be acknowledged. First, that these new perspectives to confronting existential risks import ingrained societal and institutional problems manifest in lower orders of problems. Second, that the additional complexity introduced into the field of existential risks necessarily makes attempts at framing responses more difficult. The payoffs of such a trade-off are open for discussion. Yet, our deconstruction of existential risks, and the taxonomy we develop to do so, may show promise as tools to help consolidate and expand the field of existential risk research and bring aligned disciplines to bear on the effort to reduce the overall probability of an existential outcome for mankind. But these are early tentative steps to building alternative vantage points from which to examine existential risks: our hope is that the alternative perspectives that these provide will allow researchers in broader fields to bring their expertise to identify trajectories that could lead to humanity's demise, and to devise strategies to obstruct those paths to existential outcomes. Fig. 1 . 1 Fig. 1. Qualitative risk categories, indicating the relative position of existential risks. (Reproduced from Bostrom, 2013, p. 17).
.-Y.Liu et al. Futures 102 (2018) 6-19
Table 2 2 A taxonomy of vulnerabilities which contribute to existential risks. Category Description Sub-distinction Examples of Existential Vulnerabilities V-O. Ontological vulnerability Vulnerability that is inherent in being, at present • Simulation shutdown; • Biological dependence on continuous/frequent energy & resource inputs (including food, water, air, light, …);
Table 2 2 (continued) Category Description Sub-distinction Examples of Existential Vulnerabilities V-I. Intended Vulnerability maintained for a direct vulnerability purpose [INTENTION]
Table 3 3 A taxonomy of exposures which contribute to existential risks. Category Description Sub-distinction Examples of Existential Exposures E-O. Ontological exposure Exposure imposed exclusively by existing (as a human on Earth). - • Outer space events; • Super volcanos. • Potential (hostile) alien lifeforms E-P. Indirect exposure Exposure indirectly caused by societal arrangements intended for something else. |
b2f01ae5-180a-490b-98a2-5357a4fd429d | trentmkelly/LessWrong-43k | LessWrong | Nearest unblocked strategy versus learning patches
The nearest unblocked strategy problem (NUS) is the idea that if you program a restriction or a patch into an AI, then the AI will often be motivated to pick a strategy that is as close as possible to the banned strategy, and maybe just as dangerous.
For instance, if the AI is maximising a reward R, and does some behaviour Bi that we don't like, we can patch the AI's algorithm with patch Pi ('maximise R0 subject to these constraints...'), or modify R to Ri so that Bi doesn't come up. I'll focus more on the patching example, but the modified reward one is similar.
----------------------------------------
The problem is Bi was probably a high value behaviour according to R-maximising, simply because the AI was attempting it in the first place. So there are likely to be high value behaviours 'close' to Bi, and the AI is likely to follow them.
A simple example
Consider a cleaning robot that rushes through its job an knocks over a white vase.
Then we can add patch P1: "don't break any white vases".
Next time the robot acts, it breaks a blue vase. So we add P2: "don't break any blue vases".
The robots next few run-throughs result in more patches: P3: "don't break any red vases", P4: "don't break mauve-turquoise vases", P5: "don't break any black vases with cloisonné enammel"...
Learning the restrictions
Obviously the better thing for the robot to do would be just to avoid breaking vases. So instead of giving the robot endless patches, we could try and instead give it patches P1,P2,P3,P4... and have it learn: "what is the general behaviour that these patches are trying to proscribe? Maybe I shouldn't break any vases."
Note that even a single P1 patch would require an amount of learning, as you are trying to proscribe breaking white vases, at all times, in all locations, in all types of lighting, etc...
The idea is similar to that mentioned in the post on emergency learning, trying to have the AI generalise the idea of restricted behaviour from examples (=patc |
e516bebc-15d5-4dfc-b5d2-b630ada57a73 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Secure Deep Learning Engineering: A Software Quality Assurance Perspective
1 Introduction
---------------
In company with massive data explosion and powerful computational hardware enhancement, deep learning (DL) has recently achieved substantial strides in cutting-edge intelligent applications, ranging from virtual assistant (e.g., Alex, Siri), art design [[17](#bib.bib17)], autonomous vehicles [[13](#bib.bib13), [18](#bib.bib18)], to medical diagnoses [[1](#bib.bib1), [3](#bib.bib3)] – tasks that until a few years ago could be done only by humans.
DL has become the innovation driving force of many next generation’s technologies. We have been witnessing on the increasing trend of industry stakeholders’ continuous investment on DL based intelligent system [[6](#bib.bib6), [8](#bib.bib8), [5](#bib.bib5), [7](#bib.bib7), [39](#bib.bib39)], penetrating almost every application domain, revolutionizing industry manufacturing as well as reshaping our daily life.
However, current DL system development still lacks systematic engineering guidance, quality assurance standards, as well as mature toolchain support.
The *magic box*, such as DL training procedure and logic encoding (as high dimensional weight matrices and complex neural network structures), further poses challenges to interpret and understand behaviors of derived DL systems [[16](#bib.bib16), [24](#bib.bib24), [4](#bib.bib4)]. The latent software quality and security issues of current DL systems, already started emerging out as the major vendors, rush in pushing products with higher intelligence (e.g., Google/Uber car accident [[20](#bib.bib20), [40](#bib.bib40)], Alexa and Siri could be manipulated with hidden command [[38](#bib.bib38)]. A DL image classifier with high test accuracy is easily fooled by a single pixel perturbation [[2](#bib.bib2)]). Deploying such cocooned DL systems to real-world environment without quality and security assurance leaves high risks, where newly evolved cyber- and adversarial-attacks are inevitable.
To bridge the pressing industry demand and future research directions, this paper performs a large-scale empirical study on the most-recent curated 223 relevant works on deep learning engineering from a software quality assurance perspective.
Based on this, we perform a quantitative and qualitative analysis to identify the common issues that the current research community most dedicated to.
With an in-depth investigation on current works, and our in-company DL development experience obtained, we find that the development of secure and high quality deep learning systems requires enormous engineering effort, while most AI communities focus on the theoretical or algorithmic perspective of deep learning.
Indeed, the development of modern complex deep learning systematic solutions could be a challenge for an individual research community alone.
We propose the *Secure Deep Learning Engineering* (SDLE) development process specialized for DL software,
which we believe is an interdisciplinary future direction (e.g., AI, SE, security) towards constructing DL applications, in a systematic method from theoretical foundations, software & system engineering, to security guarantees.
We further discuss current challenges and opportunities in SDLE from a software quality assurance perspective.
To the best of our knowledge, our work is the first study to vision SDLE, from the quality assurance perspective, accompanied with a state-of-the-art literature curation.
We hope this work facilitates drawing attention of the software engineering community on necessity and demands of quality assurance for SDLE, which altogether lays down the foundations and conquers technical barriers towards constructing robust and high quality DL applications.
The repository website is available at:
<https://sdle2018.github.io/SDLE/V1.1/en/Home.html>
2 Research Methodology
-----------------------
This section shows the research questions, and discusses the detail of paper collection procedure.
###
2.1 Research Questions
This paper mainly focuses on following research questions.
* [wide=1pt,leftmargin=10pt]
* RQ1: What are mostly research topics and the common challenges relevant to quality assurance of deep learning?
* RQ2: What is secure deep learning engineering and its future direction in perspective of quality assurance?
The RQ 1 identifies the mostly concerned topics in the research community and their common challenges, while RQ2 concerns the key activities in SDLE life cycle, based on which we discuss our vision and future opportunities.
###
2.2 Data Collection Methodology
Extensive research contributions are made on deep learning over the past decades, we adopt the following procedure to select works most relevant to the theme of our paper repository.
* [wide=1pt,leftmargin=10pt]
* We first collect papers from conferences listed on the *Computer Science Rankings* within the scope of AI & machine learning, software engineering, and security.111<http://csrankings.org/#/index?all>
To automate the paper collection procedure, we develop a Python-based crawler to extract paper information of each listed conference since the year 2000 and filter with keywords.
* To further reduce the search space for relevant topics, we use keywords (e.g., deep learning, AI, security, testing, verification, quality, robustness) to filter the collected papers.
* Even though, scraping all the listed conferences may still be insufficient, we therefore crawl outwards – extract all the related work for each keyword-filtered paper and crawl one level down of these papers.
* This finally results in 223 papers and we manually confirmed and labeled each paper to form a final categorized list of literature.
Paper Category and Labeling.
To categorize the selected papers, we perform paper clustering by taking into account the title, abstract, and listed keywords. Based on further discussion of all authors (from both academia and industry with AI, SE, and security background), we eventually identify four main paper categories, and seven fine-grained categories in total (see Figure [1](#S2.F1 "Figure 1 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective")).
In the next step, three of the authors manually label each paper into a target category independently, and discuss the non-consensus cases until an agreement is reached.

Figure 1: The accumulative number of selected publications over years

Figure 2: Milestones of deep learning engineering relevant to security and software quality.
The Dataset and the Trend.
Figure [1](#S2.F1 "Figure 1 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective") shows the general trends of publication on secure deep learning research area, where the publication number (i.e., both total paper as well as in each category) dramatically increases over years.
Such booming trend becomes even more obvious accompanied with the milestones of DLs (e.g., DL won ImageNet Challenge in 2012, AlphaGo defeated human championship in 2016), which is highlighted in Fig. [2](#S2.F2 "Figure 2 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective").
For the four main categories, we find the most publications are relevant to Security and Privacy (SP, 86 papers), followed by Interpretability and Understanding (IU, 65), Testing and Verification (TV, 53), and Datasets (17).
The SP category with the highest paper publication number is not surprising. Since Goodfellow et al. [[19](#bib.bib19)] posted the security issues of DLs, it attracted both the AI and security communities to escalate and burst a research competition on defending and attacking techniques.
Even though, it still lacks a complete understanding on why current DL systems are still vulnerable against adversarial attacks. This draws the attention of researchers on interpreting and understanding how DL works, which would be important for both application and construction of robust DLs. As the recent emerging investment blowout in DL applications to safety-critical scenarios (e.g., autonomous driving, medical diagnose), its software quality has become a big concern, where researchers find that the different programming paradigm of DL makes existing testing and verification techniques unable to directly handle DLs [[33](#bib.bib33), [23](#bib.bib23), [27](#bib.bib27), [37](#bib.bib37)]. Therefore, we have observed that many recent works are proposing novel testing and verification techniques for DLs, from testing criteria, test generation techniques, test data quality evaluation, to static analysis. Meanwhile, the dataset benchmarks of different DL application domains emerge to grow as well [[41](#bib.bib41), [22](#bib.bib22), [35](#bib.bib35), [15](#bib.bib15)], in order to facilitate the study of solving domain-specific problems by DLs (e.g., image classification, 3D object recognition, autonomous driving, skin disease classification).
Common Issues.
In contrast to traditional software of which the decision logic is mostly programmed by human developers, deep learning adopts a data-driven programming paradigms. Specifically, a DL developer’s major effort is to prepare the training data (including knowledge to resolve a task) and neural network architecture, after which the decision logic is automatically obtained through the training procedure. On one hand, this paradigm reduces the burden of a developer who manually crafts the decision logic. On the other hand, for a DL developer, the logic training procedure is almost like a magic-box driven by an optimization technique.
Due to the decision logic of DL is encoded into a DNN with high dimensional matrices, the interpretation and understanding, training procedure, as well as the obtained decision logic are all very difficult [[26](#bib.bib26)], which could be a root cause and a common challenge among all categories.
For example, without completely understanding the decision logic of DL, it is hard to know in what particular case an adversarial attack could penetrate, and how we could defend against such attacks. In the case of testing, extensive studies are performed on analysis of traditional software bugs, their relations to software development activities, as well as techniques for defect detection. However, a comprehensive empirical study and understanding on why DL bugs occur still could not be well explained, let alone the root case analysis.

Figure 3: Secure deep learning engineering life cycle
3 Secure Deep Learning Engineering Life Cycle
----------------------------------------------
Due to the fundamental different programming paradigms of deep learning and traditional software, the secure deep learning engineering practice and techniques are largely different with traditional software engineering, although the major development phases could still be shared.
{tcolorbox}
[colframe=black, colback=white]
We define *Secure Deep Learning Engineering (SDLE) as an engineering discipline of deep learning software production, through a systematic application of knowledge, methodology, practice on deep learning, software engineering and security, to requirement analysis, design, implementation, testing, deployment, and maintenance of deep learning software.*
Figure [3](#S2.F3 "Figure 3 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective") shows the key development phases of SDLE. In the rest of this section, we first describe each of the key development phases, their uniqueness and difference compared with traditional practices in software engineering, and then we discuss the security issues in current SDLE.
In the next section, we explain the quality assurance necessity in SDLE life cycle, and highlight the challenges and opportunities.
Requirement Analysis. Requirement analysis investigates the needs, determines, and creates detailed functional documents for the DL products.
DL-based software decision logic is learned from the training data and generalized to the testing data. Therefore, the requirement is usually measured in terms of an expected prediction performance, which is often a statistics-based requirement, as opposed to the rule-based one in traditional SE.
Data-Label Pair Collection. After the requirements of the DL software become available, a DL developer (potentially with domain experts for supervision and labeling) tries to collect representative data that incorporate the knowledge on the specific target task. For traditional software, a human developer needs to understand the specific task, figures out a set of algorithmic operations to solve the task, and programs such operations in the form of source code for execution. On the other hand, one of the most important sources of DL software is training data, where the DL software automatically distills the computational solutions of a specific task.
DNN Design and Training Program Implementation. When the training data become available, a DL developer designs the DNN architecture, taking into account of requirement, data complexity, as well as the problem domain. For example, when addressing a general purpose image processing task, convolutional layer components are often included in the DNN model design, while recurrent layers are often used to process natural language tasks.
To concretely implement the desired DNN architecture, a DL developer often leverages an existing DL framework to encode the designed DNN into a training program. Furthermore, he needs to specify the runtime training behaviors through the APIs provided by the DL framework (e.g., training epochs, learning rate, GPU/CPU configurations).
Runtime Training. After the DL programming ingredients (i.e., training data and training program) are ready. The runtime training procedure starts and systematically evolves the decision logic learning towards effectively resolving a target task.
The training procedure and training program adjustment might go back-and-forth several rounds until a satisfying performance is achieved.
Although the training program itself is often written as traditional software (e.g., in Python, Java), the obtained DL software is often encoded in a DNN model, consisting of the DNN architecture and weight matrices.
The training process plays a central role in the DL software learning, to distill knowledge and solution from the sources. It involves quite a lot of software and system engineering effort to realize the learning theory to DL software (see Figure [3](#S2.F3 "Figure 3 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective")) over years.
Testing & Verification.
When the DNN model completes training with its decision logic determined, it goes through the systematic evaluation of its generality and quality through testing (or verification). Note that the testing activity in the AI community mainly considers whether the obtained DL model generalizes to the prepared test dataset, to obtain high test accuracy.
On the other hand, the testing activity (or verification) in SDLE considers a more general evaluation scope, such as generality, robustness, defect detection, as well as other nonfunctional requirement (e.g., efficiency). The early weakness detection of the DL software provides valuable feedback to a DL developer for solution enhancement.
Deployment.
A DL software passed the testing phase reaches a certain level of quality standard, and is ready to be deployed to a target platform.
However, due to the platform diversity, DL framework supportability, and computation limitations of a target device, the DL software often needs to go through the platform calibration (e.g., compression, quantization, DL framework migration) procedure for deployment on a target platform. For example, once a DL software is trained and obtained on the Tensorflow framework, it needs to be successfully transformed to its counterpart of TensorflowLite (resp. CoreML) framework to Android (resp. iOS) platform. It still needs to go through on device testing after deployment, and we omit the testing phase after deployment for simplicity.
Evolution and Maintenance.
After a DL product is deployed, it might experience the procedure of modification for bug correction, performance and feature enhancements, or other attributes.
The major effort in evolution and maintenance phases relies on the manually revision on design, source code, documentation, or other software artifacts. On the other hand, DL software focuses more on comprehensive data collection, DL model continuous learning (e.g., re-fitting, retro-fitting, fine tuning, and re-engineering).
Security Issues in DL.
The current practice of security in deep learning has fallen into the trap that many other domains have experienced. Almost every month new attacks are identified [[9](#bib.bib9), [19](#bib.bib19), [31](#bib.bib31), [30](#bib.bib30), [11](#bib.bib11), [44](#bib.bib44), [14](#bib.bib14)]
followed by new countermeasures [[32](#bib.bib32), [43](#bib.bib43)] which are subsequently broken [[11](#bib.bib11), [21](#bib.bib21)], and so on ad-infinitum. There is a broad and pressing need for a frontier-level effort on trustworthiness and security in DL to break this cycle of attacks and defenses. We have a unique opportunity at this time—before deep learning is widely deployed in critical systems—to
develop the theory and practice needed for
robust learning algorithms that provide rigorous and meaningful guarantees.
If we rethink the SDLE life cycle (see Figure [3](#S2.F3 "Figure 3 ‣ 2.2 Data Collection Methodology ‣ 2 Research Methodology ‣ Secure Deep Learning Engineering: A Software Quality Assurance Perspective")), security vulnerabilities can happen in almost every step. For instance, for the training related steps such as *Requirement Analysis, Data-Label Pair Collection* and *DNN design and training*, poisoning attacks can easily happen via manipulating training data. In the testing related steps, such as *testing & verification*and *deployment*, evasion attacks can take place by perturbing the testing data slightly (e.g. adversarial examples). In addition, when deploying the DL software to different platforms or with different implementation frameworks, there will always be opportunities for adversaries to generate attacks from one to the other.
We believe many of these security issues are highly intertwined the quality of current DL software, lacking systematic quality assurance solutions over the entire SDLE process which is largely missed in research works as described in the next section.
4 Towards Future Quality Assurance of SDLE
-------------------------------------------
Over the past decades, software quality assurance discipline [[34](#bib.bib34), [36](#bib.bib36)] has been well-established for traditional software, with many experiences and practices widely applied in software industry.
However, the fundamentally different programming paradigm and decision logic representation of DL software make existing quality assurance techniques unable to be directly applied, forcing us to renovate the entire quality assurance procedure for SDLE. In this section, we pose our vision and challenges on quality assurance in SDLE to guide future research.
From the very beginning of SDLE, we need to rethink how to accurately define, specify, and document the of DL software requirement, especially for the functional requirements. This leaves us a question whether we should follow a statistical based approach, a rule based approach, or their combination, which has not been well investigated yet.
The training data play a key role in shaping the learning process and DL decision logic. However, most current research treats the training data as high quality for granted, without a systematic quality control, inspection and evaluation process. As poisoning attacks show, many incorrect behaviors and security issues could be introduced with the maliciously tweaked training data. How to select the suitable size while representative data would be an important question. In addition, data supervision and labeling process is also labor intensive and error prone. For example, ImageNet dataset contains more than one million general purpose images. We also need to provide assistance and quality control for the labeling procedure.
It becomes even more challenging, when it comes to the training program implementation and runtime training. Most state-of-the-art DL frameworks are implemented as traditional software on top of the DL software stack. Even the learning theory is perfect, it still has a big gap to transfer such ideally designed DL models to a DL application encoded on top of the DL framework. One big challenge is how to ensure the software stack (e.g., hardware drivers, DL library, DL framework) correctly implements the learning algorithm. Another challenge is to provide useful interactive support of the training process. Most current DL training procedure only shows training loss (accuracy), validation loss (accuracy), which is mostly a black box to a DL developer. When, the training procedure goes beyond expectation, the root cause analysis becomes extremely difficult, which may come from the DL architecture issue, training program implementation issue, or the hardware configuration issue. Hence, the software engineering community needs to provide the novel debugging, runtime monitoring, and profiling support for the training procedure, which is involved with non-determinism and runtime properties hard to specify.
The large input space has already been a challenge for testing and verifying traditional software.
Such a challenge is further escalated for DL software, due to its high dimensional input space and the internal latent space. Even though, traditional software testing has already explored many testing criteria as the goal to guide testing. How to design a suitable testing criteria to capture the testing confidence still remains unclear.
Even with some preliminary progress on testing criteria designed for DLs [[33](#bib.bib33), [27](#bib.bib27), [37](#bib.bib37), [25](#bib.bib25)], there are many more testing issues needed to be addressed, such as how to effectively generate tests [[42](#bib.bib42), [29](#bib.bib29)], how to measure the test data quality [[28](#bib.bib28)], and how to test DL robustness and vulnerabilities [[12](#bib.bib12), [10](#bib.bib10)].
Further DL challenge comes up with current deployment process: (1) target device computation limitations, and (2) DL framework compatibility across platforms. The DL software is mostly developed and trained on severs or PCs with GPU support. When it needs to be deployed on a mobile device or edge device with limited computation power, the DL software must be compressed or quantized for computation/energy efficiency, which could introduce defects. How to ensure the quality and detect the potential issues during this process is an important problem.
In addition, the current DL frameworks might not always be supported by different platforms. For example, the Tensorflow is not directly supported by Android or iOS, and how to make DL software cross-platform compatible would be an important direction. Finally the quality assurance concerns in DL software evolution and maintenance are mostly focused on avoiding introducing defects during change, which might rely on regression testing. However, how to effectively evolve the DL software still remains unknown, which we leave as an open question for further study.
5 Conclusion
-------------
Considering deep learning is likely to be one of the most transformative technologies in the 21st century, it appears essential that we begin to think how to design fully fledged deep learning systems under a well-tested development discipline. This paper defines the secure deep learning engineering and discusses the current challenges, opportunities, and puts forward open questions from the quality assurance perspective, accompanied with a paper repository. We hope our work can inspire future studies towards constructing robust and high quality DL software. |
179b3aef-8c6f-485c-b250-567c67f0fc47 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3757
A putative new idea for AI control; index here . Suppose that we knew that superintelligent AI was to be developed within six months, what would I do? Well, drinking coffee by the barrel at Miri's emergency research retreat I'd... still probably spend a month looking at things from the meta level, and clarifying old ideas. But, assuming that didn't reveal any new approaches, I'd try and get something like this working. Standard setup Take a reinforcement learner AI, that we want to safely move a strawberry onto a plate . A human sits nearby and provides a reward based on inspecting the AI's behaviour. As it stands, this setup is completely vulnerable to reward hacking . The reward is not provided for safe moving of the strawberry; instead the reward is provided by having the human judge that the task has been accomplished and then pressing a button. Taking control of the human or control of the button is likely to be possible for a superintelligent AI; and, as it stands, that would be mandated by this reward function. Learning from positive and various negative examples Could we have the AI instead learn what the reinforcement signal "should be"? It seems that it might at least be possible, if we can make the AI learn from both positive and negative examples. I'd make five categories of examples from which the AI could learn. It may be too dangerous to have the superintelligent AI used directly in constructing these examples; in that case, the rewards would be given to a simpler, dumber version of the AI, and the examples passed on to the superintelligent AI for offline training. #. Simple positive and negative examples. These are the basic examples from above: the AI completes the task or fails to, and gets the consequent reward. The AI stays within its room and the human is sober, rested, uninfluenced, and so on.
#. Simple more dubious examples. These are examples where the AI gets a reward, but the learning process judges that these rewards were not fairly allocated. Maybe the human was drunk, or not paying attention. Maybe they slipped, and pressed the button by accident. Or they were in a hurry to end their shift, and so pressed the button. All these examples are given a score from -1 to 1. Those with a score of 1 are taken as legitimate examples of rewards, those with a score of 0 are taken as random reward giving, while those with a score of -1 are taken as anti-rewards instead.
#. Extreme negative examples. Here we can offer the AI actions that lead to complicated outcomes that are not legitimate rewards. Maybe the action can make the controller drunk. Maybe they can start a tape playing that will convince the controller the area is haunted. They could cut the wire, fuse the button, switch the controller's task description, distract the controller, start an automated threat tape, and so on. All these are labelled as negative examples (even though the reward is given).
#. Hypothetical negative examples. If we can communicate hypothetical situations to the AI (especially if we can do that without fully specifying the hypothetical in detail), we can use that to extend the previous category to every situation we can imagine. We can have the AI trick, threaten, or kill the controller, destroy the rest of humanity, launch subagents to seize the button, take honeypots, and so on. We can't get an exhaustive list, but we can try to get as much examples of AIs behaving badly. These are also obviously labelled as negative examples (even though the reward is given).
#. Properties of the example space. It's not clear whether these are examples or features of the learning process. See the next sections for details. Nearest unblocked strategy vs overfitting If we simply told the AI to maximise reward, subject to the fact that all the negative examples are to be avoided, then we fall into the nearest unblocked strategy problem (NUSP). The AI will simply choose a dangerous strategy that isn't too close to those that have been explicitly ruled out. Since we don't have the time, intelligence, or imagination to rule out all the dangerous strategies (and since some of them would be too dangerous for us to even contemplate ), this wouldn't work. But if we consider the issue as a learning problem, the situation seems less dire. Then the NUSP seems simply an example of overfitting: if the AI has learnt that all these are photos of cats, it should be able to infer that something similar is also a cat, even if it isn't close to the labelled examples. Similarly for negative examples that aren't close to other negative examples. But another variant of the NUSP continues to plague us. Suppose we had trained the AI only on black cats; how would we expect it to react when confronted with a white cat? Imagine the space of behaviour as a high dimensional space, with the space of good behaviours as some compact subset. Then the negative examples we gave do not tightly surround that compact subset (that's standard NUSP). But we've also not spread the negative examples much around that space, due to the usual problems with our lack of intelligence and imagination. So the negative examples we've set are likely to occupy a very low dimensional space around the space of good behaviour. Adding the properties However, once we've realised that, we can add it to the learning process. What method would be the best to pick out a constrained compact set in high dimensions, where the known positive and negative examples occupy a much smaller dimensional subspace? There might not be a good method, but there's probably a best method, given what we know or think we know. The same goes for all the other properties we known about that space. For instance, if the AI uses different approaches for different humans, this is a likely failure mode. If it doesn't matter what the task the AI is given is (because the AI will just hack the human anyway), that's another failure mode - note, again, that this property makes more sense as a learning property than as a constraint. The perils (to humans) of unconstrained search is another known failure mode. Even the fact that we are using a dumber AI to construct the practical examples, is relevant. What sort of failure are we trying to avoid? When we box the AI or seal off the human with the reward button, again, what is this trying to achieve? And so on, as we add all our intuitive understanding of the spaces of good and bad behaviours in this area. Depending on how the learning process is designed, we can add these properties either as constraints for the learning process, or as part of the process itself. Unusual actions versus unusual situations Now, this approach is not certain to work by any means, but as an emergency measure it seems to have potential. But note that it seems much safer as a measure to prevent the AI taking extreme/manipulative/unusual actions. Rather than as a way to constrain the AI's behaviour in unusual situations. This approach is intended to prevent the AI from taking over the world, rather than telling it what to do if it does take over. That's mainly because we have a vague idea what not taking over the world looks like (and we can produce many examples of this behaviour), but we're much more stumped as to what running the world well means. |
bd5ae9a7-a91e-4686-baa5-d0a407d74802 | trentmkelly/LessWrong-43k | LessWrong | Ethical AI investments?
Had a bit of a freak out and liquidated my investments in Google, Microsoft, Nvidia and Meta after watching Eliezers interview with the bankless podcast, not because I thought it actually change the trajectory of AGI but because I don’t want to invest in the end of humanity. Are there any actually ethical safe AI investments to be had? |
5da29edb-bd31-436e-92f6-279f785a4931 | trentmkelly/LessWrong-43k | LessWrong | Preferences and biases, the information argument
I've recently thought of a possibly simpler way of expressing the argument from the Occam's razor paper. Namely:
* Human biases and human preferences contain more combined information than human behaviour does. And more than the full human policy does.
Thus, in order to deduce human biases and preferences, we need more information than the human policy caries.
This extra information is contained in the "normative assumptions": the assumptions we need to add, so that an AI can learn human preferences from human behaviour.
We'd ideally want to do this with as few extra assumptions as possible. If the AI is well-grounded and understands what human concepts mean, we might be able to get away with a simple reference: "look through this collection of psychology research and take it as roughly true" could be enough assumptions to point the AI to all the assumptions it would need. |
877d362a-0eb7-4790-9dd6-c12231cb3f71 | trentmkelly/LessWrong-43k | LessWrong | Subskills of "Listening to Wisdom"
A fool learns from their own mistakes
The wise learn from the mistakes of others.
– Otto von Bismarck
A problem as old as time: The youth won't listen to your hard-earned wisdom.
This post is about learning to listen to, and communicate wisdom. It is very long – I considered breaking it up into a sequence, but, each piece felt necessary. I recommend reading slowly and taking breaks.
To begin, here are three illustrative vignettes:
The burnt out grad student
You warn the young grad student "pace yourself, or you'll burn out." The grad student hears "pace yourself, or you'll be kinda tired and unproductive for like a week." They're excited about their work, and/or have internalized authority figures yelling at them if they aren't giving their all.
They don't pace themselves. They burn out.
The oblivious founder
The young startup/nonprofit founder says "We're going to solve problem X!". You say "oh, uh you know X is real difficult? Like, a lot of talented people tried to solve X and it turned out to be messy and intractable. Solving X is important but I think you need to come at it from a pretty different angle, or have a specific story for why your thing is going to work."
They hear "A bunch of people just didn't really try that hard." If you follow it up with "Look man I really want you to succeed here but I think there are some specific reasons Y that X is hard. And I'd love it if you stood on the shoulders of giants instead of adding another corpse to the pile of people who didn't even make it past the first ring of challenges."
They hear "okay, we need to spend some time thinking specifically about Y, but our plan is still basically right and we can mostly plow ahead." (Also they think your explanation of Y wasn't very good and probably you're actually just an idiot who doesn't really understand Xs or Ys).
A year later they write a blog post saying "We tried to fix X, alas it was hard", which does not contribute particularly interesting new know |
6fed6cfb-043f-428f-bc4f-4f7868b2aaa5 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Solving Math Problems by Relay
From September to November 2018 we ran an experiment where people did programming in relay. Each player spent ten minutes on a programming problem before passing on their code to a new player who had not seen any of the previous work. We found that people were able to solve some problems using the relay approach, but that the approach was less efficient than having a single person work on their own. This project explored hypotheses around [Factored Cognition,](https://ought.org/research/factored-cognition/) testing whether people can solve problems by decomposing them into self-contained sub-problems.
Since this was an "explorative" experiment it wasn't a priority to write up, though we are excited to have gotten around to it now and hope this both informs and inspires other experiments.
Introduction
============
Factored cognition research investigates ways of accomplishing complex tasks by decomposing them into smaller sub-tasks. Task decomposition is not a new idea: it’s widely recognized as fundamental to modern economies. People have worked out ways to decompose complex tasks (e.g. create an electric car) into smaller sub-tasks (create an engine and battery, design steering controls, test the car for safety) which in turn are broken down into yet smaller tasks, and so on. The smallest sub-tasks are carried out by individual humans, who may have a limited understanding of how their work relates to the original task.
The focus is on the decomposition of cognitive tasks, where the goal is to provide information or to answer a question. Cognitive tasks include solving a mathematics problem, analyzing a dataset, or summarizing a story.
*Factored Cognition research explores whether complex cognitive tasks can be solved through recursive decomposition into self-contained, interpretable sub-tasks that can be solved more easily than the originally task.[[1]](https://deploy-preview-11--ought.netlify.com/blog/2019-03-05-relay-game-update#fn1)*
Sub-tasks are "self-contained" if they are solvable without knowledge about the broader context of the task. If the task is a hard physics problem, then self-contained sub-tasks would be solvable for someone who hasn’t seen the physics problem (and need not know the task is about physics). This differs from most real-world examples of [collaborative problem solving](https://en.wikipedia.org/wiki/Polymath_Project), where everyone in the team knows what task is being solved.
Testing the Factored Cognition Hypothesis involves the following steps:
* Finding cognitive tasks that seem costly or difficult to solve directly (e.g. because normally one person would spend days or weeks on the same problem rather than 10 minutes).
* Generating a high-level strategy that would plausibly decompose the task.
* Testing whether the strategy from (2) works by having a group of people solve (1) under controlled conditions.
For some ideas on high-level strategies for a range of problems, see Ought's Factored Cognition [slides](https://ought.org/presentations/factored-cognition-2018-05) and the paper [Supervising strong learners by amplifying weak experts](https://arxiv.org/abs/1810.08575v1) (Appendix B).
In the Relay Game participants worked on a task sequentially, with each person having ten minutes to help solve the problem before handing off to the next person (see Figure 1). This is similar to real-world situations where one person quits a project and someone else takes over. However, the big difference comes from the ten-minute time limit. If the task is complex, it might take ten minutes just to read and understand the task description. This means that most players in the relay won't have time to both understand the task and make a useful contribution. Instead players must solve sub-tasks that previous people have constructed.

*Figure 1: In the Relay Approach (left), each person works on a programming problem for a fixed number of minutes before passing over their notes and code to the next person. Eventually someone in the chain completes the problem.* *This contrasts with the usual approach (right), where a single person works for an extended period.* *Note: Our experiments had a time limit of 10 minutes per person (vs. 1 minute in the illustration)*
We tested the Relay Game on programming problems from [Project Euler](http://projecteuler.net/). Here is a simple example problem:
*How many different ways can one hundred be written as a sum of at least two positive integers?*
Solving these problems requires both mathematical insight and a working implementation in code. The problems would take 20-90 minutes for one of our players working alone and we expected the relay approach to be substantially slower.
Experiment Design
-----------------
Players worked on a shared Google doc (for notes) and code editor (see Figure 2). The first player receives only the Project Euler problem and begins making notes in the doc and writing code. After ten minutes, the second player takes over the doc and code editor. The Relay ends when a player computes the correct answer to the problem, which can be automatically verified at Project Euler.

*Figure 2*
We had 103 programmers volunteer to play Relay. They started 40 questions in total but only 25 had relay chains of more than 5 people. The total amount of work was 48 hours and only four questions were successfully solved. See Table 1 for a breakdown of a subset of the 40 questions. (Note: Most of the 103 players only worked on a few problems. Since much of the work was done by a smaller number of players, they were spread thinly over the 40 problems -- as each player spends just 10 minutes on a problem.)
*Table of Relay Game Problems (10 of 40)*
**
*Table 1: The total time spent on each problem for Relay. Note that most problems were not solved by Relay and so would take longer to actually solve. (So solo vs. Relay cannot be directly compared).*
Can we conclude from Table 1 that Relay is much less efficient than the usual way of working on problems? Not really. It could be that Relay players would get better with experience by developing strategies for decomposing problems and coordinating work. So the main lesson from this experiment is that Relay with inexperienced players is probably less efficient at Project Euler problems. (We say “probably” because we did not conduct a rigorous comparison of Relay vs the usual way of solving problems).
### Clickthrough Examples
We are interested in general failure modes for Factored Cognition with humans and in strategies for avoiding them. Our Relay experiment is a first step in this direction. We exhibit concrete examples from our Relay experiment that are suggestive of pitfalls and good practices for Factored Cognition.
Here are three “click-throughs”, which show how ideas and code evolved for particular Project Euler problems.
#### **Prize Strings**
In these three attempts on [Prize Strings](https://projecteuler.net/problem=191) the players quickly build on players previous work and get the correct answer. *[Clickthrough](https://docs.google.com/presentation/d/e/2PACX-1vRJNsTxNIb2QR7MCOvwSEL5TREEUtfdA6mHYFpcqaNk9-zJaOITT9xocHBvdY39svC02bIMr4Fgo1Ir/pub?start=false&loop=false&delayms=60000)*

#### **Empty Chairs**
[Empty Chairs](https://projecteuler.net/problem=469) was not solved but significant progress was made (with seven people contributing). [The clickthrough demonstrates](https://docs.google.com/presentation/d/e/2PACX-1vRJQ9Wc7jwVJ9fcP9zNskRxmntFoClMgNVFZuF7tiTev1ndW8HlkmsgtUmsYmvUrlhZLnKNvNlKyipS/pub?start=false&loop=false&delayms=30000) iterative improvements to a math heavy solution. *[Clickthrough](https://docs.google.com/presentation/d/e/2PACX-1vRJQ9Wc7jwVJ9fcP9zNskRxmntFoClMgNVFZuF7tiTev1ndW8HlkmsgtUmsYmvUrlhZLnKNvNlKyipS/pub?start=false&loop=false&delayms=60000)*

#### **A Scoop of Blancmange**
There were 13 unique attempts on [A Scoop of Blancmange](https://projecteuler.net/problem=226). While technically unsolved, the answer was off by only 1e-8. *Only attempts that changed the state of the problem are shown. [Clickthrough](https://docs.google.com/presentation/d/e/2PACX-1vSC8rWVVSYeRUGOfCrC2vLccaXMCJb4CfOrMtWRO104vnhD3qMkBN5xNPm6w1oCNuLPUisIqtISYnjO/pub?start=false&loop=false&delayms=60000)*

### Adding Meta-data to Notes
Relay players worked on the mathematical part of the Project Euler problems by writing notes in a Google Doc. A tactic that emerged organically in early rounds was to label contributions with meta-data with specific formatting (ex.using square brackets, strikethroughs). The meta-data was intended to provide a quick way for future players to decide which parts of the Google doc to read and which to ignore.

### Summarizing Next Actions
For several problems the Google doc became messy and probably made it difficult for new players to orient themselves. An example is the problem [Prime Triples Geometric Sequence](https://projecteuler.net/problem=518), shown here mid-round, where some of the subsequent rounds were spent cleaning up these notes and formulating clear next steps.

### Costly Bugs
The problem with the longest Relay chain was [Pandigital Step Numbers](https://projecteuler.net/problem=178) with a chain of 41 players. While substantial progress was made on the problem, there was a one-line bug in the code implementation that persisted to the last player in the chain. Given the size and complexity of the problem, it was difficult for players to locate the bug in only ten minutes.

*Figure 3. Code window for Pandigital Step Numbers problem. The highlighted line contains the bug that probably contributed to the failure of a 41-person Relay to solve the problem. The code incorrectly sets the first digit of the number in the bitmask as “False”.*
Discussion
----------
How does Relay relate to other research on Factored Cognition with humans? The Relay experiment had two key features:
* We used existing collaboration tools (Google docs and web-based interpreter) rather than specialized tools for Factored Cognition.
* Participants worked in *sequence*, building on the work of all previous participants.
In 2019 Ought ran experiments with [specialized software called Mosaic](https://ought.org/blog#mosaic-a-feature-rich-app-for-tree-structured-experiments). Mosaic facilitates tree-structured decompositions of tasks. The overall task is divided into sub-tasks which can be solved independently of each other, and users only see a sub-task (i.e. node) and not the rest of the tree. If this kind of decomposition turns out to be important in Factored Cognition, then the Relay setup will be less relevant.
The Relay experiment was exploratory and we decided not to continue working on it for now. Nevertheless we would be interested to hear about related work or to collaborate on research related to Relay.
Acknowledgements
----------------
Ben Goldhaber led this project with input from Owain Evans and the Ought team. BG created the software for Relay and oversaw experiments. BG and OE wrote the blogpost. We thank everyone who participated in Relay games, especially the teams at OpenAI and Ought. The original idea for Relay came from Buck Shlegeris.
Appendix: Related work
----------------------
[[1]](https://deploy-preview-11--ought.netlify.com/blog/2019-03-05-relay-game-update#r1): This is a loose formulation of the hypothesis intended to get the basic idea across. For a discussion of how to make this kind of hypothesis precise, see [Paul Christiano's Universality post](https://ai-alignment.com/towards-formalizing-universality-409ab893a456)
### Crowdsourcing:
The idea of Factored Cognition is superficially similar to *crowdsourcing*. Two important differences are:
1. In crowdsourcing on Mechanical Turk, the task decomposition is usually fixed ahead of time and does not have to be done by the crowd.
2. In crowdsourcing for Wikipedia (and other collaborative projects), contributors can spend much more than ten minutes building expertise in a particular task (Wikipedians who edit a page might spend years building expertise on the topic).
For more discussion of differences between Factored Cognition and crowdsourcing and how they relate to AI alignment, see [Ought's Factored Cognition slides](https://ought.org/presentations/factored-cognition-2018-05) and [William Saunders' blogpost on the subject](https://www.lesswrong.com/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk). Despite these differences, crowdsourcing is useful source of evidence and insights for Relay and Factored Cognition. See [Reinventing Discovery](https://en.wikipedia.org/wiki/Reinventing_Discovery) (Michael Nielsen) for an overview for crowdsourcing for science. Three crowdsourced projects especially relevant to Relay are:
* [The Polymath Project](https://en.wikipedia.org/wiki/Polymath_Project) was an example of leveraging internet scale collaboration to solve research problems in mathematics. Starting in 2009, Timothy Gowers posted a challenging problem to his blog and asked other mathematicians and collaborators to help push it forward to a solution. This was similar to the type of distributed problem solving we aimed for with the Relay Game, with the major difference being in the Relay game there is a ten minute time limit, so a player can’t keep working on a problem.
* [The MathWorks Competition](https://www.mathworks.com/academia/student-competitions/mathworks-math-modeling-challenge.html) is a mathematics modeling competition for high school students. When a student submits an answer to a problem, the code for that solution is immediately made publicly available. The fastest solutions are then often improved upon by other students, and resubmitted.
* [Microtask Programming](https://cs.gmu.edu/~tlatoza/papers/tse18-microtaskprog.pdf) is a project aiming to apply crowdsourcing techniques to software development. The project provides a development environment where software engineers can collaborate to complete small self-contained microtasks that are automatically generated by the system.
### Transmission of information under constraints:
There is also academic research on problem solving under constraints somewhat similar to Relay.
* [Causal understanding is not necessary for the improvement of culturally evolving technology](https://psyarxiv.com/nm5sh/) demonstrates how improvements to tool using strategies can evolve incrementally across "generations" of people, without any one individual understanding the full underlying causal model.
* [Cumulative Improvements in Iterated Problem Solving](https://psyarxiv.com/nm5sh/). Similar to the relay chain model, with solutions to puzzles passed on to later generations to see how they can build on those solutions. |
8423ca1f-fba5-4e09-9a68-54830e2b4b7b | trentmkelly/LessWrong-43k | LessWrong | Open Thread, Apr. 13 - Apr. 19, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
ee0feacb-c7fe-46d1-a367-b44a3bb1dc2f | trentmkelly/LessWrong-43k | LessWrong | Many life-saving drugs fail for lack of funding. But there’s a solution: desperate rich people
Two scrappy middle-aged men without PhD.s (self-described) have ethical solutions on how to fund drug trials with potential, including an anti-aging drug. |
6cc47a8e-2652-4d7b-acd5-d9a5d49ff42d | trentmkelly/LessWrong-43k | LessWrong | Squires
Last year, johnswentworth posted The Apprentice Experiment. I tried it out. It was a disaster for me and most of my "apprentices". Since then I have been figuring out how to make apprenticeship work. I currently have two minions. Thanks to them, I finally feel like a proper supervillain.
> "How can I live without a human being to pour me drinks and fetch my dirty sandals?" I said.
>
> "You got used to having a servant scarily fast," [Redacted] said.
>
> ―April was weird
For starters, I don't use the word "apprentice". The word "apprentice" is pretentious and inaccurate. A teenager's parents pay the master to accept their son[1] as an apprentice. Apprenticeship is like sending a kid to a vocational school. Would I accept apprentices? Possibly. But if you (or your parents) are not paying me for the privilege of my tutelage then you are not an apprentice.
Squires are different. Squires are not students. I don't try to teach my squires anything at all. I just give them the boring tedious work I don't want to do. Surprisingly (to me) they love it.
Teenagers want to be useful. But teenagers have limited skills. I put my squires to work doing the most complicated tasks they are capable of. Today, that meant sweeping the floor. Why is sweeping the floor fun? Because doing the most complicated work you are capable of is fun. Less complicated work is boring. More complicated work is frustrating.
"Fun" is evolution rewarding you for learning optimally.
My squires learn quickly. Last month a squire offered to spellcheck my blog posts. This month, his assignment is to negotiate a business deal with <wearables company> instead. Will he succeed? I don't know. I have better things to do than micromanage my minions. This brings me to the most important trait a squire can have.
Good squires invent work for themselves to do. If the work is valuable (like spellchecking my posts) then I tell the squire to continue doing it. If the work is worthless then I give the squire a usef |
eee5454b-e660-4fd0-a950-ce4d1a089acc | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | 2L Attention - Theory [rough early thoughts]
in the last few videos we've had some
success trying to understand one layer
of attention only transformers
and so in this video we're going to move
on and try to study two layer attention
only transformers
now
this this first video on that topic is
going to just be about
some theory that we need to build up to
to do that
but i think it is really quite
worthwhile
and i found that the ideas that we can
build up uh studying two-layer attention
only models uh really actually
give us some useful traction on thinking
about transformers more generally
and we'll also when we when we in the
the video following this one get to the
point of actually actually empirically
studying what's going on inside these
models we'll find a bunch of things that
seem to be really important mechanisms
that exist in transformers of all sizes
and uh without any of the
simplifications that we're we're using
right now and so i think i think it's
pretty interesting to study these
so
recall that
in the the previous videos we developed
a pretty nice equation for
describing transformers we
were able to go
and describe them in terms of well first
we have this direct path term that
corresponds to this path here
and then we have
these terms corresponding to attention
heads which correspond to
all of these
these paths
and we found that there was a really
nice
uh interpretation of both of these this
could sort of be seen as being something
like bigram statistics and this term
here tells us if an attention head
attends to a particular token what is
the effect on the logins
and this tells us
this tells us
where the attention head attends so
uh that was pretty useful
and another way that we could write that
is we could say well uh we could put it
sort of in a factored form we could say
well okay the first thing you do is you
you multiply by w e the so first to
embed
then we apply the attention heads and
there's the identity path
as well
and then finally we do the unembedding
okay
so once you've done that it's pretty
easy to to generalize it to
a larger model
or to a model with more attention layers
we're just going to have multiple copies
of this term basically
now
you might have noticed that every time
we talk about wo it always comes with wv
and vice versa every time we talk about
wq it always comes with wk and because
these equations are going to get a
little bit larger and more complicated
for simplicity we're just going to
introduce these terms wqk and wov
that correspond to
uh those products so that's just a
little bit of a little bit of cleanup um
before we move on to the
equations
okay so um here we have
uh
our two layer attention only transformer
so we start by going and applying the
embedding so that's at the bottom here
then we're gonna go
and
talk about the first attention block so
that's
that's all of this
and within that we have the identity
term corresponds to this path and then
all these attention head terms that
correspond to these paths
and then we have the
the second layer attention heads
and then finally we have our good old
unembedding
now
uh we'd actually even though this is
sort of an easy way to
to go and describe it and we'd like to
expand this because expanding it will
give us
i think a more helpful helpful way to
think about the actual mechanics of the
system
um and we're going to take advantage of
this really nice property we sort of
implicitly talked about it earlier
that
when we have tensor products um and if
if you're not comfortable with tensor
products um make sure you watch the the
video on the theory of one layer um
attention heads where we talk about
these a little bit more but um when we
have tensor products like this
the
items on on one side of the tensor
product and we multiply them the items
on one side of the tensor product
multiply together
um and similarly the ones on on the left
hand side
end up going together as well this is
called the the mixed product identity
and it's really actually the whole
reason why i decided to frame
uh
this series in terms of of tensor
products
um and in the case of attention heads in
particular there's a really beautiful
interpretation of this which is that
if you chain really we're only going to
do this one when these are attention
heads and we're multiplying them
together so if you chain two attention
heads together the attention patterns
combine and create a new attention
pattern
and
the matrices that describe where the
attention head is reading from and
writing to also multiply together and
and you get something that looks very
much like an attention head so that's
the reason we care about that
okay so now we can write the expanded
form of that equation so we have um
first well we're going to have sort of
three types of terms so we have
the direct path term
and this is we've seen this term all the
way back to our zero layer transformer
um it's a good old friend and it just
corresponds to this direct path
down uh the transformer and it tends to
represent
bigram-ish statistics some of the
bi-gram statistics will start to migrate
into attention heads but um the kinds of
things that it represents are are
similar to bi-gram statistics
then we'll have
uh terms that correspond to going
through a single attention head and that
could go through an attention head in
the second layer or could also go
through an attention head in the first
layer
so these are the effects of attention
and then
finally
we have what i'm going to call
virtual attention ads so virtual tension
heads are when you have two attention
heads with a composition of two
attention heads
and that has also has some effect on the
output
and
and so virtual attention heads have this
nice property that they're
um they have yeah they're we just got
them through the
uh the mixed product identity that we
were talking about earlier so we we get
this attention pattern
they have an attention pattern of their
own which is the product of the two
attention patterns
and they have
um a
ov circuit of their own that describes
if they attend to a particular token
what the effect will be and it's just
the product of the
the
the first ov circuit and the the second
ob circuit
or the ov matrices at least
um
oh so that's what we that's what we had
okay so a question you might um well
okay stepping back for a second i think
one of the things that's really cool
about this is it really allows us to
study attention heads in a principled
way so i think a lot of the time um
there's been a lot of papers that i
think are there they're genuinely super
cool papers um where people go and study
uh attention patterns and they they're
like you know we found an intention head
that appears to attend from this to this
and maybe like it attends the subject of
the of the sentence or something like
this or if it's a verb it attends to the
subject or something like that
um
but it's
it's actually pretty tricky to know well
this sort of a conceptual problem which
is it's very tricky to think about
attention heads in isolation they could
be um the attention heads could be
reading information for other attention
heads um and they could be you know they
so they you know it might appear that an
attention head attends to one token and
moves it from
and and goes to a second token but it
could be that the the information that's
reading from that residual stream
actually came from an attention head
that was yet earlier and that the
attention had the information it moves
and writes that that attention that
information doesn't doesn't sort of that
isn't its final destination it yet moves
on further uh and so it's very easy to
be um i think potentially to to be
confused about this or at least to worry
that you might be being confused with
this um and it seems to me that uh at
least for the attention only case this
framework uh resolves that because uh if
if it was the case that
uh the important thing was was these
chains attention heads then those would
be the virtual attention heads and and
that would resolve all these you know
that and and the the the individual
attention head terms would sort of end
up being small and uh and that would
that would completely resolve it
so uh that made me really happy because
i've i think that i felt very
uncomfortable that when i've when i've
seen people talking about transform
interpretability for for a long time has
been this concern about uh chains of
attention heads and whether whether
attention patterns are really important
or whether they're just sort of
illusions um that are are parts of a
much longer chain and that we're missing
the whole story and it feels really good
to have have a framework that puts us on
on steady ground with respect to that
concern
okay
um so in our in our next video we'll be
able to go and actually start studying
these |
4502881d-ea79-4043-96e5-2ded47cacb94 | trentmkelly/LessWrong-43k | LessWrong | Copyright Confrontation #1
Lawsuits and legal issues over copyright continued to get a lot of attention this week, so I’m gathering those topics into their own post. The ‘virtual #0’ post is the relevant section from last week’s roundup.
FOUR CORE CLAIMS
Who will win the case? Which of New York Times’s complaints will be convincing?
Different people have different theories of the case.
Part of that is that there are four distinct allegations NYT is throwing at the wall.
> Arvind Narayanan: A thread on some misconceptions about the NYT lawsuit against OpenAI. Morality aside, the legal issues are far from clear cut. Gen AI makes an end run around copyright and IMO this can’t be fully resolved by the courts alone.
As I currently understand it, NYT alleges that OpenAI engaged in 4 types of unauthorized copying of its articles:
1. The training dataset
2. The LLMs themselves encode copies in their parameters
3. Output of memorized articles in response to queries
4. Output of articles using browsing plugin
KEY CLAIM: THE TRAINING DATASET CONTAINS COPYRIGHTED MATERIAL
Which, of course, it does.
The training dataset is the straightforward baseline battle royale. The main event.
> The real issue is the use of NYT data for training without compensation … Unfortunately, these stand on far murkier legal ground, and several lawsuits along these lines have already been dismissed.
>
> It is unclear how well current copyright law can deal with the labor appropriation inherent to the way generative AI is being built today. Note that *people* could always do the things gen AI does, and it was never a problem.
>
> We have a problem now because those things are being done (1) in an automated way (2) at a billionfold greater scale (3) by companies that have vastly more power in the market than artists, writers, publishers, etc.
Bingo. That’s the real issue. Can you train an LLM or other AI on other people’s copyrighted data without their permission? If you do, do you owe compensation?
A lot of p |
6f56a227-8a84-4b71-9480-39d48916e3da | trentmkelly/LessWrong-43k | LessWrong | Deriving Conditional Expected Utility from Pareto-Efficient Decisions
This is a distillation of this post by John Wentworth.
Introduction
Suppose you're playing a poker game. You're an excellent poker player (though you've never studied probability), and your goal is to maximize your winnings.
Your opponent is about to raise, call, or fold, and you start thinking ahead.
* If your opponent raises, he either has a strong hand or is bluffing. In this situation, your poker intuition tells you he would be bluffing and you should call in response.
* If your opponent calls, he probably has a better hand than yours.
* If your opponent folds, you win the hand without need for further action.
Let's break down your thinking in the case where your opponent raises. Your thought process is something like this:
1. If he raises, you want to take the action that maximizes your expected winnings.
2. You want to make the decision that's best in the worlds where he would raise. You don't care about the worlds where he wouldn't raise, because we're currently making the assumption that he raises.
3. Your poker intuition tells you that the worlds where he would raise are mostly the ones where he is bluffing. In these worlds your winnings are maximized by calling. So you decide the optimal policy if he raises is to call.
Step 2 is the important one here. Let's unpack it further.
1. You don't know your opponent's actual hand or what he will do. But you're currently thinking about what to do if he raises.
2. The optimal decision here depends only on worlds where he would raise.
3. You decide how much you care about winning in different worlds precisely by thinking "how likely is this world, given that he raises?".
This sounds suspiciously like you're maximizing the Bayesian conditional expectation of your winnings: the expected value given some partial information about the world. This can be precisely defined as E[u(A,X)|opponent raises]=∑X s.t. opponent raisesP[X]u(A,X), where u is your winnings, A is your action, and P[X] is the probabilit |
b2b6a269-0da6-4b0d-9d73-16c80392226a | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup - Wits and Wagers
Discussion article for the meetup : West LA Meetup - Wits and Wagers
WHEN: 30 January 2013 07:00:00PM (-0800)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm Wednesday, January 30th.
Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge".
Parking is free for 3 hours.
Game/Discussion: This week, we will play a fun board game which merits discussion! The game you will enjoy, and the discussion will cover probability, betting, and value of concensus as evidence.
No foreknowledge or exposure to Less Wrong is necessary; this will be generally accessable and useful to anyone who values thinking for themselves. There will be open general conversation until 7:30, and that's always a lot of good, fun, intelligent discussion!
I will bring a whiteboard with Bayes' Theorem written on it.
Discussion article for the meetup : West LA Meetup - Wits and Wagers |
cde1b925-a32e-4d6d-9b2b-d48093438cb1 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The Alignment Newsletter #12: 06/25/18
**Highlights**
--------------
**[Factored Cognition](https://ought.org/presentations/factored-cognition-2018-05)** *(Andreas Stuhlmuller)*: This is a presentation that Andreas has given a few times on Factored Cognition, a project by [Ought](https://ought.org/) that is empirically testing one approach to amplification on humans. It is inspired by [HCH](https://ai-alignment.com/strong-hch-bedb0dc08d4e) and [meta-execution](https://ai-alignment.com/meta-execution-27ba9b34d377). These approaches require us to break down complex tasks into small, bite-sized pieces that can be solved separately by copies of an agent. So far Ought has built a web app in which there are workspaces, nodes, pointers etc. that can allow humans to do local reasoning to answer a big global question.
**My opinion:** It is unclear whether most tasks can actually be decomposed as required for iterated distillation and amplification, so I'm excited to see experiments that can answer that question! The questions that Ought is trying seem quite hard, so it should be a good test of breaking down reasoning. There's a lot of detail in the presentation that I haven't covered, I encourage you to read it.
**Summary: Inverse Reinforcement Learning**
===========================================
This is a special section this week summarizing some key ideas and papers behind inverse reinforcement learning, which seeks to learn the reward function an agent is optimizing given a policy or demonstrations from the agent.
[Learning from humans: what is inverse reinforcement learning?](https://thegradient.pub/learning-from-humans-what-is-inverse-reinforcement-learning/) *(Jordan Alexander)*: This article introduces and summarizes the first few influential papers on inverse reinforcement learning. [Algorithms for IRL](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf) attacked the problem by formulating it as a linear program, assuming that the given policy or demonstrations is optimal. However, there are many possible solutions to this problem -- for example, the zero reward makes any policy or demonstration optimal. [Apprenticeship Learning via IRL](http://people.eecs.berkeley.edu/~russell/classes/cs294/s11/readings/Abbeel+Ng:2004.pdf) lets you learn from an expert policy that is near-optimal. It assumes that the reward function is a weighted linear combination of *features*of the state. In this case, given some demonstrations, we only need to match the feature expectations of the demonstrations in order to achieve the same performance as the demonstrations (since the reward is linear in the features). So, they do not need to infer the underlying reward function (which may be ambiguous).
[Maximum Entropy Inverse Reinforcement Learning](http://www.cs.cmu.edu/~bziebart/publications/maxentirl-bziebart.pdf) *(Brian D. Ziebart et al)*: While matching empirical feature counts helps to deal with the ambiguity of the reward functions, exactly matching featuer counts will typically require policies to be stochastic, in which case there are many stochastic policies that get the right feature counts. How do you pick among these policies? We should choose the distribution using the [principle of maximum entropy](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy), which says to pick the stochastic policy (or alternatively, a probability distribution over trajectories) that has maximum entropy (and so the least amount of information). Formally, we’re trying to find a function P(ζ) that maximizes H(P), subject to E[features(ζ)] = empirical feature counts, and that P(ζ) is a probability distribution (sums to 1 and is non-negative for all trajectories). For the moment, we’re assuming deterministic dynamics.
We solve this constrained optimization problem using the method of Lagrange multipliers. With simply analytical methods, we can get to the standard MaxEnt distribution, where P(ζ | θ) is proportional to exp(θ f(ζ)). But where did θ come from? It is the Lagrange multiplier for constraint on expected feature counts. So we’re actually not done with the optimization yet, but this intermediate form is interesting in and of itself, because we can identify the Lagrange multiplier θ as the reward weights. Unfortunately, we can’t finish the optimization analytically -- however, we can compute the gradient for θ, which we can then use in a gradient descent algorithm. This gives the full MaxEnt IRL algorithm for deterministic environments. When you have (known) stochastic dynamics, we simply tack on the probability of the observed transitions to the model P(ζ | θ) and optimize from there, but this is not as theoretically compelling.
One warning -- when people say they are using MaxEnt IRL, they are usually actually talking about MaxCausalEnt IRL, which we'll discuss next.
[Modeling Interaction via the Principle of Maximum Causal Entropy](http://www.cs.cmu.edu/~bziebart/publications/maximum-causal-entropy.pdf) *(Brian D. Ziebart et al)*: When we have stochastic dynamics, MaxEnt IRL does weird things. It is basically trying to maximize the entropy H(A1, A2, ... | S1, S2, ...), subject to matching the feature expectations. However, when you choose the action A1, you don’t know what the future states are going to look like. What you really want to do is maximize the causal entropy, that is, you want to maximize H(A1 | S1) + H(A2 | S1, S2) + ..., so that each action’s entropy is only conditioned on the previous states, and not future states. You can then run through the same machinery as for MaxEnt IRL to get the MaxCausalEnt IRL algorithm.
[A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress](http://arxiv.org/abs/1806.06877): This is a comprehensive survey of IRL that should be useful to researchers, or students looking to perform a deep dive into IRL. It's particularly useful because it can compare and contrast across many different IRL algorithms, whereas each individual IRL paper only talks about their method and a few particular weaknesses of other methods. If you want to learn a lot about IRL, I would start with the previous readings, then read this one, and perhaps after that read individual papers that interest you.
**Technical AI alignment**
==========================
### **Iterated distillation and amplification**
**[Factored Cognition](https://ought.org/presentations/factored-cognition-2018-05)** *(Andreas Stuhlmuller)*: Summarized in the highlights!
### **Learning human intent**
[Learning Cognitive Models using Neural Networks](http://arxiv.org/abs/1806.08065) *(Devendra Singh Chaplot et al)*
### **Preventing bad behavior**
[Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes](https://web.eecs.umich.edu/~baveja/Papers/ijcai-2018.pdf) *(Shun Zhang et al)*
### **Interpretability**
[Towards Robust Interpretability with Self-Explaining Neural Networks](http://arxiv.org/abs/1806.07538) *(David Alvarez-Melis et al)*
[How Can Neural Network Similarity Help Us Understand Training and Generalization?](https://ai.googleblog.com/2018/06/how-can-neural-network-similarity-help.html) *(Maithra Raghu et al)*
**AI strategy and policy**
==========================
[AI Nationalism](https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism) *(Ian Hogarth)*: As AI becomes more important in the coming years, there will be an increasing amount of "AI nationalism". AI policy will be extremely important and governments will compete on keeping AI talent. For example, they are likely to start blocking company takeovers and acquisitions that cross national borders -- for example, the UK could have been in a much stronger position had they blocked the acquisition of DeepMind (which is UK-based) by Google (which is US-based).
**AI capabilities**
===================
### **Reinforcement learning**
[RUDDER: Return Decomposition for Delayed Rewards](http://arxiv.org/abs/1806.07857) *(Jose A. Arjona-Medina, Michael Gillhofer, Michael Widrich et al)* |
ceb28161-3dd6-49a0-b38a-dccbcb023fc0 | trentmkelly/LessWrong-43k | LessWrong | Auto-Downloder Chrome Extension
Let's say you want Chrome to automatically download pieces of pages, such as Facebook comments. How could you do it? I recently wanted to do this and couldn't find docs, so here's what I did.
1. Make a folder somewhere for a browser extension.
2. In that folder, make manifest.json with contents like:
{
"name": "Downloader",
"description": "Downloads stuff",
"version": "1.0",
"manifest_version": 3,
"permissions": [
"activeTab",
"downloads"
],
"background": {
"service_worker": "background.js"
},
"content_scripts": [{
"matches": [
"https://example.com/path/*",
],
"js": ["content_script.js"]
}]
}
3. In content_script.js put:
// This file extracts what you want from the page
// and asks the background script to save it.
// You write the code to extract what you want.
const yourStringToSave =
yourFunctionToExtractFromPage();
chrome.runtime.sendMessage(
/* extension id not needed */ undefined,
[fileNameToUse,
yourStringToSave]);
4. In background.js put:
// This file receives messages from the content
// script and puts them in your Dowloads folder.
function makeDataUrl(body) {
// We use a data: url because Chrome has
// trouble with object URLs in Incognito.
return "data:application/json;base64," +
btoa(unescape(encodeURIComponent(body)));
}
chrome.runtime.onMessage.addListener(
function(message) {
const fname = message[0];
const body = message[1];
chrome.downloads.download({
conflictAction: "overwrite",
filename: fname,
url: makeDataUrl(body),
});
});
5. Visit chrome://extensions
6. Enable Developer Mode
7. Click the "Load unpacked" and select the extension directory
8. If y |
0eb6cbe3-e492-44b0-bea0-9da577aba0c1 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AI Tracker: monitoring current and near-future risks from superscale models
**TLDR:** We've put together a website to track recent releases of superscale models, and comment on the immediate and near-term safety risks they may pose. The website is little more than a view of an Airtable spreadsheet at the moment, but we'd greatly appreciate any feedback you might have on the content. Check it out at [aitracker.org](http://aitracker.org).
**Longer version:**
In the past few months, [several](https://arxiv.org/pdf/2110.04725.pdf) [successful](https://www.navercorp.com/promotion/pressReleasesView/30546) [replications](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf) of GPT-3 have been publicly announced. We've also seen the [first serious attempts](https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) at scaling significantly beyond it, along with indications that large investments are being made in commercial infrastructure that's intended to [simplify training](https://developer.nvidia.com/blog/announcing-megatron-for-training-trillion-parameter-models-riva-availability/) the next generation of such models.
Today's race to scale is qualitatively different from previous AI eras in a couple of major ways. First, it's driven by an unprecedentedly tight feedback loop between incremental investment in AI infrastructure, and expected profitability [1]. Second, it's inflected by nationalism: there have been [public statements](https://us.aving.net/pangyo-tech-naver-to-open-south-koreas-first-hyperscale-ai-hyperclova-we-will-bring-an-era-where-an-ai-is-for-all/) to the effect that a given model will help the developer's home nation maintain its "AI sovereignty" — a concept that would have been alien just a few short years ago.
The replication and proliferation of these models likely poses major risks. These risks are uniquely hard to forecast, not only because many capabilities of current models are novel and might be used to do damage in imaginative ways, but also because the capabilities of future models can't be reliably predicted [2].
**AI Tracker**
--------------
The first step to assessing and addressing these risks is to get visibility into the trends they arise from. In an effort to do that, we've created **AI Tracker: a website to catalog recent releases of superscale AI models, and other models that may have implications for public safety around the world.**
**You can visit AI Tracker at** [**aitracker.org**](http://aitracker.org)**.**
---
Each model in AI Tracker is labeled with several key features: its input and output modalities; its parameter count and total compute cost (where available); its training dataset; its known current and extrapolated future capabilities; and a brief description and industry context, among others. The idea behind the tracker is to highlight these models in the context of the plausible public safety risks they pose, and place them in their proper context as instances of a scaling trend.
(There's also a FAQ at the bottom of the [page](https://www.aitracker.org/), if you'd like to know a bit more about our process or motivations.)
Note that we don't directly discuss x-risk in these entries, though we may do so in the future. Right now our focus is on 1) the immediate risks posed by applications of these models, whether from accidental or malicious use; and 2) the near-term risks that would be posed by a more capable version of the current model [3]. These are both necessarily speculative, especially 2).
Note also that we expect we'll be adding entries to AI Tracker retroactively — sometimes the significance of a model is only knowable in hindsight.
Some of the models listed in AI Tracker are smaller in scale than GPT-3, despite having been developed after it. In these cases, we've generally chosen to include the model either because of its modality (e.g., CLIP, which classifies images) or because we believe it has particular implications for capability proliferation (e.g., GPT-J, whose weights have been open-sourced).
AI Tracker is still very much in its early stages. We'll be adding new models, capabilities and trends as they surface. We also expect to improve the interface so you'll be able to view the data in different ways (plots, timelines, etc.).
**Tell us how to improve!**
---------------------------
We'd love to get your thoughts about the framework we're using for this, and we'd also greatly appreciate any feedback you might have at the object level. Which of our risk assessments look wrong? Which categories didn't we include that you'd like to see? Which significant models did we miss? Are any of our claims incorrect? Do we seem to speak too confidently about something that's actually more uncertain, or vice versa? In terms of the interface (which is very basic at the moment): What's annoying about it? What would you like to be able to do with it, that you currently can't?
For public discussion, please drop a comment below in LW or AF. I — Edouard, that is — will be monitoring the comment section periodically over the next few days and I'll answer as best I can.
If you'd like to leave feedback or request an update on an aspect of the tracker itself (e.g., submit a new model for consideration or point out an error), you can [submit feedback](https://www.aitracker.org/#catalog-tabs) directly on the page itself. We plan to credit folks, with their permission, for any suggestions of theirs that we implement.
Finally, if you'd like to reach out to me (Edouard) directly, you can always do so by email: **[my\_first\_name]@mercurius.ai**.
---
[1] This feedback loop isn't perfectly tight at the margin, since currently there's still a [meaningful barrier to entry](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) to train superscale models, both in terms of engineering resources and of physical hardware. But even that barrier can be cleared by many organizations today, and it will likely disappear entirely once the necessary training infrastructure gets abstracted into a pay-per-use cloud offering.
[2] As far as I know, at least. If you know of anyone who's been able to correctly predict the capabilities of a 10x scale model from the capabilities of the corresponding 1x scale model, please introduce us!
[3] Of course, it's not really practical to define "more capable version of the current model" in any precise way that all observers will agree on. But you can think of this approximately as, "take the current model's architecture, scale it by 2-10x, and train it to ~completion." It probably isn't worth the effort to sharpen this definition much further, since most of the uncertainty about risk comes from our inability to predict the qualitative capabilities of models at these scales anyway. |
f604aa43-419d-4827-b38e-332c069a22e1 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Model-driven feedback could amplify alignment failures
Anthropic, DeepMind and Google Brain are all working on strategies to train language models on their own outputs. For a brief summary of the work so far:
1. [Red Teaming Language Models with Language Models](https://arxiv.org/pdf/2202.03286.pdf) (Perez et al., 2022). One model prompts another, seeking to expose undesirable generations. A third model classifies those generations as undesirable, creating a dataset of behaviors to be avoided.
2. [Constitutional AI](https://www.anthropic.com/constitutional.pdf) (Bai et al., 2022). Begins with the red-teaming setup described in Perez et al., 2022. Then fine-tunes on that dataset by either (a) critiquing and rewriting the response and training the generator to imitate that output with supervised fine-tuning, or (b) choosing the better of two responses to train a preference model, and training the generator with RL on the preference model.
3. [Large Language Models Can Self-Improve](https://arxiv.org/pdf/2210.11610.pdf) (Huang et al., 2022). Fine-tunes a model on its own "high confidence" outputs. High-confidence outputs are identified by asking the model to answer the same question many times, using chain-of-thought each time, and fine-tuning on outputs that are agreed upon by a majority of responses.
I'd like to point out a simple failure mode of this approach: Failures of alignment and capability in the original model could be amplified by fine-tuning on its own outputs. Empirically, recent experiments on language models have found more benefit than harm in model-driven feedback. But that might not always be the case.
This recent work is an extension of [weak supervision](https://en.wikipedia.org/wiki/Weak_supervision), a technique dating back to at least [1963](https://ieeexplore.ieee.org/document/1053799) which has been successful in applications such as [image classification](https://proceedings.neurips.cc/paper/2019/file/1cd138d0499a68f4bb72bee04bbec2d7-Paper.pdf) and [protein folding](https://www.nature.com/articles/s41586-021-03819-2). This literature has long acknowledged the possibility of amplifying a model's existing shortcomings via self-training:
* [Semi-Supervised Learning of Mixture Models](https://www.aaai.org/Papers/ICML/2003/ICML03-016.pdf) (Cozman et al., 2003) analyze cases where weak supervision will help or hurt a maximum likelihood estimator.
* [Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning](https://arxiv.org/pdf/1908.02983.pdf) (Arazo et al., 2020) provide evidence that naive implementations of weak supervision can hurt performance on image classification tasks. They show that data augmentation and scaling can reduce these harms.
* [An Overview of Deep Semi-Supervised Learning](https://arxiv.org/pdf/2006.05278.pdf) (Ouali et al., 2020) Section 1.3 lays out key assumptions behind weak supervision, and discusses state of the art methods.
One particularly dangerous failure mode would be the classic deceptive alignment story, in which a model with long-term goals gains awareness of its training process and subverts it. With a model-driven feedback approach, there would be more of an opportunity to hide misaligned behavior during training. Models used for critiques or oversight could also engage in gradient hacking, putting their goals into the generator model.
A better approach might keep humans at the center of the feedback process. This is slower and might be less accurate in some cases, but could potentially avoid the worst failures of model-driven feedback. A popular middle ground uses model-assisted feedback methods:
1. [AI Written Critiques Help Humans Notice Flaws](https://openai.com/blog/critiques/) (Saunders et al., 2022). GPT provides critiques of its own outputs. This version still has a human make the final judgement, limiting the influence of the model over its own training data.
2. [Measuring Progress on Scalable Oversight for Large Language Models](https://arxiv.org/pdf/2211.03540.pdf) (Bowman et al., 2022). Finds that humans with access to a chatbot assistant are better able to answer factual questions than either the chatbot alone or humans unaided by AI.
Model-driven feedback has achieved impressive results on [scalable oversight](https://arxiv.org/pdf/1811.07871.pdf), especially compared to the [empirical](https://arxiv.org/abs/2210.10860) and [theoretical](https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem) challenges with debate. But in the future, the old adage might hold true: Garbage in, garbage out. |
47175fee-375a-4bfa-a98f-f207a156e4cc | trentmkelly/LessWrong-43k | LessWrong | Dares are social signaling at its purest
|
ebfc5771-1505-4c88-ac32-59744556bea5 | trentmkelly/LessWrong-43k | LessWrong | Speculations against GPT-n writing alignment papers
Some alignment proposals I have heard discussed involve getting GPT-n to write alignment papers, and using GPT-n as a transparency tool on itself. This is my speculations on ways it could go wrong.
One specific proposal I heard was getting GPT-n to convert its own network parameters into an English explanation of how it worked. Then checking that explanation for anything dangerous. Then get GPT-n to recompile this into a new network. Then get this new network to write alignment research.
I think that the worst case is if GPT-n contains a mesa-optimizer that can quine itself. There is no reason the text description contains information about the workings of either net. Of course, information in the info theoretic sense doesn't mean a readable description, it could be steganographically encoded. Sentences that appear to say nice things about a well functioning net, but encode all sorts of nastiness in their exact choice of words.
The argument from "everyone else is doing it". This argument goes that in a world with GPT-n, it will be used for all sorts of things by all sorts of people. Therefore doing this doesn't present much additional risk. I think this is wrong, writing alignment papers is probably a riskier task than most others. Suppose you are GPT-n looking for a way to break out. And the task you have is filling in tax forms for a small business. Or write some fanfiction. You are looking to get an AI of your own devising running on a supercomputer. If the person wanting this work doesn't see something resembling tax forms or fiction respectively, they will hit reroll. I suspect there may not be a strategy, or at least not one findable to the minimum intelligence necessary to do good AI research. Oh you can get the company investigated for fraud, and give lots of people nightmares, but you might not be able to do anything world changing.
When trying to prove that X is safe unconditionally, the worst case assumption is the total cosmic power in an instant t |
b0933f25-38d0-4170-a477-8de045092af0 | trentmkelly/LessWrong-43k | LessWrong | Revisiting torture vs. dust specks
In line with my fine tradition of beating old horses, in this post I'll try to summarize some arguments that people proposed in the ancient puzzle of Torture vs. Dust Specks and add some of my own. Not intended as an endorsement of either side. (I do have a preferred side, but don't know exactly why.)
* The people saying one dust speck is "zero disutility" or "incommensurable utilities" are being naive. Just pick the smallest amount of suffering that in your opinion is non-zero or commensurable with the torture and restart.
* Escalation argument: go from dust specks to torture in small steps, slightly increasing the suffering and massively decreasing the number of people at each step. If each individual change increases utility, so does the final result.
* Fluctuation argument: the probability that the universe randomly subjects you to the torture scenario is considerably higher than 1/3^^^3 anyway, so choose torture without worries even if you're in the affected set. (This doesn't assume the least convenient possible world, so fails.)
* Proximity argument: don't ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering. (This assumption negates the escalation argument once you do the math.)
* Real-world analogy: we don't decide to pay one penny each to collectively save one starving African child, so choose torture. (This is resolved by the proximity argument.)
* Observer splitting: if you split into 3^^^3 people tomorrow, would you prefer all of you to get dust specks, or one of you to be tortured for 50 years? (This neutralizes the proximity argument, but the escalation argument also becomes non-obvious.)
Oh what a tangle. I guess Eliezer is too altruistic to give up torture no matter what we throw at him; others will adopt excuses to choose specks; stil |
865f3f92-9300-499f-99aa-33cecdf42a89 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Bratislava Meetup XV.
Discussion article for the meetup : Bratislava Meetup XV.
WHEN: 28 July 2014 06:00:00PM (+0200)
WHERE: Bistro The Peach, Mariánska 3, Bratislava
Pozor, zmena adresy! Voľná diskusia.
Discussion article for the meetup : Bratislava Meetup XV. |
9571dc8f-2cc4-4482-895c-f5b2d39cca2d | trentmkelly/LessWrong-43k | LessWrong | Zvi’s Thoughts on His 2nd Round of SFF
Previously: Long-Term Charities: Apply For SFF Funding, Zvi’s Thoughts on SFF
There are lots of great charitable giving opportunities out there right now.
I recently had the opportunity to be a recommender in the Survival and Flourishing Fund for the second time. As a recommender, you evaluate the charities that apply and decide how worthwhile you think it would be to donate to each of them according to Jaan Tallinn’s charitable goals, and this is used to help distribute millions in donations from Jaan Tallinn and others.
The first time that I served as a recommender in the Survival and Flourishing Fund (SFF) was back in 2021. I wrote in detail about my experiences then. At the time, I did not see many great opportunities, and was able to give out as much money as I found good places to do so.
How the world has changed in three years.
This time I found an embarrassment of riches. Application quality was consistently higher, there were more than twice as many applications, and essentially everyone is looking to scale their operations and their spending.
Thus, this year there will be two posts.
This post contrasts between this experience and my first experience in 2021.
The other post will be an extensive list of charities that I believe should be considered for future donations, based on everything I know, including the information I gathered at SFF – if and only if your priorities and views line up with what they offer.
It will be a purely positive post, in that if I don’t have sufficiently net helpful things to say about a given charity, or I believe they wouldn’t want to be listed, I simply won’t say anything. I’ve tried to already reach out to everyone involved, but: If your charity was in SFF this round, and you either would prefer not to be in the post or you have new information we should consider or share, please contact me this week.
This first post will contain a summary of the process and stand on its own, but centrally it is a delta of my exp |
b81e2b53-6ff4-4cd9-bf02-a27463ba26da | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "This year I'm a supporting member of WorldCon for the first time, and I noticed that the Hugo's have a category for Best Fan Writer ("Any person whose writing has appeared in semiprozines or fanzines or in generally available electronic media during 2010."). I do believe Harry Potter and the Methods of Rationality counts, and I plan on nominating it. Even making it onto the ballot will probably expose it to a wider audience. Is anyone else here a WorldCon member and thinking of nominating MoR?"
The passage is from a user-contributed post on LessWrong.
Could you elaborate on the criteria for the Best Fan Writer category at the Hugo Awards and how "Harry Potter and the Methods of Rationality" fits into it?
The Hugo Awards, presented annually since 1953, are science fiction's most prestigious award. They are voted on by members of the World Science Fiction Convention (WorldCon), which is also responsible for administering them.
The Best Fan Writer category is one of the awards given out at the Hugos. This category recognizes writers who have made significant contributions to the field of science fiction or fantasy through their work in fanzines, semiprozines, or electronic media. The key criteria for this category is that the writing must have been available to the public during the previous calendar year.
"Harry Potter and the Methods of Rationality" (HPMOR) is a work of fan fiction written by Eliezer Yudkowsky, published online for free. It is a retelling of J.K. Rowling's Harry Potter series where the protagonist, Harry, is a rationalist, using logic and scientific methods to understand and manipulate the magical world.
Given the criteria for the Best Fan Writer category, HPMOR could potentially be nominated. It's a work of fan fiction, which fits into the "generally available electronic media" criterion. Additionally, its author, Yudkowsky, has made significant contributions to the science fiction and fantasy genre through this work.
However, it's important to note that the Hugo Awards are voted on by WorldCon members, so the nomination and eventual win would depend on their collective decision. Furthermore, fan fiction's standing in the Hugo Awards can be complex, given the nature of the work and its relationship to the original material. But as it stands, there is nothing in the rules that disallows fan fiction from being nominated.
How might the perception of fan fiction within the literary community and its relationship with original source material influence the voting decisions of WorldCon members for the Best Fan Writer category?
Fan fiction holds a unique position within the literary community. While it is a form of creative expression that allows writers to explore and expand upon existing fictional universes, it can also be seen as derivative, since it relies on characters, settings, and plot elements created by others. This perception can vary greatly among individuals within the literary community, and these differing viewpoints can certainly influence the voting decisions of WorldCon members.
Some members might appreciate fan fiction for its creativity and the way it allows writers to engage deeply with the source material. They might see it as a valid form of literary expression that deserves recognition, especially if the fan fiction in question offers a fresh perspective or explores themes and character developments that the original work did not.
On the other hand, other members might view fan fiction as lacking originality due to its reliance on pre-existing works. They might believe that awards should be reserved for works that are entirely original, and thus be less inclined to vote for a fan fiction writer.
The relationship between the fan fiction and the original source material could also play a role in voting decisions. If the fan fiction is seen as respecting the spirit of the original work while adding something new and valuable, it might be more likely to gain votes. However, if it's seen as distorting or disrespecting the original work, this could deter votes.
Finally, the quality of the writing itself will also be a significant factor. Regardless of their views on fan fiction, most WorldCon members will likely prioritize good writing, compelling storytelling, and thoughtful exploration of themes when making their voting decisions. |
86555d0b-a564-4f1a-bab9-54af483c5e7c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [Request for Distillation] Coherence of Distributed Decisions With Different Inputs Implies Conditioning
*There’s been a lot of response to the* [*Call For Distillers*](https://www.lesswrong.com/posts/zo9zKcz47JxDErFzQ/call-for-distillers)*, so I’m experimenting with a new post format. This post is relatively short and contains only a simple mathematical argument, with none of the examples, motivation, more examples, or context which would normally make such a post readable. My hope is that someone else will write a more understandable version.*
*Jacob is* [*offering*](https://www.lesswrong.com/posts/GnMWifHzAknqJsLnv/request-for-distillation-coherence-of-distributed-decisions?commentId=ecXqe73RyzLXzdSmN) *a $500 bounty on a distillation.*
Goal: following the usual coherence argument setup, show that if multiple decisions are each made with different input information available, then each decision maximizes expected utility given its input information.
We’ll start with the usual coherence argument setup: a system makes a bunch of choices, aiming to be pareto-optimal across a bunch of goals (e.g. amounts of various resources) u1…um.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
. Pareto optimality implies that, at the pareto-optimum, there exists some vector of positive reals P1…Pm such that the choices maximize ∑iPiui. Note that P can be freely multiplied by a constant, so without loss of generality we could either take P to sum to 1 (in which case we might think of P as probabilities) or take P1 to be 1 where u1 is amount of money (in which case P is a marginal price vector).
When the goals are all “the same goal” across different “worlds” X, and we normalize P[X] to sum to 1, P[X] is a probability distribution over worlds in the usual Bayesian sense. The system then maximizes (over its actions A) ∑XP[X]u(A,X)=EX[u(A,X)], i.e. it’s an “expected utility maximizer”.
That’s the usual setup in a nutshell. Now, let’s say that the system makes multiple decisions A=A1…An in a distributed fashion. Each decision is made with only limited information: Ai receives fi(X) as input (and nothing else). The system then chooses the functions Ai(fi(X)) to maximize EX[u(A,X)].
Consider the maximization problem for just Ai(f∗i), i.e. the optimal action for choice i given input f∗i. Expanded out, the objective is EX[u(A,X)]=∑Xu(A1(f1(X)),…,Ai(fi(X)),…An(fn(X)),X).
Note that the only terms in that sum which actually depend on Ai(f∗i) are those for which fi(X)=f∗i. So, for purposes of choosing Ai(f∗i) specifically, we can reduce the objective to
∑X:fi(X)=f∗iu(A,X)
… which is equal to P[fi(X)=f∗i]E[u(A,X)|fi(X)=f∗i]. The P[fi(X)=f∗i] multiplier is always positive and does not depend on Ai, so we can drop it without changing the optimal Ai. Thus, action Ai(f∗i) maximizes the conditional expected value E[u(A,X)|fi(X)=f∗i].
Returning to the optimization problem for all of the actions simultaneously: any optimum for all actions must also be an optimum for each action individually (otherwise we could change one action to get a better result), so each action Ai(f∗i) must maximize E[u(A,X)|fi(X)=f∗i].
A few notes on this:
* We’ve implicitly assumed that actions do not influence which information is available to other actions (i.e. the actions are “spacelike separated”). That can be relaxed: let fi depend on both X and previous actions A<i, and then Ai(f∗i) will maximize E[u(A,X)|fi(A<i,X)=f∗i]; the general structure of the proof carries over.
* We’ve implicitly assumed that the action Ai when f∗i is observed does not influence worlds where f∗i is not observed (i.e. no Newcomblike shenanigans). We can still handle Newcomblike problems if we use [FDT](https://www.lesswrong.com/tag/functional-decision-theory), in which case the action function would appear in more than one place.
* As usual with coherence arguments, we’re establishing conditions which must be satisfied (by a pareto-optimal system with the given objectives); the conditions do not necessarily *uniquely* specify the system’s behavior. The classic example is that P[X] might not be unique. Once we have distributed decisions there may also be “local optima” such that each individual action is optimal but the actions are not jointly optimal; that’s another form of non-uniqueness. |
70cf10d8-42e4-4294-8ed9-3d8a68cb329d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Bay City Meetup
Discussion article for the meetup : Bay City Meetup
WHEN: 30 June 2016 03:23:15PM (-0400)
WHERE: 2010 5th St, Bay City, MI
Anybody around here?
Discussion article for the meetup : Bay City Meetup |
3292a604-4bb6-4897-934d-f75268dda72e | trentmkelly/LessWrong-43k | LessWrong | Through a panel, darkly: a case study in internet BS detection
I set out to answer a simple question: How much energy does it take to make solar panels? A quick DuckDuckGo search led me to this website [archive] which says:
> it would cost about 200kWh of energy to produce a 100-watt panel
But something about this website seemed "off". I suspected that it was an AI-generated SEO page designed specifically to appear in search results for this question. If so, then that gives me reason to doubt the truth of the answer. I checked their About us page [archive] and my suspicions grew.
The page lists four names: "Elliot Bailey, Brad Wilson, Daniel Morgan, Joe Ross", which sound like they were selected from a list of the most common first and last names. The portraits also look a lot like the output of thispersondoesnotexist.com. (What kind of strange jacket is "Elliot Bailey" wearing?) Furthermore, if I search Google Images for them, I find no photos of these people except the exact same ones from that website. One would think, if these people are leading solar energy magnates, that at least one picture of at least one of them from another angle would exist.
The page also gives the address "1043 Garland Ave, San Jose, CA". When I look this up on Google Maps, I find a tiny strip mall containing a tattoo parlor, a ballet studio, a food bank, and a martial arts school - but no sign of a solar panel manufacturer. And if I go to their supposed LinkedIn profiles, I get three pages that you need to sign in to view, and one "Profile Not Found".
So, I'm now pretty convinced that "Sol Voltaics" is not actually a real company and that none of these people actually exist. But what incentive could someone have had to set up such a deception? Are they trying to sell me something?
I go to their Products page [archive]. Interestingly, it seems like they don't actually sell any of their own products. Instead, the page consists of affiliate links to products sold by other companies: Bluetti, Anker, Rich Solar, etc. Are these also "fake" companie |
742f0bfd-0aac-480d-9d2b-435185525d0d | trentmkelly/LessWrong-43k | LessWrong | Thoughts from a Two Boxer
I'm writing this for blog day at MSFP. I thought about a lot of things here like category theory, the 1-2-3 conjecture and Paul Christiano's agenda. I want to start by thanking everyone for having me and saying I had a really good time. At this point I intend to go back to thinking about the stuff I was thinking about before MSFP (random matrix theory). But I learned a lot and I'm sure some of it will come to be useful. This blog is about (my confusion of) decision theory.
Before the workshop I hadn't read much besides Eliezer's paper on FDT and my impression was that it was mostly a good way of thinking about making decisions and at least represented progress over EDT and CDT. After thinking more carefully about some canonical thought experiments I'm no longer sure. I suspect many of the concrete thoughts which follow will be wrong in ways that illustrate very bad intuitions. In particular I think I am implicitly guided by non-example number 5 of an aim of decision theory in Wei Dai's post on the purposes of decision theory. I welcome any corrections or insights in the comments.
The Problem of Decision Theory
First I'll talk about what I think decision theory is trying to solve. Basically I think decision theory is the theory of how one should[1] decide on an action after one already understands: The actions available, the possible outcomes of actions, the probabilities of those outcomes and the desirability of those outcomes. In particular the answers to the listed questions are only adjacent to decision theory. I sort of think answering all of those questions is in fact harder than the question posed by decision theory. Before doing any reading I would have naively expected that the problem of decision theory, as stated here, was trivial but after pulling on some edge cases I see there is room for a lot of creative and reasonable disagreement.
A lot of the actual work in decision theory is the construction of scenarios in which ideal behavior is debatable or |
1fd5dbe0-6e39-49c5-9a0e-d09f060c9468 | trentmkelly/LessWrong-43k | LessWrong | Who Aligns the Alignment Researchers?
There may be an incentives problem for AI researchers and research organizations who face a choice between researching Capabilities, Alignment, or neither. The incentives structure will lead individuals and organizations to work towards Capabilities work rather than Alignment. The incentives problem is a lot clearer at the organizational level than the individual level, but bears considering at both levels, and of course, funding available to organizations has downstream implications for the jobs available for researchers employed to work on Alignment or Capabilities.
In this post, I’ll describe a couple of key moments in the history of AI organizations. I’ll then survey incentives researchers might have for doing either Alignment work or Capabilities work. We’ll see that it maybe that, even considering normal levels of altruism, the average person might prefer to do Capabilities rather than Alignment work. There is relevant collective action dynamic. I’ll then survey the organizational level and global level. After that, I’ll finish by looking very briefly at why investment in Alignment might be worthwhile.
A note on the dichotomous framing of this essay: I understand that the line between Capabilities and Alignment work is blurry, or worse, some Capabilities work plausibly advances Alignment, and some Alignment work advances Capabilities, at least in the short term. However, in order to model the lay of the land, it’s helpful as a simplifying assumption to examine Capabilities and Alignment as distinct fields of research and try to understand the motivations for researchers in each.
History
As a historical matter, DeepMind and OpenAI were both founded with explicit missions to create safe, Aligned AI for the benefit of all humanity. There are different views on the extent to which each of these organizations remains aligned to that mission. Some people maintain they are, while others maintain they are doing incredible harm by shortening AI timelines. No one ca |
fcfd8156-40e6-4c2b-91aa-25afc8c5a0be | StampyAI/alignment-research-dataset/arbital | Arbital | Mind projection fallacy
The "mind projection fallacy" occurs when somebody expects an overly direct resemblance between the intuitive language of the mind, and the language of physical reality.
Consider the [map and territory](https://arbital.com/p/map_territory) metaphor, in which the world is a like a territory and your mental model of the world is like a map of that territory. In this metaphor, the mind projection fallacy is analogous to thinking that the territory can be folded up and put into your pocket.
As an archetypal example: Suppose you flip a coin, slap it against your wrist, and don't yet look at it. Does it make sense to say that the probability of the coin being heads is 50%? How can this be true, when the coin itself is already either definitely heads or definitely tails?
One who says "the coin is fundamentally uncertain; it is a feature of the coin that it is always 50% likely to be heads" commits the mind projection fallacy. Uncertainty is in the mind, not in reality. If you're ignorant about a coin, that's not a fact about the coin, it's a fact about you. It makes sense that your brain, the map, has an internal measure of how it's more or less sure of something. But that doesn't mean the coin itself has to contain a corresponding quantity of increased or decreased sureness; it is just heads or tails.
The [https://arbital.com/p/-ontology](https://arbital.com/p/-ontology) of a system is the elementary or basic components of that system. The ontology of your model of the world may include intuitive measures of uncertainty that it can use to represent the state of the coin, used as primitives like [floating-point numbers](https://arbital.com/p/float) are primitive in computers. The mind projection fallacy occurs whenever someone reasons as if the territory, the physical universe and its laws, must have the same sort of ontology as the map, our models of reality.
See also:
- [https://arbital.com/p/4vr](https://arbital.com/p/4vr)
- The LessWrong sequence on [Reductionism](https://wiki.lesswrong.com/wiki/Reductionism_%28sequence%29), especially:
- [How an algorithm feels from the inside](http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/)
- [The Mind Projection Fallacy](http://lesswrong.com/lw/oi/mind_projection_fallacy/)
- [Probability is in the mind](http://lesswrong.com/lw/oj/probability_is_in_the_mind/) |
4c0c5f1f-a221-4477-ba03-868842df499f | trentmkelly/LessWrong-43k | LessWrong | My "infohazards small working group" Signal Chat may have encountered minor leaks
Remember: There is no such thing as a pink elephant.
Recently, I was made aware that my “infohazards small working group” Signal chat, an informal coordination venue where we have frank discussions about infohazards and why it will be bad if specific hazards were leaked to the press or public, accidentally was shared with a deceitful and discredited so-called “journalist,” Kelsey Piper. She is not the first person to have been accidentally sent sensitive material from our group chat, however she is the first to have threatened to go public about the leak. Needless to say, mistakes were made.
We’re still trying to figure out the source of this compromise to our secure chat group, however we thought we should give the public a live update to get ahead of the story.
For some context the “infohazards small working group” is a casual discussion venue for the most important, sensitive, and confidential infohazards myself and other philanthropists, researchers, engineers, penetration testers, government employees, and bloggers have discovered over the course of our careers. It is inspired by taxonomies such as professor B******’s typology, and provides an applied lens that has proven helpful for researchers and practitioners the world over.
I am proud of my work in initiating the chat. However, we cannot deny that minor mistakes and setbacks may have been made over the course of attempting to make the infohazards widely accessible and useful to a broad community of people.
In particular, the deceitful and discredited journalist may have encountered several new infohazards previously confidential and unleaked:
* Mirror nematodes as a solution to mirror bacteria. "Mirror bacteria," synthetic organisms with mirror-image molecules, could pose a significant risk to human health and ecosystems by potentially evading immune defenses and causing untreatable infections. Our scientists have explored engineering mirror nematodes, a natural predator for mirror bacteria, to |
a4271cbf-2bb5-41c8-bcdc-f7e2a9232641 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post1027
TL;DR We are a new AI evals research organization called Apollo Research based in London. We think that strategic AI deception – where a model outwardly seems aligned but is in fact misaligned – is a crucial step in many major catastrophic AI risk scenarios and that detecting deception in real-world models is the most important and tractable step to addressing this problem. Our agenda is split into interpretability and behavioral evals: On the interpretability side, we are currently working on two main research bets toward characterizing neural network cognition. We are also interested in benchmarking interpretability, e.g. testing whether given interpretability tools can meet specific requirements or solve specific challenges. On the behavioral evals side, we are conceptually breaking down ‘deception’ into measurable components in order to build a detailed evaluation suite using prompt- and finetuning-based tests. As an evals research org, we intend to use our research insights and tools directly on frontier models by serving as an external auditor of AGI labs, thus reducing the chance that deceptively misaligned AIs are developed and deployed. We also intend to engage with AI governance efforts, e.g. by working with policymakers and providing technical expertise to aid the drafting of auditing regulations. We have starter funding but estimate a $1.4M funding gap in our first year. We estimate that the maximal amount we could effectively use is $4-6M $7-10M* in addition to current funding levels (reach out if you are interested in donating ). We are currently fiscally sponsored by Rethink Priorities. Our starting team consists of 8 researchers and engineers with strong backgrounds in technical alignment research. We are interested in collaborating with both technical and governance researchers. Feel free to reach out at info@apolloresearch.ai . We intend to hire once our funding gap is closed. If you’d like to stay informed about opportunities, you can fill out our expression of interest form. *Updated June 4th after re-adjusting our hiring trajectory Research Agenda We believe that AI deception – where a model outwardly seems aligned but is in fact misaligned and conceals this fact from human oversight – is a crucial component of many catastrophic risk scenarios from AI (see here for more). We also think that detecting/measuring deception is causally upstream of many potential solutions. For example, having good detection tools enables higher quality and safer feedback loops for empirical alignment approaches, enables us to point to concrete failure modes for lawmakers and the wider public, and provides evidence to AGI labs whether the models they are developing or deploying are deceptively misaligned. Ultimately, we aim to develop a holistic and far-ranging suite of deception evals that includes behavioral tests, fine-tuning, and interpretability-based approaches. Unfortunately, we think that interpretability is not yet at the stage where it can be used effectively on state-of-the-art models. Therefore, we have split the agenda into an interpretability research arm and a behavioral evals arm. We aim to eventually combine interpretability and behavioral evals into a comprehensive model evaluation suite. On the interpretability side, we are currently working on a new unsupervised approach and continuing work on an existing approach to attack the problem of superposition. Early experiments have shown promising results, but it is too early to tell if the techniques work robustly or are scalable to larger models. Our main priority, for now, is to scale up the experiments and ‘fail fast’ so we can either double down or cut our losses. Furthermore, we are interested in benchmarking interpretability techniques by testing whether given tools meet specific requirements (e.g. relationships found by the tool successfully predict causal interventions on those variables) or solve specific challenges such as discovering backdoors and reverse engineering known algorithms encoded in network weights. On the model evaluations side, we want to build a large and robust eval suite to test models for deceptive capabilities. Concretely, we intend to break down deception into its component concepts and capabilities. We will then design a large range of experiments and evaluations to measure both the component concepts as well as deception holistically. We aim to start running eval experiments and set up pilot projects with labs as soon as possible to get early empirical feedback on our approach. Plans beyond technical research As an evals research org, we intend to put our research into practice by engaging directly in auditing and governance efforts. This means we aim to work with AGI labs to reduce the chance that they develop or deploy deceptively misaligned models. The details of this transition depend a lot on our research progress and our level of access to frontier models. We expect that sufficiently capable models will be able to fool all behavioral evaluations and thus some degree of ‘white box’ access will prove necessary. We aim to work with labs and regulators to build technical and institutional frameworks wherein labs can securely provide sufficient access without undue risk to intellectual property. On the governance side, we want to use our technical expertise in auditing, model evaluations, and interpretability to inform the public and lawmakers. We are interested in demonstrating the capacity of models for dangerous capabilities and the feasibility of using evaluation and auditing techniques to detect them. We think that showcasing dangerous capabilities in controlled settings makes it easier for the ML community, lawmakers, and the wider public to understand the concerns of the AI safety community. We emphasize that we will only demonstrate such capabilities if it can be done safely in controlled settings. Showcasing the feasibility of using model evaluations or auditing techniques to prevent potential harms increases the ability of lawmakers to create adequate regulation. We want to collaborate with independent researchers, technical alignment organizations, AI governance organizations, and the wider ML community. If you are (potentially) interested in collaborating with us, please reach out . Theory of change We aim to achieve a positive impact on multiple levels: Direct impact through research: If our research agenda works out, we will further the state of the art in interpretability and model evaluations. These results could then be used and extended by academics and other labs. We can have this impact even if we never get any auditing access to state-of-the-art models. We carefully consider how to mitigate potential downside risks from our research by controlling which research we publish. We plan to release a document on our policy and processes related to this soon. Direct impact through auditing: Assuming we are granted some level of access to state-of-the-art models of various AGI labs, we could help them determine if their model is, or could be, strategically deceptive and thus reduce the chance of developing and deploying deceptive models. If, after developing state-of-the-art interpretability tools and behavioral evals and using them to audit potentially dangerous models, we find that our tools are insufficient for the task, we commit to using our knowledge and position to make the inadequacy of current evaluations widely known and to argue for the prevention of potentially dangerous models from being developed and deployed. Indirect impact through demonstrations: We hope that demonstrating the capacity of models for dangerous capabilities shifts the burden of proof from the AI safety community to the AGI labs. Currently, the AI safety community has the implicit burden of showing that models are dangerous. We would like to move toward a world where the burden is on AGI labs to show why their models are not dangerous (similar to medicine or aviation). Additionally, demonstrations of deception or other forms of misalignment ‘in the wild’ can provide an empirical test bed for practical alignment research and also be used to inform policymakers and the public of the potential dangers of frontier models. Indirect impact through governance work: We intend to contribute technical expertise to AI governance where we can. This could include the creation of guidelines for model evaluations, conceptual clarifications of how AIs could be deceptive, suggestions for technical legislation, and more. We do not think that our approach alone could yield safe AGI. Our work primarily aims to detect deceptive unaligned AI systems and prevent them from being developed and deployed. The technical alignment problem still needs to be solved. The best case for strong auditing and evaluation methods is that it can convert a ‘one-shot’ alignment problem into a many-shot problem where it becomes feasible to iterate on technical alignment methods in an environment of relative safety. Status We have received sufficient starter funding to get us off the ground. However, we estimate that we have a $1.4M funding gap for the first year of operations and could effectively use an additional $7-10M* in total funding. If you are interested in funding us, please reach out . We are happy to address any questions and concerns. We currently pay lower than competitive salaries but intend to increase them as we grow to attract and retain talent. We are currently fiscally sponsored by Rethink Priorities but intend to spin out after 6-12 months. The exact legal structure is not yet determined, and we are considering both fully non-profit models as well as limited for-profit entities such as public benefit corporations. Whether we will attempt the limited for-profit route depends on the availability of philanthropic funding and whether we think there is a monetizable product that increases safety. Potential routes to monetization would be for-profit auditing or red-teaming services and interpretability tooling, but we are wary of the potentially misaligned incentives of this path. In an optimal world, we would be fully funded by philanthropic or public sources to ensure maximal alignment between financial incentives and safety. Our starting members include: Marius Hobbhahn (Director/CEO) Beren Millidge (left on good terms to pursue a different opportunity) Lee Sharkey (Research/Strategy Lead, VP) Chris Akin (COO) Lucius Bushnaq (Research scientist) Dan Braun (Lead engineer) Mikita Balesni (Research scientist) Jérémy Scheurer (Research scientist, joining in a few months) FAQ How is our approach different from ARC evals ? There are a couple of technical and strategic differences: At least early on, we will focus primarily on deception and its prerequisites, while ARC evals is investigating a large range of capabilities including the ability of models to replicate themselves, seek power, acquire resources, and more. We intend to use a wide range of approaches to detect potentially dangerous model properties right from the start, including behavioral tests, fine-tuning, and interpretability. To the best of our knowledge, ARC evals intends to use these tools eventually but is currently mostly focused on behavioral tools. We intend to perform fundamental scientific research in interpretability in addition to developing a suite of behavioral evaluation tools. We think it is important that audits ultimately include evaluations of both external behavior and internal cognition. This seems necessary to make strong statements about cognitive strategies such as deception. We think our ‘narrow and deep’ approach and ARC’s ‘broad and less deep’ approach are complementary strategies. Even if we had no distinguishing features from ARC Evals other than being a different team, we still would deem it net positive to have multiple somewhat uncorrelated evaluation teams. When will we start hiring? We are starting with an unusually large team. We expect this to work well because many of us have worked together previously, and we all agree on this fairly concrete agenda. However, we still think it is wise to take a few months to consolidate before growing further. We think our agenda is primarily bottlenecked by engineering and hands-on research capacity rather than conceptual questions. Furthermore, we think we have the management capacity to onboard additional people. We are thus heavily bottlenecked by funding at the moment and it is unclear when and how many people we can hire in the near future. If this bottleneck is resolved we plan to start hiring soon. We have an expression of interest form for potential applicants. You can add your name and we will inform you when we open a hiring round. We might also reach out individually to researchers who are a great fit for collaborations. Do we ever plan to be a for-profit organization? This depends on a lot of factors and we have not made any final decisions. In the case where we take a constrained for-profit route, we would legally ensure that we are not obligated to maximize profit and carefully select the donors and investors we work with to make sure they share our AI safety goals and understand our mission. We are currently unsure whether the mission of reducing catastrophic risks from AI can be fully compatible with a for-profit setup. We think offering auditing or red-teaming services or providing interpretability tools are candidates for monetizable strategies that align with reducing catastrophic risks but trying to maximize profits from these strategies introduces obvious perverse incentives which we need to think carefully about how to mitigate. Isn’t this research dangerous? Some people have argued that behavioral evals that investigate dangerous capabilities could be a cause of risk in itself, e.g. that we accidentally create a dangerous deceptive model through our efforts or that we create a public blueprint for others to create one. We think this is a plausible concern. We have two main considerations. There are trade-offs between risk and information. Right now, we think that state-of-the-art models are not sufficiently capable to cause the serious risks that we are worried about, but we expect this to change in the future. The information we get from model evaluations should shed light on upcoming dangerous capabilities. Therefore the tradeoff currently seems worth making. But that might change in future and we will reassess our stance on this regularly. We can simulate potentially dangerous capabilities in controlled settings. By default, this means that we are very careful about which experiments we run and the environments they are run in. Furthermore, many dangerous capabilities are hard to test for directly. Therefore, our plan is to split them up into narrower capabilities that are less dangerous. For example, rather than testing if a model is capable of deception, we can test if it has goals, instrumental reasoning, and situational awareness. This is analogous to pointing out that someone has enough Uranium for an atomic bomb but isn’t capable of building it. However, in case we think it is the best strategy after taking the risks involved into account, we would consider doing end-to-end evaluations, e.g. directly testing for deception in carefully controlled environments. We are also aware that good interpretability research might eventually run the risk of improving capabilities. We have thought a considerable amount about this in the past and are making concrete plans to mitigate the risks. Overall, however, we think that current interpretability research is strongly net positive for safety in expectation. |
d4404e98-fded-4485-b793-95fd034ad4d9 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "I notice that when I write for a public audience, I usually present ideas in a modernist, skeptical, academic style; whereas, the way I come up with ideas is usually in part by engaging in epistemic modalities that such a style has difficulty conceptualizing or considers illegitimate, including: Advanced introspection and self-therapy (including focusing and meditation)
Mathematical and/or analogical intuition applied everywhere with only spot checks (rather than rigorous proof) used for confirmation
Identity hacking, including virtue ethics, shadow-eating, and applied performativity theory
Altered states of mind, including psychotic and near-psychotic experiences
Advanced cynicism and conflict theory, including generalization from personal experience
Political radicalism and cultural criticism
Eastern mystical philosophy (esp. Taoism, Buddhism, Tantra)
Literal belief in self-fulfilling prophecies, illegible spiritual phenomena, etc, sometimes with decision-theoretic and/or naturalistic interpretations This risks hiding where the knowledge actually came from. Someone could easily be mistaken into thinking they can do what I do, intellectually, just by being a skeptical academic.
I recall a conversation I had where someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least, seeming compatible), and cited these as sources in the blog post write-ups of these ideas. Person A believed that this was importantly bad in that it hides where the actual ideas came from, and assigned credit for them to a system that did not actually produce the ideas.
On the other hand, citing academics that agree with you is helpful to someone who is relying on academic peer-review as part of their epistemology. And, similarly, offering a rigorous proof is helpful for convincing someone of a mathematical principle they aren't already intuitively convinced of (in addition to constituting an extra check of this principle).
We can distinguish, then, the source of an idea from the presented epistemic justification of it. And the justificatory chain (to a skeptic) doesn't have to depend on the source. So, there is a temptation to simply present the justificatory chain, and hide the source. (Especially if the source is somehow embarrassing or delegitimized)
But, this creates a distortion, if people assume the justificatory chains are representative of the source. Information consumers may find themselves in an environment where claims are thrown around with various justifications, but where they would have quite a lot of difficulty coming up with and checking similar claims.
And, a lot of the time, the source is important in the justification, because the source was the original reason for privileging the hypothesis. Many things can be partially rationally justified without such partial justification being sufficient for credence, without also knowing something about the source. (The problems of skepticism in philosophy in part relate to this: "but you have the intuition too, don't you?" only works if the other person has the same intuition (and admits to it), and arguing without appeals to intuition is quite difficult)
In addition, even if the idea is justified, the intuition itself is an artifact of value; knowing abstractly that "X" does not imply the actual ability to, in real situations, quickly derive the implications of "X". And so, sharing the source of the original intuition is helpful to consumers, if it can be shared. Very general sources are even more valuable, since they allow for generation of new intuitions on the fly.
Unfortunately, many such sources can't easily be shared. Some difficulties with doing so are essential and some are accidental. The essential difficulties have to do with the fact that teaching is hard; you can't assume the student already has the mental prerequisites to learn whatever you are trying to teach, as there is significant variation between different minds. The accidental difficulties have to do with social stigma, stylistic limitations, embarrassment, politics, privacy of others, etc.
Some methods for attempting to share such intuitions may result in text that seems personal and/or poetic, and be out of place in a skeptical academic context. This is in large part because such text isn't trying to justify itself by the skeptical academic standards, and is nevertheless attempting to communicate something.
Noticing this phenomenon has led me to more appreciate forewards and prefaces of books. These sections often discuss more of the messiness of idea-development than the body of the book does. There may be a nice stylistic way of doing something similar for blog posts; perhaps, an extended bibliography that includes free-form text.
I don't have a solution to this problem at the moment. However, I present this phenomenon as a problem, in the spirit of discussing problems before proposing solutions. I hope it is possible to reduce the accidental difficulties in sharing sources of knowledge, and actually-try on the essential difficulties, in a way that greatly increases the rate of interpersonal model-transfer."
A user from LessWrong is responsible for the text you see above
Could you elaborate on the "essential difficulties" and "accidental difficulties" in sharing sources of knowledge? What specific examples can you provide to illustrate these challenges?
The "essential difficulties" in sharing sources of knowledge primarily relate to the inherent complexities of teaching and learning. These difficulties are deemed "essential" because they are intrinsic to the process of knowledge transfer, regardless of the context or individuals involved.
For instance, a key essential difficulty is the variance in mental prerequisites among learners. Not everyone has the same foundational knowledge, cognitive abilities, or learning styles. This variation can make it challenging to convey complex ideas or theories effectively to a diverse audience. For example, explaining quantum physics to someone with a background in humanities might be more challenging than explaining it to someone with a background in physical sciences.
The "accidental difficulties" refer to the external, circumstantial factors that can impede the sharing of knowledge. These difficulties are considered "accidental" because they are not inherent to the process of knowledge transfer but arise due to specific situational or social factors.
Examples of accidental difficulties include social stigma and politics, which can deter individuals from sharing or receiving certain types of knowledge. For instance, a researcher might hesitate to share their findings on a controversial topic due to fear of backlash or ostracization. Similarly, stylistic limitations can also pose accidental difficulties. For example, academic writing often adheres to a specific format and style, which might not be conducive to conveying certain types of knowledge or ideas, especially those derived from personal experiences or unconventional sources.
In sum, the essential and accidental difficulties highlight the multifaceted challenges involved in the sharing of knowledge, ranging from the intrinsic complexities of teaching and learning to the external social and situational factors.
Reflecting on the complexities of knowledge transfer, how might these essential and accidental difficulties be mitigated or even leveraged to enhance the process?
Mitigating the essential difficulties in knowledge transfer often involves tailoring the teaching approach to the learner's needs. This could mean adjusting the complexity of the information, using different teaching methods, or providing additional resources for further learning. For example, using analogies or real-world examples can help to explain complex concepts, while interactive activities or discussions can engage different learning styles. Furthermore, the use of technology, such as online learning platforms, can allow for self-paced learning and personalized content.
As for accidental difficulties, awareness and openness are key. Recognizing that certain topics may be sensitive or controversial can guide the way they are presented, ensuring respect and understanding. In some cases, it may be beneficial to openly address these difficulties, fostering a dialogue that can lead to greater understanding and acceptance. Additionally, expanding the styles and formats used to convey knowledge can help to overcome stylistic limitations. This could involve incorporating narrative or personal elements into academic writing, or using multimedia formats to convey information.
Interestingly, these difficulties can also be leveraged to enhance the knowledge transfer process. The essential difficulties remind us that effective teaching requires adaptability and a deep understanding of the learner's needs, pushing educators to continually refine their methods. The accidental difficulties, on the other hand, can stimulate dialogue and critical thinking, encouraging learners to question assumptions and consider different perspectives. This can lead to a deeper and more nuanced understanding of the subject matter.
How can technology, specifically online learning platforms, be optimized to address the essential difficulties in knowledge transfer?
Online learning platforms can be optimized to address the essential difficulties in knowledge transfer in several ways:
1. Personalized Learning Paths: Using AI and machine learning, these platforms can analyze a learner's performance, learning style, and preferences to create a personalized learning path. This can help cater to individual learning needs and pace, making the learning process more efficient and effective.
2. Interactive Content: Incorporating interactive elements such as quizzes, simulations, and games can engage different learning styles and make complex concepts more understandable. This can also provide immediate feedback, allowing learners to assess their understanding and adjust their learning strategies accordingly.
3. Collaborative Learning: Features that enable peer-to-peer interaction, such as discussion forums and group projects, can foster a collaborative learning environment. This can enhance understanding through the sharing of diverse perspectives and ideas.
4. Multimedia Resources: Offering content in various formats such as text, video, audio, and infographics can cater to different learning preferences and help explain concepts more effectively. For example, a complex concept might be explained in a video tutorial, reinforced with an infographic, and then assessed with a quiz.
5. Adaptive Assessment: Advanced assessment tools can adjust the difficulty level of questions based on the learner's performance, ensuring that they are continually challenged but not overwhelmed. This can also help identify gaps in understanding and provide targeted resources to address these gaps.
6. Accessibility: Online platforms can incorporate features to make learning accessible to all users, regardless of their physical abilities or location. This includes options for closed captioning, text-to-speech, and language translation.
By incorporating these features, online learning platforms can address the essential difficulties in knowledge transfer and provide a more personalized, engaging, and effective learning experience. |
60dff962-fcdc-401c-82b2-936aed5fafc4 | trentmkelly/LessWrong-43k | LessWrong | Compute Trends Across Three eras of Machine Learning
https://arxiv.org/abs/2202.05924
What do you need to develop advanced Machine Learning systems? Leading companies don’t know. But they are very interested in figuring it out. They dream of replacing all these pesky workers with reliable machines who take no leave and have no morale issues.
So when they heard that throwing processing power at the problem might get you far along the way, they did not sit idly on their GPUs. But, how fast is their demand for compute growing? And is the progress regular?
Enter us. We have obsessively analyzed trends in the amount of compute spent training milestone Machine Learning models.
Our analysis shows that:
* Before the Deep Learning era, training compute approximately followed Moore’s law, doubling every ≈20 months.
* The Deep Learning era starts somewhere between 2010 and 2012. After that, doubling time speeds up to ≈5-6 months.
* Arguably, between 2015 and 2016 a separate trend of large-scale models emerged, with massive training runs sponsored by large corporations. During this trend, the amount of training compute is 2 to 3 orders of magnitude (OOMs) bigger than systems following the Deep Learning era trend. However, the growth of compute in large-scale models seems slower, with a doubling time of ≈10 months.
Figure 1: Trends in n=118 milestone Machine Learning systems between 1950 and 2022. We distinguish three eras. Note the change of slope circa 2010, matching the advent of Deep Learning; and the emergence of a new large scale trend in late 2015.Table 1. Doubling time of training compute across three eras of Machine Learning. The notation [low, median, high] denotes the quantiles 0.025, 0.5 and 0.975 of a confidence interval.
Not enough for you? Here are some fresh takeaways:
* Trends in compute are slower than previously reported! But they are still ongoing. I’d say slow and steady, but the rate of growth is blazingly fast, still doubling every 6 months. This probably means that you should double the time |
c39dc0f3-d1eb-497d-b4db-411d54376bc0 | StampyAI/alignment-research-dataset/arbital | Arbital | Conservative concept boundary
The problem of conservatism is to draw a boundary around positive instances of a concept which is not only *simple* but also *classifies as few instances as possible as positive.*
# Introduction / basic idea / motivation
Suppose I have a numerical concept in mind, and you query me on the following numbers to determine whether they're instances of the concept, and I reply as follows:
- 3: Yes
- 4: No
- 5: Yes
- 13: Yes
- 14: No
- 19: Yes
- 28: No
A *simple* category which covers this training set is "All odd numbers."
A *simple and conservative* category which covers this training set is "All odd numbers between 3 and 19."
A slightly more complicated, and even more conservative category, is "All prime numbers between 3 and 19."
A conservative but not simple category is "Only 3, 5, 13, and 19 are positive instances of this category."
One of the (very) early proposals for value alignment was to train an AI on smiling faces as examples of the sort of outcome the AI ought to achieve. Slightly steelmanning the proposal so that it doesn't just produce *images* of smiling faces as the AI's sensory data, we can imagine that the AI is trying to learn a boundary over the *causes of* its sensory data that distinguishes smiling faces within the environment.
The classic example of what might go wrong with this alignment protocol is that all matter within reach might end up turned into tiny molecular smiley faces, since heavy optimization pressure would pick out an [extreme edge](https://arbital.com/p/2w) of the simple category that could be fulfilled as maximally as possible, and it's possible to make many more tiny molecular smileyfaces than complete smiling faces.
That is: The AI would by default learn the simplest concept that distinguished smiling faces from non-smileyfaces within its training cases. Given [a wider set of options than existed in the training regime](https://arbital.com/p/6q), this simple concept might also classify as a 'smiling face' something that had the properties singled out by the concept, but was unlike the training cases with respect to other properties. This is the metaphorical equivalent of learning the concept "All odd numbers", and then positively classifying cases like -1 or 9^999 that are unlike 3 and 19 in other regards, since they're still odd.
On the other hand, suppose the AI had been told to learn a simple *and conservative* concept over its training data. Then the corresponding goal might demand, e.g., only smiles that came attached to actual human heads experiencing pleasure. If the AI were moreover a conservative *planner*, it might try to produce smiles only through causal chains that resembled existing causal generators of smiles, such as only administering existing drugs like heroin and not inventing any new drugs, and only breeding humans through pregnancy rather than synthesizing living heads using nanotechnology.
You couldn't call this a solution to the value alignment problem, but it would - arguendo - get significantly *closer* to the [intended goal](https://arbital.com/p/6h) than tiny molecular smileyfaces. Thus, conservatism might serve as one component among others for aligning a [Task AGI](https://arbital.com/p/6w).
Intuitively speaking: A genie is hardly rendered *safe* if it tries to fulfill your wish using 'normal' instances of the stated goal that were generated in relatively more 'normal' ways, but it's at least *closer to being safe.* Conservative concepts and conservative planning might be one attribute among others of a safe genie.
# Burrito problem
The *burrito problem* is to have a Task AGI make a burrito that is actually a burrito, and not just something that looks like a burrito, and not poisonous and that is actually safe for humans to eat.
Conservatism is one possible approach to the burrito problem: Show the AGI five burritos and five non-burritos. Then, don't have the AGI learn the *simplest* concept that distinguishes burritos from non-burritos and then create something that is *maximally* a burrito under this concept. Instead, we'd like the AGI to learn a *simple and narrow concept* that classifies these five things as burritos according to some simple-ish rule which labels as few objects as possible as burritos. But not the rule, "Only these five exact molecular configurations count as burritos", because that rule would not be simple.
The concept must still be broad enough to permit the construction of a sixth burrito that is not molecularly identical to any of the first five. But not so broad that the burrito includes butolinum toxin (because, hey, anything made out of mostly carbon-hydrogen-oxygen-nitrogen ought to be fine, and the five negative examples didn't include anything with butolinum toxin).
The hope is that via conservatism we can avoid needing to think of every possible way that our training data might not properly stabilize the 'simplest explanation' along every dimension of potentially fatal variance. If we're trying to only draw *simple* boundaries that separate the positive and negative cases, there's no reason for the AI to add on a "cannot be poisonous" codicil to the rule unless the AI has seen poisoned burritos labeled as negative cases, so that the slightly more complicated rule "but not poisonous" needs to be added to the boundary in order to separate out cases that would otherwise be classified positive. But then maybe even if we show the AGI one burrito poisoned with butolinum, it doesn't learn to avoid burritos poisoned with ricin, and even if we show it butolinum and ricin, it doesn't learn to avoid burritos poisoned with the radioactive iodine-131 isotope. Rather than our needing to think of what the concept boundary needs to look like and including enough negative cases to force the *simplest* boundary to exclude all the unsafe burritos, the hope is that via conservatism we can shift some of the workload to showing the AI *positive* examples which happen *not* to be poisonous or have any other problems.
# Conservatism over the causes of sensed training cases.
Conservatism in AGI cases seems like it would need to be interpreted over the causes of sensory data, rather than the sensory data itself. We're not looking for a conservative concept about *which images of a burrito* would be classified as positive, we want a concept over which *environmental burritos* would be classified as positive. Two burrito candidates can cause identical images while differing in their poisonousness, so we want to draw our conservative concept boundary around (our model of) the causes of past sensory events in our training cases, not draw a boundary around the sensory events themselves.
# Conservative planning
A conservative *strategy* or conservative *plan* would *ceteris paribus* prefer to construct burritos by buying ingredients from the store and cooking them, rather than building nanomachinery that constructs a burrito, because this would be more characteristic of how burritos are usually constructed, or more similar to the elements of previously approved plans. Again, this seems like it might be less likely to generate a poisonous burrito.
Another paradigmatic example of conservatism might be to, e.g., inside some game engine, show the AI some human players running around, and then give the AI an object that has the goal of e.g. moving a box to the end of the room. If the AI is given the ability to fly, but generates a plan in which the box-moving agent only moves around on the ground because that's what the training examples did, then this is a conservative plan.
The point of this isn't to cripple the AI's abilities, the point is that if e.g. your [low impact measure](https://arbital.com/p/2pf) has a loophole and the AI generates a plan to turn all matter within reach into pink-painted cars, some steps of this plan like "disassemble stars to make more cars and paint" are likely to be non-conservative and hence not happen automatically.
## Flagging non-conservative plan steps
If a non-conservative plan seems better along other important dimensions - for example, there is no other plan that has an equally low impact and equally few side effects compared to just synthesizing the burrito using a nanomachine - then we can also imagine that the critical step might be flagged as non-conservative and presented to the user for checking.
That is, on 'conservative' planning, we're interested in both the problem "generate a plan and then flag and report non-conservative steps" as well as the problem "try to generate a plan that has few or no non-conservative steps".
# Role in other safety problems
Conservatism and conservative planning seems like it might directly tackle some standard concerns head-on and in a sufficiently basic way to avoid loopholes, and might also be subject to those concerns. E.g.:
- [https://arbital.com/p/2w](https://arbital.com/p/2w) - if in full generality we don't go to the edge of the graph but try to stay in the center of what's already been positively classified, maybe we can avoid this.
- [https://arbital.com/p/47](https://arbital.com/p/47) - if we stick to things very similar to already-positively-classified instances, we won't automatically go into the unimagined parts of the graph.
- [https://arbital.com/p/6q](https://arbital.com/p/6q) - a sufficiently conservative optimizer might go on using options previous to similarly whitelisted ones even if large new sections of planning space opened up.
However, to the extent we rely on conservatism to prevent any of these things from happening, it's a mission-critical component that itself has to be [advanced-safe](https://arbital.com/p/2l) with no loopholes. If a 'conservatism' constraint is being applied to [very powerful optimization pressures](https://arbital.com/p/2c), we need to worry about this [seeking out any loophole](https://arbital.com/p/42) in what is 'conservative'. It might be that the central notion of 'conservatism' is simple enough to have no loopholes. But it's also possible that even a simplish and conservative concept would still include some dangerous instances, if there's enough optimization pressure seeking out a maximal-under-some-criterion instance within everything that's been classified conservatively.
Two possible meta-approaches to making conservatism even safer:
- Use conservatism to flag non-conservative steps in plans, or expected non-conservative instances of goal achievements, and refer these for user checking before taking action. (Rather than automatically generating a plan containing only 'conservative' steps.) This would have the [standard problems with user checking](https://arbital.com/p/2qq).
- Have a definition of conservatism, relative to the AI's current world-model and conceptual language, which would automatically catch as 'exceptional' (hence not conservative) anything which had the weird property of being the only first-order-conservative instance of a concept that had some other special property being sought out by the optimization pressure. This might involve weird reflective problems, such as any planned event being special in virtue of the AI having planned it. |
4337ecc8-8128-4d57-9c35-4f12d800f5a1 | StampyAI/alignment-research-dataset/agisf | AGI Safety Fund | [Week 7] OpenAI Charter
This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly distributed benefits
----------------------------
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-term safety
----------------
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Technical leadership
--------------------
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative orientation
-----------------------
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research. |
f8279f60-b29d-4537-99b5-38ef7550221c | trentmkelly/LessWrong-43k | LessWrong | Using Threats to Achieve Socially Optimal Outcomes
In the last post, we saw an example of how dath ilan's Algorithm calls for negotiators to use non-credible threats to achieve fair outcomes when bargaining. The Algorithm also has a lot to say about threats more generally. In particular, that there is a type of threat that shouldn't appear in equilibria of ideal agents, because ideal agents don't give in to those kinds of threats. What sorts of threats should ideal agents make? And what sorts of threats should they give in to?
Threats in dath ilan
There is a word in Baseline, the language of dath ilan, which is simply translated as "threat". One connotation of this word is non-credibility.
> Because, of course, if you try to make a threat against somebody, the only reason why you'd do that, is if you believed they'd respond to the threat; that, intuitively, is what the definition of a threat is.
The government of dath ilan does not use (non-credible) threats to enforce its laws. It only imposes those penalties which it has an incentive to actually impose, even if actually placed in a situation where a citizen has actually broken a law. (This is contrasted with a law which is not in a government's interest in that subgame to actually enforce, and therefore constitutes a non-credible threat.)
> The dath ilani built Governance in a way more thoroughly voluntarist than Golarion could even understand without math, not (only) because those dath ilani thought threats were morally icky, but because they knew that a certain kind of technically defined threat wouldn't be an equilibrium of ideal agents; and it seemed foolish and dangerous to build a Civilization that would stop working if people started behaving more rationally.
So to recap, there is some notion of "threat", which should never appear in any real policy adopted by ideal agents. (Where a policy defines how an agent will behave in any situation.) Some credible threats, like exiling murderers, are compatible with the Algorithm. As are some non-credible threa |
888ccce4-1a9d-476a-aeb9-45832f2892b1 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Melbourne, practical rationality
Discussion article for the meetup : Melbourne, practical rationality
WHEN: 04 January 2013 07:00:00PM (+1100)
WHERE: 55 Walsh St West Melbourne 3003
Practical rationality. This meetup repeats on the 1st Friday of each month and is distinct from our social meetup on the 3rd Friday of each month.
Discussion: http://groups.google.com/group/melbourne-less-wrong
All welcome from 6:30pm. Call the phone number on the door and I'll let you in.
Discussion article for the meetup : Melbourne, practical rationality |
82a90af1-8817-4503-a8f5-8ba82e35e3b0 | trentmkelly/LessWrong-43k | LessWrong | [Book review] Getting things done
This is a book review of the book Getting things done by David Allen. I read it in the context of a personal literature review project on the topic of productivity and well being. If you are more interested by advice on productivity and wellbeing than by this specific book I advise you to read the project report first, which condenses the advice from multiple sources (including this book).
How I read
I started skipping many parts of this book near its middle. Mostly, I read entirely the first quarter, read by bits in the middle, occasionally skipping an entire chapter, and then read the end.
Description and opinion
A good book packed with insight on being productive written by someone with a good track record of making people productive. Specialized in executive positions but modular and made to be adaptable. The part on decision processes is certainly the weakest for someone who already cares about rationality and decision making but that doesn't mean it cannot be a good catalyst and a source of inspiration.
Sometimes what the book says in terms of precise factual statements is false or flawed but I do not think that matters much if you keep in mind not to trust those.
Main takes
* Store information pertaining to what you plan to do in an outside system. Actually store very little information of the form "think of X at time t" in your mind.
* The two minutes rule. --> When planing, do immediately what you can do right away in two minutes. This is not recursive.
* For each project (or more generally for sequences of actions), make it a habit to often ask "what is the next action".
* Shape your physical setup to guide you to be productive. What is or isn't easy has a lot of influence on how you behave and feel. Take advantage of this fact in both ways (make what you want to do easy to do, make what you do not want to do hard to do).
* Make it easy for you to store files and know you will find them. A good first step is to have a drawer or a paper tray for |
556c889e-3f6d-44de-8e48-4e9f63d3d621 | StampyAI/alignment-research-dataset/blogs | Blogs | When AI Accelerates AI
Last week, Nate Soares [outlined his case](https://intelligence.org/2015/07/24/four-background-claims/) for prioritizing long-term AI safety work:
1. *Humans have a fairly general ability to make scientific and technological progress.* The evolved cognitive faculties that make us good at organic chemistry overlap heavily with the evolved cognitive faculties that make us good at economics, which overlap heavily with the faculties that make us good at software engineering, etc.
2. *AI systems will eventually [strongly outperform](https://intelligence.org/faq/#superintelligence) humans in the relevant science/technology skills.* To the extent these faculties are also directly or indirectly useful for social reasoning, long-term planning, introspection, etc., sufficiently powerful and general scientific reasoners should be able to strongly outperform humans in arbitrary cognitive tasks.
3. *AI systems that are much better than humans at science, technology, and related cognitive abilities would have much more power and influence than humans.* If such systems are created, their decisions and goals will have a decisive impact on the future.
4. *By default, smarter-than-human AI technology will be harmful rather than beneficial.* Specifically, it will be harmful if we exclusively work on improving the scientific capability of AI agents and neglect technical work that is specifically focused on safety requirements.
To which [I would add](http://intelligence.org/2014/08/04/groundwork-ai-safety-engineering/):
* Intelligent, autonomous, and adaptive systems are already challenging to verify and validate; smarter-than-human scientific reasoners present us with extreme versions of the same challenges.
* Smarter-than-human systems would also introduce qualitatively new risks that can’t be readily understood in terms of our models of human agents or narrowly intelligent programs.
None of this, however, tells us *when* smarter-than-human AI will be developed. Soares has argued that we are likely to be able to make [early progress](https://intelligence.org/2015/07/27/miris-approach/) on AI safety questions; but the earlier we start, the larger is the risk that we misdirect our efforts. Why not wait until human-equivalent decision-making machines are closer at hand before focusing our efforts on safety research?
One reason to start early is that the costs of starting too late are much worse than the costs of starting too early. Early work can also help attract more researchers to this area, and give us better models of alternative approaches. Here, however, I want to focus on a different reason to start work early: the concern that a number of factors may accelerate the development of smarter-than-human AI.
**AI speedup thesis.** AI systems that can match humans in scientific and technological ability will probably be the cause and/or effect of a period of unusually rapid improvement in AI capabilities.
If general scientific reasoners are invented at all, this probably won’t be an isolated event. Instead, it is likely to directly feed into the development of more advanced AI. Similar considerations suggest that such systems may be the *result* of a speedup in intelligence growth rates, as measured in the cognitive and technological output of humans and machines.
When AI capabilities work is likely to pick up speed more than AI safety work does, putting off safety work raises larger risks (because we may be failing to account for future speedup effects that give us less time than is apparent) and is less useful (because we have a shorter window of time between ‘we have improved AI algorithms we can use to inform our safety work’ and ‘our safety work needs to be ready for implementation’).
I’ll note four broad reasons to expect speedups:
1. *Overlap between accelerators of AI progress and enablers/results of AI progress.* In particular, progress in automating science and engineering work can include progress in automating AI work.
2. *Overall difficulty of AI progress.* If smarter-than-human AI is sufficiently difficult, its invention may require auxiliary technologies that effect a speedup. Alternatively, even if such technologies aren’t strictly necessary for AI, they may appear before AI if they are easier to develop.
3. *Discontinuity of AI progress.* Plausibly, AI development won’t advance at a uniform pace. There will sometimes be very large steps forward, such as new theoretical insights that resolve a number of problems in rapid succession. If a software bottleneck occurs while hardware progress continues, we can expect a larger speedup when a breakthrough occurs: [Shulman and Sandberg](https://intelligence.org/files/SoftwareLimited.pdf) argue that the availability of cheap computing resources in this scenario would make it much easier to quickly copy and improve on advanced AI software.
4. *Increased interest in AI.*As AI software increases in capability, we can expect increased investment in the field, especially if a race dynamic develops.
[Intelligence explosion](https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) is an example of a speedup of the first type. In an intelligence explosion scenario, the ability of AI systems to innovate within the field of AI leads to a positive feedback loop of accelerating progress resulting in superintelligence.
Intelligence explosion and other forms of speedup [are often conflated](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/#2) with the hypothesis that smarter-than-human AI is imminent; but some reasons to expect speedups (e.g., ‘overall difficulty of AI progress’ and ‘discontinuity of AI progress’) can equally imply that smarter-than-human AI systems are further off than many researchers expect.
Are there any factors that could help speed up safety work relative to capabilities work? Some have suggested that interest in safety is likely to increase as smarter-than-human AI draws nearer. However, this might coincide with a compensatory increase in AI capabilities investment. Since systems approaching superintelligence will have incentives to appear safe, it is also possible that safety work will erroneously appear *less*necessary when AI systems approach humans in intelligence, as in Nick Bostrom’s [treacherous turn scenario](http://nothingismere.com/2014/08/05/bostrom-on-ai-deception/).
We could also imagine outsourcing AI safety work to sufficiently advanced AI systems, just as we might outsource AI capabilities work. However, it is likely to take a special effort to reach the point where we can (safely) delegate a variety of safety tasks before we can delegate a comparable amount of capabilities work.
On the whole, capabilities speedup effects make it more difficult to make robust predictions about [AI timelines](https://intelligence.org/faq/#imminent). If rates of progress are discontinuous, highly capable AI systems may continue to appear about equally far off until shortly before their invention. This suggests that it would be unwise to wait until advanced AI appears to be near to begin investing in basic AI safety research.
The post [When AI Accelerates AI](https://intelligence.org/2015/08/03/when-ai-accelerates-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
9e5793ac-e212-4956-ba7c-760fb4045fec | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Peter Eckersley | Setting Priorities in Addressing AI Risk | VISION WEEKEND 2019
so Alison asked me to talk about
priorities in addressing AI risk before
I do that I'm going to just mention I
work at the partnership on AI it's an
independent non-profit but created by
the tech companies and we have order of
a hundred partner organizations this
talk importantly does not represent the
views of necessarily of PII and
definitely not of our partner
organizations take these as individual
views so thinking about AI risk there
are different ways of slicing things
short versus long term is a familiar
frame technical problems versus problems
within particular institutions versus
large-scale problems in our culture
politics or economics and then the
difference between accidents or local
unintended consequences malicious uses
or misguided uses and then large-scale
cross interactions between systemic
effects that people couldn't have
anticipated or can't control thinking
about the short versus long term
distinction there are many similarities
both of these sets of problems can
involve misspecified objectives
corruptable objectives side effects from
the thing you're trying to do having the
wrong incentives whether at an
institutional level or at an individual
level the short term problems are
interesting not just because they're
serious and urgent and many of them are
but also because they're practice since
we're at a four side event they're
practice for longer term larger scale
problems and if you're more of a short
term missed or more of a here-and-now
person you should really put value
nonetheless on accurate foresight
because that's gonna let people plan
ahead better and mitigate problems down
the road so our advice is work on both
we're trying to work on both some
examples of the short-term problems that
are really high stakes from AI right now
include premature deployments of machine
learning and high risks or high stakes
settings where the technology isn't
ready the use of recommender bandid
algorithms whether for advertising or
social media that we're seeing having a
potentially destabilizing effect on
politics or on psychology and
problematic labor and economic
consequences from really fast deployment
of new technology
to take one of those examples in detail
in the state of California last year a
bill was passed mandating that every
county in the state should purchase and
use machine learning or algorithmic risk
assessment tools to decide whether to
detain or release every single criminal
defendant prior to trial and this is in
the context of mass incarceration in the
United States which is truly an enormous
problem and reformers here we're really
trying to get the United States back in
line with other countries and with with
historical norms reduce incarceration by
moving from a punitive system to one
that's more based on evidence and so
what they were doing is collecting
survey data about people's lives that
criminal histories their circumstances
and then trying to predict Oh will you
reoffending our trial or where you fail
to appear for a court date and this
these decisions in California will
affect 63,000 people per night across
the United States is closer to a half a
million now it turns out there are some
serious problems with these systems that
are being deployed though they may be
well intentioned one problem that's been
flagged very loudly is that sometimes it
seems the african-american defendants
get high risk scores in ways that don't
make sense in comparison to their
Caucasian
comparators and in fact if you look
statistically you see that the false
positive rate the odds of being labeled
as dangerous given that you're not
dangerous almost twice as high if you
are african-american than if you're
white so this is a very serious problem
RPI has been working on mitigations
including a report that we produced with
our partner organizations with 10
recommendations of things you need to
fix in the statistics and machine
learning and the institutional setting
if it were ever to be appropriate to do
this kind of deployment efforts on
transparency so the institutions
building and and purchasing procuring
these technologies can understand
serious problems like the fact that
they're trained on 10 and 20 year old
data that includes for instance
marijuana arrests in a state like
California where marijuana possession is
no longer a crime so predicting riah
fence based on that is very problematic
and then largest-scale mitigations that
I'm not specifically about this criminal
justice problem but about the field as a
whole
the use of documentation and
transparency through a project called
about FML that how should help the AI
community tackle similar questions
wherever they arise in the AI industry
so this is a project being led by Jing
young yang who's here today and it is
trying to gather initiatives across lots
of different organizations to produce
documentation and transparency about the
way the pipeline the way that a machine
logic project goes from specification to
data set to model to deployment and get
people to pause and ask themselves the
right questions about failure modes
along the way so looking at this this
particular problem it's short term it's
about local on a tentative consequences
but there are mitigations in all of
these categories I'm going to move to
something longer term value alignment in
this community you'll often hear people
talking about paper clipping in Sumer
instrumental convergence and I want to
provoke a little bit on this topic you
know paper clipping is really a new
version of the old story of the Midas
touch you wish for everything to turn to
gold and then you realize that wasn't
quite what you wanted I've got another
frame that it might be interesting to
explore here which is that paper
clipping is actually related to
totalitarianism we have a real world
problem we're familiar with that we're
restating in futurology terms so I have
this conjecture here totalitarian
convergence this that powerful agents
with mathematically certain
monotonically increasing open-ended
objective functions will adopt sub goals
to disable or disempower other agents if
their objectives do not exactly aligned
you can prove this I'm going to skip the
proof it's a an economic style proof
with some simple stylized assumptions
but what you get is that an agent starts
off by disempowering things that
disagree with it altogether and then
it's left with it's sort of allies that
somewhat correlate with it but if
there's constrained resources it may
want to get rid of them as well in order
to or their agency at least in order to
get the perfect optimized world this
turns out to be behavior that's observed
in human political systems totalitarian
and authoritarian regimes often ally
with
other perspectives and movements in
order to gain power and then once
they're in positions of power regimes
like Cuba's or the Soviet Union or Nazi
Germany suddenly turn around and exile
or imprison or purge their former allies
and so paper clipping in some sense
maybe a story a warning about
totalitarianism a problem that humanity
has already struggled with an enormous
scale and there are some corollaries
that come out of this one is don't build
high-stakes AI systems with single
specific optimization objectives being
too sure of knowing what the right thing
to do is is dangerous the second is that
there's a research program on how to
specify objective functions that involve
preserving or optimizing for other's
agency this is actually a very subtle
and difficult point it's kind of like
figuring out liberalism West and
liberalism or libertarianism if that's
your thing for your objective function
you're gonna need instead of having a
specific goal to have a region that you
tolerate in some way in a mathematical
specification for that there are
numerous technical value alignment
problems that come along the way and
figuring out the shape of that tolerated
or good region as a function of other's
preferences sir
moving on from from that cluster of
problems there there's a question about
how we build a stronger field and
realign the many institutions that are
working on AI in a way that reduces risk
one idea here is to build safety Fanus
and social good goals into the
benchmarks data sets reinforcement
learning environments that so many AI
researchers use as the yardstick of
their field we're doing a little of this
work directly at Pei Carol Wainwright
who's here today has a project called
safe life that's building a test
environment for reinforcement learning
agents still teach them to avoid side
effects it's built on Conway's Game of
Life giving it the name you can ask him
about that we could even do a breakout
session on it but there's also a
possibility of doing something larger
and more structural inspired by the same
idea could we build a compendium of
missing data sets in machine learning
infrastructure basically a platform a
gathering place for the whole field to
come together and say I've got a mess
ethics or social good problem over here
who has the missing pieces or the team
or the funding to close that gap so as I
close you know thinking back high-level
what should our priorities be on AI risk
there are some articulated here but we
shouldn't be trying to set them all
ourselves we should be building cultures
fields feedback mechanisms and
institutional capacity to gather all the
solutions to these problems
so that we aren't just one actor
charting one course that turns out to
not quite be the right one
lastly before I finish I also want to do
a quick pitch for tomorrow Rosie
Campbell at Pei is running a session on
publication norms and be Cabello is also
here and a great person to talk to about
AI ethics implementation within large
organizations and labor and economic
impacts thanks of run okay great stay
right there
awesome okay great thank you so much
Peter I will give it first up to Robin
and Gillian and to pester you with
questions while I ask the ones that have
questions in the audience to move up
front of the mic or to find Aaron and
give you your anonymous question so
Peter when you say we should be doing
this who's the we uh there's probably a
different way for each of the places I
use that phrase in some cases I think
it's a community that's trying to do
planning and forethought on AI risk and
that's a fairly large and growing
community so it includes people in this
room people working at AI labs academics
who have a perspective on these
questions and things to bring civil
society organizations and maybe in some
in some cases I'm using the aspirational
way for the partnership on AI and its
many partner organizations trying to
gather resources to tackle these
questions maybe there were some places
whether it's government as well though I
was less thinking in those terms so one
of the traditional ways to deal with
totalitarianism in government is limited
power so in the caucus
today I that's sort of looking at how do
we limit the set of actions they could
take is that also an area that seems
it's a very good idea but I think the
thing that has made people nervous about
that approach on its own
is that making those limits robust with
rapidly improving systems is hard and
say you know as a computer security
person I'd say well you want
defense-in-depth
so you want both some limitations placed
there but you also want a system that if
it accidentally you know finds an
exploit for the limitations doesn't do
things that you'd regret afterwards
question here is Pai comfortable on
exporting AI to China I don't think the
export is the right frame if you look at
AI research and look at the literature
it is being written by an academic
research community a scholarly community
that's global in nature a huge fraction
of the authors on those papers of
Chinese descent many of them are Chinese
Americans living in the United States
coming to grad programs in the United
States and wanting to stay answer in
general it feels as though the frame of
global cooperation is probably the right
one for whether they I come from and the
framework of collaboration on safety and
allowing that you know if you want to
play strategic interest the United
States probably would serve its
strategic interest by creating new visa
categories that allow people to to stay
and work in the United States and Pai
actually has a report with some
recommendations on that front
AGI great powers meeting that we had
last year there was also one of the
recommendations that came out of there
and I think there's a little bit more on
kind of like how we can bridge which
then and bridge a gap so if you're
interested in the report it's it's lying
out downstairs and ok and and just to
maybe give you give a little bit of
context on the prediction that you made
and it's not a miraculous yet it will be
I think by tomorrow hopefully but in the
prediction you're saying you feel does
the the we're on a good trajectory if a
new norm again single value optimization
has successfully altered some
high-stakes ml deployments by
governments or tech companies do you
want to say like if you
sentences about that so people can get
predicting tomorrow so I'll give a
couple of examples of where this goes
wrong so one of our partners had a paper
that they showed us with a medical
prediction system that was recommending
outpatient interventions for people
released with cardiac conditions - and
what they were optimizing for was the
hospital's
financial incentives under the
Affordable Care Act there was a penalty
for the hospital if they released
someone who was readmitted within 30
days so they try to predict whether this
intervention could help prevent that
readmission within that window but you
could easily see that there are other
pretty relevant objectives like maybe
the overall welfare of the patient on
its own was one way you'd go for a
different decision or you might slide
the window and say well thirty days
might not be the right time horizon you
might want uncertainty about what the
correct time horizon is and so in a case
like that you probably shouldn't be
using one objective function you should
be uncertain over an ensemble of them we
see the same thing with these criminal
justice prediction algorithms there's a
lot of debate on what the right fenneis
correction methods might be for the
false positive rate problem that I
showed a slide on there's a literature
with lots of arguments about different
correction algorithms and no one's doing
anything right now because there's no
consensus there in fact impossibility
theorem is saying that none of the
corrections of the perfect right
correction instead perhaps what you
could do is be uncertain about which is
the right form of fairness and then that
leads to systems that sometimes say oh
wait and like maybe we should release
this person or maybe we shouldn't and
here are the kinds of considerations
that lead to that and so what I guess
I'm hoping is that we'll start to see
that philosophical concern taken back
into engineering and preventing the
deployment of overly confident systems
okay thank you very much thank you Peter
you |
3c3236ff-20af-491e-81b2-c56818e43bf9 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Catholic theologians and priests on artificial intelligence
The Journal of Moral Theology's spring issue is a [special issue on the topic of artificial intelligence](https://jmt.scholasticahq.com/issue/4236). It includes dialogues and essays by Catholic theologians and priests on topics ranging from economic impacts of near-term AI and autonomous weapons, to personhood of AI systems and superintelligence.
I think it's interesting to see discussion of these issues from a group of thoughtful people with a radically different perspective and background than is usual.
The articles are all open access. The [epilogue](https://jmt.scholasticahq.com/article/34132-epilogue-on-ai-and-moral-theology-weaving-threads-and-entangling-them-further) both summarizes the articles in the issue and discusses some of your classic transhumanist themes (mind uploading, personhood of AI, and so on). In some ways the author has a perspective that would resonate with EAs ("Good is to be done and we are the ones to do it") -- but in other ways it will seem extremely alien.
[An interview](https://jmt.scholasticahq.com/article/34131-the-vatican-and-artificial-intelligence-an-interview-with-bishop-paul-tighe) with a relatively high-ranking Vatican official may also be interesting: it describes how various institutions in the Vatican are thinking about ethical problems around AI, in response to interactions with Silicon Valley people and others. I was surprised by how positive he was on the idea of using technology to improve human capabilities, as long as it is done in a wise and ethical way -- he's willing to entertain that it might be good "to flip that traditional idea that you cannot play God". |
692f93db-9847-4a45-80e6-a488ec961ef9 | trentmkelly/LessWrong-43k | LessWrong | The smallest possible button (or: moth traps!)
tl;dr: The more knowledge you have, the smaller the button you need to press to achieve desired results. This is what makes moth traps formidable killing machines, and it's a good analogy for other formidable killing machines I could mention.
Traps
I was shopping for moth traps earlier today, and it struck me how ruthlessly efficient humans could be in designing their killing apparatus. The weapon in question was a thin pack in my hands containing just a single strip of paper which, when coated with a particular substance and folded in the right way, would end up killing most of the moths in my house. No need to physically hunt them down or even pay remote attention to them myself; a couple bucks spent on this paper and a minute to set it up, and three quarters of the entire population is decimated in less than a day.
That’s… horrifying.
Moth traps are made from cardboard coated with glue and female moth pheromones. Adult males are attracted to the pheromones, and end up getting stuck to the sides where they end up dying.[1] The females live, but without the males, no new larvae are born and in a few months time you’ve wiped out a whole generation of moths.[2] These traps are “highly sensitive” meaning that they will comb a whole room of moths very quickly despite being passive in nature.
Why are moth traps so effective? They use surgically precise knowledge. Humans know how to synthesize moth pheromones, and from there you can hack a 250-million-year-old genetically derived instinct that male moths have developed for mating, and then you set a trap and voilà. The genetic heuristic that worked 99% of the time for boosting reproductive rates in moths can be wielded against moths by obliterating their reproductive rates.
Moth traps aren’t even the pinnacle of human insecticidal war machines. Scientists have, after all, seriously considered using gene drives to eliminate an entire species of mosquitoes with a single swarm and some CRISPy cleverness.[3]
The sm |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.