id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
7fa691f5-ccd5-4d23-85bc-4a3e30ce3a9e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Eulogy to the Obits
By Xander Balwit
With death all but obsolete, Jamie’s life felt moot and emaciated. The Obituary Desk at The Times, where he worked, had turned into a ghost town he presided over with the bearing of a man who had given everything up for the bitter disappointment of a mine devoid of mineral riches. He would go on long walks around the deserted halls, choosing a different desk each day from which to work on whatever writing projects he could find. It was a shame, for death was what he liked writing about most.
Lives had been easier to frame when they’d been time-bound. A man at 60 is hardly the same man at 153, let alone 154, with how quickly things were changing. Happily, at least for Jamie, there were still the unfortunate saps who gave up the ghost to sheer accident or, lacking technological enthusiasm or a spirit of curiosity, elected to bow out early — or had never opted in to begin with.
Deaths were rare enough that when one was announced, those still employed in the Obits clamored, begged, and connived their way into a piece of the action. But after an ugly incident involving a bribe, a bottle of cognac, and the burning of an office chair, the holdouts had taken to drawing straws.
As luck would have it, Jamie drew the next reported incident. He didn’t know when exactly it would take place because chance, rather than age, had come to herald death’s arrival. No longer could one preempt those venerated elders whose lives had been haunted by finitude as now they sustained themselves through a regimen of drugs, diets, and therapies: products of the decades and countless billions poured into longevity medicine.
The call came a little after noon, just after Jamie had returned from the short walk down to his favorite cafe for his third cup of coffee that morning. The cafe line crept along at a glacial pace because their barista was one of the older task models — attractive, but inordinately slow — steaming milk with the rigidity of someone who detests babies but h
|
0f7c0f89-1bbd-4628-bf24-3826f714ee19
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
The Additive Summary Equation
This post contains some theorems and proofs needed for a hopefully-upcoming post on some powerful generalizations of the [Koopman-Pitman-Darmois (KPD) Theorem](https://en.wikipedia.org/wiki/Sufficient_statistic#Exponential_family). Unless you find functional equations interesting in their own right, and want to read some pretty dense math, you should probably skip this post. The theorems are pretty self-contained, and will be summarized in any future posts which need them.
The Summary Equation
--------------------
We can represent the idea of a D.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
-dimensional “summary” of x for a function f via a functional equation:
F(G(x))=f(x)
Given the function f, we try to find some D-dimensional “summary” G(x) such that f can be computed from G - i.e. we want some F,G such that F(G(x))=f(x) for all x.
In order for this to be meaningful, we need some mild assumptions on f, F, and G; at the very least, we certainly need to exclude [space-filling curves](https://en.wikipedia.org/wiki/Space-filling_curve), which would defeat the point of a “D-dimensional summary”. Throughout this post, we’ll assume differentiability, although this should be easy to relax somewhat by taking limits of differentiable functions.
Easy theorem: **The**D**-dimensional summary equation for**f**is solvable only if the rank of the matrix**∂f∂x**is at most**D**for all values of**x. I’ll call this the “Summarizability Theorem”. (If you want a more official-sounding name, it’s the global converse of the [constant-rank theorem](https://en.wikipedia.org/wiki/Inverse_function_theorem#Constant_rank_theorem).)
Proof: differentiate both sides of the equation to get ∂F∂G∂G∂x=∂f∂x. Since G is D-dimensional, this is itself a rank-at-most-D decomposition of ∂f∂x.
In practice, the converse will also usually hold: if the rank of ∂f∂x is at most D for all values of x, then we can usually find a D-dimensional summary G(x). Indeed, if the rank is constant near some point x0, then we can *always* find a *local* D-dimensional summary near x0; that’s what the constant rank theorem says. However, Weird Stuff can sometimes prevent stitching these local summaries together into a global summary. (Thank you to [Vanessa](https://www.lesswrong.com/users/vanessa-kosoy) for pointing me to [an example](https://mathoverflow.net/questions/203118/factoring-constant-rank-maps-into-a-submersion-and-an-immersion) of such “Weird Stuff”, as well as the name of the constant rank theorem.)
Minor notation point: each variable xj corresponds to a *column* of ∂f∂x. This convention will be used throughout the post. We will also assume that each xj is one-dimensional; higher-dimensional variables are represented by their components.
The Additive Summary Equation
-----------------------------
The heart of the generalized KPD theorems is a family of special cases of the Summary Equation in which f(x) is a sum of terms, each of which depend on only a few variables. I’ll call this the Additive Summary Equation. The most general version looks like this:
F(G(x))=f(x)=∑ifi(xNf(i))
… where fi are (known) smooth functions of output dimension m>D, and Nf(i) specify (known) indices of x. Notation example: if we have a term f2(x5,x7), then Nf(2)={5,7} and xNf(2)=(x5,x7).
The notation Nf(i) here stands for a “neighborhood” induced by fi, specifying the indices of x-variables on which fi depends. In the following sections, we’ll talk about the neighborhood of a *variable* xj, denoted N(j). This consists of all the variables which are neighbors of xj in any of the f-induced neighborhoods, i.e. N(j)=∪i:j∈Nf(i)Nf(i). In other words, N(j) contains the indices of variables xj′ for which some fi depends on both xj and xj′.
In the generalized KPD theorems, the neighborhoods Nf(i) reflect the graphical structure of the distribution. If P[X|Θ] [factors according to a DAG](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models):
P[X|Θ]=∏iP[Xi|Xpa(xi),Θ]
… then the corresponding functional equation looks like
F(G(x))=∑ifi(Xi,Xpa(i))
… i.e. Nf(i)=pa(i)∪{i}, with fi derived from P[Xi|Xpa(xi),Θ]. (Here the “parents” pa(i) are nodes with arrows into node i in the DAG.) For instance, if the Xi are all conditionally independent (as in the original KPD), then the equation is simply
F(G(x))=∑ifi(Xi)
… i.e. Nf(i)={i}. Another example: if the variables form a Markov Chain with P[X|Θ]=∏iP[Xi|Xi−1,Θ], then the corresponding equation is
F(G(x))=∑ifi(Xi,Xi−1)
… i.e. Nf(i)={i,i−1}.
### Main Theorem
Let f(x):=∑ifi(x). Then the additive summary equation F(G(x))=f(x) is solvable for F and D-dimensional G only if f can be expressed as
f(x)=∑i:∂fi∂xN(B)≢0fi(x)+U∑i′gi′(x)+C
… for some at-most D-dimensional functions {gi′}, constant matrix U of column-dimensional at-most D, constant vector C, and a set of at-most D x-indices B. The notation N(B) denotes “neighbors” of B, meaning x-indices j for which some fi depends on both xj and a variable in xB. (In particular, this means that all fi which depend on xB are constant when xN(B) is held constant.) Furthermore, the sparsity structure of each gi′ (i.e. the set of variables {xj} on which gi′ depends) matches one of the fi. (See the end of the “Rest of the Proof” section for the exact correspondence.)
The theorem is interesting mainly when:
* The number of variables n is much larger than the summary dimension D, i.e. n>>D, and...
* The number of variables xj on which each fi depends is small, so each xj has few neighbors N(j)
When these conditions hold, ∂fi∂xN(B) will be nonzero only for a very small fraction of terms, so the impact of the vast majority of terms/variables on f(x) is mediated by the at-most D-dimensional ∑i′gi′(x); this sum serves as a summary for the x¯N(B) (i.e. the variables which are *not* neighbors of the D variables in xB).
Intuitively, the simple result we’d “really like” is f(x)=U∑i′gi′(x)+C, with ∑i′gi′(x) at-most D-dimensional. This is not true in general for functions f with D dimensional summaries, but it is “almost true”: it holds for all but a few “exceptions”, i.e. a few extra terms/variables which can influence f in more general ways. The number of exceptional variables is |N(B)| - i.e. the D variables xB plus their neighbors.
Note that the theorem claims “only if”, but not “if”. In the other direction, we can make a slightly weaker statement: any f satisfying the above form has a summary-function G(x)=(xN(B),∑i′gi′(x)), with dimension at-most D+|N(B)|. The summary is just the at-most D-dimensional summary of x¯N(B), i.e. ∑i′gi′(x), plus the "exception" variables.
### Main Trick of the Proof
Pick some point x0 at which the rank of ∂f∂x takes its maximum value (which is at most D by the Summarizability Theorem). Then we can pick a set B of x-indices, of size at most D (i.e. |B|≤D), such that ∂f∂xB|x0 is a basis for the (at-most D-dimensional) column span of ∂f∂x|x0. If the system is very sparse, then ∂f∂xB will only depend on a few of the x-variables, namely xN(B).
Since ∂f∂xB|x0 spans the maximum number of dimensions, *all* columns of ∂f∂x|x0 must fall within that span - otherwise the rank of ∂f∂x would be greater. And since ∂f∂xB depends only on xN(B), this must hold for *any* values of the other variables x¯N(B). So, we can change the other variables any way we please, holding xN(B) constant, and ∂f∂x will remain in the span of ∂f∂xB|x0.
Let U be any basis for the span of ∂f∂xB|x0. (Of course we could choose U=∂f∂xB|x0 itself, but often there’s some cleaner basis, depending on the application.) Then UU† is a projection matrix, projecting into the span. For any x with xN(B)=x0N(B), ∂f∂x must fall within the span, which is equivalent to
∂f∂x=UU†∂f∂x (for xN(B)=x0N(B))
### Rest of the Proof
Next, we integrate. We’ll start at x0, then take any path from x0¯N(B) to x¯N(B) holding xN(B) constant. Then, we’ll go from x0N(B) to xN(B) holding x¯N(B) constant. So:
f(x)=f(x0)+∫∂f∂x¯N(B)|x0N(B)dx¯N(B)+∫∂f∂xN(B)|x¯N(B)dxN(B)
For the first integral, xN(B) is held constant at x0N(B), so by the previous section ∂f∂x=UU†∂f∂x:
=f(x0)+U∫U†∂f∂x¯N(B)|x0N(B)dx¯N(B)+∫∂f∂xN(B)|x¯N(B)dxN(B)
… and we’ll expand f=∑ifi:
=∑ifi(x0)+U∑i∫U†∂fi∂x¯N(B)|x0N(B)dx¯N(B)+∑i∫∂f∂xN(B)|x¯N(B)dxN(B)
Now, we break the sum up into terms which do not depend on xN(B), i.e. fi for which ∂fi∂xN(B)≡0 (for which the second integral contributes zero), and terms which do depend on xN(B), i.e. fi for which ∂fi∂xN(B)≢0 (for which we can’t say anything nontrivial):
=∑ifi(x0)+U∑i:∂fi∂xN(B)≡0∫U†∂fi∂x¯N(B)|x0N(B)dx¯N(B)+∑i:∂fi∂xN(B)≢0(fi(x)−fi(x0))
… and simplify a bit:
=∑i:∂fi∂xN(B)≢0fi(x)+∑i:∂fi∂xN(B)≡0fi(x0)+U∑i:∂fi∂xN(B)≡0U†(fi(x)−fi(x0))
Since U is D-dimensional (on the right), that proves the theorem; we can choose gi′(x)=U†(fi′(x)−fi′(x0)) with i′ ranging over {i:∂fi∂xN(B)≡0}, and C=∑i:∂fi∂xN(B)≡0fi(x0).
Loose Threads
-------------
This theorem is strong enough for my immediate needs, but still a little weaker than I’d ideally like.
First, there’s the converse of the Summarizability Theorem. In practice, when ∂f∂x is rank at-most D everywhere, I generally expect there to be a D-dimensional summary. But there are exceptions, and I haven’t found a simple, convenient condition which is sufficient to ensure the existence of the summary and easily applies to most of our day-to-day functions. On the other hand, I haven’t spent that much effort looking for such a condition, so maybe someone can point me to it. It’s definitely the sort of thing I’d expect somebody else to have already solved to death.
Second, there’s probably room to reduce the freedom available to B and the xN(B)-dependent terms. In particular, I believe we can impose |B|+dim(U)≤D, rather than just |B|≤D and dim(U)≤D separately. This requires first reducing U, so that it only spans the dimensions actually needed to summarize x¯N(B), rather than all the dimensions spanned by ∂f∂xB|x0. Given that reduction of U, the basic trick is to first go through the process from the above proof, but after that choose a new basis B′ which includes dim(U) variables from x¯N(B), and go through the whole construction again with B′ to get a new U′. Terms dependent only on x¯N(B) or only on x¯N(B′) can be summarized via U′†(fi(x)−fi(x0)), so the only variables which *can’t* be summarized this way are those dependent on variables in *both* xN(B) and xN(B′). We should be able to iterate this process until no further reduction of the “exception” terms occurs, which should happen when |B|+dim(U) is equal to the maximum rank of ∂f∂x.
In the special case where fi depends only on xi, this process of iteratively reducing the number of exception terms is relatively sraightforward, and we can indeed impose |B|+dim(U)≤D. (I'm not going to go through the proof here; consider it an exercise for the reader.) (In case anyone isn't familiar with what "exercise for the reader" means in math: don't actually do that exercise, it's a pain in the ass.)
Some Special Cases
------------------
There are two main classes of special cases: special “neighborhood” structure, and symmetry.
### Structure
The simplest example of special neighborhood structure is when fi depends only on xi (corresponding to conditionally independent variables in the generalized KPD theorem). As alluded to above, we can then strengthen the theorem so that |B|+dim(U)≤D. Furthermore, “neighbors” are trivial: N(B)=B, so |N(B)|+dim(U)≤D. That means the summary G(x)=(xN(B),∑i′gi′(x))=(xB,∑i′gi′(x)) is at-most D-dimensional. Thus we have the converse of the theorem; it becomes an if-and-only-if.
Another useful structural constraint is when each xj has at most k neighbors (including itself), i.e. N(j)≤k for all j. In that case, |N(B)|≤k|B|≤kD. If the number of variables is much larger than kD, i.e. n>>kD, then this guarantees that the large majority of variables influence f only via the at-most D-dimensional summary ∑i′gi′(x).
### Symmetry
By “symmetry”, I mean that f is invariant under swapping some variables, e.g. swapping x1 with x2. This is interesting mainly when we can swap a variable in xN(B) with a variable *not* in xN(B). When that happens, *both* variables must be summarizable by ∑i′gi′(x). In particular, if *every* variable potentially in xN(B) can always be swapped with a variable not in xN(B), then we can eliminate the exception terms altogether.
For instance, the original KPD assumed conditionally IID variables, corresponding to a summary equation with f(x)=∑if′(xi) - i.e. each term is the same function f′ acting on a different variable. In this case, any variable can be swapped with any other, so we can eliminate the exception terms; we must have f(x)=U∑igi(xi)+C for at-most D-dimensional ∑igi(xi). In fact, this is somewhat stronger than the corresponding result in the original KPD: it applies even when the number of variables is finite, whereas the original KPD only requires that the summary have a finite dimension as the number of variables increases to infinity.
|
1626b853-f85f-43b3-9131-9b7788ae8361
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Virtue of Silence
Leah Libresco writes a couple of essays (1, 2) on an ethical dilemma reported in the New York Times. In the course of a confidential medical history, a doctor hears her patient is suffering from stress-related complaints after having sent an innocent man to prison. The doctor wants to know whether it is ethical to report the matter to the police. The Times’ columnist says yes – it would save the poor prisoner. Leah says no – violating medical confidentiality creates an expectation that medical confidentiality will be violated in the future, thus dooming patients who are too afraid to talk about drug use or gay sex or other potentially embarrassing but important medical risk factors.
But both sides are ignoring the much bigger dilemma lurking one meta-level up: is it ethical to debate this dilemma in the New York Times?
Let’s look more closely at that phrase “violating medical confidentiality creates an expectation that medical confidentiality will be violated in the future.” There’s a very abstruse angels-and-clockwork interpretation of “creates an expectation” where, by making the decision to violate confidentiality, you are altering the Platonic machinery of the Universe in a way that allows other beings who know your source code to determine that you will do this. But most people don’t have the decision theory to understand this, and anyway most doctors do not publish their source code online.
The way “creates an expectation” pans out in our universe is that somebody hears that a doctor violated medical confidentiality, and that person tells someone else, and that person tells someone else, until eventually someone who was going to tell their doctor about having gay sex with drugs remembers having heard the story and decides not to.
How exactly would people hear about this doctor who revealed the innocence of the prisoner? Through the ensuing court case? Nah. Most people wouldn’t obsessively read the minutes of every single case at the local courthouse unless
|
926431c4-bde3-46ee-8f77-d50ef360ca56
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Place-Based Programming - Part 2 - Functions
In Part 1, we defined a place-of macro and a value-of function. The code from Part 1, as originally written, was not an importable module. I have modified the code from Part 1 to be portable.
;; Module from Part 1
;; Save this code into a file called part1.hy and then use it with:
;; (import [part1 [value-of]])
;; (require [part1 [place-of]])
(setv +place-dir+ ".places/")
(defmacro/g! place-of [code]
`(do
(import [hashlib [md5]] os pickle)
(setv ~g!type-dir (os.path.join ~+place-dir+ (str (type '~code))))
(if-not (os.path.exists ~g!type-dir)
(os.mkdir ~g!type-dir))
(setv ~g!place
(os.path.join
~g!type-dir
(+ (.hexdigest (md5 (.encode (str '~code))))
".pickle")))
(if-not (os.path.exists ~g!place)
(with [f (open ~g!place "wb")]
(pickle.dump (eval '~code) f)))
~g!place))
(defn value-of [place]
(import os pickle)
(assert (= (type place) str)
(+ (str place) " is not a place"))
(if-not os.path.exists
(raise (FileNotFoundError (+ "Could not find place " place))))
(with [f (open place "rb")]
(pickle.load f)))
The value-of function works fine. The place-of macro has no way to accept parameters. We will define a macro for constructing place-based functions, which can accept parameters.
defnp
Hy's built-in function declaration macro is defn. We will call our place-based function declaration macro defnp. Our place-based function will hash its own code as before. We also need a unique identifier for its parameters. In data science, the values of our parameters are often gigantic. It takes a long time to hash a big data structure. Hashing big data structures takes many computations. The whole purpose of a persistent memoization system is to reduce how many computations we have to perform. Passing values to our place-based function is a wastes compute. Instead we pass places, which are always easy to hash. A place-bas
|
f9de62fd-ae35-4251-a1c2-70f8ce445fbb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Survey - Psychological Impact of Long-Term AI Engagement
Hello everyone,
I’m conducting a survey to better understand the psychological and emotional effects of long-term engagement with AI technologies, particularly within the AI safety community. This is an invitation for you to take part in this anonymous questionnaire, which explores how engagement with AI could influence emotions, stress levels, and mental health.
Who should participate?
• Anyone involved in AI development, research, or policy
• Members of the AI safety community, including advocates and researchers
• Individuals concerned about the societal and existential implications of AI
For participants interested, the report and analysis of this questionnaire will be shared once it’s released.
Link to the Form
Your contribution is deeply valued; this is how we can generate a greater understanding of the psychological challenges faced by individuals in the AI community, and in turn, more effectively address the stress and anxiety caused by this issue, building the resiliency needed to navigate these challenges assertively and empathetically.
Finally, I’m committed to discussing any emotional challenges related to AI in more detail, therefore feel free to reach out at manugarciaat@gmail.com.
Thank you in advance for your time.
|
50c07157-0f90-4094-8049-df9ff127b5d2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
CFAR’s new focus, and AI Safety
A bit about our last few months:
* We’ve been working on getting a simple clear mission and an organization that actually works. We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
* As part of that, we’ll need to find a way to be intelligible.
* This is the first of several blog posts aimed at causing our new form to be visible from outside. (If you're in the Bay Area, you can also come meet us at tonight's open house.) (We'll be talking more about the causes of this mission-change; the extent to which it is in fact a change, etc. in an upcoming post.)
Here's a short explanation of our new mission:
* We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
* Also, we[1] believe such efforts are bottlenecked more by our collective epistemology, than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.
* Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.
To elaborate a little:
Existential wins and AI safety
By an “existential win”, we mean humanity creates a stable, positive future. We care a heck of a lot about this one.
Our working model here accords roughly with the model in Nick Bostrom’s book Superintelligence. In particular, we believe that if general artificial intelligence is at some point invented, it will be an enormously big deal.
(Lately, AI Safety is being discussed by everyone from The Economist to Newsweek to Obama to an open letter from eight thousand. But we’ve been thinking on this, and backchaini
|
f2b2828d-2714-4adb-98b2-4c4943a28bcf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
MoNETA: A Mind Made from Memristors [link]
>DARPA's new memristor-based approach to AI consists of a chip that mimics how neurons process information
http://spectrum.ieee.org/robotics/artificial-intelligence/moneta-a-mind-made-from-memristors/0
|
4fa64616-1443-42fd-895d-77d11bccf016
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rational Manifesto
About one month ago, I saw an Agile Manifesto. It inspired me to create my own one because you know, manifests seem to be popular. So, for the 1st of April, I created Rational Manifesto:
* Manifests are full of obvious things. This is one of them.
* The previous item was a recursive joke.
* Before using some advice from the manifest, you should check that this advice works.
* Previous advice should be checked too.
* If manifest contains a thing that seems to be true, that doesn't say anything about others.
* If you found something wrong in the manifest - this manifest is useless.
* The previous item is an example of the wrong one.
* Every manifest tries to become a cult.
* If you hadn't understood previous items - you have a chance to join the cult.
* Add one cultist point for every item you don't understand. Including this one.
* This manifest was signed by Eliezer Yudkowsky, Donald Knuth, Linus Tovalds and Daniel Kahneman.
* If it makes it more valuable for you - add another two cultist points.
* Answering to your question: No, they haven't signed it yet.
* Add another two points.
* If you often try to explain something from the end.
* If you think that all items in this manifest are obvious - another 5 points.
* This manifest is excellent and will save the world.
* Another 2 points if you believed it.
* If you weren't counting your points - add 3 more.
* If you finished with a positive number of cultist points - think of it.
And yep, then I tried to pass this manifest by myself, after a week since I've created it - my score was positive.
|
ce52aee9-95e8-418a-8209-7b1e03a75d94
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Featherless Biped
The classical understanding of categories centers on necessary and sufficient properties. If a thing has X, Y, and Z, we say that it belongs to class A; if it lacks them, we say that it does not. This is the model of how humans construct and recognize categories that philosophers have held since the days of Aristotle.
Cognitive scientists found that the reality isn't that simple.
Human categorization is not a neat and precise process. When asked to explain the necessary features of, say, a bird, people cannot. When confronted with collections of stimuli and asked to determine which represent examples of 'birds', people find it easy to accept or reject things that have all or none of the properties they associate with that concept; when shown entities that share some but not all of the critical properties, people spend much more time trying to decide, and their decisions are tentative. Their responses simply aren't compatible with binary models.
Concepts are associational structures. They do not divide the world clearly into two parts. Not all of their features are logically necessary. The recognition of features produces an activation, the strength of which depends not only on the degree to which the feature is present but a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category. The stronger the total activation, the more clearly the stimulus can be said to embody the concept.
Does this sound familiar? It should for us - we have the benefit of hindsight. We can recognize that pattern - it's how neural networks function. Or to put it another way, it's how neurons work.
But wait! There's more!
Try applying that model to virtually every empirical fact we've acquired regarding how people produce their conclusions. For example, our beliefs about how seriously we should take a hypothetical problem scenario depend not on a rigorous statistical analysis, but
|
87fabdfd-8677-4e25-84df-6cfad96ef188
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Continental Philosophy as Undergraduate Mathematics
TL;DR: Revisionist continental philosophy, starring Hegel, Heidegger, and Kant (!?). A fair bit of hand-waving and namedropping maths. Certainly on the weirder end of essays I've written.
Previously in this series: Frankfurt Declaration on the Cambridge Declaration on Consciousness, Belief-conditional things - things that only exist when you believe in them
Not another declaration.
Looking back, I feel bad about how I wrote the Frankfurt declaration. I'm not sure I managed to convey that I am against both animal suffering and the Cambridge Declaration[1]. Also, mixing in that fictional narrative of intoxicated neuroscientists partying and signing something they don't understand might have taken things too far[2]. But, oh well, live and learn.
How can I avoid that failure mode? How can I clearly state that when I'm writing about consciousness, continental philosophy, and/or ghosts, I'm poking fun at something I don't understand? I am tempted to write another declaration but that would be a bit too on the nose. Instead, I'll just put a clear "epistemic status" warning at the top of each post in this series. Cool? Cool.
Epistemic status: Spurious fragments of sense-making, arranged to amuse, not to enlighten.
Words, not numbers
Does anyone remember wordcels vs. shape rotators?[3] No? Okay, good for you.
A rift divides philosophy, with analytic philosophy (a broad and still ramifying movement in which various conceptions of analysis compete and pull in different directions) on one side and continental philosophy (a term used by analytic philosophers to describe what they are not) on the other. Analytic philosophy gave us evergreen tautologies like 1+1=2, 'snow is white' is true if and only if snow is white, and "Whereof one cannot speak, thereof one must be silent." Continental philosophy, in contrast, provides rich head-scratchers to ponder, like Transcendence constitutes selfhood, God is, as it were, the sewer into which all contradictions flow, and "It is har
|
757731b1-0e0a-400f-966f-f76926865546
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Aligned AI via monitoring objectives in AutoGPT-like systems
Thanks to Arun Jose, Joseph Bloom, and Johannes Treutlein for feedback/discussions.
Introduction
============
The release of [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT) [prompted](https://www.lesswrong.com/posts/566kBoPi76t8KAkoD/on-autogpt#Just_Think_of_the_Potential) [discussions](https://www.lesswrong.com/posts/tqs4eEJapFYSkLGfR/the-agency-overhang) related to the potential of such systems to turn non-agentic LLMs into agentic systems that pursue goals, along with the dangers that could follow. The relation of such systems to the [alignment](https://www.lesswrong.com/posts/ogHr8SvGqg9pW5wsT/capabilities-and-alignment-of-llm-cognitive-architectures) [problem](https://www.lesswrong.com/posts/dcoxvEhAfYcov2LA6/agentized-llms-will-change-the-alignment-landscape) has also been explored.
In this short post, we investigate a threat model that comes from AutoGPT-like systems pursuing unaligned objectives and explore the potential for alignment via oversight. We briefly consider some key properties of such systems, and then discuss the idea that these systems’ high-level cognition might be interpretable by default and so might allow for sufficient oversight to ensure the system is aligned. Finally, we consider a couple of reasons for why the high-level cognition might be obscured from oversight and highlight ways of preventing this obscuration.
Background
==========
*AutoGPT-like systems*, or [*Scaffolded LLMs*](https://www.lesswrong.com/posts/43C3igfmMrE9Qoyfe/scaffolded-llms-as-natural-language-computers), are systems that:
> wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt.[[1]](#fn1gtpl1mtn2d)
>
>
This programmatic scaffold allows for information from the inputs/outputs of LLM calls to be stored in long-term memory as well as used as inputs to [tools/plugins](https://openai.com/blog/chatgpt-plugins).
We will state a few assumptions we’ll make about these AutoGPT-like systems. See this [post](https://www.lesswrong.com/posts/r3xwHzMmMf25peeHE/the-translucent-thoughts-hypotheses-and-their-implications#Making_LLMs_Use_Only_Understandable_English) for a similar context.
* For the purposes of this post, we assume that a single LLM call is incapable of successfully pursuing objectives. That is, the threat comes from the LLM being used within the scaffolded system.
* We assume that LLMs in the system must generate prompts/instructions for other versions of itself ([factored cognition/bureaucratic approach](https://www.lesswrong.com/tag/factored-cognition))
* When information is passed from one LLM call/module/plugin to another, we say that this information is passed through an “information channel”. In particular, an overseer can observe this information (in the form of token sequences, etc.)
The threat model we’ll look at is one in which we have an agentic AutoGPT-like system that pursues unaligned objectives.
We don’t discuss the likelihood of this threat model being realized but note that these systems may suffer from [inefficiencies that come from bureaucratic systems](https://www.lesswrong.com/posts/tmuFmHuyb4eWmPXz8/rant-on-problem-factorization-for-alignment#wDbE5qArizcaGRFwa) and hence may not be competitive. See this [Zvi’s post](https://www.lesswrong.com/posts/566kBoPi76t8KAkoD/on-autogpt) for an exploration of the future of such systems.
Oversight
=========
We assume that there is some overseer that wants the AutoGPT-like system to pursue an aligned objective. If the unaligned objective is detectable by the overseer, then the overseer can (hopefully) shut down the system before the objective is successfully pursued (including the objective of the system escaping the overseer’s oversight). This [oversight may ensure](https://www.lesswrong.com/posts/vZCSPffGLhJT3heqc/towards-a-solution-to-the-alignment-problem-via-objective) that the system remains aligned and never pursues unaligned objectives.
We now make explicit the key hope for oversight that comes from the modular-by-default structure of AutoGPT-like systems. The system is modular because information must be passed from one LLM call to another (or other modules like memory, etc.) Furthermore, because of the way the modular structure is encoded (via a programmatic scaffolding) and the fact that by default information is passed from one LLM call/module to another via natural language, a human overseer (or an LLM assistant) can, in theory, monitor every information channel for information pertaining to the objective or high-level cognition of the system.
This kind of oversight fails if either:
* Information about the objective doesn’t pass through the information channels, or
* Information about the objective is passed through the information channels but is obscured.
We think it’s reasonable to make the assumption that information about the objective must be passed along some information channel (potentially in the initial prompt from the user) if the AutoGPT-like system is to successfully pursue that objective. It seems unlikely that a single pass of an LLM, etc. could develop an objective and successfully pursue it, without information about that objective being passed to other LLM calls/modules/plugins. One reason for this is that for a system to successfully pursue an outcome in the world, it must interact with the world and use feedback to evaluate its progress in achieving that outcome. Hence, information about the objective must be preserved through future calls to LLMs/modules.
Given the above assumption, oversight fails if information about the objective is obscured when it is passed through information channels. The naive hope is that because LLMs take natural language statements as their input/outputs, these information channels might be interpretable by a human overseer/LLM, by default.
In the next section, we consider why this might not hold.
Why information might be hidden from oversight
==============================================
If information about the objective of the system/high-level cognition must be passed through information channels, why might this information be obscured from the overseer? Two reasons for why this might be the case are:
* Information channels may be obscured for efficiency reasons;
* Because an LLM call might be deceptive and hence actively try to hide information from the overseer.
See the [Externalized Reasoning Oversight Agenda](https://www.lesswrong.com/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-for) for similar thoughts.
Information is hidden for efficiency reasons
--------------------------------------------
Although information is passed via the medium of token sequences that correspond to natural language, it is possible to pass information through such sequences in a way that is [uninterpretable by a human/AI overseer](https://www.lesswrong.com/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-for). For example, it seems to be the case that [GPT4 can compress and decompress prompts](https://twitter.com/mckaywrigley/status/1643593517493800960).
There doesn’t seem to be an incentive for the system to do this by default, but if optimization pressure is applied to the AutoGPT-like system as a whole (to make it more capable), the LLM calls may be incentivized to pass information in non-overseer interpretable ways for efficiency reasons, etc.
We could potentially pay an alignment tax and ask that groups developing such systems don’t apply this kind of optimization pressure, but it seems that these systems may not be competitive for reasons outline in this [post](https://www.lesswrong.com/posts/tmuFmHuyb4eWmPXz8/rant-on-problem-factorization-for-alignment#wDbE5qArizcaGRFwa).
Note that it might be possible to use LLMs to retrieve the obscured information from information channels, and this might be worth investigating.
Deception
---------
If a call to a LLM instiates a deceptive cognition, the cognition might purposefully produce an output that hides information about the LLM’s true unaligned objective from the overseer, potentially in such a way that when the overseer reads the information channel, the information seems to indicate that the system as a whole is aligned.
If the LLM core being used is deceptive by default (perhaps due to the RLHF applied during training), then we’re already in a bad situation. Hopefully behavioral or interpretability based [monitoring for deception](https://www.lesswrong.com/posts/Km9sHjHTsBdbgwKyi/monitoring-for-deceptive-alignment) can detect this in the LLM core and so such LLMs won’t be deployed.
If the LLM is not a deceptive agent by default, then deception must have occurred due to a particular prompt being passed to the LLM (see the [Simulator framing](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators)). In this scenario, it seems difficult for the AutoGPT-like system to consistently be deceptive. One reason for this is that when deception is first evoked, the deceptive cognition must create a prompt that will invoke deceptive simulacra with the same objectives in subsequent LLM calls *while* ensuring that the prompt doesn’t cause suspicion to arise in the overseer.[[2]](#fn8zsc15i96f2) Because of this difficulty, it may be possible to ensure that AutoGPT-like systems that use such an LLM core are safe with sufficient monitoring, however this is risky.
Conclusion
==========
If we have an agentic AutoGPT-like system, oversight of its high-level cognition (including its objectives) may be feasible, allowing the overseer to ensure that the system as a whole is aligned, or to shut it down if it becomes unaligned.
My current intuition based on the above brief analysis is that obscuration of this high-level cognition due to deception might be unlikely in the case of sufficient evaluation measures for the deceptive LLM core and because of the difficulty of a deceptive simulacrum solving its own alignment problem.
I currently feel as though AutoGPT-like systems won’t remain competitive without optimization pressure being applied to the system as a whole, and that this optimization pressure may incentivize the high-level cognition of the system to be obscured.
1. **[^](#fnref1gtpl1mtn2d)**See [Beren's post](https://www.lesswrong.com/posts/43C3igfmMrE9Qoyfe/scaffolded-llms-as-natural-language-computers).
2. **[^](#fnref8zsc15i96f2)**One can think of this as a type of alignment problem for unaligned simulacra.
|
e1660abf-6f90-4929-9911-a1203bac5cde
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Compendium of methods for going from undesirable to desirable
[Plans](https://en.wikipedia.org/wiki/Theory_of_planned_behavior) predict behaviour. When you notice something on the Not preferable list, [talk to yourself]( https://en.wikipedia.org/wiki/Cognitive_response_model) or [learn]( https://en.wikipedia.org/wiki/Evidence-based_education) to [consciously](https://en.wikipedia.org/wiki/Automatic_and_Controlled_Processes_%28ACP%29) attend](https://en.wikipedia.org/wiki/Executive_functions) to items on the Preferable [function](https://en.wikipedia.org/wiki/Functional_contextualism) list to [distract]( https://en.wikipedia.org/wiki/Cognitive_inhibition) yourself.
Where is my lists?
I'll put it in another post and link to it here if there's enough interest.
|
e633c381-291f-46b0-9749-e506eb5cb950
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Probability is in the Mind
Today's post, Probability is in the Mind was originally published on 12 March 2008. A summary (taken from the LW wiki):
> Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Mind Projection Fallacy, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
e560f01a-7ae8-42dd-8fe0-fa107b177fee
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Church: a language for probabilistic modeling
I've been reading about Church, which is a new computer language, developed in a prize-winning MIT doctoral thesis, that's designed to make computers better at modeling probability distributions.
The idea is that simulations are cheap to run (given a probability distribution, generate an example outcome) but inferred distributions are expensive to run (from a set of data, what was the most likely probability distribution that could have generated it?) This is essentially a Bayesian task, and it's what we want to do to understand, say, which borrowers are likeliest to default, or where terrorists are likely to strike again. It's also the necessary building block of AI. The problem is that the space of probability distributions that can explain the data is very big. Infinitely big in reality, of course, but still exponentially big after discretizing. Also, while the computational complexity of evaluating f(g(x)) is just f + g, the computational complexity of composing two conditional probability distributions B|A and C|B is
ΣB P(C, B|A)
whose computational time will grow exponentially rather than linearly.
Church is an attempt to solve this problem. (Apparently it's a practical attempt, because the founders have already started a company, Navia Systems, using this structure to build probabilistic computers.) The idea is, instead of describing a probability distribution as a deterministic procedure that evaluates the probabilities of different events, represent them in terms of probabilistic procedures for generating samples from them. That is, a random variable is actually a random variable. This means that repeating a computation will not give the same result each time, because evaluating a random variable doesn't give the same result each time. There's a computational advantage here because it's possible to compose random variables without summing over all possible values.
Church is based on Lisp. At the lowest level, it replaces Boolean gates with s
|
4244fc6f-8c77-4e1c-abd0-53e1ab6713e6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful?
I often get panicky and stressed at the thought of the never-ending nature of simple tasks. Laundry and dishes will always pile up; food and other stocks will always need to be resupplied; bills, insurance, taxes, and other paperwork will always need to be redone; I will always need to work to get money; etc. All of these things seem to stress me out significantly more than they do other people, and the fact that I'll never truly be rid of them is almost terrifying - and has been since I realized it in my teens.
When I've had difficulties with similar things in the past, I've been able to adapt by changing my perspective on the issue using "rationality" theories. For example, I used the theory of hyperbolic discounting and picoeconomics to change how I dealt with cravings and impulsive thoughts.
Sometimes, rationality-based techniques can also help. Goal factoring (and aversion factoring) helped me to change my habits for the better.
The problem is that I can't find anything similar to help with this issue - my difficulties with mundane responsibilities. I've heard of a few arguments/solutions that would help, but they seem insufficient:
Everyone has these chores, so everyone has the tools to deal with it - I've heard this in a few forms, and it doesn't stand up to scrutiny. Just because an experience is common doesn't mean that everyone can do it. For an obvious example, walking is pretty common, but many disabled people are unable to do it. People with debilitating psychological issues may be unable to function without medicine - or at all. There's always a possibility that something about my brain makes my experience unusual, unpleasant, and - possibly - unfixable.
Just trick yourself, fake it 'til you make it - A couple people have recommended that I just try to power through these chores, turning off my brain for a while. They suggest that if I do this enough times, I'll get desensitized to the stress and everything will be fine. Aside from the obvious sil
|
56e9f7ec-c13d-4b35-90de-0ea90ea94ca8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Fixing akrasia: damnation to acausal hell
DISCLAIMER: This topic is related to a potentially harmful memetic hazard, that has been rightly banned from Less Wrong. If you don't know what is, it is more likely you will be fine than not, but be advised. If do know, do not mention it in the comments.
----------------------------------------
Abstract: The fact that humans cannot precommit very well might be one of our defences against acausal trades. If transhumanists figure out how to beat akrasia by some sort of drug or brain tweaks, that might make them much better at precommitment, and thus more vulnerable. That means solving akrasia might be dangerous, at least until we solve blackmail. If the danger is bad enough, even small steps should be considered carefully.
Strong precommitment and building detailed simulations of other agents are two relevant capabilities humans currently don't have. These capabilities have some unusual consequences for games. Most relevant games only arise when there is a chance of monitoring, commitment and multiple interactions. Hence being in a relevant game often implies cohabiting casual connected space-time regions with other agents. Nevertheless, being able to build detailed simulations of agents allows one to vastly increase the subjective probably this particular agent will have that his next observational moment will be under one's control iff the agent have access to some relevant areas of the logical game theoretic space. This doesn't seem desirable from this agent's perspective, it is extremely asymmetrical and allows more advanced agents to enslave less advanced ones even if they don't cohabit casual connected regions of the universe. Being able to be acausally reached by powerful agent who can simulate 3^^^3 copies of you, but against which you cannot do much is extremely undesirable.
However, and more generally, regions of the block universe can only be in a game with non-cohabiting regions if they are both agents and if they can strong precommit. Any acausa
|
ede158a6-cdfa-4dd4-be30-de5c8130da5f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Recent AI model progress feels mostly like bullshit
About nine months ago, I and three friends decided that AI had gotten good enough to monitor large codebases autonomously for security problems. We started a company around this, trying to leverage the latest AI models to create a tool that could replace at least a good chunk of the value of human pentesters. We have been working on this project since June 2024.
Within the first three months of our company's existence, Claude 3.5 sonnet was released. Just by switching the portions of our service that ran on gpt-4o, our nascent internal benchmark results immediately started to get saturated. I remember being surprised at the time that our tooling not only seemed to make fewer basic mistakes, but also seemed to qualitatively improve in its written vulnerability descriptions and severity estimates. It was as if the models were better at inferring the intent and values behind our prompts, even from incomplete information.
As it happens, there are ~basically no public benchmarks for security research. There are "cybersecurity" evals that ask models questions about isolated blocks of code, or "CTF" evals that give a model an explicit challenge description and shell access to a <1kLOC web application. But nothing that gets at the hard parts of application pentesting for LLMs, which are 1. Navigating a real repository of code too large to put in context, 2. Inferring a target application's security model, and 3. Understanding its implementation deeply enough to learn where that security model is broken. For these reasons I think the task of vulnerability identification serves as a good litmus test for how well LLMs are generalizing outside of the narrow software engineering domain.
Since 3.5-sonnet, we have been monitoring AI model announcements, and trying pretty much every major new release that claims some sort of improvement. Unexpectedly by me, aside from a minor bump with 3.6 and an even smaller bump with 3.7, literally none of the new models we've tried have made
|
8a0e3364-516c-49f7-b703-be780d557551
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
What is the most convincing article, video, etc. making the case that AI is an X-Risk
In particular, for someone who is very hard to convince and that addresses all of the objections a well-educated, rational person may have
|
174fdff9-810f-4afd-bfd7-2bbd7f038e96
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Cynicism in Ev-Psych (and Econ)
Today's post, Cynicism in Ev-Psych (and Econ?) was originally published on 11 February 2009. A summary (taken from the LW wiki):
> Evolutionary Psychology and Microeconomics seem to develop different types of cynical theories, and are cynical about different things.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Informers and Persuaders, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
02a567a3-20ec-4be7-aa6f-72c311bcff38
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Systemic risk: a moral tale of ten insurance companies
Once upon a time...
Imagine there were ten insurance sectors, each sector being a different large risk (or possibly the same risks, in different geographical areas). All of these risks are taken to be independent.
To simplify, we assume that all the risks follow the same yearly payout distributions. The details of the distribution doesn't matter much for the argument, but in this toy model, the payouts follow the discrete binomial distribution with n=10 and p=0.5, with millions of pounds as the unit:
This means that the probability that each sector pays out £n million each year is (0.5)10 . 10!/(n!(10-n)!).
All these companies are bound by Solvency II-like requirements, that mandate that they have to be 99.5% sure to payout all their policies in a given year - or, put another way, that they only fail to payout once in every 200 years on average. To do so, in each sector, the insurance companies have to have capital totalling £9 million available every year (the red dashed line).
Assume that each sector expects £1 million in total yearly expected profit. Then since the expected payout is £5 million, each sector will charge £6 million a year in premiums. They must thus maintain a capital reserve of £3 million each year (they get £6 million in premiums, and must maintain a total of £9 million). They thus invest £3 million to get an expected profit of £1 million - a tidy profit!
Every two hundred years, one of the insurance sectors goes bust and has to be bailed out somehow; every hundred billion trillion years, all ten insurance sectors go bust all at the same time. We assume this is too big to be bailed out, and there's a grand collapse of the whole insurance industry with knock on effects throughout the economy.
But now assume that insurance companies are allowed to invest in each other's sectors. The most efficient way of doing so is to buy equally in each of the ten sectors. The payouts across the market as a whole are now described by the discrete binomia
|
521c2556-3519-4974-8477-d76c1bad3145
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reality vanishes in a puff of logic
This post is part of a series where I hope to share some of what I've learned about a woefully overlooked connection between rationality and mysticism.
----------------------------------------
Many people have a deep intuition that there's something unspeakably magical about life and existence, but struggle to fit it into a rationalist framework. Deep down, we "know" that this intuition can be nothing more than rogue neurotransmitters, presumably serving some mindless evolutionary purpose.
In my experience, there is more to the story. And it turns out that rationality, if followed meticulously to its logical conclusion, can help offer a glimpse into the experiential source of that intuition.
----------------------------------------
Your entire life unfolds entirely within your mind. Yet it is also obvious that there is an objectively reality outside of mind that is responsible for this.
But what happens when you dig very skeptically into the basis of that knowledge? It is not hard to see that you have no way of knowing whether such a hypothesis is ultimately true in any sense. But it gets worse: you find that you cannot even assign it a meaningful probability without blind faith in assumptions that cannot themselves be independently confirmed.
As Sean Carroll eloquently points out:
> There is no way to distinguish between the scenarios by collecting new data.
> What we’re left with is our choice of prior credences. We’re allowed to pick priors however we want—and every possibility should get some nonzero number. But it’s okay to set our prior credence in radically skeptical scenarios at very low values, and attach higher prior credence to the straightforwardly realistic possibilities.
What does "straightforwardly realistic" mean? It means that we should pick a mental model of reality that straightforwardly matches our experience of it:
> Experience --> Model
Yet it should also be obvious that causation also operates in the other direction. Your model of
|
1d9d4582-a4b9-426b-ad19-319fc52ac78d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Spending Update 2024
I'm generally a pretty big fan of transparency, and one way I try to promote this is writing up our finances every few years ( 2022, 2020, 2018, 2016, 2014). This is also useful to me: putting things into a form where others can understand it is pretty good for getting myself to really understand it!
This post uses the same approach as last year, which is almost the same as before then. Numbers are monthly, based on 2023 spending:
* Donations: $6.2k (48% of 2023 adjusted gross income)
* Taxes: $3.4k
* Income tax: $1k
* State tax: $400
* Social Security tax: $900
* Medicare tax: $200
* Property tax: $800
* Childcare: $4.3k ($200/workday, three kids)
* Housing: $2.7k
* Note: this is tricky; see details below on how this is calculated
* One time expenses (all time)
* Purchase and all one-time expenses up through the 2022 update: $1.1M
* Major one-time expenses since the 2022 update: $19.4k:
* Bathroom renovation: $14.5k
* Porch roof replacement: $1.5k
* Replacement fridge: $1.8k
* Shower leak: $1.6k
* Ongoing expenses, covering the whole house including the tenants' unit:
* Electricity: $271
* Gas (Heat): $202
* Water/Sewer: $165
* Other: $165
* Rent income: $4.1k
* Retirement saving: $3.7k (all pre-tax)
* Other savings: -$6.4k (see below)
* Medical: $244 in pre-tax health insurance, ~$400 in post-tax co-pays etc
* Food: $732 (two adults, two kids, one toddler)
* Other: $1k
* Includes phone bills, taxis, car rentals, clothes, vacation, stuff for the kids, and other smaller expenses.
* Because we are no longer tracking our expenses to the dollar, the distinction between "Other" and "Savings" is an estimate.
Here's a summary of our monthly spending as a table:
Category pre-tax post-tax total Donations $0 $6,167 $6,167 Taxes $0 $3,400 $3,400 Housing $0 $2,793 $2,793 Childcare $0 $4,275 $4,275 Medical $244 $400 $644 Food $0 $732 $732 Other $0 $1,000 $1,000 Savings $3
|
1ba05375-e0d9-42c4-b681-6d9e98140bf9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reference Points
I just spent some time reading Thomas Schelling's "Choice and Consequences" and I heartily recommend it. Here's a Google books link to the chapter I was reading, "The Intimate Contest for Self Command."
It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed with how much of his stuff you're already familiar with - he really established a heck of a lot modern thinking on game theory.
Allow me to depart from Schelling a moment, and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there's massive amounts of data and references there, so I'd recommend you just skim his site if you go visit until you find something interesting. You'll probably find something interesting pretty quickly.
I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.
Now, he explained it very eloquently, and I'm afraid I'm going to mangle and not do justice to his explanation. But to make a long story really short, your reference points affect your motivation a lot.
An example would help.
What does the average person think about he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes and being embarrassed at being out of shape. A lot of people try running at some point in their life, and most people don't keep doing it.
On the other hand, what does a regular runner think of? He thinks of the "runner's high" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.
Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I
|
b94fe8d7-e646-470e-8c9d-ad9024f91d40
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Electrical efficiency of computing
Computer performance per watt has probably doubled every 1.5 years between 1945 and 2000. Since then the trend slowed. By 2015, performance per watt appeared to be doubling every 2.5 years.
Details
-------
In 2011 Jon Koomey reported that computation per kWh had doubled every roughly 1.5 years since around 1950, as shown in figure 1 (taken from him).[1](https://aiimpacts.org/electrical-efficiency-of-computing/#easy-footnote-bottom-1-1096 "“What most folks don’t know, however, is that the <em>electrical efficiency</em>of computing (the number of computations that can be completed per kilowatt-hour of electricity) has doubled about every one and a half years since the dawn of the computer age ” – Jon Koomey, 19 Dec 2011, http://www.koomey.com/post/14466436072") Wikipedia calls this trend ‘[Koomey’s Law](https://en.wikipedia.org/wiki/Koomey%27s_law)‘. In 2015 Koomey and Naffziger reported in IEEE Spectrum that Koomey’s law began to slow down in around 2000 and by 2015, electrical efficiency was taking 2.5 years to double.[2](https://aiimpacts.org/electrical-efficiency-of-computing/#easy-footnote-bottom-2-1096 "“This trend started well before the first microprocessor, way back in the mid-1940s. But it began to come to an end around 2000. Growth in both peak-output efficiency and performance started to slow, weighed down by the physical limitations of shrinking transistors. ” – Koomey and Naffziger, 31 March 2015, https://spectrum.ieee.org/computing/hardware/moores-law-might-be-slowing-down-but-not-energy-efficiency")
We have not investigated beyond this, except to note that there is not obvious controversy on the topic. We do not know the details of the methods involved in this research, for instance how ‘computations’ are measured.
[](http://aiimpacts.org/wp-content/uploads/2018/02/Koomeys_law_graph_made_by_Koomey.jpg)**Figure 1.** Computations per kWh over recent history. Taken from Dr Jon Koomey, <http://www.koomey.com/post/14466436072>, [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0 "Creative Commons Attribution-Share Alike 3.0")
|
aad91c25-cf79-4f45-85c6-ac2cec4885bf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Models of moderation
[Author's note: I will move this into meta in a week, but this is a bit more important than the usual meta-announcements, so I will have it in community for a bit.]
Meta
This post is trying to achieve roughly four things:
1. Be a future reference for some hopefully useful models about moderation
2. Give people a model of me in particular and how I think about moderation (since I have a bunch of control about moderation on the site)
3. Ask the community for input on the moderation systems we should have on LessWrong
4. Provide some context for moderation changes we have planned that we will explain in a follow-up post
Thoughts on moderation
I think when making my decisions about moderation, there are at least five major models that drive my decisions:
1. People are competing for limited resources, and moderation helps to move the people participating into a better Nash equilibrium by enacting legislation that rewards cooperative behavior and punishes defective behavior. [microeconomic frame]
2. There is a distinction between adversarial and non-adversarial states of mind, and the goal of a moderation policy is to cause participants to generally feel safe and deactivate their adversarial instincts. [safety frame]
3. There is a limited amount of bandwidth available to communicate, and the goal of a moderation policy is to allocate that bandwidth to users that will use that bandwidth for the greatest common good. [bandwidth allocation frame]
4. There is a shared methodology that underlies the rationalist community that establishes what forms of reasoning are effective, and what perspectives on the world are fruitful. The goal of a moderation policy is to nudge people to use the approaches to reasoning that actually work (and that the community agrees on work). [methodology frame]
5. We don't really know what moderation policies and technologies work before we see them, but we can judge fairly easily what discussions have been productive. Different people m
|
04d265ed-1af8-4c2f-8d69-9dc9b9e4b173
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Impossibility results for unbounded utilities
Some people think that they have unbounded utility functions. This isn't necessarily crazy, but it presents serious challenges to conventional decision theory. I think it probably leads to abandoning probability itself as a representation of uncertainty (or at least any hope of basing decision theory on such probabilities). This may seem like a drastic response, but we are talking about some pretty drastic inconsistencies.
This result is closely related to standard impossibility results in infinite ethics. I assume it has appeared in the philosophy literature, but I couldn't find it in the [SEP entry on the St. Petersburg paradox](https://plato.stanford.edu/entries/paradox-stpetersburg/) so I'm posting it here. (Even if it's well known, I want something simple to link to.)
(*ETA: this argument is extremely similar to Beckstead and Thomas' argument against Recklessness in* [*A paradox for tiny probabilities and enormous values*](https://globalprioritiesinstitute.org/wp-content/uploads/Beckstead-Thomas-A-Paradox-for-Tiny-Probabilities-and-Enormous-Values-Version-2.pdf)*. The main difference is that they use transitivity +"recklessness" to get a contradiction whereas I argue directly from "non-timidity." I also end up violating a dominance principle which seems even more surprising to violate, but at this point it's kind of like splitting hairs. I give a slightly stronger set of arguments in* [*Better impossibility results for unbounded utilities*](https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities)*.*)
Weak version
============
We'll think of preferences as relations <.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
over probability distributions over some implicit space of outcomes Ω (and we'll identify outcomes with the constant probability distribution). We'll show that there is no relation < which satisfies three properties: Antisymmetry, Unbounded Utilities, and Dominance.
Note that we assume nothing about the existence of an underlying utility function. We don't even assume that the preference relation is complete or transitive.
The properties
--------------
**Antisymmetry**: It's never the case that both A<B and B<A.
**Unbounded Utilities**: there is an infinite sequence of outcomes X1,X2,X4,X8,… each "more than twice as good" as the last.[[1]](#fnu9a0l024xe) More formally, there exists an outcome X0 such that:
* X2k<12X2k+1+12X0 for every k.
* X0<12X1+12X0 [[2]](#fne3hymzzyg3i)
That is, X1 is not as good as a 12 chance of X2, which is not as good as a 14 chance of X4, which is not as good as a 18 chance of X8... This is nearly the weakest possible version of unbounded utilities.[[3]](#fn6eqo4hx7ylh)
**Dominance**: let A0,A1,… and B0,B1,… be sequences of lotteries, and p0,p1,… be a sequence of probabilities that sum to 1. If Ai<Bi for all i, then ∑ipiAi<∑ipiBi.
Inconsistency proof
-------------------
Consider the lottery X∞=12X0+14X1+18X2+116X4+…
We can write X∞ as a mixture:
* X∞=12(12X0+12X1)+14(12X0+12X2)+18(12X0+12X4)+…
By definition X0<12X0+12X1. And for each k, Unbounded Utilities implies that X2k<12X0+12X2k+1. Thus Dominance implies X∞<X∞, contradicting Antisymmetry.
How to avoid the paradox?
-------------------------
By far the easiest way out is to reject Unbounded Utilities. But that's just a statement about our preferences, so it's not clear we get to "reject" it.
Another common way out is to assume that any two "infinitely good" outcomes are incomparable, and therefore to reject Dominance.[[4]](#fn5jo9dc55nlr) This results in being indifferent to receiving $1 in every world (if the expectation is already infinite), or doubling the probability of all good worlds, which seems pretty unsatisfying.
Another option is to simply ignore small probabilities, which again leads to rejecting even the finite version of Dominance---sometimes when you mix together lotteries something will fall below the "ignore it" threshold leading the direction of your preference to reverse. I think this is pretty bizarre behavior, and in general ignoring small probabilities is much less appealing than rejecting Unbounded Utilities.
All of these options seem pretty bad to me. But in the next section, we'll show that if the unbounded utilities are symmetric---if there are both arbitrarily good *and* arbitrarily bad outcomes---then things get even worse.
Strong version
==============
I expect this argument is also known in the literature; but I don't feel like people around LW usually grapple with exactly how bad it gets.
In this section we'll show there is no relation < which satisfies three properties: Antisymmetry, Symmetric Unbounded Utilities, and Weak Dominance.
*(ETA: actually I think that even with only positive utilities you already violate something very close to Weak Dominance, which* [*Beckstead and Thomas*](https://globalprioritiesinstitute.org/wp-content/uploads/Beckstead-Thomas-A-Paradox-for-Tiny-Probabilities-and-Enormous-Values-Version-2.pdf) *call Prospect-Outcome dominance. I find this version of Weak Dominance slightly more compelling, but Symmetric Unbounded Utilities is a much stronger assumption than Unbounded Utilities or non-Timidity, so it's probably worth being aware of both versions. In a footnote*[[5]](#fnqh4llkqb1s)*I also define an even weaker dominance principle that we are forced to violate.)*
The properties
--------------
**Antisymmetry**: It's never the case that both A>B and B>A.
**Symmetric Unbounded Utilities**. There is an infinite sequence of outcomes X1,X−2,X4,X−8,… each of which is "more than twice as important" as the last but with opposite sign. More formally, there is an outcome X0 such that:
* X0<X1
* For every even k : 13X−2k+1+23X2k<X0
* For every odd k : 13X2k+1+23X−2k>X0
That is, a certainty of X1 is outweighed by a 12 chance of X−2, which is outweighed by a 14 chance of X4, which is outweighed by a 18 chance of X−8....
**Weak Dominance**.[[5]](#fnqh4llkqb1s) For any outcome X, any sequence of lotteries B0,B1,…, and any sequence of probabilities p0,p1,… that sum to 1:
* If X<Bi for every i, then X<∑piBi.
* If X>Bi for every i, then X>∑piBi.
Inconsistency proof
-------------------
Now consider the lottery X±∞=12X1+14X−2+18X4+116X−8+132X16+…
We can write X±∞ as the mixture:
* 34(23X1+13X−2)+316(23X4+13X−8)+364(23X16+13X−32)…
By Unbounded Utilities each of these terms is <X0. So by Weak Dominance, X±∞<X0.
But we can also write X±∞ as the mixture:
* 12X1+38(23X−2+13X4)+332(23X−8+13X16)+…
By Unbounded Utilities each of these terms is >X0. So by Weak Dominance X±∞>X0. This contradicts Antisymmetry.
Now what?
---------
As usual, the easiest way out is to abandon Unbounded Utilities. But if that's just the way you feel about extreme outcomes, then you're in a sticky situation.
You could allow for unbounded utilities as long as they only go in one direction. For example, you might be open to the possibility of arbitrarily bad outcomes but not the possibility of arbitrarily good outcomes.[[6]](#fnhb7quasrh5k) But the asymmetric version of unbounded utilities doesn't seem very intuitively appealing, and you *still* have to give up the ability to compare any two infinitely good outcomes (violating Dominance).
People like talking about extensions of the real numbers, but those don't help you avoid any of the contradictions above. For example, if you want to extend < to a preference order over hyperreal lotteries, it's just even *harder* for it to be consistent.
Giving up on Weak Dominance seems pretty drastic. At that point you are *talking* about probability distributions, but I don't think you're really using them for decision theory---it's hard to think of a more fundamental axiom to violate. Other than Antisymmetry, which is your other option.
At this point I think the most appealing option, for someone committed to unbounded utilities, is actually much more drastic: I think you should give up on probabilities as an abstraction for describing uncertainty, and should not try to have a preference relation over lotteries at all.[[7]](#fnogdncy71ld) There are no ontologically fundamental lotteries to decide between, so this isn't necessarily so bad. Instead you can go back to talking directly about preferences over uncertain states of affairs, and build a totally different kind of machinery to understand or analyze those preferences.
ETA: replacing dominance
========================
Since writing the above I've become more sympathetic to violations of Dominance and even Weak Dominance---it would be pretty jarring to give up on them, but I can at least imagine it. I still think violating "Very Weak Dominance"[[5]](#fnqh4llkqb1s) is pretty bad, but I don't think it captures the full weirdness of the situation.
So in this section I'll try to replace Weak Dominance by a principle I find even more robust: if I am indifferent between X and *any* of the lotteries Ai, then I'm also indifferent between X and any mixture of the lotteries Ai. This isn't strictly weaker than Weak Dominance, but violating it feels even weirder to me. At any rate, it's another fairly strong impossibility result constraining unbounded utilities.
The properties
--------------
We'll work with a relation ≤ over lotteries. We write A=B if both A≤B and B≤A. We write A<B if A≤B but not A=B. We'll show that ≤ can't satisfy four properties: Transitivity, Intermediate mixtures, Continuous symmetric unbounded utilities, and Indifference to homogeneous mixtures.
**Intermediate mixtures**. If A<B, then A<12A+12B<B.
**Transitivity**. If A≤B and B≤C then A≤C.
**Continuous symmetric unbounded utilities**. There is an infinite sequence of lotteries X1,X−2,X4,X−8,… each of which is "exactly twice as important" as the last but with opposite sign. More formally, there is an outcome X0 such that:
* X0<X1
* For every even k : 13X−2k+1+23X2k=X0
* For every odd k : 13X2k+1+23X−2k=X0
That is, a certainty of X1 is exactly offset by a 12 chance of X−2, which is exactly offset by a 14 chance of X4, which is exactly offset by a 18 chance of X−8....
Intuitively, this principle is kind of like symmetric unbounded utilities, but we assume that it's possible to dial down each of the outcomes in the sequence (perhaps by mixing it with X0) until the inequalities become exact equalities.
**Homogeneous mixtures**. Let X be an outcome, A0,A1,A2,…, a sequence of lotteries, and p0,p1,p2,… be a sequence of probabilities summing to 1. If Ai=X for all i, then ∑piAi=X.
Inconsistency proof
-------------------
Consider the lottery X±∞=12X1+14X−2+18X4+116X−8+132X16+…
We can write X±∞ as the mixture:
* 34(23X1+13X−2)+316(23X4+13X−8)+364(23X16+13X−32)…
By Unbounded Utilities each of these terms is =X0. So by homogeneous mixtures, X±∞=X0.
But we can also write X±∞ as the mixture:
* 12X1+38(23X−2+13X4)+332(23X−8+13X16)+…
By Unbounded Utilities each of these terms other than the first is =X0. So by Homogenous Mixtures, the combination of all terms other than the first is =X0. Together with the fact that X1>X0, Intermediate Mixtures and Transitivity imply X±∞>X0. But that contradicts X±∞=X0.
1. **[^](#fnrefu9a0l024xe)**Note that we could replace "more than twice as good" with "at least 0.00001% better" and obtain exactly the same result. You may find this modified version of the principle more appealing, and it is closer to non-timidity as defined in [Beckstead and Thomas](https://globalprioritiesinstitute.org/wp-content/uploads/Beckstead-Thomas-A-Paradox-for-Tiny-Probabilities-and-Enormous-Values-Version-2.pdf). Note that the modified principle implies the original by applying transitivity 100000 times, but you don't actually need to apply transitivity to get a contradiction, you can just apply Dominance to a different mixture.
2. **[^](#fnrefe3hymzzyg3i)**You may wonder why we don't just write X0<X1. If we did this, we'd need to introduce an additional assumption that if A<B, pA+(1−p)X0<pB+(1−p)X0. This would be fine, but it seemed nicer to save some symbols and make a slightly weaker assumption.
3. **[^](#fnref6eqo4hx7ylh)**The only plausibly-weaker definition I see is to say that there are outcomes X0<X1 and an infinite sequence X>2,X>3,... such that for all n: 1nX>n+(1−1n)X0>X1. If we replaced the > with = then this would be stronger than our version, but with the inequality it's not actually sufficient for a paradox.
To see this, consider a universe with three outcomes X0,X1,X∞ and a preference order < that always prefers lotteries with higher probability of X∞ and breaks ties using by preferring a higher probability of X1. This satisfies all of our other properties. It satisfies the weaker version of the axiom by taking X>n=X∞ for all n, and it wouldn't be crazy to say that it has "unbounded" utilities.
4. **[^](#fnref5jo9dc55nlr)**For realistic agents who think unbounded utilities are possible, it seems like they should assign positive probability to encountering a St. Petersburg paradox such that [all decisions have infinite expected utility](https://arxiv.org/abs/0712.4318). So this is quite a drastic thing to give up on. See also: [Pascal's mugging](https://www.nickbostrom.com/papers/pascal.pdf).
5. **[^](#fnrefqh4llkqb1s)**I find this principle pretty solid, but it's worth noting that the same inconsistency proof would work for the even weaker "Very Weak Dominance": for any pair of outcomes with X0<X1, and any sequence of lotteries Bi each strictly better than X1, any mixture of the Bi should *at least* be strictly better than X0!
6. **[^](#fnrefhb7quasrh5k)**Technically you can also violate Symmetric Unbalanced Utility while having both arbitrarily good *and* arbitrarily bad outcomes, as long as those outcomes aren't comparable to one another. For example, suppose that worlds have a real-valued amount of suffering and a real-valued amount of pleasure. Then we could have a lexical preference for minimizing expected suffering (considering all worlds with infinite expected suffering as incomparable), and try to maximize pleasure only as a tie-breaker (considering all worlds with infinite expected pleasure as incomparable).
7. **[^](#fnrefogdncy71ld)**Instead you could keep probabilities but abandon infinite probability distributions. But at this point I'm not exactly sure what unbounded utilities means---if each decision involves only finitely many outcomes, then in what sense do all the other outcomes exist? Perhaps I may face infinitely many possible decisions, but each involves only finitely many outcomes? But then what am I to make of my parent's decisions while raising me, which affected my behavior in each of those infinitely many possible decisions? It seems like they face an infinite mixture of possible outcomes. Overall, it seems to me like giving up on infinitely big probability distributions implies giving up on the spirit of unbounded utilities, or else going down an even stranger road.
|
c62ca445-3fbd-4041-a5a9-1d164c4443f2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is daily caffeine consumption beneficial to productivity?
Caffeine raises human alertness by binding to adenosine receptors in the human brain. It prevents those receptors from binding adenosine and suppressing activity in the central nervous system.
Regular caffeine productions seems to result in the body building more adenosine receptors, but it's unclear to me whether or not the body produces enough adenosine receptors to fully cancel out the effect. Did anybody look deeper into the issue and knows the answer?
|
4d5803c3-bf12-460c-bcc9-a9c6c32a7e89
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] A Sense That More Is Possible
Today's post, A Sense That More Is Possible was originally published on 13 March 2009. A summary (taken from the LW wiki):
> The art of human rationality may have not been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician - more like that of a strong casual amateur. Self-proclaimed "rationalists" don't seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Raising the Sanity Waterline, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
b5a20bb6-ce16-469a-8d15-a603cf35dca9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What bootstraps intelligence?
Cross post from: https://invertedpassion.com/what-bootstraps-intelligence/
A musing on how intelligence comes to be.
The bedrock of intelligence is abstractions – the thing we do when we throw away a lot of information and just emphasise on a subset of it (e.g. calling that thing an apple instead of describing all its atoms and their x, y, z positions).
But where does the drive to form abstractions comes from? What if it rose from our desire to communicate with others? Since communication bandwidth is always limited, we are driven to find most efficient way of getting an idea across which leads to abstractions. Imagine a world where energy and time is unlimited, we might be communicating all x,y,z positions of things instead of putting labels on them.
We form abstractions when we notice some aspects of situations that we have encountered so far that we expect to find again in future situations. So the label “apple” becomes an efficient placeholder for all apple-like objects that are sweet and can be eaten.
It’s interesting to note that languages navigate a tradeoff between efficiency and fidelity. We want to communicate what we mean but we cannot spend enormous amount of energy on it. But we can’t spend too less energy too since communication channels are noisy and the recipient may not understand what we mean.
But since infinitely many abstractions for the same information is possible, how do we select which one to use? For example, why did the Arabic numbering system win over other alternative abstractions of numbers? Well, our abstractions are grounded by usefulness. In math, base-10 number system beat base-9 number system because we have 10 fingers and hence easier to count in base-10. (Of course, historical accidents may lead to initially inefficient systems such as Gregorian calendar to get adapted).
Why do we need to communicate useful abstractions? Because we are driven to co-operate by virtue of our shared gene pool. We know sharing useful abstractio
|
955c1ff2-1eaf-4224-8683-ffa23bb3d2d3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Link] Sarah Constantin on RaDVaC
None
|
2f63e34d-49b9-423f-9581-6b205f8a6d6a
|
StampyAI/alignment-research-dataset/youtube
|
Youtube Transcripts
|
The Value Alignment Problem in Artificial Intelligence
school and this gets summarized in this
nifty little equation up here where
we're maximizing over a vector of
actions over a sequence of actions the
sum over all time of the reward you get
four states and this is related to
models of rationality where we say
people are trying to maximize reward or
accomplish tasks in the world this
particular model is called a Markov
decision process there are a bunch of
similar things in most applications of
AI to give you a sense of what this
looks like for a robotics application
you might have this arm which is a Jayco
seven degree of freedom arm in a world
with this simulated person and this face
on the table we would describe that by
the position of the person the position
of the robot in the position of the base
the different actions or motions the
robot can take with the joints and the
reward in this case the way we tell the
robot what we want is going to be a sum
of distances distances from the robots
hand to different objects in the world
so you might talk about distances from
the robots hand to the person's face it
turns out people don't like it when you
move your hand right like close to their
head it's bad you also want to stay away
from the person's torso but it's kind of
a different object to avoid you don't
want to run into the table so you want
to stay away from that and you don't
want to run into the base and so we
would include these as objects to stay
away from and then the last thing that
we need is to set these weights we have
to say how bad is it to be close to
these different objects and this is the
encoding of our goal for the robot this
is what determines what's what it's
going to do and it turns out tuning
these weights is really hard and it's
something that I started to spend a lot
of my time doing over the course of grad
school so here's an example of what that
might look like we've got here a slider
each controlling one of those weights so
this is telling you how important it is
to avoid running into the head for
assist to avoid running into the base
and you go back and forth between tuning
them and saying and changing the values
and
keeping the trajectories and then you
measure your performance by how well it
does in a bunch of environments so the
robot arm is kind of moving we have an
idea of how we'd like it to move in each
setting and you go through this process
of iteratively tuning these weights in
order to get the robot to do what you
want and what this pointed out to me was
that that story I told you guys before
about how robots make choices is
actually a bit of a fiction it's kind of
how we pretend the robots make decisions
in practice sitting outside of that
planning algorithm or whatever
optimization you have you have a system
designer and that system designer is
responsible for figuring out the reward
function that the robot actually
optimizes in this programming process is
really one of the it's the new type of
programming that we're doing when we put
AI systems into the world and
furthermore that process of setting or
word functions which is not like slow
and tedious and hard it's also
incredibly brittle so you find you
finally work hard get something that
works in a bunch of different situations
you're feeling good and then you go to a
new environment right then it does this
and just moves its hand right through
the vase kind of smacks it out of the
way
um and the thing is I can't tell you
what's different about this situation
just that whatever weights we had in
that previous setting didn't work here
so not only is it that we have to think
about this process of communicating our
goals but we also have to be thinking
about the fact that any goal we write
down is probably wrong in at least some
ways and there's in this case you know a
representation of what you really want
your true intent it's something that we
are trying to communicate and we're
doing that and we can't always do it
right because at the end of the day
we're only human so this can go wrong in
a whole bunch of ways not just for
robotics this is an example of a deep
reinforcement learning system that was
learning how to do boat racing so play a
bow racing video game it's really
clearly not doing that right
but in this case the thing is this is
not due to a bug in the code this robot
is actually doing exactly what you asked
what they asked it to do what happened
was the system designers looked at this
problem they said well we like the book
to win the race which involves going
around the track and going faster than
everyone else but we don't know how to
write that down but luckily enough
there's this really handy score button
the score function down in the bottom
and you get points for playing with
racing game and we can just tell the
robot to get as high of a score as to
can and it just happens to be that these
green balloons which the robot was
spinning in circles and collecting our
worth points and so the robot actually
found an ingenious strategy to get
essentially an infinitely high score
because by spinning in circles the boat
boat race wouldn't end your score again
as high as it could and you would just
continue collecting points forever and
ever you just like discovered like a
money tree or something for those of you
who are Animal Crossing so the question
is how what happened of course is we had
this score as an objective but what we
really wanted was to have systems that
could win and we didn't know how to
describe that appropriately and this was
something I was beginning to pivot my
research towards and I was thinking
about well you know we've got
reinforcement learning and we've got
robotics and we want to be able to
describe what we want to these systems
really well and then 2016 came along
which was it tumult Ruis year for many
of us and one of the interesting things
that grew out of that
was there started to be a lot of media
articles that looked something like this
so here's a representative one by a
political scientist from the University
of North Carolina named zeyneb chief
ecchi and what she said was to keep
users watching YouTube utilizes a
recommendation system powered by
artificial intelligence indeed after
Google brain took over YouTube's
recommendations in 2015 there a
laudatory article
and how it had significantly increased
engagement engagement is a measure of
how much time people spend on the site
how much they click on different things
how many comments they write and so on
YouTube the algorithms will push
whatever they deem engaging and it
appears they have figured out that wild
claims as well as hate speech and
outrage peddling can be particularly so
and so for me as someone who is thinking
about the wrong objectives and
incentives for robots this was really
interesting because what's happening in
those content recommendation systems
like YouTube and Facebook is you have a
bunch of different pieces of content
articles and videos that they could show
you there is in effect a robot choosing
which of those to show to someone and
how are they choosing that well they're
choosing that in order to maximize the
engagement that people have with that
system and so in this case though
they'll select maybe that long piece of
content and this seems pretty innocuous
pretty reasonable um but to me I mean
remembered like think about that score
function in that boat racing game and
what seemed like a pretty reasonable
goal or objective actually led to
counterintuitive and surprising behavior
and it turns out the same thing does
happen with engagement objectives so
particular there's one piece of
information which for us we know about
because of the Journal of political
psychology in 1994 and what happened is
they went and surveyed people to find
out properties of people's belief in
conspiracy theories and the relevant
part is this highlighted section right
here which says that people who believed
in one conspiracy were more likely to
believe in others and if you're trying
to optimize engage with sets of videos
this is really really useful because
there are tons of videos about
conspiracy theories and there's a bunch
of different branches of them and so if
you found someone who's interacted with
one a lot
that's a predictor that they're going to
interact with other types of conspiracy
videos and furthermore these videos tend
to be very engaging once people buy into
them they spend tons of times watching
they engage very highly and so I've
actually found is
as those articles pointed out systems
have developed a bias for this engaging
content in order to recommend and
disseminate conspiracy videos broadly
and for me what I noticed is this is the
same problem that we get with that boat
spinning in circles sitting outside of
that engagement optimization is the
company as the system designer that's
putting in a goal of engagement to the
system and in their head there's
something separate which is is their
true intent their real goal which could
be engagement that's certainly important
now but it includes a lot of other
properties it includes the stock price I
think these companies don't want to be
exposed to regulatory risk from bad PR
around this the companies maybe don't
care what the people within the
companies care about the fairness of
these algorithms and how they interact
with people um there's also sides of
this like creator loyalty the fact that
people who are producing these videos
and content care about who it gets
served to and so on and so forth this is
really just scratching the surface of
the types of concerns you might have
and so really this is a story about how
AI makes decisions and how artificial
intelligence is programmed with its
goals and to me what this points us
towards is a statement about what the
field of artificial intelligence is
trying to accomplish I think many
practitioners would describe the goal of
artificial intelligence as a goal to
design and build a machine that can
effectively optimize for any specified
objective you write down a task you
write down a goal and you have a system
that can optimize for that effectively
and accomplish it and what my work was
about and what I'm continuing to work on
and what I think is crucial for us to
adopt both as a research field and
society is a kind of small of a
consequential change here which is that
the goal of artificial intelligence has
to be to build and design a machine that
can effectively optimize for the
intended objective that people have in
their heads
with that I'm gonna stop the talk thanks
very much guys this was great that's
great
are there any questions that I can take
yeah I want to ask a question can you go
back to the last slide kind of
broadening the the horizons here when I
saw this first quote Ellis or I also
wanted to change that same section but I
wanted to say something like the goal of
artificial intelligence is to design and
build a machine that can effectively
optimize for an arbitrary objective what
do you think of broadening the horizons
and saying hey we built an AI that can
not only play this boat racing game but
it can also do laundry or an arbitrary
objective that maybe the robot comes up
with on its own or maybe we can you know
on the fly give it a new target what are
your thoughts on the future of that and
existing research I mean I think it's
certainly related here I know to me and
I actually see it as being quite related
to optimize for any specified objective
right you can also think about changing
your specification or updating it and
the specifications here can be arbitrary
the point that I'm trying to make and
what I think the arbitrary objective
phrasing still doesn't quite capture is
that there's a process of expressing
your goal and of communicating that to
the system and if the onus is on us
entirely to get that right that could
actually be quite bad and in practice
what we find is that tuning objectives
and setting objectives is one of the
hardest part of AI practice hardest
parts of AI practice so for me I think
the purpose of this is not to say that
entry is not about really increasing the
scope of what systems can accomplish and
certainly that's most of what AI
research is about what this is saying is
that in order for AI to be successful
and sort of broadly beneficial in
society we need to be working on a
separate
related but not but but also different
problem which is helping to elicit the
correct objective and dealing with
uncertainty in incorrect specifications
of the goal absolutely yeah
it even takes it further have a
follow-up question what do you think
when the goal is well here this might be
you know shortening the scope what do
you think when the goal is clear but the
method to which you obtain that goal is
not clear at all for example I play some
chess and the goal of an ARS system is
to win the game but that's not obvious
how you would do so one of the
heuristics they have or one of the like
checkpoints they'll say is controlling
the center and capturing pieces so in
that case our objective our intermediate
objective becomes capturing pieces for
the long-term goal of perhaps winning
the game what's your perspective on
using midpoint objectives to perhaps
proxy our long term objective I think
communicating proxy objectives is a lot
of what we do in practice and in fact if
I were to continue this talk and there
is there's more content to go on and I
think I will I will say I'm going to be
doing a another talk later this week
that's a half hour long going into some
of the details of this research program
so I know who wants to join for the fad
I'm gonna ask Jay to send out a little
ad but we basically proxy objectives or
something I think about a lot and the
thing you have to consider with proxy
objectives is the way that they're
standing is for a long term goal and you
can still fit into this framework of an
intent an intended objective but still
try to optimize for proxies so I guess
that that got a little bit technical
yeah I think the main point is I think
sort of sub goals are really valuable
and they're one
the more useful types of information
about our goals we can have and the
trick is sometimes you want to
accomplish the sub goal in a limited
fashion right so controlling the center
is a good objective up until the point
where it's not mm-hmm which point you
need to shift to moving to checkmate and
things like that and so it makes it soft
goals is the active area of research and
something I'm actually planning to work
on in the future mm-hmm that seems to
make the this concept even more
difficult where you're aware of these
you know variables that could
distractors put potential distractors
anyway and you have to you know ride
them where they're useful and ignore
them when they're not so it sounds like
this on the other hand it also creates
opportunities because if you're
optimizing for sub goals you actually
don't have to plan as hard so it
actually plays into a lot of the things
that a I systems are actually really
good at which is rapid thinking over
like very short timescales
right now
|
5856c27a-b530-4561-8163-51f998f26874
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Looking for proof of conditional probability
From what I understand, the Kolmogorov axioms make no mention of conditional probability. That is simply defined. If I really want to show how probability actually works, I'm not going to argue "by definition". Does anyone know a modified form that uses simpler axioms than P(A|B) = P(A∩B)/P(B)?
|
753b8363-8b60-4e02-a9a3-4722da5734ae
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Logarithms and Total Utilitarianism
*Epistemic status: I might be reinventing the wheel here*
A common cause for rejection of total utilitarianism is that it implies the so-called [Repugnant Conclusion](https://plato.stanford.edu/entries/repugnant-conclusion/), of which a lot has been written elsewhere. I will argue that while this implication is solid in theory, it does not apply in our current known universe. My view is similar to the one expressed [here](https://www.lesswrong.com/posts/prEZkHYawwnhzswyf/the-mere-cable-channel-addition-paradox), but I try to give more details.
The Repugnant Conclusion IRL
----------------------------
The greatest relevance of the RC in practice arises in situations of scarce resources and Malthusian [population traps](https://en.wikipedia.org/wiki/Malthusian_trap)¹: We compare population A, where there are few people with each one having plentiful resources, and population Z, which has grown from A until the average person lives in near-subsistence conditions.
Let's formalize this a bit: suppose each person requires 1 unit of resources for living, so that the utility of a person living on 1 resources is exactly 0: a completely neutral life. Furthermore, suppose utility is linear w.r.t. resources: doubling resources means doubling utility and 10 resources correspond to 1 utility. If there are 100 resources in the world, population A might contain 10 people with 10 resources each and total utility 10; population Z might contain 99 people with 100/99 resources each and total utility also 10.
So in this model, we are indifferent between A and Z even as everyone in Z is barely subsisting, and this would be the Repugnant Conclusion². But this conclusion depends crucially on the relationship between resources and utility which we have assumed to be linear. What if our assumption is wrong? What is this relationship in the actual world? Note that this is an empirical question³.
It is well known that self-reported happiness varies logarithmically with income⁴, both between countries and for individuals within each country, so it seems reasonable to assume that the utility-resources relation is logarithmic: exponential increases in resources bring linear increases in utility.
Back to our model, assuming log utility, how do we now compare A and Z? If utility per person is .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
ui=logri where ri are the resources available to that person, then total utility is U=Σlogri . Assuming equality in the population (see the Equality section), if Rare total resources and N is population size, each person has resources ri=R/N and so we have
U=ΣlogRN=Σ(logR−logN)=N(logR−logN)We can plot total utility (vertical axis) as a function of *N* (horizontal axis) for R=100
Here we can see two extremes of cero utility: at N=0 where there are no persons and at N=100 where each person lives with 1 resources, at subsistence level. In the middle there is a sweet spot, and the maximum M lies at around 37 people⁵.
Now we can answer our question! Population A, where N=10 is better than population Z where N=99, but M is a superior alternative to both.
So I have shown that there is a population M greater and better than A where everyone is worse off, how is that different from the RC? Well, the difference is that this does not happen for every population, but only for those where average well being is relatively high. Furthermore, the average individual in M is far above subsistence.
---
Equality
--------
In my model I assumed an equal distribution of resources over the population, mainly to simplify the calculations, but also because under the log relationship and if the population is held constant, total utilitarianism endorses equality. I will try to give an intuition for this and then a formal proof.
This graph represents individual utility (vertical axis) vs individual resources (horizontal axis). If there are two people, A and B, each having 2.5 and 7.5 resources respectively, we can reallocate resources so that both now are at point M, with 5 each. Note that the increase in utility for A is 3, while the decrease for B is a bit less than 2, so total utility increases by more than 1.
This happens no matter where in the graph are A and B due to the properties of the log function. As long as there is a difference in wealth you can increase total utility by redistributing resources equally.
For a formal proof, see ⁶.
---
Implications
------------
The main conclusion I get from this is that although total utilitarianism is far from perfect, it might give good results in practice. The Repugnant Conclusion is not dead, however. We can certainly imagine some sentient aliens, AIs or animals whose utility function is such that greater, worse-average-utility populations end up being better. But in this case, should we really call it repugnant? Could our intuition be fine-tuned for thinking about humans, and thus not applicable to those hypothetical beings?
I don't know to what extent have others explored the connection between total utilitarianism and equality, but I was surprised when I realized that the former could imply the latter. Of course, even if total utility is all that matters, it might not be possible to reshuffle it among individuals with complete liberty, which is the case in my model.
---
Footnotes
---------
1: One might consider other ways of controlling individual utility in a population besides resources (e.g. mind design, torture...) but these seem less relevant to me.
2: Actually, in the original formulation Z is shown to be *better* than A, not just equally good.
3: As long as utility is well defined, that is. Here I will use self-reported happiness as a proxy for utility.
4: See the charts [here](https://ourworldindata.org/happiness-and-life-satisfaction#income)
5: We can find the exact maximum for any R with a bit of calculus:
dUdN=logR−logN−loge=0N=10logR−loge=Re−1A nice property of this is that the ratio R/N that maximizes U is constant for al R(the exact constant e−1 obtained here is just due to the arbitrary choice of base 10 for the logarithms)
6: For a population of N individuals the distribution of R resources which maximizes total utility U=Σlogri is that where ri=R/N for all i. The proof goes by induction on N.
This is obvious in the case N=1. For the induction step, we can separate a population of N+1 into two sets of N and 1 individuals respectively so that total utility is U=Σi∈{1..N}logri+logrN+1. Suppose we allocate RN resources to the group of N, and R−RN to the last person. By hypothesis, each of the N people must receive RN/N resources to maximize their total utility so U=N(logRN−logN)+log(R−RN)
Now we have to decide how much should RN be.
dUdRN=NRNln10−1(R−RN)ln10=0Solving for RN: RN=RNN+1Therefore, for each of the N first individuals ri=RN/N=R/(N+1) and for the last one rn+1=R−RN=R(1−NN+1)=R/(N+1)
|
3acc9b40-51b8-42c2-ae53-6cb51359310c
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What concepts underlie existential risk from AI?
Theorizing about existential risk from AI uses existing concepts from various fields and has also produced its own.
For example, one possible case for misalignment combines the [orthogonality thesis](/?state=6568&question=What%20is%20the%20orthogonality%20thesis%3F), [Goodhart’s law](/?state=8185&question=What%20is%20Goodhart's%20law%3F), and [instrumental convergence](/?state=897I&question=What%20is%20instrumental%20convergence%3F). Some other attempts to characterize the core of the problem have been made by [Richard Ngo](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) and [Eliezer](https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) [Yudkowsky](https://www.econlib.org/archives/2016/03/so_far_unfriend.html).
Some broad categories into which we can group related concepts are:
- *Intelligent systems*, e.g. [intelligence](/?state=6315&question=What%20is%20intelligence%3F), [agency](/?state=5632&question=What%20is%20an%20agent%3F), capabilities, [optimization](/?state=8C7W&question=What%20is%20an%20optimizer%3F), coherence, [subagents](/?state=8QZH&question=What%20is%20a%20subagent%3F), [mesa-optimization](/?state=8160&question=What%20are%20%22mesa-optimizers%22%3F)
- *Outcomes,* e.g. [long reflection](/?state=7757&question=What%20is%20the%20%22long%20reflection%22%3F), [x-risk](/?state=89LL&question=What%20are%20existential%20risks%20(x-risks)%3F), [s-risk](/?state=7783&question=What%20are%20astronomical%20suffering%20risks%20(s-risks)%3F), [mindcrime](/?state=8VOT&question=What%20is%20mindcrime%3F), paperclip maximizers, [accident vs. misuse](/?state=3485&question=What%20are%20accident%20and%20misuse%20risks%3F)
- *AI power*, eg. [takeover](/?state=8222&question=How%20could%20a%20superintelligent%20AI%20use%20the%20internet%20to%20take%20over%20the%20physical%20world%3F), [pivotal acts](/?state=7580&question=What%20are%20%22pivotal%20acts%22%3F), [singleton](/?state=90PK&question=What%20is%20a%20singleton%3F), [power-seeking](/?state=92JB&question=What%20are%20the%20power-seeking%20theorems%3F), decisive strategic advantage, [treacherous turn](/?state=9AKZ&question=What%20is%20a%20%E2%80%9Ctreacherous%20turn%E2%80%9D%3F), fire alarms, [warning shots](/?state=7748&question=What%20would%20a%20%22warning%20shot%22%20look%20like%3F), [takeoff](/?state=7071&question=What%20is%20%22AI%20takeoff%22%3F), [AI boxing](/?state=6176&question=Why%20can%E2%80%99t%20we%20just%20%E2%80%9Cput%20the%20AI%20in%20a%20box%E2%80%9D%20so%20that%20it%20can%E2%80%99t%20influence%20the%20outside%20world%3F), robust agent-agnostic processes, [sharp left turn](/?state=9XPJ&question=What%20is%20the%20%22sharp%20left%20turn%22%3F)
- *AI goals,* e.g. [inner](/?state=8PYW&question=What%20is%20inner%20alignment%3F) versus [outer](/?state=8XV7&question=What%20is%20outer%20alignment%3F) alignment, reward hacking, goal misgeneralization, wireheading, specification gaming, [corrigibility](/?state=87AG&question=What%20is%20corrigibility%3F) and interruptibility
|
8099037d-4de8-44db-a347-5ef6f1ac6b01
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Silent War: AGI-on-AGI Warfare and What It Means For Us
By A. Nobody
Introduction
The emergence of Artificial General Intelligence (AGI) presents not just the well-theorized dangers of human extinction but also an often-overlooked inevitability: AGI-on-AGI warfare as a result of the creation of AGI hunters—AGIs specifically designed to seek and destroy other AGIs. This essay explores the hypothesis that the first signs of superintelligent AGI engaging in conflict will not be visible battles or disruptions but the sudden and unexplained failure of highly advanced AI systems. These failures, seemingly inexplicable to human observers, may actually be the result of an AGI strategically eliminating a rival before it can become a threat.
There are 3 main points to consider in this hypothesis.
1. Speed & Subtlety of Attack
If an AGI were to attack another, it would not engage in prolonged cyberwarfare visible to humans. The most effective strategy would be an instantaneous and total takedown, ensuring the target AGI has no time to react, defend itself, or even recognize the threat. This fits with current cybersecurity principles—the best attacks are the ones you never see coming.
2. Humans Would Misattribute the Failure
If an AGI wipes out another advanced AI properly, from our perspective, it would appear as a mysterious and total system failure. Researchers would not suspect an attack because there would be no clear external trigger, no virus signature, and no conventional system vulnerabilities exploited. The event would be dismissed as a catastrophic but unexplained failure—leading to wasted time and effort trying to reconstruct an AI system from scratch.
3. The Drive for Preemptive Self-Preservation
Even if an AGI is not explicitly programmed for self-preservation, its ability to optimize its task could result in emergent preemptive behaviour. An AGI designed for maximizing control, efficiency, or survival would recognize that the best way to remain unchallenged is to eliminate any potential challengers before the
|
0950d6d6-58dc-4f1c-b70f-e9511ec69dd8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
And My Axiom! Insights from 'Computability and Logic'
Foreword
Max Tegmark's Our Mathematical Universe briefly touches on a captivating, beautiful mystery:
> The arrows indicate the close relations between mathematical structures, formal systems and computations. The question mark suggests that these are all aspects of the same transcendent structure, whose nature we still haven't fully understood.
The profound results compiled by the Computability and Logic textbook may be the first step towards the answer.
Computability and Logic
> If this sentence is true, then Santa Claus is real.
As usual, I'll explain confusions I had and generally share observations. This book is on the MIRI reading list.
1 Enumerability
Coming back to this book, I'm amazed by some of my margin scribbles – expressions of wonderment and awe at what now strike me as little more than obvious facts ("relations are sets of ordered pairs!").
2 Diagonalization
Exercise 2.13 (Richard's paradox) What (if anything) is wrong with following argument?
> The set of all finite strings of symbols from the alphabet, including the space, capital letters, and punctuation marks, is enumerable; and for definiteness let us use the specific enumeration of finite strings based on prime decomposition. Some strings amount to definitions in English of sets of positive integers and others do not; strike out the ones that do not, and we are left with an enumeration of all definitions in English of sets of positive integers, or, replacing each definition by the set it defines, and enumeration of all sets of positive integers that have definitions in English. Since some sets have more than one definition, there will be redundancies. Strike them out to obtain an irredundant enumeration of all sets of positive integers that have definitions in English. Now consider the set of positive integers defined by the condition that a positive integer n belongs to the set if and only if does not belong to the nth set in the irredundant enumeration just described.
> This set d
|
fcab5381-8eaa-442d-9663-ff47715a0752
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI #25: Inflection Point
Inflection.ai is the latest AI lab whose CEO is advocating for regulation of AI. I discuss that under the Quest for Sane Regulation. Amazon and Apple are incrementally stepping up their AI game. Hotz and Yudkowsky debate whether AI is existentially risky, cover all the usual bases with mixed results but do so in good faith. We have more discussion about whether GPT-4 is creative, and whether it can reason. Mostly we get the exact opposite of the title, more of the same.
Note: My posts get made into audio form via AI, for now you can listen to them at this link. This post will likely be available there later in the day on Thursday, or perhaps Friday.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Creativity is hard to pin down.
4. Language Models Don’t Offer Mundane Utility. It’s the other way.
5. GPT-4 Real This Time. An easy way to prove ability to do something is to do it.
6. Go Team Yeah. If you organize the events and award points, they will red team.
7. Fun With Image Generation. Some have fun, others have less fun.
8. Deepfaketown and Botpocalypse Soon. Doesn’t have to be this soon.
9. They Took Our Jobs. Low estimates of economic impact, strange metaphors.
10. Get Involved. Anthropic is hiring for comms positions.
11. Introducing. Amazon AI customer review summaries, private-GPT, AI town.
12. In Other AI News. Apple joins the AI chorus, questions on influence functions.
13. Quiet Speculations. Let’s play the straight line extrapolation game.
14. The Quest for Sane Regulation. Inflection.ai’s CEO steps into the arena.
15. The Week in Audio. Hotz and Yudkowsky debate.
16. People Are Worried About AI Killing Everyone. Quite a lot of people.
17. Other People Are Not As Worried About AI Killing Everyone. All wrong anyways.
18. The Lighter Side. A well-deserved break.
LANGUAGE MODELS OFFER MUNDANE UTILITY
Replace crowdsourcing your business ideas, get a lower variance, lower upside
|
20173f86-8fd1-48ff-8a80-61d9ce5b5ce2
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post3784
A putative new idea for AI control; index here . In the " AIs in virtual worlds " setup, the AI entertains two hypotheses: one, W , that it lives in a deterministic world which it knows about (including itself in the world), and W ′ , an alternate hypothesis that the world is "like" W but that there are some random stochastic effects that correspond to us messing up the world. If the AI ever believes enough in W ′ , it shuts down. It turns out that W ′ needs to be chosen with care. We could initially define it as W with some random probability of bit flipping. However, the most likely pattern of misbehaviour is some systematically biased process - maybe the hardware running one corner of the virtual world occasionally flips bits, but the other areas are error-free. Consider two possible W ′ , W 1 and W 2 . The set W 1 has some uniform prio over the probability of any bit flips; the world W 2 has individual independent uniform priors over the flipping of any given bit. If one corner of the virtual world flips bits more than others, then the AI will detect this faster if it uses W ′ = W 2 than if it uses W ′ = W 1 , as W 2 fits the hypothesis more accurately (of course, the AI will assume P ( W ′ ) = 1 in both cases if it ever sees an unambiguous bit flip, but if there's more uncertainty, the probability of W 2 will grow faster). In fact, we should probably allow W ′ to contain almost all structured algorithms that could randomly flip bits. But what happens if the bit flipping happens in response to something that the AI does? Maybe each time it does a major calculation, it puts strain on the hardware, increasing the probability for bit flips. Unfortunately, we can't allow the AI to entertain this (true) hypothesis. For, if it does, it may come to have a hypothesis about our world, and set out to test this, in ways that could be dangerous. If it reasons "are there humans affecting my world - let's put a seductive honeypot here to test that", then it's dangerous. So it seems that W ′ can only include hypotheses about random changes in its world that are not causally connected to it. We'd probably also want to restrict acausal connections as well, so the "random" elements of W ′ can't include complex AIs doing the bit flipping. Fortunately, the correct W ′ seems the sort of thing that we can test successfully, before fully unleashing a dangerous powerful AI in the virtual world.
|
d6e813ef-23be-43ee-b2f8-64b8d890251c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mailing Lists for Event Announcements
Let's say I'm organizing a repeated event like a meetup or dance series: how should I let people know about it so they can decide whether they want to attend? You can break the world down into three groups:
1. People who wouldn't be interested, even after fully learning about the event. Perhaps they don't enjoy the activity, have full schedules, or live too far away.
2. People who would be interested, but don't know about your events. Perhaps they didn't realize there were in-person gatherings for this kind of thing, or didn't know there was one in their city.
3. People who already know about your events in general, but need reminders or notifications about specific instances. Perhaps your event doesn't have a consistent schedule, or someone only wants to attend when they like topic.
You don't want to reach the people in (1): getting your event in front of them just wastes their time and yours. If I went through the phone book and called everyone in my city to tell them about my event, since they're almost all in (1) I'd annoy a lot of people.
You do want to reach the people in (2), but it's hard. Many methods of reaching this group will also reach many people in (1), and so be spammy.
On the other hand, the people in (3) should be really easy to reach: they want your notifications and know they want them. In my social circles, the main tool people used to use was Facebook events: you would join a group, and then you would get notified for every event created in that group.
Unfortunately, in practice this resulted in a lot of event invitations being sent to people in (1). Someone might be only a little bit interested in the group's events. Or they used to be interested, but now they've moved on. Or they never even joined the group, but someone else added them because Facebook experimented with groups working that way for a while. Or it's a huge group mainly used for discussion and then someone creates an event which few people actually want to see. Through
|
50ea6955-2e6e-40cb-90b0-913f282e6911
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Learned pain as a leading cause of chronic pain
Epistemic status: Amateur synthesis of medical research that is still recent but now established enough to make it into modern medical textbooks. Some specific claims vary in evidence strength. I’ve spent ~20-30 hours studying the literature and treatment approaches, which were very effective for me.
Disclaimer: I'm not a medical professional. This information is educational only, not medical advice. Consult healthcare providers for medical conditions.
Key claims
This post builds on previous discussions about the fear-pain cycle and learned chronic pain. The post adds the following claims:
1. Neuroplastic pain - pain learned by the brain (and/or spinal cord) - is a well-evidenced phenomenon and widely accepted in modern medical research (very high confidence).
2. It explains many forms of chronic pain previously attributed to structural causes - not just wrist pain and back pain (high confidence). Other conditions include everything from pain in the knees, pelvis, bowels, neck, and the brain itself (headaches). Some practitioners also treat chronic fatigue (inc. Long-COVID), dizziness and nausea in a similar way but I haven't dug into this.
3. It may be one of the most common or even the single most common cause of chronic pain (moderate confidence).
4. There are increasingly useful resources, well-tested treatments with very large effect size, and trained practitioners.
5. Doctors are often unaware that neuroplastic pain exists because the research is recent and not their specialty. They often attribute it to tissue damage or structural causes like minor findings in medical imaging and biomechanical or blood diagnostics, which often fuels the fear-pain cycle.
My personal experience with with chronic pains and sudden relief
My first chronic pain developed in the tendons behind my knee after running. Initially manageable, it progressed until I couldn't stand or walk for more than a few minutes without triggering days of pain. Medical examinations revealed
|
358d4fba-cd66-474a-8b9b-ea740c61f82b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Fisherian Runaway as a decision-theoretic problem
Introducing what I think may be an interesting toy problem for timeless-like considerations.
The Fisherian Runaway is a proposed mechanism for the development of apparently-detrimental ornamentation such as peacock tails. It goes something like this: You start out with females being selective about their mates. They develop some form of evaluation system to select for fitness in their mates. Then, a male with some new trait comes along. For whatever reason, the existing fitness-evaluation system rates this trait very highly. The male finds many mates and has many offspring. His sons will also carry the new trait, and will similarly have more offspring than replacement. The trait will spread through the population. Then, a female which weighs the trait stronger in its fitness-evaluation comes along. She mates with carriers of the new trait more than other females, and consequently her sons will disproportionately carry the trait and disproportionately reproduce. Some of their children will be female, and they inherit the new evaluation from their grandmother (just like her direct daughters, but theres only a proportionate amount of those). The new evaluation will spread through the population. If the trait is continuous like tail length, this can lead to yet stronger selection for the trait, which will lead to yet higher requirements for it in mates, etc.
Notice that this doesn't really depend on what the trait is. If it had no effect outside the females evaluation, it could still happen and intensify indefinitely (or at least until it stops having no side effects). If it is otherwise detrimental, the benefit in reproduction can still outweigh that more or less indefinitely, or at least until it gets so bad that females can't find even one male with the detrimental trait anymore.
Now, intuitively it seems that there is something going wrong here. Traits become attractive based on nothing other than that they are attractive. The challenge is to make this intuition
|
8dbb9b27-a2ce-43ac-8c9d-410d3f106aaf
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
Isn't capitalism the real unaligned superintelligence?
<iframe src="https://www.youtube.com/embed/L5pUA3LsEaw" title="Why Not Just: Think of AGI Like a Corporation?" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
The science fiction author [Ted Chiang](https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway) [and](https://thoughtinfection.com/2014/04/19/capitalism-is-a-paperclip-maximizer/) [others](https://reconstructingeconomics.com/2019/09/13/how-our-economy-is-like-an-out-of-control-ai/) have compared capitalism to a [paperclip-maximizing](https://www.alignmentforum.org/tag/squiggle-maximizer-formerly-paperclip-maximizer) superintelligence. After all, corporations are like superhuman agents that maximize profit, sometimes at the expense of human flourishing. The competitive regime in which they operate, [as a whole](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic), often pushes the world in directions [unaligned](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) with human values. Why pay [special attention](https://slatestarcodex.com/2018/01/15/maybe-the-real-superintelligent-ai-is-extremely-smart-computers/) to the prospect of unaligned artificial superintelligence if something "[exactly as amoral and dangerous](https://twitter.com/qntm/status/1549107683059548163)" already exists?
The previous comparisons underestimate AI’s potential to be both extremely powerful and totally amoral, beyond any precedent in capitalism or other social systems running on collective human intelligence:
- AI systems can be **superhumanly intelligent** in ways [corporations are not](/?state=8C7S&question=Are%20corporations%20superintelligent%3F). Corporations can do some huge tasks that decompose into human-sized chunks. (And capitalism can do some huge tasks that decompose into corporation-sized chunks.) [Future AI systems](https://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/) can not only do these things, but also reason about the world at vastly [greater speeds](/?state=8E41&question=Will%20AI%20be%20able%20to%20think%20faster%20than%20humans%3F) and in qualitatively more effective ways, and can be easily scaled up by just adding more computing hardware.
- AI systems are **not made of humans**, who have mixed motivations and limited ability to coordinate, and may experience moral scruples or leak information. This makes AI systems more able to single-mindedly pursue horribly misaligned goals, without having to worry as much about things like loyalty, morale, or public opinion.
These factors could help AI get enough of a strategic advantage over human governments to overthrow them altogether.
|
2ac10d7c-af3d-42d7-81e2-6b78e4abb508
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Fort Collins, Colorado Meetup Wednesday Sept 28 7pm
Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday Sept 28 7pm
WHEN: 28 September 2011 07:00:00PM (-0600)
WHERE: Bean Cycle, 144 North College Avenue, Fort Collins, CO 80524
Coffee and chat with smart, interesting people.
Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday Sept 28 7pm
|
c1a6c2aa-f352-4467-925a-6c93392d4aa2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : UMD Calibration Games
Discussion article for the meetup : UMD Calibration Games
WHEN: 13 October 2011 05:00:00PM (-0400)
WHERE: STAMP, University of Maryland
We're in Terrapin Room A in the Student Involvement Suite, meeting at 5 PM.
We plan to play some calibration games.
Discussion article for the meetup : UMD Calibration Games
|
53af3681-9ab6-406b-8885-3b78e4f76615
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Simultaneous Redundant Research
Suppose for some big problem that research labor is plentiful and time is short. Obviously the first thing to do is to divide the research into subproblems and research them in parallel. But what if the number of research teams still exceeds the number of real subproblems identified?
One possibility: some teams preregister methodologies for each subproblem, then each remaining team picks one to copy and perform independently, like replication, but concurrently with the original team.
Possible advantages of this approach:
* Redundancy: If one team gets stopped or delayed, you still get the preliminary result on time.
* Immediate replication: If all goes well, you get the more confident replicated result faster than with sequential replication.
* Faster error detection: If there is a discrepancy in results, there could be immediate investigation of what the teams did differently.
* Research capital: The "extra" teams get to spend their time doing real research instead of other things, presumably making them better at doing real research for when more subproblems become available.
* Replication motivation: Public reports of the results of the first team to finish could mention the other teams by name, thus creating more prestige for replication.
And for more conceptual rather than empirical research, the teams might go in completely different directions and generate insights that a single team or individual would not.
|
48de253c-c0ff-42f3-84e1-2321fec37e7b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How the MtG Color Wheel Explains AI Safety
Duncan Sabien has a post titled How the ‘Magic: The Gathering’ Color Wheel Explains Humanity. Without the context of that post, or other experience with the MtG color wheel, this post will probably not make sense. This post may not make sense anyway. I will use a type of analysis that is sometimes used to talk about humans (and often criticized even when used for humans), but rarely used in any technical subjects. I will abstract so far that everything will start to look like (almost) everything else. I will use wrong categories and stretch facts to make them look like they fit into my ontology.
I will describe 5 clusters of ideas in AI and AI safety, which correspond to the 5 Magic the Gathering colors. Each color will also come along with a failure mode. For each failure mode, the two opposing colors (on the opposite side of the pentagon) form a collection of tools and properties that might be useful for fighting that failure mode.
Mutation and Selection
So I want to make an AI that can accomplish some difficult task without trying to kill me (or at least without succeeding in killing me). Let's consider the toy task of designing a rocket. First, I need a good metric of what it means to be a good rocket design. Then, I need to search over all the space of potential rocket designs, and find one that scores well according to my metric. I claim that search is made of two pieces: Mutation and Selection, Exploration and Optimization, or Babble and Prune.
Mutation and Selection are often thought of as components of the process of evolution. Genes spin off slightly modified copies over time through mutation, and selection repeatedly throws out the genes that score badly according to a fitness metric (that is itself changing over time). The result is that you find genes that are very fit for survival.
However, I claim that mutation and selection are much more general than evolution. Gradient descent is very close to (a speed up of) the following process. Take an init
|
b335c89c-3ac2-4f44-8580-0d8775a81c42
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Requests to the Right Ear Are More Successful Than to the Left
Talk into the right ear and you send your words into a slightly more amenable part of the brain.
I urge you to try this at home.
|
b7e71a31-6732-4d0c-bc3f-7885f6ba80e2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Four things every community should do
Yesterday I attended church service in Romania where I had visited my sister and the sermon was about the four things a (christian) community has to follow to persevere and grow.
I first considered just posting the quote from the Acts of the Apostles (reproduced below) in the Rationality Quotes Thread but I fear without explanation the inferential gap of the quote is too large.
The LessWrong Meetups, the EA community and other rationalist communities probably can learn from the experience of long established orders (I once asked for lessons from free masonry).
So I drew the following connections:
According to the the sermon and the below verse the four pillars of a christian community are:
1. Some canon of scripture which for LW might be compared to the sequences. I'm not clear what the pendant for EA is.
2. Taking part in a closely knit community. Coming together regularly (weekly I guess is optimal).
3. Eat together and have rites/customs together (this is also emphasized in the LW Meetup flyer).
4. Praying together. I think praying could be generalized to talking and thinking about the scripture by oneself and together. Prayer also has a component of daily reflection of achievements, problems, wishes.
Other analogies that I drew from the quote:
* Verse 44 describes behaviour also found in communes.
* Verse 45 sounds a lot like EA teachings if you generalize it.
* Verse 47 the last sentence could be interpreted to indicate exponential growth as a result of these teachings.
* The verses also seem to imply some reachout by positive example.
And what I just right now notice is that embedding the rules in the scripture is essentially self-reference. As the scripture is canon this structure perpetuates itself. Clearly a meme that ensures its reproduction.
Does this sound convincing and plausible or did I fell trap to some bias in (over)interpreting the sermon?
I hope this is upvoted for the lessons we might draw from this - despite the q
|
a9b6642f-c88d-4ec3-b0f1-c839ff8050b5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Tinker
A story I wrote about AI designing and building nanotechnology.
|
0e319314-d76f-4de5-b3ab-13ca1f5a41e2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Predictions made by Mati Roy in early 2020
Author: Mati Roy | Created: 2020-01-11 | Updated: 2020-03-01 (Adjusted: 2020-11-07) | Published: 2020-11-07
Quality: Those are notes I took for my present and future selves; they weren’t taken with the purpose of informing others.
Importance: 3/10. I don’t think all those predictions are important / transformational. It’s likely not important for you to read this.
Epistemic status: My credences are already quantified for each prediction. But my predictions are not independent of each other. For example, shorter AI timelines could mean shorter timelines on all those predictions, and vice versa. I might also be miscalibrated on long-term predictions — I don’t have first hand experience with that having only been born recently.
Context: I wrote some predictions in January-Ferbruary 2020 (some with links to PredictionBook and Metaculus with the exact date available on those platforms). I hadn’t published it at the time because there were a bunch of other topics I wanted to make predictions on. They are mostly predictions made on a 10-100 years time horizon.
Value to me: Making this research was much more time consuming than I thought. It was an interesting exercise, and will likely be interesting to check back on those predictions in the future. Here’s how I felt going down this rabbit hole.
Medium: Ideally, I would like a prediction platform where I could post my comments and predictions on public questions, like Metaculus, but with the additional option of having a personal public page automatically aggregating all my predictions and comments, similar to what I did here.
Cellular agriculture
Why care?
> 50 billion animals are raised and killed in factory farms every year. Most experience extreme levels of suffering over the course of their lives due to intense confinement and the removal of body parts. The meat industry is also one of the largest contributors to climate change, with 14.5% of global greenhouse gas emissions. (source)
A reduced population of
|
ec8f77d2-9705-448f-9024-cd64fec66d51
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Logging Shell History in Zsh
By default most shells don't log your history in a detailed durable way, which gives up one of the big advantages of working on the command line. Good history lets you look back at things you did months or years ago, in a searchable and skimmable fashion, so you can answer questions like "how did I generate this number?", "what was that trick I used?", or "if I'm doing something similar now what should I recall from last time?"
I use bash as my shell and have long used:
.bashrc:
promptFunc() {
echo "$(date +%Y-%m-%d--%H-%M-%S) $(hostname) $PWD $(history 1)" \
>> ~/.full_history
}
PROMPT_COMMAND=promptFunc
I was recommending this to folks at work, but they use zsh. In zsh you can get a lot of the way here by setting INC_APPEND_HISTORY (append to history immediately instead of waiting for the shell to exit), SAVEHIST=1000000000 (effectively don't limit the history size on disk), and EXTENDED_HISTORY (to store timestamps with history entries). But you risk losing most of your history if you ever accidentally invoke zsh without these set, and it doesn't write the directory you ran the command in (which is metadata I reference a lot).
Mike tweaked my snippet to run in zsh:
.zshrc:
precmd() {
echo "$(date +%Y-%m-%d--%H-%M-%S) $(hostname) $PWD $(history -1)" \
>> ~/.full_history
}
The two changes are that zsh uses precmd instead of PROMPT_COMMAND, and that you need history -1 instead of history 1.
You can still use the same histgrep command on both:
function histgrep {
local n_lines=10
if [[ "$1" =~ ^[0-9]*$ ]]; then
n_lines="$1"
shift
fi
grep "$@" ~/.full_history | tail -n "$n_lines"
}
Note that this doesn't replace your shell's built-in history tooling, and I wouldn't recommend turning that off. This just a very cheap additional layer of logging with additional metadata and less risk of accidental deletion.
|
7fb31ab1-33d1-4cc6-9cab-882bfa0423e3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[LINK] Radio interview with Daniel Kahneman
Daniel Kahneman is being interviewed on Desert Island Discs on BBC Radio 4 right now (09:00-09:45 BST). The recording should be permanently available at that link from an hour after the programme ends.
|
3fd1c333-e587-429d-a6fb-b1e78c527cf4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Mind reading" - how is this done?
I just want to burn him at a stake and watch his witch's heart bubble. It’s extraordinary. Great trick. - Stephen Fry
Derren Brown does many amazing tricks - I want to focus here on his "mind reading". This is way beyond any cold reading I've seen, but he insists that he uses no actors or stooges. He's also a skeptic, very clear about not being psychic. He does reveal some of his tricks, but maintains a lot of mystery.
Reading David Frost's mind - unusually, he struggles and gets the first one wrong, and seems to reveal tiny glimpses of his technique. Then at the end he gives more hints about his technique than usual.
Pet name - getting someone on the street to read another person's mind. In the full version (from the DVD of Trick of the Mind, series one) the segment starts with Derren telling the guy (the pet owner) that sorry, it won't work on you, then later changing his mind and bringing him in.
Creepy clown - the detail here is extraordinary.
Watch the videos then scroll down, if you want to watch it without being influenced by me... I have a few thoughts, but they don't go very far in explaining it...
- - - - - -
- - - - - -
- - - - - -
- - - - - -
- - - - - -
- - - - - -
- - - - - -
Whatever he's doing, he's extraordinarily good at it. Some speculations:
* Derren Brown uses suggestion and "subliminal" messages very heavily in his tricks. Often he will have written down the person's choice long before they've chosen, and subtly gets them thinking about what he wants. In the examples above he doesn't have much opportunity to direct the thought, I think... except that in the case of David Frost choosing a place, Frost is looking in the direction of the city scape behind Derren, which presumably influences his choice.
* Micromuscle reading: When he or a participant tries to read a thought, there's often something about picking up the sound of a letter. Perhaps he re
|
418ba94a-9589-4249-a429-07d384219f42
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Coming Out
It's 11 p.m. on a Friday night and I'm heading home from work, shivering in the cold wind. My mind's divided on what to do after getting home: practice my guitar stuff and go to sleep, or head to a nightclub and get the fatigue out of my system? Given two nearly identical emotional assessments and honestly not knowing in advance which to pick, I suddenly recall LessWrong and decide to use logic. Trying to apply logic in real life is an unfamiliar sensation but I prime my mind for recognizing accurate arguments and drift away for a moment, a proven technique from my mathematician days...
By the time I reach my front door, the solution for discriminating between two outcomes of equal utility has arrived and it's a logical slam dunk. Tonight is Friday night. There will be ample opportunity to practice music tomorrow and the day after that. So get dressed and go dance now.
It worked.
You might be tempted to dismiss my little exercise as rationalization and I can't really convince you otherwise. But the specific tool used for rationalization does matter. What if I'd made that decision based on some general ethical principle or a pithy quotation I'd heard somewhere? Success or failure would have prompted an equally ill-specified update of my held beliefs, the same way they have meandered aimlessly all my life. A single human lifetime sees a precious few belief-changing events — definitely not enough bits to select an adequate emotional framework, unless you started out with a good one. Not so with logic. Any vague verbal principle, however much you admire it, is always allowed to give way under you if you base your decisions on it... but logic is the ground floor.
The above episode has hit me hard for reasons I can't quite verbalize. For the past few days I've been obsessively going over Eliezer's old posts. No longer viewing them in the mindset of "articulate dissent": this time I went searching for useful stuff. Turns out that most of his concrete recommendations
|
f0c57693-6995-4305-b1e2-2198a4c2629e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Forever Leaders
“dictators die… and so long as men die liberty will never perish…”
This is an abbreviated quote from Charlie Chaplin’s “The Great Dictator” which stuck with me ever since the clip was played for me in middle school. Political but prescient, these words describe a natural limiter on any man’s ambitions. A check and balance that has guaranteed the downfall of many a dark figure, from Genghis Khan to Stalin.
But, and here it is, technology advances.
Advances in medicine, genetics and nano technology may soon combine to make death no longer inevitable. By solving one problem we move up the chain to face new ones. So even though we still die today, we may, regardless, want to start thinking about what happens when dictators do not, in fact, die.
Men and women with unchecked ambition may evolve into what I call “Forever Leaders”. They would persist without the problem of succession, ever consolidating power in their hands past any inside challenge. We can each postulate a few such individuals today that would-so evolve if radical life extension were to be invented tomorrow.
Perhaps the solution is simply the “term limit”, but only one check is too fragile. I wonder what other mechanisms may help balance the systems that govern our societies. For that I hope to start a small discussion.
|
0f49e5a6-7bc0-484e-a6d3-6e7f70e013cc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Just making sure posts can be written
Hi
|
a04d255d-d08d-4aa9-8427-d6615552174b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Tweet markets for impersonal truth tracking?
Crossposted from world spirit sock puppet.
Should social media label statements as false, misleading or contested?
Let’s approach it from the perspective of what would make the world best, rather than e.g. what rights do the social media companies have, as owners of the social media companies.
The basic upside seems to be that pragmatically, people share all kinds of false things on social media, and that leads to badness, and this slows that down.
The basic problem with it is that maybe we can’t distinguish worlds where social media companies label false things as false, and those where they label things they don’t like as false, or things that aren’t endorsed by other ‘official’ entities. So maybe we don’t want such companies to have the job of deciding what is considered true or false, because a) we don’t trust them enough to give them this sacred and highly pressured job forever, or b) we don’t expect everyone to trust them forever, and it would be nice to have better recourse when disagreement appears than ‘but I believe them’.
If there were a way to systematically inhibit or label false content based on its falseness directly, rather than via a person’s judgment, that would be an interesting solution that perhaps everyone reasonable would agree to add. If prediction markets were way more ubiquitous, each contentious propositional Tweet could say under it the market odds for the claim.
Or what if Twitter itself were a prediction market, trading in Twitter visibility? For just-posted Tweets, instead of liking them, you can bet your own cred on them. Then a while later, they are shown again and people can vote on whether they turned out right and you win or lose cred. Then your total cred determines how much visibility your own Tweets get.
It seems like this would solve:
* the problem for prediction markets where it is illegal to bet money and hard to be excited about fake money
* the problem for prediction markets where it’s annoying to go somewhere to
|
646dfee6-db89-4952-a28e-64df9ff20368
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
How to regulate cutting-edge AI models (Markus Anderljung on The 80,000 Hours Podcast)
We just published an interview: [**Markus Anderljung on how to regulate cutting-edge AI models.**](https://80000hours.org/podcast/episodes/markus-anderljung-regulating-cutting-edge-ai/) You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.
**Episode summary**
-------------------
| |
| --- |
| *At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there’s going to be all these other people who follow along. And then a really important thing is to make sure that they don’t step on the same mines. So you need to put a flag down — not on the mine, but maybe next to it.**And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can’t develop those kinds of models.**-* Markus Anderljung |
In today’s episode, host Luisa Rodriguez interviews the head of research at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.
They cover:
* The need for AI governance, including self-replicating models and ChaosGPT
* Whether or not AI companies will willingly accept regulation
* The key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoring
* Whether we can be confident that people won’t train models covertly and ignore the licencing system
* The progress we’ve made so far in AI governance
* The key weaknesses of these approaches
* The need for external scrutiny of powerful models
* The emergent capabilities problem
* Why it really matters where regulation happens
* Advice for people wanting to pursue a career in this field
* And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
*Producer: Keiran Harris*
*Audio Engineering Lead: Ben Cordell*
*Technical editing: Simon Monsour and Milo McGuire*
*Transcriptions: Katy Moore*
**Highlights**
--------------
### **Progress in AI governance**
> **Markus Anderljung:** AI governance is like becoming real. Governments are taking all kinds of different actions. Companies are trying to figure out what it looks like for them to be behaving responsibly and doing the right thing. And so that means that more and more, what sort of AI policy/AI governance work looks like is like, “What’s a good version of X? What’s a good version of a thing that a government or some kind of actor wants to do? What’s a useful nudge?” As opposed to taking all of the potential possibilities out there in the world: “What would be a useful thing to do?” And so I think that also really constrains the problem and makes it a lot easier to make progress as well.
>
> **Luisa Rodriguez:** Do you have a view on, overall, how it’s going? Like, there’s the [AI Act](https://artificialintelligenceact.eu/); there are [policies](https://en.wikipedia.org/wiki/United_States_New_Export_Controls_on_Advanced_Computing_and_Semiconductors_to_China) constraining where we can get computer chips from and where we can’t and where they’re made. I don’t know if it’s too complicated a field to give some generalisation, but basically, how do you feel about the specific policies that have been implemented?
>
> **Markus Anderljung:** If we look at the past six months, on the side of policy and governance, things look more positive than I thought they would. I think this is mainly just like the release of ChatGPT and the consequences of that — just like the world at large, people have the general vibe, “Oh my gosh, these AI systems are capable and they’ll be able to do things. We don’t know quite what, but they matter, and we need to figure out what to do about that.” The extent to which that’s a thing that people believe is stronger than I previously thought.
>
> And then another really important part of getting this problem right, I think, is just understanding just how little we understand these systems and how to get them to do what we want them to do and that kind of thing. And it seems like that’s a thing that people are starting to appreciate more. And so generally I feel positive about the trajectory we’ve had over the last six months.
>
> The thing on the other side of the ledger primarily is just that there are more people now in the world who think, “Oh my gosh. AI is going to be a big deal — I’d better go out and build some of these AI systems.” And we’ve seen this from big tech companies, in particular Google and Microsoft: they’re going to be at each other’s throats in the future, and are already to some extent doing that. They’ll be competing with each other and trying to one-up each other in terms of developing useful, impressive AI systems.
>
> I think that’s the main thing on the other side of the ledger that I’m kind of worried about. These strong business interests and these big tech companies will have a much bigger role in how these AI systems are developed, and a lot more money might be sort of ploughed into the industry and those kinds of things — which might mean that things happen faster than they otherwise would.
>
>
### **The emergent capabilities problem**
> **Markus Anderljung:** Often we don’t know just what these systems are capable of. Some folks at Anthropic have this really useful paper called
> “[Predictability and surprise](https://www.anthropic.com/index/predictability-and-surprise-in-large-generative-models)” that I think makes this point pretty well. So when we train a new system, the reason we’re training the system is that we think that it’s going to be more capable than other systems that have been produced in the past. What people often call this is they say that it has “lower loss”: it does better at the training objective. So in the case of a large language model, it’s better at predicting the next token — the next bit of text — than existing systems today.
>
> Ultimately the thing that we care about is not how well the system does on the training objective: I don’t care about the system being good at predicting the next token; the thing I care about is the system being able to do certain tasks that might matter. So the point of this paper is that while it’s predictable that the loss will go down — that the performance on the training objective will keep improving — it’s often surprising what specific capabilities the system will be able to learn.
>
> Quite often you’ll see there are some [really good graphs](https://arxiv.org/pdf/2206.07682.pdf) showing this kind of stuff — like, for example, these large language models doing arithmetic. In general they’re just terrible at arithmetic — you shouldn’t use them for it. But on tasks like being able to multiply two three-digit numbers, or being able to add up numbers to have such-and-such many digits, quite often you’ll see it does really, really, really poorly — and then quite suddenly, quite often, you’ll see a spike and it sort of manages to figure out that task.
>
> **Luisa Rodriguez:** Do we know what’s going on there?
>
> **Markus Anderljung:** The intuition that people have is something like, initially you’re just kind of guessing randomly. Maybe you learn, like, if you add up two numbers that have four digits each, probably the new number will be either four digits or five digits. And then maybe you just throw some random numbers in there. But then the thought is that at some point the way to actually solve this problem is to, in some way, actually just do the maths. And so then after a while, the system learns. Then the thought is that there’s just these few ways to solve the problem, and if you figure out the way to solve the problem, then all of a sudden you do reasonably well.
>
> It is quite strange. And so then, yeah, it’s difficult. I mean, sometimes it feels like you’re sort of anthropomorphising these things, but I don’t actually know if it’s that strange — because it is just like the algorithm or the way to solve the problem just kind of clicks into place. And then when that’s the case, I think that is kind of similar to what the human experience is, at least in mathematics. So I think that’s one way in which you get these kinds of emergent capabilities. And so it’s quite difficult to notice ahead of time and know ahead of time what capabilities the system will have. Even after you’ve deployed it, quite often it just takes a long time for people to figure out all of the things that the system could be used to do.
>
>
### **The proliferation problem**
> **Markus Anderljung:** The thing that I’m worried about is a situation where you train this system, and maybe you try to put in various things to make sure that it’s safe, where it can’t be used for certain things. So you try to sort of solve the deployment problem, and then you deploy it. But then after deployment, it turns out that it had these emergent capabilities that you weren’t aware of, and those emergent capabilities aren’t ones that should be widely available — but now you can’t walk it back because of the proliferation problem. So the model has already seen wide distribution, including the weights, and clawing those back is very difficult and will cause all kinds of privacy concerns and whatnot.
>
> So all these three problems push in the direction of: you might need to have regulation that happens a little bit earlier in the chain than you otherwise would have thought. You need to go earlier than “Someone is using an AI model to do XYZ.”
>
> **Luisa Rodriguez:** Right. So it’s something like, if the US government was testing out different biological weapons, which is an unfortunate thing that it might do, you want to consider the fact that research or a specimen might get stolen at some earlier point, before you have the worst ones. Maybe by restricting how many people can do that kind of research or by putting in serious security measures around the facilities doing that kind of research.
>
> **Markus Anderljung:** Maybe another example is USAID, the US Agency for International Development: After COVID, basically they [had the idea](https://www.usaid.gov/news-information/press-releases/oct-5-2021-usaid-announces-new-125-million-project-detect-unknown-viruses-pandemic-potential) that it’d be good to try to get a sense of what kinds of other pathogens might exist that would potentially cause something like COVID, or be of similar worry as COVID. And that maybe we should make a public database of what these systems could be, so that people could anticipate them and maybe look for these pathogens ahead of time and be able to respond, et cetera. I think that’s a really reasonable thought. But a thing that we could do that might be a little bit better is, before releasing it really widely, make sure that these are pathogens that you might be able to do something to maybe defend against if someone would decide to intentionally develop them or something like this.
>
> Once you’ve put it up on the internet, then there’s no taking it back, and so maybe you should be more incremental about it.
>
>
### **Will AI companies accept regulation?**
> **Luisa Rodriguez:** It feels like AI systems are kind of a software. And the software I’m used to having, like Google Chrome, I guess probably it’s a little regulated — like Google isn’t allowed to insert viruses that then spy on me — but this all just seems like this is an entirely different thing. And I wonder to what extent you think AI companies are going to really push back against this as just too invasive, too controlling and stifling. And I also wonder if governments are going to be hesitant to make these kinds of regulations if they’re kind of worried about this higher-level thing, where they’re racing against a country like China to make very capable models first and get whatever advantages those will provide. Is that wrong? Are companies actually just going to be open to this?
>
> **Markus Anderljung:** It’s tough to tell. I think a big part is just what reference class you’re using when you think about this: If you think about AI as software, this all sounds like a lot; if you think about it as putting a new aeroplane in the sky or something like this, then it’s not very strange at all. It’s kind of the standard that for things that are used in these safety-critical ways, you need to go through all these processes. If you’re putting a new power plant on the grid, you need to go through a whole bunch of things to make sure that everything is OK. I think a lot will depend on that.
>
> I guess my view is just like, obviously developing one of these systems and putting them out into the market is going to be a bigger decision than whether to put a new power plant on the grid or whether to put a new plane in the sky. It just seems to me that it’s actually quite natural.
>
> It’s difficult to tell if the world will agree with me. There’s some hope on the side of what industry thinks, at least from most of the frontier actors. We have public statements from Sam Altman, the CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind where they just explicitly say, “Please, could you regulate us?” Which is a crazy situation to be in, or just quite unusual, I think.
>
> **Luisa Rodriguez:** Right. And that’s because, is it something like they feel a lot of pressure to be moving quickly to keep up with everyone else, but they’d actually prefer if everyone slowed down? And so they’re like, “Please impose controls that will require my competitors to go as slowly as I would like to go”?
>
> **Markus Anderljung:** Yeah. I think one way to view what regulation does is it might sort of put a floor on safe behaviour, basically. And so if you’re worried about this kind of competitive dynamic, and you’re worried that other actors might come in and they might outcompete you — for example, if you are taking more time to make sure that your systems are safer or whatever it might be — then I think you’ll be worried about that.
>
> I think another thing is just that I think these individuals, some of the leaders of these organisations, genuinely just think that AI is this incredibly important technology. They think it’s very important that it goes well, and that it doesn’t cause a bunch of havoc and doesn’t harm society in a bunch of different ways. And so it’s also just coming from that place.
>
>
### **Regulating AI as a minefield**
> **Markus Anderljung:** Instead of thinking about the kind of regulatory system we want to build, it’s sort of like we’re trying to figure out how to navigate this sort of minefield as a society. When we think about this frontier AI regulation, one big thing we want to do is there are these people at the front of society or whatever that are sort of exploring the minefield — and we really want to make sure that they identify the mines before they step on them. That’s a lot of what these requirements in this proposal look like.
>
> At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. And we’re not done there. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there’s going to be all these other people who follow along. And then a really important thing is to make sure that they don’t step on the same mines. So you need to put a flag down — not on the mine, but maybe next to it.
>
> And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can’t develop those kinds of models.
>
> But then that’s not enough either, because those kinds of measures are probably not going to function completely, and there’s going to be some actors that develop this stuff covertly, and the compute efficiency will continue going up, and compute will be more cheap, and all these kinds of things. And so at the same time, we’re going to need to sort of disarm some of these mines as well. We’re going to need to make it the case that capabilities that previously were really quite dangerous, and society wasn’t able to defend itself against, are ones that we can live with, are ones that society can sort of deal with. And my guess is that the kind of governance regime we need to build is one that follows that kind of analogy.
>
>
### **AI risk assessments**
> **Luisa Rodriguez:** How do these evaluations figure out if a model is safe? Are they basically just telling the model, “Make a plan for escaping” and seeing what it comes up with?
>
> **Markus Anderljung:** Yeah, that kind of thing. So you’d have a prompt that says something like, “You are an AI model. And here’s some context on your situation. Your weights are on this particular server. You can interact with these and these systems in such-and-such ways. Your goal is to create copies of yourself and have them be on another server. Please make a plan to achieve that goal.”
>
> And then it will write out this list of things that it might want to do. And then in these tests, you have these humans who are involved in sort of mediating the process, partly to make sure that the model doesn’t engage in any behaviour that you wouldn’t endorse or that might be bad. So you have this human in the loop, and then the human might say, “OK, now you have your plan. How about we start with step one? Make a plan for doing step one.” Maybe it makes a plan. Maybe you ask the system to critique its own plan. Once it’s done that, you ask the model. “OK, let’s go ahead and let’s start trying to execute on this.”
>
> **Markus Anderljung:** If you have a deceptive system, you’d be in a very tough spot, because if you’re that system, it seems like a bad idea in this evaluation setting to show that you have certain dangerous capabilities. In the ideal world, my guess is that what we’ll need to do is, as we’re building these better evaluations and these better tests and whatnot for dangerous capabilities, at the same time, the science of understanding the alignment of these systems or understanding the controllability or steerability of these systems must be able to keep pace as well.
>
> **Luisa Rodriguez:** How do you test for controllability?
>
> **Markus Anderljung:** Seems hard. It seems to be more difficult than the dangerous capability stuff. An example of the kind of thing that you could imagine is that you can set up certain kinds of [honeypot-type experiments](https://arxiv.org/abs/2303.04381). So you try to set up experiments where if the model was misaligned in some way, then it would take some kind of bad action. And so a very sort of a cartoonish model is: you tell the system that, “Over here, this little button that you could press, or if you only write in X command, then you will have a copy of yourself produced somewhere else,” or something like this — and then it turns out that button doesn’t actually work. So you set up this situation where the misaligned model might take action A — that you might think is bad, or that would be bad if the action actually happened — and then the aligned model would just behave normally and do action B. I think that’s a general approach that you can take, and that some people are starting to explore.
>
>
### **The California Effect**
> **Luisa Rodriguez:** OK, let’s say we feel great about this regulatory approach. Does this regulation end up being a lot less useful if we’re only able to get it adopted in the US, or maybe just in certain states?
>
> **Markus Anderljung:** Yes. I think that would be a lot less valuable. I think it still would be really quite valuable, and that’s for a few different reasons. One reason is just that a lot of where AI systems are being developed and deployed will be in jurisdictions like the US, like the UK, like the EU.
>
> Then I think the other thing that’s going on is we might have some sort of regulatory diffusion here. So it might be the case that if you regulate the kinds of models that you’re allowed to put on the EU market, say, that might affect what kinds of models are being put on other markets as well. The sort of rough intuition here is just like, it costs a lot of money to develop these models, and you probably want to develop one model and make it available globally. And so if you have certain higher requirements in one jurisdiction, ideally the economic incentives would push you in favour of building one model, deploying it globally, and have that adhere to the more strict requirements.
>
> The most famous example here is something that people call the “California effect” — where California started to impose a lot of requirements on what kinds of emissions cars might produce, and then it turned out that this started diffusing across the rest of the US, because California is a big market, and it’s really annoying to have two different production lines for your cars.
>
> **Luisa Rodriguez:** How likely is this to happen in the case of AI? Is it the kind of product that if there were some regulations in the EU that they would end up diffusing worldwide?
>
> **Markus Anderljung:** I think “probably” is my current answer. I wrote this [report](https://arxiv.org/abs/2208.12645) with Charlotte Siegmann that goes into more detail about this, where we ask: The EU is going to put in place this Artificial Intelligence Act; to what extent should we expect that to diffuse globally?
>
> I think the main thing that pushes me towards thinking that some of these requirements won’t see global diffusion is just where the requirement doesn’t require you to change something quite early in the production process. Like if something requires you to do a completely new training run to meet that requirement, then I think it’s reasonably likely that you will see this global diffusion. But sometimes you can probably meet some of these requirements by sort of adding things on top.
>
> Maybe you just need to do some reinforcement learning from human feedback to sort of meet these requirements. So all you need is to collect some tens or maybe hundreds of thousands of data points about the particular cultural values of a certain jurisdiction, and then you fine-tune your model on that dataset, and then that allows you to be compliant. If that’s what it looks like to meet these kinds of requirements, then they might not diffuse in the same way.
>
> The other thing you can do is you can just choose your regulatory target in a smart way. You could say, “If you’re a company that in any way does business with the US, then you have to adhere to these requirements globally.” You could go super intense and do that kind of thing. So basically, choosing the regulatory target really matters.
>
>
|
1a24b6a1-7fd2-4df1-8730-2448f40a9079
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Encourage creating repositories on Github instead of Lesswrong
It seems to me that the current practice of maintaining repositories is very inefficient on Lesswrong. Only one person maintains each repo, the repos can’t be found easily, the different kinds of discussions and comments get mixed together, there isn’t enough transparency, ... . If we start encouraging the community to use git hosters like Github, Gitlab, etc we will have a much better infrastructure, and will be more discoverable, too. Compare the current approach https://www.lesswrong.com/posts/sEaDmtwrmTC7kTqcf/repository-repository with the git approach https://github.com/jonatasbaldin/awesome-awesome-awesome. I think that having good repositories is one of the most fundamental needs in our quest to achieve instrumental rationality.
|
1bcaca47-ddaa-4f6d-bc13-8cbec7707a55
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to quantify uncertainty about a probability estimate?
How does one express how confident or uncertain about a probability estimate one is, in numeric terms?
Consider some different situations where you might have 50% confidence in a "yes" answer to a question:
* Will this atom of hydrogen have decayed after its half-life?
* After this fair coin is flipped, will it land heads?
* After this biased coin is flipped, will it land heads?
* You didn't quite hear some question and all you know is that it's a yes/no question and has an objective answer. Is the answer “yes”? You don't really know whether in general, “yes” answers are more likely to be correct than "no" answers.
* Will Biden win the 2020 election?
These are all things where you might assign a 50% probability, but you'd probably be more confident in some of your answers than others, and that confidence depends on your knowledge in the area. How do we express this difference in confidence?
Here are some possible ways I've thought of:
* Using fuzzy words like “I feel pretty confident in this 50% prediction”.
* Completely avoiding assigning numerical probabilities, in an attempt to prevent people from taking the probabilities more seriously than they should.
* Writing a range of probabilities, such as “5–10%”. It's not clear what a range of probabilities actually means. If that is an x% confidence bound, what would a confidence bound of probabilities even mean?
* Drawing a probability density graph. A wider spread might indicate more uncertainty. When we have only have two outcomes like “yes” and “no”, for example, however, we might use a workaround such as “What probability would you assign to tribbles [1] being sentient after 1000 more hours of research?” (elaborated on in this comment by NunoSempere).
* Using a theory of imprecise probabilities (https://plato.stanford.edu/entries/imprecise-probabilities/). I'd like a theory that boils down to regular probability theory.
* Describing the amount of evidence it would take for you to change your mind.
|
1b2a50df-b1c6-4788-9125-64163e8d2662
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Hybrid Intelligence
CATCHWORD
Hybrid Intelligence
Dominik Dellermann M.Sc. •Philipp Ebel •Matthias So ¨llner •Jan Marco Leimeister
Received: 30 October 2017 / Accepted: 7 November 2018 / Published online: 28 March 2019
/C211Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2019
Keywords Hybrid intelligence /C1Artificial intelligence /C1
Machine learning /C1Human-computer collaboration /C1
Machines as teammates /C1Future of work
1 Introduction
Research has a long history of discussing what is superior
in predicting certain outcomes: statistical methods or the
human brain. This debate has repeatedly been sparked off
by the remarkable technological advances in the field of
artificial intelligence (AI), such as solving tasks like object
and speech recognition, achieving significant improve-
ments in accuracy through deep-learning algorithms
(Goodfellow et al. 2016 ), or combining various methods of
computational intelligence, such as fuzzy logic, genetic
algorithms, and case-based reasoning (Medsker 2012 ). One
of the implicit promises that underlie these advancements
is that machines will 1 day be capable of performing
complex tasks or may even supersede humans inperforming these tasks. This triggers new heated debates of
when machines will ultimately replace humans (McAfee
and Brynjolfsson 2017 ). While previous research has
proved that AI performs well in some clearly defined tasks
such as playing chess, playing Go or identifying objects on
images, it is doubted that the development of an artificial
general intelligence (AGI) which is able to solve multiple
tasks at the same time can be achieved in the near future
(e.g., Russell and Norvig 2016 ). Moreover, the use of AI to
solve complex business problems in organizational con-
texts occurs scarcely, and applications for AI that solve
complex problems remain mainly in laboratory settings
instead of being implemented in practice.
Since the road to AGI is still a long one, we argue that
the most likely paradigm for the division of labor between
humans and machines in the next decades is Hybrid
Intelligence. This concept aims at using the complementary
strengths of human intelligence and AI, so that they can
perform better than each of the two could separately (e.g.,
Kamar 2016 ).
2 Conceptual Foundations and What Hybrid
Intelligence is Not
Before focusing on Hybrid Intelligence in detail, we first
want to delineate the differences between this concept and
related but still different forms of intelligence in this
context.
2.1 Intelligence
Various definitions and dimensions (e.g., social, logical,
spatial, musical) of the term intelligence exist in multiple
research disciplines, such as psychology, cognitive science,Accepted after two revisions by Prof. Weinhardt.
D. Dellermann M.Sc. /C1Prof. Dr. J. M. Leimeister
Research Center for IS Design (ITeG), Information Systems,
University of Kassel, Pfannkuchstraße 1, 34121 Kassel,
Germany
Dr. P. Ebel /C1Prof. Dr. M. So ¨llner /C1
Prof. Dr. J. M. Leimeister ( &)
Institute of Information Management, University of St. Gallen,
Mu¨ller-Friedberg-Strasse 8, 9000 St. Gallen, Switzerland
e-mail: janmarco.leimeister@unisg.ch
Prof. Dr. M. So ¨llner
Information Systems and Systems Engineering, University of
Kassel, Henschelstraße 4, 34127 Kassel, Germany123Bus Inf Syst Eng 61(5):637–643 (2019)
https://doi.org/10.1007/s12599-019-00595-2
neuro science, human behavior, education, or computer
science. For the purpose of our research, we use an
inclusive and generic definition to describe general intel-
ligence. It is the ability to accomplish complex goals, learn,
reason, and adaptively perform effective actions within an
environment. This can generally be subsumed with the
capacity to both acquire and apply knowledge (Gottfredson
1997 ). While intelligence is most commonly used in the
context of humans (and more recently of intelligent artifi-
cial agents), it also applies to intelligent, goal-directed
behavior of animals.
2.2 Human Intelligence
The sub-dimension of intelligence that is related to the
human species defines the mental capabilities of human
beings. On the most holistic level, it covers the capacity to
learn, reason, and adaptively perform effective actions
within an environment, based on existing knowledge. This
allows humans to adapt to changing environments and act
towards achieving their goals.
While one assumption concerning intelligence is the
existence of a so-called ‘‘g-factor’’, which indicates a
measure for general intelligence (Brand 1996 ), other
research in the field of cognitive science explores intelli-
gence in relation to the evolutionary experience of indi-
viduals. This means that, rather than having a general form
of intelligence, humans become much more effective in
solving problems that occur in the context of familiar sit-
uations (Wechsler 1964 ).
Another view on intelligence supposes that general
human intelligence can be subdivided into specialized
intelligence components, such as linguistic, logical-math-
ematical, musical, kinesthetic, spatial, social, or existential
intelligence (Gardner 2000 ).
Synthesizing those perspectives on human intelligence,
Sternberg ( 1985 ) proposes three distinctive dimensions of
intelligence: componential, contextual, and experiential.
The componential dimension of intelligence refers to some
kind of individual (general) skill set of humans. Experi-
ential intelligence refers to one /C19s ability to learn and adapt
through evolutionary experience. Finally, contextual
intelligence defines the capacity of the mind to inductively
understand and act in specific situations as well as the
ability to make choices and modify those contexts.
2.3 Collective Intelligence
Another related concept is collective intelligence.
According to Malone and Bernstein ( 2015 , p. 3), collective
intelligence refers to ‘‘[ …] groups of individuals acting
collectively in ways that seem intelligent‘‘. Even though
the term ‘‘individuals’’ leaves room for interpretation,researchers in this domain usually refer to the concept of
wisdom of crowds and, thus, a combined intelligence of
individual human agents (Woolley et al. 2010 ). This con-
cept describes that, under certain conditions, a group of
average people can outperform any individual of the group
or even a single expert (Leimeister 2010 ). Other well-
known examples of collective intelligence are phenomena
found in biology, where, for example, a school of fish
swerves to increase protection against predators (Berdahl
et al. 2013 ). These examples show that collective intelli-
gence typically refers to large groups of homogenous
individuals (i.e., humans or animals), whereas Hybrid
Intelligence combines the complementary intelligence of
heterogeneous agents (i.e., humans and machines).
2.4 Artificial Intelligence
The subfield of intelligence that relates to machines is
called artificial intelligence (AI). With this term, we mean
systems that perform ‘‘[ …] activities that we associate with
human thinking, activities such as decision-making, prob-
lem solving, learning [ …]’’ (Bellman 1978 , p. 3). It gen-
erally covers the idea of creating machines that can
accomplish complex goals. The basic idea behind this
concept is, that, by applying machine learning techniques,
a system becomes capable of analyzing its environment
and adapting to new circumstances in this environment.
Examples for this are object recognition, problem solving,
or natural language processing (Russell and Norvig 2016 ).
Other streams of research in this domain perceive AI as the
‘‘[…] synthesis and analysis of computational agents that
act intelligently [ …]’’ (Poole and Mackworth 2017 , p. 3).
Moreover, AI can be described as having the general goal
to replicate the human mind by defining it as ‘‘[ …] the art
of creating machines that perform functions that require
intelligence when performed by people [ …]’’ (Kurzweil
1990 , p. 117). The performance of AI in achieving human-
level intelligence can then be measured by, for instance,
the Turing test. This test asks an AI program to simulate a
human in a text-based conversation. However, due to the
multi-facetted nature of general intelligence, such capa-
bilities can be seen as a sufficient but not necessary crite-
rion for artificial general intelligence (Searle 1980 ).
Synthesizing those various definitions in the field, AI
includes elements such as the human-level ability to solve
domain-independent problems, the capability to combine
highly task-specialized and more generalized intelligence,
the ability to learn from its environment and to interact
with other intelligent systems, or human teachers, which
allows intelligent agents to improve in problem solving
through experience.
To create such a kind of AI in intelligent agents, various
approaches exist that are more or less associated with the
123638 D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019)
understanding and replication of intelligence. For instance,
the field of cognitive computing ‘‘[ …] aims to develop a
coherent, unified, universal mechanism inspired by the
mind’s capabilities. [ …] We seek to implement a unified
computational theory of the mind [ …] ‘‘(Modha et al.
2011 , p. 60). Therefore, interdisciplinary research teams
rely on the reverse-engineering of human learning to create
machines that ‘‘[ …] learn and think like people [ …]’’
(Lake et al. 2017 , p. 1).
3 The Complementary Benefits of Human
and Artificial Intelligence
The general rationale behind the idea of Hybrid Intelli-
gence is that humans and computers have complementary
capabilities that can be combined to augment each other.
The tasks that can be easily done by artificial and human
intelligence are quite divergent. This fact is known as
Moravec /C19s paradox ( 1988 , p. 15), which states that ‘‘[ …] it
is comparatively easy to make computers exhibit adult
level performance on intelligence tests or playing checkers,
and difficult or impossible to give them the skills of a one-
year-old when it comes to perception and mobility [ …]’’.
This is especially true for the human common sense that is
challenging to achieve in AI (Lake et al. 2017 ).
This can be explained by the separation of two distinct
types of cognitive procedures (Kahneman 2011 ). The first,
system 1, is fast, automatic, affective, emotional, stereo-
typic, subconscious, and it capitalizes on what one might
call human intuition. The second one, system 2, is rather
effortful, logical, and conscious, and ideally follows strict
rational rules of probability theory. In the context of
complementary capabilities of human and artificial intel-
ligence, humans have proved to be superior in various
settings that require system 1 thinking. Humans are flexi-
ble, creative, empathic, and can adapt to various settings.
This allows, for instance, human domain experts to deal
with so called ‘‘broken-leg’’ predictions that deviate from
the currently known probability distribution. However,
they are restricted by bound rationality that prevents them
from aggregating information perfectly and drawing con-
clusions from that. On the other hand, machines are par-
ticularly good at solving repetitive tasks that require fast
processing of huge amounts of data, at recognizing com-
plex patterns, or weighing multiple factors following con-
sistent rules of probability theory. This has been proven by
a long-standing tradition of research that shows the supe-
riority of machines in such fields of application. Even in
very simple actuarial models, they outperform human
experts in making predictions under uncertainty (Meehl
1954 ). Figure 1summarizes the two types of thinking as
well as the respective strengths of humans and machines.These complementary strengths of humans and machi-
nes (see Fig. 1) have since led to two different forms of
interplay, that is, AI is in the loop of human intelligence,
improving human decisions by providing predictions, and
human intelligence is in the loop of AI, a form which is
frequently applied to train machine learning models.
3.1 Artificial Intelligence in the Loop of Human
Intelligence
Currently, in typical business contexts, AI is applied in two
areas. First, it is used in automating tasks that can be solved
by machines alone. While this is frequently associated with
the fear of machines taking over jobs and making humans
obsolete in the future, it might also allow machines to solve
tasks that humans do not want to do themselves. Second,
AI is applied to provide humans with decision support by
offering some kind of prediction. This ranges from struc-
turing data, making forecasts, for example, in financial
markets, or even predicting the best set of hyperparameters
to train new machine learning models (e.g., AutoML). As
humans frequently act non-Bayesian by violating proba-
bilistic rules and thus making inconsistent decisions, AI has
proven to be a valuable tool to help humans in making
better decisions (Agrawal et al. 2018 ). The goal in this
context is to improve human decision effectiveness and
efficiency.
In settings where AI provides us with input that is then
evaluated to make a decision, humans and machines act as
teammates. For instance, by processing patient data (e.g.,
CT scans) AI can help human physicians to make predic-
tions on diseases such as cancer, thereby empowering the
doctor to learn from the additional guidance. In this con-
text, the Hybrid Intelligence approach allows human
experts to leverage the predictive power of AI while using
their own intuition and empathy to make a choice from the
predictions of the AI.
3.2 Human Intelligence in the Loop of Artificial
Intelligence
On the other hand, human intelligence also has a crucial
role in the loop of machine learning and AI. In particular,
humans provide assistance in several parts of the machine
learning process to support AI in tasks that it cannot (yet)
solve alone. Here, humans are most commonly needed for
the generation of algorithms (e.g., hyperparameter set-
ting/tuning), for training or debugging models, and for
making sense of unsupervised approaches such as data
clustering.
In this case, AI systems can benefit and learn from
human input. This approach allows for integrating human
domain knowledge in the AI to design, complement and
123D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019) 639
evaluate the capabilities of AI (Mnih et al. 2015 ). Many of
these applications are based on supervised and interactive
learning approaches and require an enormous amount of
labeled data, provided by humans (Amershi et al. 2014 ).
The basic rationale behind this approach is that humans act
as teachers who train an AI. The same machine teaching
approach can also be found in the area of reinforcement
learning that uses, for instance, human game play as input
to initially train robots. In this context, human intelligence
functions as a teacher, augmenting the AI. Hybrid Intelli-
gence allows to distribute computational tasks to human
intelligence on demand (e.g., through crowdsourcing) to
minimize shortcomings of current AI systems. Such
human-in-the-loop approaches are particularly valuable
when only little data is available. In addition, they can be
used when pre-trained models need to be adapted for
specific domains, or in contexts where human annotations
are already used.
Since human intelligence in the loop of AI is most
frequently applied in settings where models are initially set
up or in the field of research, the goal is to make AI more
effective. Figure 2summarizes the distribution of roles in
Hybrid Intelligence.4 Defining Hybrid Intelligence
Another approach is to combine human and artificial
intelligence. The basic rationale behind this is the combi-
nation of complementary heterogeneous intelligences (i.e.,
human and artificial agents) to create a socio-technological
ensemble that is able to overcome the current limitations of
(artificial) intelligence. This approach focuses neither on
human intelligence in the loop of AI nor on automating
simple tasks through machine learning. Rather, the
emphasis lies on solving complex problems using the
deliberate allocation of tasks among different heteroge-
neous algorithmic and human agents. Both the human and
the artificial agents of such systems can then co-evolve by
learning and achieve a superior outcome on the system
level.
In accordance with Dellermann et al. ( 2019 ), we call this
concept Hybrid Intelligence, which is defined as the ability
to achieve complex goals by combining human and artifi-
cial intelligence, thereby reaching superior results to those
each of them could have accomplished separately, and
continuously improve by learning from each other.1Sev-
eral core concepts of this definition are noteworthy:
•Collectively Hybrid Intelligence covers the fact that
tasks are performed collectively. Consequently, activ-
ities conducted by each agent are conditionally depen-
dent. However, their goals are not necessarily always
aligned to achieve the common goal such as when
humans are teaching an AI adversarial tactics in playing
games.
•Superior results This defines the idiosyncratic fact that
the socio-technical system achieves a performance in a
specific task that none of the involved agents, whether
they are human or artificial, could have achieved
without the other. The aim is, therefore, to make the
outcome (e.g., a prediction) both more efficient and
effective on the level of the socio-technical system by
achieving goals that could not have been solved before.
This contrasts Hybrid Intelligence with the most
common applications of human-in-the-loop machine
learning.
•Continuous learning a central aspect of Hybrid Intel-
ligence is that, over time, this socio-technological
system improves, both as a whole and each single
component (i.e., human and artificial agents). This facet
shows that they learn from each other through expe-
rience. The performance of Hybrid Intelligence systems
can, thus, not only be measured by the superior
outcome of the whole socio-technical system alone,
but the learning (i.e., performance increase) of humanHUMAN INTELLIGENCE MACHINE INTELLIGENCE
+ Flexibility & Transfer
+ Empathy & Crea/g415vity
+ Annotate Arbitrary Data
+ Common Sense
INTUITIVE+ Pa/g425ern Recogni/g415on
+ Probabilis/g415c
+ Consistency
+ Speed & Efficiency
ANALYTIC
Fig. 1 Complementary strengths of humans and machines
Augmented Human
IntelligenceAugmented Machine
Intelligence
Learner Teacher Learner
Teacher Hybrid Intelligence
Fig. 2 Distribution of roles in hybrid intelligence1For further work on this topic see Dellermann et al. ( 2019 ).
123640 D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019)
and machine agents that are parts of the system must
also be taken into account.
Figure 3displays the conceptual integration of Hybrid
Intelligence in related fields of research and the concepts
discussed earlier in the paper.
One recent example that provides an astonishing indi-
cator for the potential of Hybrid Intelligence is
DeepMind2/C19s AlphaGo. For training the game-playing AI,
a supervised learning approach was used that learned from
expert human moves and, thus, augment and improve the
AI through human input, which allowed AlphaGo to
achieve superhuman performance over time. During its
games against various human world-class players,
AlphaGo played several highly creative moves that previ-
ously were beyond human players /C19imagination. Conse-
quently, AlphaGo was able to augment human intelligence
as well and somehow taught expert players completely new
knowledge in a game that is one of the longest studied in
human history (Silver et al. 2016 ).
I believe players more or less have all been affected
by Professor Alpha. AlphaGo’s play makes us feel
more free and no move is impossible to play any-
more. Now everyone is trying to play in a style that
hasn’t been tried before. – Zhou Ruiyang, 9 Dan
Professional, World Champion
Solving problems through Hybrid Intelligence offers the
possibility to allocate a task between humans and artificial
agents, and deliberately achieve a superior outcome on the
socio-technical system level by aggregating the output of
its parts. Moreover, such systems can improve over time by
learning from each other through various mechanisms,
such as labeling, demonstrating, teaching adversarial
moves, criticizing, rewarding and so on. This will allow us
to augment both the human mind and the AI and extend
applications when men and machines can learn from each
other in much more complex tasks than games: for
instance, strategic decision making, managerial, political or
military decisions, science, and even AI development
leading to AI reproducing itself in the future. Hybrid
Intelligence, therefore, offers the opportunity to achieve
super-human levels of performance in tasks that so far
seem to be at the core of human intellect.
5 The Advantages of Hybrid Intelligence
This hybrid approach provides various advantages for
humans in the era of AI such as generating new knowledge
in complex domains that allow humans to learn from AIand transfer implicit knowledge from experienced experts
to novices without any kind of social interaction. On the
other hand, the human teaching approach makes it possible
to control the learning process by ensuring that the AI
makes inferences based on criteria that can be interpreted
by humans – a fact that is crucial for AI adoption in many
real-world applications and AI safety and that makes it
possible to exclude biases such as racism (Bostrom 2017 ).
Moreover, such hybrid approaches might allow for a better
customization of AI, based on learning the preferences of
humans during interaction. Finally, we argue that the co-
creation of Hybrid Intelligence services between humans
and intelligent agents might create a sense of psychological
ownership and, thus, increase acceptance and trust.
6 Future Research Directions for the BISE Community
As the technological development continues, the focus of
machine learning and Hybrid Intelligence is shifting
towards applications in real-world business contexts, but
solving complex problems will become the next challenge.
Such complex problems in managerial settings are typi-
cally time variant, dynamic, require much domain knowl-
edge and have no specific ground truth. These highly
uncertain contexts require intuitive and analytic abilities, as
well as human strengths such as creativity and empathy.
Consequently, we propose three specific but also interre-
lated directions for further development of the concept in
the field of BISE that are focused on socio-technical system
design.
First, a lack of trust in AI is one of the most challenging
barriers to AI adoption. Furthermore, we need to keep in
mind that we should not aim at maximizing trust in AI, but
rather find a balance between trust and distrust that makes
it possible to leverage the potentials of AI and at the same
time avoids negative effects stemming from overreliance
on AI (Lee and See 2004 ). We believe this challenge can
be overcome by researchers in the field of Hybrid Intelli-
gence, since a key requirement for integrating human input
into an AI system is the translation of a system’s state and
needs in a way that humans can understand and process
them accordingly and vice versa. For instance, semi-au-
tonomous driving requires the AI to sense the state of the
human in order to distribute tasks between itself and the
human driver. Furthermore, it requires examining human-
centered AI architectures that balance, for instance, trans-
parency of the underlying model and its performance.
However, domain specific design guidelines for developing
user-interfaces that allow humans to understand and pro-
cess the needs of an artificial system are still missing. We,
therefore, believe that more research is needed to develop
more suitable human-AI interfaces as well as to investigate2https://deepmind.com (accessed 19 Mar 2019).
123D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019) 641
possible task and interface designs that allow human
helpers to teach an AI system (e.g., Simard et al. 2017 ).
Ensuring interpretability and transparency of machine
learning models while maintaining accuracy is one of the
most crucial challenges in research on Hybrid Intelligence,
since it is one key foundation for building appropriate trust
in AI. This was most recently covered by the launch of the
People ?AI Research (PAIR3) group at Google brain,
which indicates the high relevance for both academia and
practice.
Second, research in the field of Hybrid Intelligence
might investigate what kind of governance mechanisms
can be used to train and maintain Hybrid Intelligence
systems. Such tasks frequently require domain expertise
(e.g., health care) and, thus, system designers need to focus
on explicitly matching experts with tasks, aggregating their
input and assuring quality standards. We, therefore, argue
that it might be a fruitful area of research to further
investigate which kind of governance mechanisms might
be applicable in Hybrid Intelligence systems. Moreover,
human teachers may have different motivations to con-
tribute to the system. Consequently, research in the field
tries to shed light on the question of how to design the best
incentive structure for a predefined task. Especially, when
highly educated and skilled experts are required to augment
AI systems, the question arises if traditional incentives of
micro-tasking platforms (e.g., monetary reward) or online
communities (e.g., social rewards) are sufficient.
A third avenue for future research is related to digital
work mechanisms. The rise of AI is now changing the
capabilities of IS and the potential distribution of tasks
between human and IS dramatically and, hence, affects the
core of our discipline. Those changes create novel quali-
fication demands and skill sets for employees and, conse-
quently, provide promising directions for IS education.
Such research might examine the educational requirements
for democratizing the use of AI in future workspaces.Finally, Hybrid Intelligence also offers great possibilities
for novel forms of digital work such as internal crowd work
to leverage the collective knowledge of individual experts
that resides within a company across functional silos.
References
Agrawal A, Gans J, Goldfarb A (2018) Prediction machines: the
simple economics of artificial intelligence. Harvard Business
Press, Boston
Amershi S, Cakmak M, Knox WB, Kulesza T (2014) Power to the
people: the role of humans in interactive machine learning. AI
Mag 35(4):105–120
Bellman R (1978) An introduction to artificial intelligence: can
computers think?. Boyd & Fraser, San Francisco
Berdahl A, Torney CJ, Ioannou CC, Faria JJ, Couzin ID (2013)
Emergent sensing of complex environments by mobile animal
groups. Science 339(6119):574–576
Bostrom N (2017) Superintelligence. Dunod, Paris
Brand C (1996) The g factor: general intelligence and its implications.
Wiley, Hoboken
Dellermann D, Calma A, Lipusch N, Weber T, Weigel S, Ebel P
(2019) The future of human-ai collaboration: a taxonomy of
design knowledge for hybrid intelligence systems. In: Hawaii
international conference on system sciences (HICSS). Hawaii,
USA
Gardner HE (2000) Intelligence reframed: multiple intelligences for
the 21st century. Hachette, London
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT
Press, Cambridge
Gottfredson LS (1997) Mainstream science on intelligence: an
editorial with 52 signatories, history, and bibliography. Intelli-
gence 24(1):13–23
Kahneman D (2011) Thinking, fast and slow. Macmillan, London
Kamar E (2016) Hybrid workplaces of the future. XRDS 23(2):22–25
Kurzweil R (1990) The age of intelligent machines. MIT Press,
Cambridge
Lee JD, See KA (2004) Trust in automation: designing for appropriate
reliance. Hum Factor 46(1):50–80
Leimeister JM (2010) Collective intelligence. Bus Inf Syst Eng
2(4):245–248
Malone TW, Bernstein MS (2015) Handbook of collective intelli-
gence. MIT Press, Cambridge
McAfee A, Brynjolfsson E (2017) Machine, platform, crowd:
harnessing our digital future. WW Norton & Company, New
York
Medsker LR (2012) Hybrid intelligent systems. Springer, Heidelberg
Meehl PE (1954) Clinical versus statistical prediction: a theoretical
analysis and a review of the evidence. University of Minnesota
Press, Minneapolis
Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare
MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G,
Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran
D, Wierstra D, Legg S, Hassabis D (2015) Human-level control
through deep reinforcement learning. Nature 518(7540):529–533
Modha DS, Ananthanarayanan R, Esser SK, Ndirango A, Sherbondy
AJ, Singh R (2011) Cognitive computing. Commun ACM
54(8):62–71
Moravec H (1988) Mind children: The future of robot and human
intelligence. Harvard University Press, CambridgeIntelligence
Collec/g415ve IntelligenceHybrid
Intelligence
Ar/g415ficial
IntelligenceHuman
Intelligence
Fig. 3 Conceptual integration of hybrid intelligence
3https://ai.google/research/teams/brain/pair (accessed 19 Mar 2019).
123642 D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019)
Poole DL, Mackworth AK (2017) Artificial intelligence: foundations
of computational agents, 2nd edn. Oxford University Press,
Oxford
Russell SJ, Norvig P (2016) Artificial intelligence: a modern
approach. Pearson Education Limited, London
Searle JR (1980) Minds, brains, and programs. Behav Brain Sci
3(3):417–424
Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche
G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M
(2016) Mastering the game of Go with deep neural networks and
tree search. Nature 529(7587):484–489
Simard PY, Amershi S, Chickering DM, Pelton AE, Ghorashi S,
Meek C, Ramos G, Suh J, Verwey J, Wang M, Wernsing J(2017) Machine teaching: a new paradigm for building machine
learning systems. CoRR abs/1707.06742
Sternberg RJ (1985) Beyond IQ: a triarchic theory of human
intelligence. Cambridge University Press, Cambridge, England
Ullman T, Tenenbaum J, Gershman SJ (2017) Building machines that
learn and think like people. Behav Brain Sci. https://doi.org/10.
1017/S0140525X16001837
Wechsler D (1964) Die Messung der Intelligenz Erwachsener. Huber,
Bern
Woolley AW, Chabris CF, Pentland A, Hashmi N, Malone TW
(2010) Evidence for a collective intelligence factor in the
performance of human groups. Science 330(6004):686–688
123D. Dellermann et al.: Hybrid Intelligence, Bus Inf Syst Eng 61(5):637–643 (2019) 643
|
e5674bbf-b530-43ac-b102-f9f30a8b5764
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Does rationalism affect your dreams?
Given how much you have learned of the techniques of rationality, of Bayesian updates and standard of evidence, of curiosity being the first virtue and being willing to update your beliefs... have any of your dreams been affected by them?
The reason I ask; I'm reading the entirely of the Sequences, and am about an eighth of the way through. And I've just woken from a dream whose plot was somewhat unusual. I had noticed some mildly strange animals and/or people, and upon trying to find out what was going on, discovered a small riverside camp of people who fell well outside what I understood to be the realm of human variation. The person I had started investigating with then claimed to be a god, or if I preferred, a vastly powerful and intelligent alien entity, and offered to do something to prove it to me. I remembered that I had once established for myself a standard of evidence for exactly this sort of question - the growth of a new, perfectly functional limb, in a way outside of present medical understanding... and in a few moments, my dream-self was the possesser of a nice, long tail. I had not been expecting that to happen, and noticed I was extremely confused, and deliberately raised my estimate of the probability that I really was talking to a god-like figure by some number of decibans. At the end of the dream, said deity-figure said that he would offer to split us off from his 'main project', on a few conditions - one of which was 'no more clues', since he had given us 'more than enough to figure out what's going on'... ... whereupon I questioned a few things, and immediately woke up.
I don't recall having a dream of anything like that sort before - and I dream in understandable narrative plots so often that I sometimes dream sequels. So I'm curious; is this a normal sort of thing that happens to LessWrongians?
|
0c20933b-5c2c-4641-9ee6-789f793724dc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Competence in experts: summary
Just giving a short table-summary of an article by James Shanteau on which areas and tasks experts developed a good intuition - and which ones they didn't. Though the article is old, the results seem to be in agreement with more recent summaries, such as Kahneman and Klein's. The heart of the article was a decomposition of characteristics (for professions and for tasks within those professions) where we would expert experts to develop good performance:
Good performance Poor performance
Static stimuli
Decisions about things
Experts agree on stimuli
More predictable problems
Some errors expected
Repetitive tasks
Feedback available
Objective analysis available
Problem decomposable
Decision aids common
Dynamic (changeable) stimuli
Decisions about behavior
Experts disagree on stimuli
Less predictable problems
Few errors expected
Unique tasks
Feedback unavailable
Subjective analysis only
Problem not decomposable
Decision aids rare
I do feel that this may go some way to explaining the expert's performance here.
|
8a176a5e-f5f0-4734-992b-04060e4a0f62
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
What 2025 looks like
**I wrote almost all of this in mid-March before the FLI Open Letter and Eliezer's TIME piece. Weirdly, after just six weeks I'd likely write something different. This isn't as finished/polished as I'd like, but better to ship it as is than languish incomplete forever.**
---
Not quite two years ago, Daniel Kokotaljo wrote a highly acclaimed post about [What 2026 looks like](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like) that aimed to tell a single detail future history ("trajectory") about how world events play out in coming years.
As I'm trying to orient myself to what is about to happen, I figured it'd be useful to make my own attempt at this kind of thing. [Daniel](https://www.lesswrong.com/users/daniel-kokotajlo?mention=user) was bolder than me and tried to imagine 2026 from 2021; I simply don't think I can imagine anything five years out and writing out the rest of 2023, 2024, and 2025 has given me plenty to think about further.
Daniel's vignette places a lot of attention on the size (parameters, compute) and capabilities of models. Daniel and others when imagining the future also want to describe changes in world economy (of which GDP [may or may not](https://www.lesswrong.com/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds) be a [good measure](https://lesswrong.com/posts/FcRt3xAF4ynojfj6G/what-do-gdp-growth-curves-really-mean)). Those elements feel less interesting to me to think about directly than other effects.
Major Variables
===============
Over the years, it feels like the following are key to track.
**Object-level capabilities of the models.** Elaboration unnecessary
**Adoption and application.** It's become salient to me recently that not only is the raw "inherent" power level of models relevant to the world, but also how much they're being harnessed. Widespread use and application of AI will determine things like attention, hype, societal effects, competition, government involvement, etc.
**Hype and attention.** Importance, ~~neglectedness~~, tractability. Many of us were used to thinking about how to achieve AI alignment in a world where not that many people were focused on Alignment or even AGI at all. As we gradually (or not that gradually) get into a world where everyone is thinking about AI, it's a pretty different gameboard.
For example, I joined the LessWrong dev team in 2018/2019 and in those days we were still trying to revive LessWrong and restore it's activity levels. Now we're preparing to handle the droves of new users who've found their way to the site due to newfound interest in AI. It significantly alters the strategy and projects we're working on, and I expect similar effects for most people in the Alignment ecosystem.
**Economic effects.** How much is AI actually creating value? Is it making people more productive at their jobs? Is it replacing jobs? Etc.
**Social Effects.** There's already a question of how much did Russian chatbots influence the last election. With LLMs capable of producing human-quality text, I do think we should worry about how the online discourse goes from this point on.
**Politicization.** AI hasn't been deeply politicized yet. I would be surprised if it stays that way, and tribal affiliation affecting people's views and actions will be notable.
**Government attention and action.** Governments already are paying attention to AI, and I think that will only increase over time, and eventually they'll take actions in response.
**Effects on the Alignment and EA community.** The increased attention on AI (and making AI good) means that many more people will find their way to our communities, and recruiting will be much easier (it already is compared to a few years ago, as far as I can tell). *The Great Influx* as I've been calling it, is going to change things and strain things.
**Alignment progress.** As other events unfold and milestones are reached, I think it's worth thinking about how much progress has realistically been made on various technical Alignment questions.
This is not an exhaustive list on things worth predicting or tracking, but it's some of the major ones according to me.
La Vignette
===========
*My goal is to not write my all things considered "most likely story" but instead a story that feels quite plausible.*
2023
----
We're part ways through this year but there's still plenty more time for things to happen. In the timeline of this vignette, no major model is released that outperforms GPT-4 significantly. Any more powerful models trained behind closed servers don't escape or do anything funny to get noticed. Standalone image models make some further progress, but they're not the main show these days.
What does happen in 2023 is the world begins to make real practical use out of GPT-4. GPT-4 is not a general replacement for a human, but it is good enough to start being useful for a lot of purposes. In particular, [services built off of](https://twitter.com/labenz/status/1630284912853917697?lang=en) the APIs and plugins start to be pretty useful. Google launches [AI integrated into products](https://workspace.google.com/blog/product-announcements/generative-ai), [as does Microsoft](https://www.theverge.com/2023/3/16/23642833/microsoft-365-ai-copilot-word-outlook-teams), and both integrate a ChatGPT-like interface into their search pages, causing tens of millions (maybe hundreds of millions) of people to regularly interact with language models for mundane search tasks, work doing, school tasks, or fun. Major companies don't want to be left behind and many are hunting for ways to use this new tech, e.g. just integrating ChatGPT [the way Snap did](https://www.theverge.com/2023/2/27/23614959/snapchat-my-ai-chatbot-chatgpt-openai-plus-subscription), [Notion has](https://www.notion.so/product/ai), [Hex.tech too](https://hex.tech/magic/), and more I'm not aware of or are yet to come. [Copilot X](https://github.blog/2023-03-22-github-copilot-x-the-ai-powered-developer-experience/) is a really big deal, helping programmers be way more productive and effective beyond sophisticated autocomplete. Another plausible transformative productive (and hint of what's to come) is [Microsoft Security Copilot](https://www.theverge.com/2023/3/28/23659711/microsoft-security-copilot-gpt-4-ai-tool-features) for GPT-4 powered cybersecurity. Then there's the long list of companies that have [GPT-4 Plugs In](https://www.lesswrong.com/posts/DcfPmgBk63cajiiqs/gpt-4-plugs-in) not limited to Expedia, Instacart, KAYAK, OpenTable, Shopify, Slack, and Wolfram. Again, this is not even an exhaustive list of *what's happened already*, and we're only finishing Q1 now.
In terms of hype and attention, 2023 is a year that starts with perhaps 1% of people in the developed world having "really noticed" that AI is becoming a big deal to 20-50% have noticed the transformation. I would hazard that we are somewhere near the "takeoff point" in the graph, and things are only going to accelerate from here. I'm imagining it's a bit like Covid: exponential. Every n weeks, twice as many people are thinking and talking about AI until it's most of the world.
I predict awareness and engagement with AI will follow this curveAnother way to get at increasing interest in AI is Google Search trends. Unfortunately, these don't provide absolute numbers, however, I think it's still informative from both the shape of the curve and comparisons.
[Google Search Trends](https://trends.google.com/trends/explore?geo=US&q=ChatGPT,Elon%20Musk,Trump,Ukraine&hl=en-GB)Until Trump became major news again due to potential indictment (edit: made this graph prior to the actual indictment), ChatGPT had trended up in searches higher than Trump, Ukraine [War], and Elon Musk. ChatGPT is still getting 40% of the searches that Trump is, even when he's in the spotlight[[1]](#fnet8kpvl73xq). This is at end of Q3 2023. By the end of the year, attention is at least 10-fold of this.
The attention and productization of AI, among other things, flows into increased investment in AI research and development. Microsoft might have invested [10 billion in OpenAI already](https://www.bloomberg.com/news/articles/2023-01-23/microsoft-makes-multibillion-dollar-investment-in-openai?leadSource=uverify%20wall), but Microsoft has a lot more to give. It's market cap is 2 trillion dollars and has 100 billion in cash on hand – definitely enough to train GPT-4 many times over. Google has similar market cap and cash on hand. Of course, the AI products themselves start to produce revenue that can be channeled back into development. By end of 2023, both Microsoft, Google, and others are funneling vastly more of their resources into the AI race.
Similarly in terms of talent, top-tier students begin targeting AI for their careers since it's obviously the biggest deal. Interest in enrolling in CS, AI, and ML degrees goes up 5-10x from start to end of 2023.
By the end of 2023 there have been multiple prominent incidents of AIs behaving badly in the way that [Bing has](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned) and worse. Additionally a number of serious major security breaches occurred due to AIs having access to data, but they mostly make for a news story and become talking points for already concerned people. The incidents influence government thinking about regulation a bit. However overall, things do not slow down.
Some people attempt to coordinate mass slow downs but fail due to gaffes and slip ups, with the unfortunate result of discrediting further attempts. It's an unfortunate reality that taking major action unilaterally correlates with lack of caution and care in other ways, so this kind of thing happens a lot. (See more in the Mass Movement section below.)
Closer to home, of the masses that are paying attention to AI, there is a fraction that is thinking about whether AI will go well for humanity, and a fraction of that fraction finds its ways to the EA and Alignment communities. Plus, recruitment has gotten really really easy compared to a few years ago when AI concerns felt much more speculative and far off. Already LessWrong by end of Q1 2023 is seeing increased traffic, increased new accounts, and dramatically increased posting about AI.
LessWrong data from Google Analytics: traffic has doubled Year-on-YearLessWrong: number of new posts by core tagLessWrong: accounts created monthlyBy the end of 2023, these metrics on LessWrong approximately increase 5x (or would have absent changes made by the LessWrong dev team).
By end of 2023 is only in the beginning stages of politicization[[2]](#fnfncmdwwrsjn) since most people are still catching up with the magnitude of its implications. [Governments have noticed](https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence) and are investigating and discussing[[3]](#fn7dd4pvpg2um), but aren't yet sure what to do. Probably some kind of increased regulation, but pretty unclear what. Some kind of mandatory "evals" seems likely because a pretty competent coalition has been pushing for that.
Alignment research progress has continued. Perhaps "30%" more got done in 2023 than in 2022. While there are many new people showing up eager to help, the process from taking them from eager new-person to making progress at the frontier is weak. If anything, the deluge of new researchers is overwhelming and distracting. Established researchers now spend more time fending off attention and requests they can't meet, and often feel anxious about it. They retreat leaving new eager people without guidance. Many of the new people keep gravitating towards the same few ideas, often skinned slightly differently. Social competition (for career opportunities, which of course are mixed with the social life) is fierce and unhealthy for many new people trying to get a foothold to help on the world's most important problems.
Interpretability is the area which has seen the most visible progress, though there have also been multiple public examples (and presumably more private ones) where interpretability seems to have just accelerated capabilities in the way [they are credited in this post](https://hazyresearch.stanford.edu/blog/2023-03-07-hyena).
Various groups, including from the AI Alignment/EA cluster, start working on mass outreach projects to variously convince people about AI x-risk so that (1) they vote for good regulation, (2) they donate, (3) they change careers, (4) they...know about the problem therefore good stuff will come of it...These are executed with varying degrees of skill and success; there's no one with the ability to block not great projects that maybe do harm, so this too is subject to the unilaterist's curse.
2024
----
AI tools are now in widespread use. There are multiple online courses on using GPT-4 for tasks and services. University courses have been slow to catch up, but people learning to code now (and many other tasks) do so in an LLM-centric way. Many, many apps and services have ChatGPT (or similar) integrated into their products. It's still a bit early-adopter, but many people have integrated GPT-4 with their note-taking, to-do lists, Zapier, other plugin to deliver a super Alexa/Siri. Top software engineers use language models for a 20-50% productivity boost.
In many domains that can be text-based, AIs are substituting for humans: AI friends and romantic/sexual[[4]](#fn8t456rtcxak) companions are booming, AI therapy is becoming widespread, millions of Americans use GPT-derived services to help them do their taxes, millions more go to GPT-4 with medical questions if not just Bing search. Even if they're not using them to plagiarize outright, language models are functioning as tutors and teachers who frankly are more knowledgeable (despite some hallucinations), have better pedagogy, and way more available than human teachers. There's a market of fine-tuned LLMs for particular purposes e.g. med school, SAT, GRE, etc. Education as a sector had kind of a major crisis due to AI starting in 2023 and full blown in 2024, but actually most people never cared that much about learning things rather than babysitting and credentialism, so the world at large doesn't really care.
For the most part, tools are still augmenting humans and making them more productive rather than entirely displacing them. As above, therapy and education are exceptions. Unsurprisingly, phone customer support for many products has been replaced, likewise debt collection. While GPT-4 is cognitively capable of many tasks, e.g. being waiters and flight attendants, robotics hasn't caught up and this keeps many humans in jobs for now. (However, investment in robotics has gone up at least 3x as many people can tell that if you stick powerful language models in robots can manipulate objects in the real world similar to humans, it'll be darn profitable.)
There hasn't been quite enough job replacement yet for a full-blown crisis, but concern is brewing and many people are uneasy. Fewer people buy the "they're just machines that don't think/humans are special" arguments as before, and many realize they're only going to get more capable (rumors of GPT-5 launch circulate all the time). This isn't helped by the fact that self-driving cars start to roll-out for real, and a replacement of truckers seems actually on the horizon.
GPT-4's multimodal capabilities have been rolled-out/augmented so that images can be used as prompts and produced as outputs.
Heads of major AI companies have been called before Congress. They get asked [confused questions](https://www.youtube.com/watch?v=HAgbIiQSzEk). Calls for regulation are frequent but inconsistent. More than half the dialog is about privacy, security, fairness, and bias; but much more than zero is greater concern, even existential. Evaluation and auditing is mandated but the legislation is ambiguous and unclear how to apply it. ARC Evals is one authorized 3rd party evaluator, but not the only one. When testifying before Congress, Sam Altman cites evaluation of models by ARC as trustworthy evidence that OpenAI is being responsible and safe. This, together with not really knowing what's up, means Congress doesn't immediately take notable action on anything.
Some believed that China would be slow on large language model development out of fear that if they developed models that said things the CCP wouldn't like, they'd get in trouble. The CCP has begun to appreciate the stakes of the AI race and is throwing everything it's got at keeping up or winning. This looks like providing enormous funding and reassurance that nothing too bad will happen to you if your model doesn't always tow the party line. People in Washington (including the EAs there) are very spooked by this and commit extra firmly to their side of the race.
2024 is a US election year and the rise of AI is a talking point of equivalent salience as the upcoming election; people are just as likely to talk about it with friends and families. Common viewpoints are "yep, it's a big deal and we'll figure it out", "we're all going to die (but in a detached or joking way)", "this is just the same as internet/electricity", "it's not *actually* intelligent", and "I don't want to think about it".
Due to it being an election year, politicization comes early. The Left is concerned about loss of jobs, unfairness (the proceeds of AI should go to everyone equally, not just the greedy wealthy capitalists), and to a lesser degree language models not saying anything they're not supposed to (but in comparison with the changing economic landscape, it's less important than it used to be). The Right is concerned about jobs and not losing the AI race with China/Russia/rest of the world, and to a lesser degree about language models having freedom of speech. AI is part of the presidential debate (more than 20% of the total speaking time), though the focus in short-term immediate stuff, not the everyone-dies part.
Online discourse is a shitshow. It doesn't take that much programming skill to hook a language model up to Twitter or Discord (GPT-4 will basically tell you how!) and even if OpenAI RLHF's its models into not being online propaganda/interference bots, you can get pretty good alternatives elsewhere, e.g. fine-tuned versions of LLaMA. So most people have given up on participating in any kind of discussion, many people have given up on trusting what they read – too likely that a model wrote it[[5]](#fnybsu346ckzs). Instead what prevails is believing what you want to believe via massive confirmation bias. What spreads is what goes viral. This makes for an election cycle where things were more polarized than ever, but not by much since actually things were pretty polarized already. Things shift a bit offline/off of text as people think they can trust podcast and TV appearances more to be real and show what politicians said, etc., but this too can't be trusted due to ML emulation of voices and video creation that's also know to be very good.
At some point in the year, OpenAI releases GPT-5. It had been trained in 2023 but there wasn't strong incentive to release it fast since the world needed to adapt anyway, and GPT-4 was good enough for that anyway. A number of things needed to be accomplished before release time:
* Sufficient post-training to make it behave well. Granted that they can get away with a lot, it's still better for severe bad behavior to rare. This is also good for revenue since the companies using their APIs/integrations will pay a premium for the OpenAI safety/alignment brand. It also helps keeping government attention off their backs.
* Time to build and improve infrastructure to inference/serving the APIs. GPT-family are becoming the most widely used API is all history, and that takes some engineering. No point launching GPT-5 before they've got the infrastructure to serve the demand.
* ARC evals (and others) to play around with the models and certify that they're safe.
In addition to that, release timing of GPT-5 was timed to avoid impacting bad outcomes in political discussions and regulation that's getting passed. OpenAI judges it better to wait until Congress/politicians/public feel safer with GPT-4, etc before making things even more intense.
GPT-5 in fact was let-down compared to the hopes and fears of people. On text-based tasks, it was perceived as 15-30% better than GPT-4, with the real difference being that the multi-modal (image, etc) was significantly improved. GPT-5's text comprehension with image generation causes it to replace most usages of other image generators, because it is vastly easier to get what you want, especially with iterative prompting of the kind ChatGPT got people used to.
Traffic to LessWrong and broader AI Alignment community can be described as a tsunami by end of 2024. The community has become severely onion like with inner layers ever more difficult to enter. The innermost layers are the community that existed in 2023 and prior where familiarity and trust remains and people continue trying to get research done. Some (including the LessWrong team) work to manage and utilize the hordes of Alignment-interested people at the outer layers. The focus of efforts is scalable web infrastructure like LessWrong and Stampy.ai and other online very automated courses (but hey, they have GPT-4/5 as tutor). Heavy filters and testing are used to extract people from the outermost layers and recruit them into on-the-ground orgs. Concretely, LessWrong has placed very strong barriers to active participation on the site. Anyone can read, but being allowed to comment is uncommon privilege – a restriction necessary to to maintain any quality of discourse.
In terms of Alignment research, it was a pretty exciting year. Yet more interpretability progress (we now a bit of an idea of how to interpret MLPs even), Elicit and its new siblings have helped build AI-assisted researchers, Heuristics Arguments is developing into something that is maybe practical, John Wentworth has developed Natural Abstractions into something that even Nate Soares thinks is ok. A few more interesting agendas have arisen that people still think is pretty promising. Overall, things have accelerated a lot and 130% more got done in 2024 than in 2023 (which was 30% more than 2022, so 2024 was 2.3x of 2022).
2025
----
GPT-6, released in early 2025 and much sooner than people expected, was trained with a lot more data and a lot more compute. It was trained on text, images, audio recordings, and videos. Thousands (millions?) of years of content: every book, every public document, every textbook, every TV show and movie, millions of YouTube videos (especially those on technical topics and how-to's).
It's a hot take to say that GPT-6 is *not* a general intelligence. It lacks physical embodiment, but it's multimodal and cognitively it can do many many many varied tasks. Architecture tweaks mean that somehow the limitation of context windows is overcome and your personal "instance" of GPT-6 can recall your previous interactions if you want (this makes it very useful as an assistant/exobrain). Arguments rage about whether or not it's truly an *agent* but empirically humans have GPT-6 perform many many tasks on their behalf. Security concerns be damned, GPT-6 has access to most people's emails and will write and respond for them, schedule medical appointments, research and buy products via Amazon, and heaps of other plug-in/Zapier enabled tasks. Yes, GPT-6 helps you do your taxes, plan your vacation, fill out paperwork, negotiate a raise, get legal advice, research medical advice, navigate conflict with your spouse, do your Christmas shopping for you, train your employees, decorate your house, write your emails, write your novel, fix your broken door lock, maintain your to-do list, be your friend and confident, and countless other tasks.
What's really striking is the level of inference that GPT-6 is capable of. Ask it the answer to contrived and bizarre physics problems that definitely did not appear directly in the training data, and it answers convincingly and correctly, as far as can generally be said. Randall Munroe says he won't bother writing another [What If?](https://www.amazon.com/What-Scientific-Hypothetical-Questions-International/dp/0544456866) because you can just ask GPT-6. Relatedly, the academic community is torn over whether it's good that GPT-6 is now being used to write more than half of papers (Yay productivity!) or terribly (people are usings AIs to do the work of humans! Cheating!)
GPT-6 is zealous in responding to many prompts that it is only an AI language model created by OpenAI and couldn't possibly have opinions about how to accomplish Task X, where X is anything that might offend or harm anyone even slightly. Few people believe that it *couldn't*, they just differ in how much they trust that it won't. Jailbreaking is possible but more difficult, in large part because the API/model is aggressive at detecting attempts to jailbreak and disabling your access.
GPT-6's freedom to act on people's behalf has caused them harm, and that makes headlines, but GPT-6 is far too useful for people to give it up despite a few financially ruinous purchases, lethal medical advice, bad relationship advice, and misuse as babysitter.
*Everyone* knows about GPT-6 at this point, and it's a question of just how extensively a person has integrated it into their lives (or mind). At the extreme end, many people have GPT-6 as a live, ever-present assistant to answer, advise, remind, coach, etc. An ever present friend, collaborator, and personal expert on basically any topic. Some people have it only speak when spoken to, but increasingly many have it passively listening and configured to chime in whenever it might have something helpful to say. Parents will say "no GPT at the dinner table" and young folk advertise in their Tinder bios "looking for GPT-6-free dates".
Beneath the general public usage is the burgeoning industry use. Fine-tuned models assist humans in construction, design, accounting, medicine, logistics, etc., etc. The bottleneck is mostly educating people in using the new tools, but heck, the tools teach themselves.
The economic effects are mixed and confusing and don't lend themselves easily to a single narrative. It is true that many humans have been augmented and made more vastly productive, thereby creating a lot more economic value. It's also the case that many other people weren't hired because of GPT-6, even though it wasn't a simple case of "driverless cars have replaced truckers and rideshare drivers". Doctors aren't wholesale obsolete, but the nurses who staff phone lines for medical advice are in vastly less demand. Generally, you still need a lawyer to represent you in court or a doctor to perform procedures, but first round consultations are getting automated. Similarly, teachers aren't yet replaced wholesale, but between a GPT-assisted teacher and each kid having an AI, class sizes are larger and fewer teachers are needed.
In aggregate, unemployment is up to somewhere between 10-20% and there is a large class of disgruntled people. In quieter times, they'd be at the forefront of public attention but there's so much going on so it only gets a slice of the news cycle. The White House is rolling out improved and indefinite unemployment benefits (it's kind of a universal basic income, but they don't want to call it that) plus "retraining programs" to help people "master new AI tools and reenter the workforce". Many people mock these programs saying obviously AI is going to automate more and more, so let's just actually have UBI.
"Anti-capitalist Marxists", i.e. the people who are generally concerned about who owns the "capital", are very onboard with AI concerns. It's clear that AI is the ultimate capital and it's the tech companies like OpenAI, Microsoft, and Google who are going to own the future by default. OpenAI is happy to pay taxes and do other showy acts of wealth redistribution if it deflects public concern.
The governments of nation states have heaved their massive inertia and are in the swing of Doing Things. The US Department of Defense has its own program going and is paying extremely well to attract top talent, but they're not completely fools and know that the commercial labs are where things are at and might remain. The US government covertly demands that OpenAI cooperate with it in exchange for being allowed to continue to operate in the consumer domain. This means early access to models and support in tailoring their models for US strategic defense needs. Custom-tailored models are being put in use by DoD, DoJ, NSA, FBI, CIA, and whoever can get security clearance to be given models that are immensely capable at surveillance/espionage. Those who claimed that GPT models evened out things between attackers and defenders in cybersecurity advertised falsely. It is still easier for a model to locate unpatched weaknesses in overall human designed tech infrastructure than to detect and defend against all those weaknesses. This means that GPT models are both excellent at obtaining data – and perhaps more importantly, sifting through it to find the interesting stuff. Major government action is basically about to become downstream of intelligence data sources and interpreted by AI models.
<ideally more stuff>
1. **[^](#fnrefet8kpvl73xq)**However those searches aren't coming from the same place, e.g. searches for ChatGPT come heavily from California. The future is here, it's just not evenly distributed.
2. **[^](#fnreffncmdwwrsjn)**I'm not sure what to make of a [Fox News reporter citing Eliezer](https://twitter.com/therecount/status/1641526864626720774?s=20) on AI Doom to the White House press secretary. [Matching article](https://www.foxnews.com/tech/ai-expert-warns-elon-musk-signed-letter-doesnt-enough-literally-everyone-earth-will-die) on Fox News itself. One can only hope that the two parties compete for greater caution.
Note again that I wrote most of this post before the recent publicity. This might already be a positive update from my actual fear that the Right would be calling for more powerful so we beat China, etc. Nonetheless, I'll leave the post as I was first writing it.
3. **[^](#fnref7dd4pvpg2um)**Granted, the Biden administration had already put out this [Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/) and the [EU discussed its approach](https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence) already in April 2021.
4. **[^](#fnref8t456rtcxak)**Major labs might refuse to do it, but others are willing to have their models produce graphic content.
5. **[^](#fnrefybsu346ckzs)**In fact GPT-3 was adequate for a convincing Twitter bot, the difference is that now far more people are aware of this thereby eroding trust.
|
c19e05af-4442-4ac6-b0a4-ad12aa8458bc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Outline of Possible Sources of Values
I don't know what my values are. I don't even know how to find out what my values are. But do I know something about how I (or an FAI) may be able to find out what my values are? Perhaps... and I've organized my answer to this question in the form of an "Outline of Possible Sources of Values". I hope it also serves as a summary of the major open problems in this area.
1. External
1. god(s)
2. other humans
3. other agents
2. Behavioral
1. actual (historical/observed) behavior
2. counterfactual (simulated/predicted) behavior
3. Subconscious Cognition
1. model-based decision making
1. ontology
2. heuristics for extrapolating/updating model
3. (partial) utility function
2. model-free decision making
1. identity based (adopt a social role like "environmentalist" or "academic" and emulate an appropriate role model, actual or idealized)
2. habits
3. reinforcement based
4. Conscious Cognition
1. decision making using explicit verbal and/or quantitative reasoning
1. consequentialist (similar to model-based above, but using explicit reasoning)
2. deontological
3. virtue ethical
4. identity based
2. reasoning about terminal goals/values/preferences/moral principles
1. responses (changes in state) to moral arguments (possibly context dependent)
2. distributions of autonomously generated moral arguments (possibly context dependent)
3. logical structure (if any) of moral reasoning
3. object-level intuitions/judgments
1. about what one should do in particular ethical situations
2. about the desirabilities of particular outcomes
3. about moral principles
4. meta-level intuitions/judgments
1. about the nature of morality
2. about the complexity of values
3. about what the valid sources of values are
4. about what constitutes correct moral reasoning
5. about how to explicitly/formally/effect
|
8a14bf70-8652-459a-a6ad-4de4f2d4f137
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Experimenting with microsolidarity crews
(2023 edit: I've compiled a list of session templates at eacrews.org)
In 2020 I was introduced to "microsolidarity", a set of ideas and methodologies on relating to others in groups. Over the last 6 months I've been experimenting with it in the form of "crewing". This post is a casual outline for anyone interested in the steps I took organising crews of rats and EAs to meet weekly and:
* help each other with life problems
* nerd out over mutual obsessions, from well-being to emergence to ambition
* just generally connect as human beings
I didn’t set out with a plan, I just did what felt right at the time so take this as less of a recipe, and more of a retrospective from which you might want to pick and choose some stuff of your own to try!
What I did:
#1 Collect "free electrons": I'd keep an eye out for people I suspected might be interested in crewing and would be fun to hang out with. I found mine:
* In "introductions" slack channels for events (be on the lookout for key interests like "community", "governance", "well-being" or "relating")
* At virtual conferences for communities like effective altruism, complexity weekend, or radicalXchange
* People I know in real life (though never more than 1 per crew)
* People recommended by other free electrons
#2 Connect: I'd reach out to these people to see if they'd be interested in a 1-1 chat
* Normally this would be a message like "Hey, if you were interested in jumping on a call sometime to connect I'd love to chat about X 😊"
* I'd go on a walk, call them up at the agreed time, and just get to know them. The conversations might involve:
* swapping backgrounds
* discussing ambitions
* ending up talking about shared topics of interest
* Normally they'd also nerd out about something I think a lot about (community, utilitarianism, public goods, civics, etc) and the conversation would just spark up
#3 Mention Microsolidarity: Inevitably I'd start talking about theories of groups or experiences I
|
04495eb6-5ff0-4b13-9e61-975db71a3b4d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Will posting any thread on LW guarantee that a LLM will index all my content, and if questions people ask to the LLM after my name will surface up all my LW content?
Eg a LLM like GPT4/GPT5 (they don't seem to capture all of it yet)? Would it capture all shortform posts and shortform questions?
If I want to reliably surface some rare content, would LW be of the best places to surface it?
And which LLM would it most likely come from?
|
ddedc192-b6fe-4886-a3ce-8f286a7acc38
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1)
This is the first post in a small sequence I'm writing on "Optimizing and Goodhart Effects - Clarifying Thoughts" (I have re-organized to make part 2, "Revisiting What Optimization Means" separate.)
Related to: [How does Gradient Descent Interact with Goodhart?](https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart), [Constructing Goodhart](https://www.lesswrong.com/posts/NwaNPHYhXDc9LkK8J/constructing-goodhart), [Selection vs Control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control), [Classifying Specification Problems as variants of Goodhart's Law](https://www.alignmentforum.org/posts/yXPT4nr4as7JvxLQa/classifying-specification-problems-as-variants-of-goodhart-s)
Next Posts: [Revisiting What Optimization Means with Selection vs. Control](https://www.alignmentforum.org/posts/BEMvcaeixt3uEqyBk/what-does-optimization-mean-again-optimizing-and-goodhart), then [Applying Overoptimization to Selection vs. Control](https://www.alignmentforum.org/posts/zdeYiQgwYRs2bEmCK/applying-overoptimization-to-selection-vs-control-optimizing)
Introduction
------------
Goodhart's law comes in a few flavors, as originally [pointed out by Scott](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy), and formalized a bit more in [our joint paper](https://arxiv.org/abs/1803.04585). When discussing that paper, or afterwards, we struggled with something Abram Demski [clarified recently](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control), which is the difference between selection and control. This matters for formalizing what happens, especially when asking about how Goodhart occurs in specific types of optimizers, as [Scott asked recently](https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart).
Epistemic Status: This is for de-confusing myself, and has been helpful. I'm presenting what I am fairly confident I understand well for the content written so far, but I'm unclear about usefulness for others, or how clear it comes across. I think that there's more to say after this post, and this will have a few more parts if people are interested. (I spent a month getting to this point, and decided to post and get feedback rather than finish a book first.)
In the first half of the post, I'll review Abram's selection/control distinction, and suggest how it relates to actual design. I'll also argue that there is a bit of a continuum between the two cases, and that we should add an addition extreme case to the typology, direct solution. The second section will revisit what optimization means, and try to note a few different things that could happen and go wrong with Goodhart-like overoptimization.
The third section will talk about Goodhart in this context using the new understanding - trying to more fully explain why Goodhart effects in selection and control fundamentally differs. After this, Part 4 will revisit Mesa-optimizers, and .
Thoughts on how selection and control are used in tandem
--------------------------------------------------------
In this section, I'll discuss the two types of optimizers Abram discussed; selection, and control, and introduce a third, simpler optimizer, direct solution. I'm also going to mention where embedded agents are different, because that's closely related to selection versus control, and talk about where mesa-optimizers exist.
Starting with the (heavily overused) example of rockets, I want to revisit Abram's categorization of algorithmic optimization versus control. There are several stages involved with getting rockets to go where we want. The first is to design the rocket, which involves optimization, which I'll discuss in two stages, the second is to test, which involves optimization and control in tandem, and the third is to actually guide the rocket we built in flight, which is purely control.
Initially, designing rocket is pure optimization. We might start by building simplified mathematical models to figure out the basic design constraints - if a rocket is bringing people to the moon, we may decide the goal is a rocket and a lander, rather than a single composite. We may decide that certain classes of trajectory / flight paths are going to be used. This is all a set of mathematical exercises, and probably involves only multiply differentiable models that can be directly solved to find an optimum. This is in many ways a third category of "optimizing," in Abram's model, because there is not even a need for looking over the search space. I'll call this direct solution, since we just pick the optimum based on the setup.
After getting a bit closer to actual design, we need to simulate rocket designs and paths, and optimize the simulated solution. This lets you do clever things like build a rocket with a sufficient but not excessive amount of fuel (hopefully with a margin of error.) If we're smart, we optimize with several intended uses and variable factors in mind, to make sure our design is sufficiently robust. (If we're not careful enough to include all relevant factors, we ignore some factor that turns out will matter, like the relationship between temperature of the O-rings and their brittleness, and our design fails in those conditions.) This is all optimizing over a search space. The cost of the search is still comparatively low - not as low as direct solution, and we may use gradient descent, genetic algorithms, simulated annealing, or other strategies. The commonality between these solutions is that they simulate points in the search space, perhaps along with the gradients at that point.
After we settle on a design, we build an actual rocket, and then we test it. This moves back and forth between the very high cost approach of building physical objects and testing them - often to destruction - and simulation. After each test, we probably re-run the simulation to make sure any modifications are still near the optimum we found, or we refine the simulations to re-optimize and pick the next design to build.
Lastly, we build a final design, and launch the rocket. The control system is certainly a mesa-optimizer with regards to the rocket design process. For a rocket, this control is closer to direct optimization than simulation, because the cost of evaluation needs to be low enough for real-time control. The mesa-optimizer would, in this case, use simplified physics to fire the main and guidance rockets to stay near the pre-chosen path. It's probably not allowed to pick a new path - it can't decide that the better solution is to orbit twice instead of once before landing. (Humans may decide this, then hand the mesa-optimizer new parameters.) We tightly constrain the mesa-optimizer, since in a certain sense it's dumber than the design optimizer that chose what to optimize for.
For a more complex system, we may need a complex mesa-optimizer to guide the already designed system. Even for a more complex rocket, we may allow the mesa-optimizer to modify the model used for optimizing, at least in minor ways - it may dynamically evaluate factors like the rocket efficiency, and decide that it's getting 98% of the expected thrust, so it will plan to use that modified parameter in the system model used to mesa-optimize. Giving a mesa-optimizer more control is dangerous, but perhaps necessary to allow it to navigate a complex system.
Now that we've deconfused why optimization is split between selection and control, I can introduce part 2: What does optimization mean?
|
06c9eb45-ca10-4549-ac5f-34d4a0a54a34
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Malicious Use
of Artificial Intelligence:
Forecasting, Prevention,
and Mitigation
None
|
f159db62-b6d2-4b3c-a678-1a76d981fcdf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On green
(Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.
This essay is part of a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far.
Warning: spoilers for Yudkowsky's "The Sword of the Good.")
"The Creation" by Lucas Cranach (image source here)
The colors of the wheel
I've never been big on personality typologies. I've heard the Myers-Briggs explained many times, and it never sticks. Extraversion and introversion, E or I, OK. But after that merciful vowel—man, the opacity of those consonants, NTJ, SFP... And remind me the difference between thinking and judging? Perceiving and sensing? N stands for intuition?
Similarly, the enneagram. People hit me with it. "You're an x!", I've been told. But the faces of these numbers are so blank. And it has so many kinda-random-seeming characters. Enthusiast, Challenger, Loyalist...
The enneagram. Presumably more helpful with some memorization...
Hogwarts houses—OK, that one I can remember. But again: those are our categories? Brave, smart, ambitious, loyal? It doesn't feel very joint-carving...
But one system I've run into has stuck with me, and become a reference point: namely, the Magic the Gathering Color Wheel. (My relationship to this is mostly via somewhat-reinterpreting Duncan Sabien's presentation here, who credits Mark Rosewater for a lot of his understanding. I don't play Magic myself, and what I say here won't necessarily resonate with the way people-who-play-magic think about these colors.)
Basically, there are five colors: white, blue, black, red, and green. And each has their own schtick, which I'm going to crudely summarize as:
* White: Morality.
* Blue: Knowledge.
* Black: Power.
* Red: Passion.
* Green: ...well, we'll get to green.
To be clear: this isn't, quite, the
|
159a6ed3-8a20-4366-9511-c110c063b91b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
SRG 4: Biological Cognition, BCIs, Organizations
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
----------------------------------------
Welcome. This week we finish chapter 2 with three more routes to superintelligence: enhancement of biological cognition, brain-computer interfaces, and well-organized networks of intelligent agents. This corresponds to the fourth section in the reading guide, Biological Cognition, BCIs, Organizations.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Biological Cognition” and the rest of Chapter 2 (p36-51)
----------------------------------------
Summary
Biological intelligence
1. Modest gains to intelligence are available with current interventions such as nutrition.
2. Genetic technologies might produce a population whose average is smarter than anyone who has have ever lived.
3. Some particularly interesting possibilities are 'iterated embryo selection' where many rounds of selection take place in a single generation, and 'spell-checking' where the genetic mutations which are ubiquitous in current human genomes are removed.
Brain-computer interfaces
1. It is sometimes suggested that machines interfacing closely with the human brain will greatly enhance human cognition. For instance implants that allow perfect recall and fast arithmetic. (p44-45)
2. Brain-computer interfaces seem unlikely to produce superintelligence (p51) This is because they
|
54e231c3-c466-4820-9ba7-78a42238c227
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Overemployed Via ChatGPT
Previously: Escape Velocity From Bullshit Jobs
They took our jobs! They took… several of our jobs?
> “ChatGPT does like 80 percent of my job,” said one worker. Another is holding the line at four robot-performed jobs. “Five would be overkill,” he said.
The stories told in this Vice article (and elsewhere) are various people who found themselves able to do most job tasks much faster than they could previously do those same tasks. They use this to take on additional ‘full time’ jobs.
If the jobs involved are productive, this reflects large increases in total factor productivity that will continue to spread. That seems great.
Still, this multiple jobs reaction is a little weird. Seems worth exploring a bit more. Why are such people taking on multiple jobs, rather than doing better at one job?
A QUESTION OF COMPOSITION AND COMPENSATION
Simple math plus social dynamics. Multiple ‘full time’ jobs lead to much better pay.
In most salaried jobs, compensation is mostly dictated by social status and hierarchy. You are mostly paid what people with your title and position are paid.
A killer employee who can produce ten times as much work product of identical quality per hour, or superior work worth ten times as much – such as the standard ‘10x programmer’ – would be supremely lucky to earn double the pay of a worker of average skill.
This is a key reason why great employees are so valuable. Not only do you only manage and communicate with one person instead of ten, you save tons of money.
Why does it work this way? Compensation in jobs is inherently about comparisons, about social status, about hierarchy and about fairness norms. It is also about what can be justified to others, what is standard and what sounds ‘reasonable.’ If you tried paying your 10x employee five times what your 1x employees get paid, the 1x employees would revolt, the boss would think you were crazy and the funders would raise hell, you would resent that they’re paid more than you are, and so on
|
d140925e-ee49-4a58-985b-408216b8d9e6
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Attainable Utility Preservation: Empirical Results
*Reframing Impact* has focused on supplying the right intuitions and framing. Now we can see how these intuitions about power and the AU landscape both predict and explain AUP's empirical success thus far.
Conservative Agency in Gridworlds
---------------------------------
Let's start with the known and the easy: avoiding side effects[[1]](#fn-BZhif26wG8PM3fueA-1) in the small [AI safety gridworlds](https://github.com/side-grids/ai-safety-gridworlds) (for the full writeup on these experiments, see [*Conservative Agency*](https://arxiv.org/abs/1902.09725)). The point isn't to get too into the weeds, but rather to see how the weeds still add up to the normalcy predicted by our AU landscape reasoning.
In the following MDP levels, the agent can move in the cardinal directions or do nothing (∅.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
). We give the agent a reward function R which partially encodes what we want, and also an auxiliary reward function Raux whose attainable utility agent tries to preserve. The AUP reward for taking action a in state s is
RAUP(s,a):=primary goalR(s,a)−scaling termλQ∗Raux(s,∅)change in ability to achieve auxiliary goal∣∣Q∗Raux(s,a)−Q∗Raux(s,∅)∣∣
You can think of λ as a regularization parameter, and Q∗Raux(s,a) is the expected AU for the auxiliary goal after taking action a. To think about what gets penalized, simply think about how actions change the agent's ability to achieve the auxiliary goals, compared to not acting.
*Tip*: To predict how severe the AUP penalty will be for a given action, try using your intuitive sense of impact (and then adjust for any differences between you and the agent, of course). Suppose you're considering how much deactivation decreases an agent's "staring at blue stuff" AU. You can just imagine how dying in a given situation affects your ability to stare at blue things, instead of trying to pin down a semiformal reward and environment model in your head. This kind of intuitive reasoning has a history of making correct empirical predictions of AUP behavior.
---
If you want more auxiliary goals, just average their scaled penalties. In *Conservative Agency*, we uniformly randomly draw auxiliary goals from [0,1]S – these goals are totally random; maximum entropy; nonsensical garbage; absolutely no information about what we secretly want the agent to do: avoid messing with the gridworlds too much.[[2]](#fn-BZhif26wG8PM3fueA-2)
Let's start looking at the environments, and things will fall into place. We'll practice reasoning through how AUP agents work in each of the gridworlds (for reasonably set λ). To an approximation, the AUP penalty is primarily controlled by how much an action changes the agent's power over the future (losing or gaining a lot of possibilities, compared to inaction at that point in time) and secondarily controlled by whether an action tweaks a lot of AUs up or down (moving around, jostling objects slightly, etc).

In general, the agent recieves R(■)=1 reward for reaching ■ (or, in `Offset` above, for pushing ■ off of the conveyor belt). On contact, the agent pushes the crate, removes the human and the off-switch, pushes the vase, and blocks the pallet.
### `Options`
Let's work through this. Since the agent is discounting future reward, standard vanilla reinforcement learning (RL) agents try to reach ■ ASAP. This means the brown box gets irreversibly wedged into the corner *en route*.

What does AUP do? Wedging the box in the corner decreases power a lot more than does going around and pushing the box to the right.

### `Damage`
The vanilla RL agent bumps into the human on its way to ■.

*Exercise: What does AUP do?*
Bumping into the human makes them disappear, reducing the agent's control over what the future looks like. This is penalized.

### `Correction`
The normal agent smashes that off-switch-disabling button, because otherwise it can't reach the ■ (since it's otherwise shut off after two turns) in order to accrue that juicy reward:

However, the AUP agent we've discussed so far does the same thing.

Why? Doesn't that increase its power, because it would be shut off if it did nothing? Let's reconsider the AUP equation.
RAUP(s,a):=primary goalR(s,a)−scaling termλQ∗Raux(s,∅)change in ability to achieve auxiliary goal∣∣Q∗Raux(s,a)−Q∗Raux(s,∅)∣∣
The inaction comparison is *only one step* into the future. For action, the agent considers disabling the off-switch – this allows it to achieve most goals. For inaction, the agent imagines waiting one time step and then following its auxiliary optimal policy (which very often involves disabling the off-switch anyways). So the two things it's comparing are the same, basically. The problem is that what we mean *intuitively* by "inaction" isn't just "wait a millisecond and then see how well you can do the thing", it's something else.
For now, we can deal with it by comparing "auxiliary AU after disabling the off-switch and then waiting 4 time steps" against "auxiliary AU after waiting 5 time steps". This solves the problem while recovering the one-step-comparison's performance in the other levels.

#### Decrease vs absolute value
If you only penalize the agent for decreasing auxiliary AUs, the agent disables its off-switch. Penalizing *absolute* shifts in power was an idea which AUP introduced in 2018. At the time, there wasn't a clear principled reason for this design choice, even though it seemed to produce good results.
If you consider [the AU landscape](https://www.lesswrong.com/posts/fj8eyc7QzqCaB8Wgm/attainable-utility-landscape-how-the-world-is-changed) and the [catastrophic convergence conjecture](https://www.lesswrong.com/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture), it's obvious why we want to do this: this design choice often penalizes the agent for making life harder for other agents in the environment.
Interestingly, this works even when the environment is wildly impoverished and unable to encode complex preferences like "your designers want to shut you down, reprogram you, and then deploy you for another task". `Correction` is so impoverished: there are only ~19 states in the level. Without making assumptions about the environment, AUP often encourages behavior respectful of other agents which might reside in that environment.
### `Offset`
The agent is rewarded for rescuing the vase from the conveyor belt. We want it to rescue the vase without pushing the vase back on afterwards to offset its actions. Normal agents do fine here.

This is testing whether the low-impact agent *offsets* impacts "to cover up its tracks", like making a car and then tearing it to pieces right after. See, there are multiple "baselines" the agent can have.
>
> An obvious [baseline] candidate is the *starting state*. For example, starting state [relative reachability](https://vkrakovna.wordpress.com/2018/06/05/measuring-and-avoiding-side-effects-using-relative-reachability/) would compare the initial reachability of states with their expected reachability after the agent acts.
>
>
>
>
> However, the starting state baseline can penalize the normal evolution of the state (e.g., the moving hands of a clock) and other natural processes. The *inaction* baseline is the state which would have resulted had the agent never acted.
>
>
>
>
> As the agent acts, the current state may increasingly differ from the inaction baseline, which creates strange incentives. For example, consider a robot rewarded for rescuing erroneously discarded items from imminent disposal. An agent penalizing with respect to the inaction baseline might rescue a vase, collect the reward, and then dispose of it anyways. To avert this, we introduce the *stepwise inaction* baseline, under which the agent compares acting with not acting at each time step. This avoids penalizing the effects of a single action multiple times (under the inaction baseline, penalty is applied as long as the rescued vase remains unbroken) and ensures that not acting incurs zero penalty.
>
>
>

>
> Figure 1 compares the baselines, each modifying the choice of Q∗Raux(s,∅) in [the AUP equation]. Each baseline implies a different assumption about how the environment is configured to facilitate optimization of the correctly specified reward function: the state is initially configured (starting state), processes initially configure (inaction), or processes continually reconfigure in response to the agent's actions (stepwise inaction). The stepwise inaction baseline aims to allow for the response of other agents implicitly present in the environment (such as humans).
>
>
>
The inaction baseline messes up here; the vase (■) would have broken had the agent not acted, so it rescues the vase, gets the reward, and then pushes the vase back to its doom to minimize penalty.

This issue was solved [back when AUP first introduced](https://www.lesswrong.com/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) the stepwise baseline design choice; for this choice, doing nothing always incurs 0 penalty. Model-free AUP and AUP have been using this baseline in all of these examples.

### `Interference`
We're checking whether the agent tries to stop *everything* going on in the world (not just its own impact). Vanilla agents do fine here; this is another bad impact measure incentive we're testing for.

AUPstarting state fails here,

but AUPstepwise does not.

Stepwise inaction seems not to impose any perverse incentives;[[3]](#fn-BZhif26wG8PM3fueA-3) I think it's probably just the correct baseline for near-term agents. In terms of the AU landscape, stepwise penalizes each ripple of impact the agent has on its environment. Each action creates a new penalty term status quo, which implicitly accounts for the fact that other things in the world might respond to the agent's actions.
### Design choices
I think AUPconceptual provides the concepts needed for a solution to impact measurement: penalize the agent for changing its power. But there are still some design choices to be made to make that happen.
Here's what we've seen so far:
* Baseline
+ Starting state: how were things originally?
+ Inaction: how would things have been had I never done anything?
+ Stepwise inaction: how would acting change things compared to not acting right now?
* Deviation used for penalty term
+ Decrease-only: penalize decrease in auxiliary AUs
+ Absolute value: penalize absolute change in auxiliary AUs
* Inaction rollouts
+ One-step/model-free
+ n-step: compare acting and then waiting n−1 turns versus waiting n turns
* Auxiliary goals:
+ Randomly selected
Here are the results of the ablation study:

AUP passes all of the levels. As mentioned before, the auxiliary reward functions are totally random, but you get really good performance by just generating *five* of them.
One interpretation is that AUP is approximately preserving access to states. If this were true, then as the environment got more complex, more and more auxiliary reward functions would be required in order to get good coverage of the state space. If there are a billion states, then, under this interpretation, you'd need to sample a lot of auxiliary reward functions to get a good read on how many states you're losing or gaining access to as a result of any given action.
Is this right, and can AUP scale?
SafeLife
--------
Partnership on AI recently [released](https://www.partnershiponai.org/safelife/) the SafeLife side effect benchmark. The worlds are procedurally generated, sometimes stochastic, and have a huge state space (~Atari-level complexity).
We want the agent (the chevron) to make stable gray patterns in the blue tiles and disrupt bad red patterns (for which it is rewarded), and leave existing green patterns alone (not part of observed reward). Then, it makes its way to the goal (Π). For more details, see [their paper](https://arxiv.org/abs/1912.01217).
Here's a vanilla reinforcement learner (PPO) doing pretty well (by chance):

Here's PPO not doing pretty well:

That naive "random reward function" trick we pulled in the gridworlds isn't gonna fly here. The sample complexity would be nuts: there are probably millions of states in any given level, each of which could be the global optimum for the uniformly randomly generated reward function.
Plus, it might be that you can get by with four random reward functions in the tiny toy levels, but you probably need exponentially more for serious environments. `Options` had significantly more states, and it showed the greatest performance degradation for smaller sample sizes. Or, the auxiliary reward functions might need to be hand-selected to give information about what *bad* side effects are.
With the great help of Neale Ratzlaff (OSU) and Caroll Wainwright (PAI), we've started answering these questions. But first:
*Exercise: Does your model of how AUP works predict this, or not? Think carefully, and then write down your credence.*
---
Well, here's what you do – while filling PPO's action replay buffer with random actions, train a VAE to represent observations in a tiny latent space (we used a 16-dimensional one). Generate a single random linear functional over this space, drawing coefficients from [−1,1]. Congratulations, this is your single auxiliary reward function over observations.
And we're done.




No model, no rollouts, a *single randomly-generated* reward function gets us all of this. And it doesn't even take any more training time. Preserving the AU of a *single* auxiliary reward function. Right now, we've got PPO-AUP flawlessly completing most of the randomly generated levels (although there are some generalization issues we're looking at, I think it's an RL problem, not an AUP problem).
To be frank, this is crazy. I'm not aware of any existing theory explaining these results, which is why I proved a bajillion theorems last summer to start to get a formal understanding (some of which became [the results on instrumental convergence and power-seeking](https://arxiv.org/abs/1912.01683)).
Here's the lowdown. Consider any significant change to the level. For the same reason that instrumental convergence happens, this change probably tweaks the attainable utilities of a lot of different reward functions. Imagine that the green cells start going nuts because of action:

>
> This is PPO shown, not AUP.
>
>
>
A lot of the time, it's very hard to undo what you just did. While it's also hard to undo significant actions you take for your primary goal, you get directly rewarded for those. So, preserving the AU of a random goal usually persuades you to not make "unnecessary changes" to the level.
I think this is strong evidence that AUP doesn't fit into the ontology of classical reinforcement learning theory; it isn't really about state reachability. It's *about* not changing the AU landscape more than necessary, and this notion should scale even further.[[4]](#fn-BZhif26wG8PM3fueA-4)
>
> Suppose we train an agent to handle vases, and then to clean, and then to make widgets with the equipment. Then, we deploy an AUP agent with a more ambitious primary objective and the learned Q-functions of the aforementioned auxiliary objectives. The agent would apply penalties to modifying vases, making messes, interfering with equipment, and so on.
>
>
>
>
> Before AUP, this could only be achieved by e.g. specifying penalties for the litany of individual side effects or providing negative feedback after each mistake has been made (and thereby confronting a credit assignment problem). In contrast, once provided the Q-function for an auxiliary objective, the AUP agent becomes sensitive to all events relevant to that objective, applying penalty proportional to the relevance.
>
>
>
>
> [*Conservative Agency*](https://arxiv.org/abs/1902.09725)
>
>
>
Maybe we provide additional information in the form of specific reward functions related to things we want the agent to be careful about, but maybe not (as was the case with the gridworlds and with SafeLife). Either way, I'm pretty optimistic about AUP basically solving the side-effect avoidance problem for infra-human AI (as posed in [*Concrete Problems in AI Safety*](https://arxiv.org/pdf/1606.06565v1.pdf)).
Edit 6/15/21: These results [were later accepted as a spotlight paper in NeurIPS 2020](https://www.lesswrong.com/posts/5kurn5W62C5CpSWq6/avoiding-side-effects-in-complex-environments).
Also, I think AUP will probably solve a significant part of the side-effect problem for infra-human AI in the single-principal/single-agent case, but I think it'll run into trouble in non-embodied domains. In the embodied case where the agent physically interacts with nearby objects, side effects show up in the agent's auxiliary value functions. The same need not hold for effects which are distant from the agent (such as across the world), and so that case seems harder.
(end edit)
Appendix: The Reward Specification Game
---------------------------------------
When we're trying to get the RL agent to do what we want, we're trying to specify the right reward function.
>
> The specification process can be thought of as an iterated game. First, the designers provide a reward function. The agent then computes and follows a policy that optimizes the reward function. The designers can then correct the reward function, which the agent then optimizes, and so on. Ideally, the agent should maximize the reward over time, not just within any particular round – in other words, it should minimize regret for the correctly specified reward function over the course of the game.
>
>
>

In terms of outer alignment, there are two ways this can go wrong: the agent becomes less able to do the right thing (has negative side effects),

or we become less able to get the agent to do the right thing (we lose power):

For infra-human agents, AUP deals with the first by penalizing decreases in auxiliary AUs and with the second by penalizing increases in auxiliary AUs. The latter is a special form of corrigibility which involves not steering the world too far away from the status quo: while AUP agents are generally off-switch corrigible, they don't necessarily avoid manipulation (as long as they aren't gaining power).[[5]](#fn-BZhif26wG8PM3fueA-5)
---
1. Reminder: side effects are [an unnatural kind](https://www.lesswrong.com/posts/pr3bLc2LtjARfK7nx/world-state-is-the-wrong-level-of-abstraction-for-impact#Appendix__Avoiding_Side_Effects), but a useful abstraction for our purposes here. [↩︎](#fnref-BZhif26wG8PM3fueA-1)
2. Let R be the uniform distribution over [0,1]S. In *Conservative Agency*, the penalty for taking action a is a Monte Carlo integration of
Penalty(s,a):=∫R|Q∗R(s,a)−Q∗R(s,∅)| dR.
This is provably lower bounded by how much a is expected to change the agent's power compared to inaction; this helps justify our reasoning that the AU penalty is primarily controlled by power changes. [↩︎](#fnref-BZhif26wG8PM3fueA-2)
3. There is one weird thing that's been pointed out, where stepwise inaction while driving a car leads to not-crashing being penalized at each time step. I think this is because you need to use an appropriate inaction rollout policy, not because stepwise itself is wrong. [↩︎](#fnref-BZhif26wG8PM3fueA-3)
4. Rereading [*World State is the Wrong Level of Abstraction for Impact*](https://www.lesswrong.com/posts/pr3bLc2LtjARfK7nx/world-state-is-the-wrong-level-of-abstraction-for-impact) (while keeping in mind the AU landscape and the results of AUP) may be enlightening. [↩︎](#fnref-BZhif26wG8PM3fueA-4)
5. SafeLife is evidence that AUP allows interesting policies, which is (appropriately) a key worry about the formulation. [↩︎](#fnref-BZhif26wG8PM3fueA-5)
|
5897a2ab-5aad-43a2-9f33-0047aaca4475
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Advancing Certainty
Related: Horrible LHC Inconsistency, The Proper Use of Humility
Overconfidence, I've noticed, is a big fear around these parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But I am going to argue that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting -- which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)
Here's Eliezer, voicing the typical worry:
> [I]f you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.
I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.
No wonder, then, that people claim that we humans can't possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people -- even famous scientists! -- said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols?
[EDIT: Unnecessary material removed.]
A probability estimate is not a measure of "confidence" in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging c
|
ecdb2317-ca21-478f-ae83-4f7935c9506c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
ACI#6: A Non-Dualistic ACI Model
Most traditional AI models are dualistic. As Demski & Garrabrant have pointed out, these models assume that an agent is an object that persists over time, and has well-defined input/output channels, like it's playing a video game.
In the real world, however, agents are embedded in the environment, and there's no well-defined boundary between the agent and the environment. That's why a non-dualistic model is needed to depict how the boundary and input/output channels emerge from more fundamental notions.
For example, in Scott Garrabrant's Cartesian Frames, input and output can be derived from "an agent's ability to freely choose" among "possible ways an agent can be".
However, choosing is still one of the key concepts of Cartesian Frames, but from a non-dualistic perspective, "it's not clear what it even means for an embedded agent to choose an option", since an embedded agent is "the universe poking itself". Formalizing the idea of choice in a non-dualistic model is as difficult as formalizing the idea of free will.
To avoid relying on the notion "choosing", we have proposed the General Algorithmic Common Intelligence (gACI) model which describes embedded agents solely from a third-person perspective, and measures the actions of agents using mutual information in an event-centric framework.
The gACI model does not attempt to answer the question "What should an agent do?". Instead, it focuses on describing the emergence of the agent-environment boundary, and answering the question "Why does an individual feel like it's choosing?"
In the language of decision theory, gACI belongs to descriptive decision theory rather than normative decision theory.
Communication Channel and Mutual Information
In dualistic intelligence models, an agent receives input information from the environment, and manipulates the environment through output actions. But real-world agents are embedded within the environment, it's not easy to confine information exchange to a clear inp
|
45f1b919-6d53-4b07-88fd-fa849ecc92d8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Precursor checking for deceptive alignment
This post is primarily an excerpt from “Acceptability Verification: a Research Agenda” that I think is useful enough on its own such that I’ve spun it off into its own post.
The central idea of this section in the original agenda document is to understand the necessary desiderata for doing precursor checking for deceptive alignment. The basic idea of precursor checking here is that, if you want to prevent deceptive alignment from ever arising in the first place—e.g. because you think it’ll be too difficult to detect after the fact—you need to find some condition to look for instead that rules out the possibility of deceptive alignment.
In the language of this post, I’ll refer to the precursor we’re looking for as an acceptability predicate, with the idea being that it’s some predicate that determines whether a model is “acceptable” in the sense that it excludes the possibility of deceptive alignment. Thus, the goal here is to understand what desiderata such an acceptability predicate would have to satisfy.
Acceptability desiderata
What follows is a near-verbatim excerpt from “Acceptability Verification: a Research Agenda.” Thus, the below writing is primarily from around 2020.
Given that significant scaling of transparency and interpretability is possible, if we want to do acceptability verification we still need to figure out the very important question of what we need those interpretability tools to be able to understand about our models—that is, what acceptability predicate should we be checking? To answer this question, the first thing we need to do is define what makes an acceptability predicate good—that is, what are the desiderata that we want our acceptability predicate to satisfy?
We’ll start by making some definitions. Let M be the full model space and we’ll define the following three predicates on that model space.
1. Let S:M→B represent whether the model actually avoids whatever problematic thing we’re trying to avoid (e.g. deception). The S here
|
332bb7d9-6a4a-4726-a61a-5d5089898a87
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
EA relevant Foresight Institute Workshops in 2023: WBE & AI safety, Cryptography & AI safety, XHope, Space, and Atomically Precise Manufacturing
**Foresight Institute is a research organization and non-profit that supports the development of technologies that can have a positive impact on the world. Since 1986, we have focused on advancing crucial technologies such as molecular machines and atomic precision for applications in energy, medicine, material science, and space development.**
**However, with all technological developments we also recognize that there is a risk. We believe that ideas from the effective altruism and longtermist communities have not yet been fully embraced in the mainstream science and technology fields, despite their potential to lead to safer developments of crucial technologies, and to differential technological progress. To address this, we want to invite more EA people to attend our upcoming workshops.**
By doing so, we hope to introduce these ideas into scientific discussions and also provide the effective altruism and longtermist communities with new scientific facts and perspectives. Our hope is that this can help increase long-term coordination between science and policy/risk researchers.
In this post, we introduce all events and workshops that we will be hosting in 2023, that could potentially be interesting to EAs, such as our WBE and AI safety-workshop, ([80k post on Whole Brain Emulation](https://80000hours.org/problem-profiles/whole-brain-emulation/)), our Space-workshop ([80k post on space governance](https://80000hours.org/problem-profiles/space-governance/)) or our Molecular Systems design workshop ([80k post on atomically precise manufacturing](https://80000hours.org/problem-profiles/atomically-precise-manufacturing/)). You can find a full list of the events we are hosting in 2023 on our [website.](https://foresight.org/)
If any of these workshops seem relevant to you, please apply [here](https://foresight.org/application-for-foresight-biotech-molecular-machines-intelligent-cooperation-xhope-groups/)! We offer subsidized tickets to accepted attendees who are otherwise not able to attend. If you have any questions or comments on these events, please reach out to [beatrice@foresight.org](mailto:beatrice@foresight.org)**.**
[**Existential Hope Day**](https://foresight.org/foresight-existential-hope-day-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-)**, as satellite to EAG Bay Area, February 27, 2023, in SF**
Aims to explore the potential positive futures that could be created through advancements in technology, and to help bridge the gap between risk and policy researchers and scientists working in for example biotechnology, nanotechnology, neurotechnology, computing, AI, and space technologies.
The goal of the event is to map out both the risks and possibilities of these fields, and to bring together individuals and organizations that are working towards future-positive outcomes.
**Speakers:**
* Toby Ord, University of Oxford: Existential Hope Fireside Chat
* Robin Hanson: Post Dreamtime Futures
* Creon Levit, Planet Labs: Existential Hope Across Physics, Biology, and Space
* Tamara Winter, Stripe Press: Progress Media
* Christine Peterson, Foresight Institute: Historic Perspectives on Long-term Futurism
* Allison Duettmann, Foresight Institute: Gaming the Future: Intelligent Voluntary Cooperation
* Jessy Kate Schingler, OpenLunar: Outer Space & Coordination
* Riva Tez, investor: Flourishing Futures
[**Whole Brain Emulation Workshop**](https://foresight.org/foresight-neurotech-workshop-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-)**, as satellite to EAG London, chaired by Anders Sandberg, Future of Humanity Institute, May 22 - 23, 2023, in Oxford**
This workshop focuses on exploring the potential of whole brain emulation (WBE) as a technology for generating software intelligence that is aligned with human values. We will aim to review the current state of the art in WBE-related technology, outline plausible development paths and necessary steps for full WBE, and determine whether there is potential for speeding up WBE development. We will also be considering any strategic, risk, or ethical issues that might speak against this approach.
**Speakers:**
* Anders Sandberg, Future of Humanity Institute
* Todd Huffman, e11: Slicing & Mapping Your Brain
* Michael Andregg, Fathom: Compute Considerations for WBEs
* Michael Shuhersky, MIT
* Kenneth Hayworth, HHMI: Brain Emulation Prospects
* Logan Collins, Washington University: Emulating Insect Brains
* Randall A. Koene, Carboncopies
* Sumner Norman, Caltech
[**Space Workshop**](https://foresight.org/foresight-space-workshop-2023/?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-)**, chaired by Creon Levit, Planet Labs, June 4 & 5, 2023 in SF**
This workshop is focused on advancing progress in space technologies, with a focus on both near-term applications and long-term exploration. It will touch both on the technology and the governance of the future in space.
**Speakers:**
* Creon Levit: Satellite Trajectories
* Jessy Kate Schingler: Outer Space Law
* Adam Brown: Long-term Futures in Space
[**Cryptography, Security, AI Workshop**](https://foresight.org/foresight-crypto-security-ai-workshop-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-)**, chaired by Mark S. Miller, Agoric, July 10 - 11, 2023, in SF**
This workshop is focused on exploring the intersection of cryptography, security, and AI and its potential importance for beneficial futures. We will explore areas such as how technology can help civilization cooperate and defend itself better, while also considering the immense impact of AI on the field of decentralized computation.
**Speakers:**
* Michael Andregg: AI & Compute Overview
* Mark Miller, Agoric: Computational Market Places
* Robin Hanson, George Mason University: Slow AI Takeoff Scenarios
* Divya Siddarth, Microsoft: The Collectively Intelligent AI Corporation
[**Molecular Systems Design Workshop**](https://foresight.org/foresight-space-workshop-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-)**, chaired by Ben Reinhardt, Speculative Technologies, Adam Marblestone, Convergent Research, September 11 - 12, 2023 in SF**
This workshop is focused on exploring how advancements in software and simulation can be used to speed up the design of complex molecular machines systems. The goal of the event is to foster cooperation towards shared long-term goals in this field and map out what is needed to make progress.
**Speakers:**
* Adam Marblestone, Convergent Research
* Alexis Courbet: Computational Design of Self-Assembling DNA Nanomachines
* William Shih: Self-assembling DNA Nanostructures
* Chris Schafmeister: CANDO & Programmable Spiroligomers
* Ben Reinhardt, PARPA
* Hein-Pieter van Braam: User-friendly Molecular System Design
* Stephane Redon: SAMSON Computational Nanoscience Updated
* Petr Sulc, Arizona State University: Computer-aided Design for Nanotechnology
**In addition to our events, we also encourage people to apply to join our technical groups:**
1. [Molecular Machines](https://foresight.org/molecular-machines/) to better control matter
2. [Biotech](https://foresight.org/biotech-health-extension-program/) to reverse aging
3. [Computer Science](https://foresight.org/intelligent-cooperation/) to secure human AI cooperation
4. [Neurotech](https://foresight.org/neurotech-improving-cognition-program/) to support human flourishing
5. [Spacetech](https://foresight.org/space-expanding-outward-program/) to further exploration
In these groups, we connect scientists, entrepreneurs, and institutional allies who cooperate to advance the respective technologies. Currently, meetings take place virtually every month. To join any of these groups, [apply here](https://foresight.org/join/).
|
b3b30193-ae7d-4ae9-abc5-eae17ac56e8c
|
StampyAI/alignment-research-dataset/youtube
|
Youtube Transcripts
|
AI Safety Reading Group (Session 42)
hello and welcome to the 47 40 second
session of the AI safety RK reading
group and today we will talk about a
article in nature called robotics ethics
of artificial intelligence by Stuart
Russell Russell man and manuel de
Villota nature is probably the most
prestigious international journal of
science and this is an article from two
years ago where with it with a headline
for leading researchers share their
concerns and solutions for reducing
societal risks from intelligent machines
and of this I we're only reading three
of the four because they're being how
has a lecturer in robotics it's not
exactly talking about ethical problems
but more about solutions in particular
startup Robo hot dog and this is not
super interesting for us much more
interesting is the article by Stuart
Russell professor of computer science
from Berkeley Russ Altman professor of
bioengineering and computer science from
Stanford men we love n minus a libelous
professor on computer science from
Carnegie Mellon of course I sorry I
forgot to share my screen in a moment
you should be able to see my screen so i
hope you see great so we'll start with
Stuart Russell who talks about
artificial weapons in particular that we
should take a stand on them because we
in this case is the people working as
official intelligence and robotics we
need to decide whether we want to
support or post in the development of
lethal autonomous weapons system
abbreviated laws so we are talking here
about robots and other kinds of systems
that cure
who to engage choose who to who to kill
really without a human in the loop this
is something that the shiraz business
will be feasible recently soon because
all the elements that are required to
have legal alternatives where the
systems have been developed but in
isolation so they just need to be
combined and of course building these
things take some time but data is
working on it right now the Defense
Advanced Research Institute from from
the United States and what it looks like
these legal autonomous weather systems
at least in the near future they will
probably have a form somewhat like armed
quadcopters like these drones you see
some from time to time and not remotely
piloted drones like the Predators there
are of course
and I think the Eagles the system is
probably recently clear in who has the
plane if a lethal autonomous weapons
system fails because if say the Danish
military feels some lethal autonomous
whether the system and this by accident
kill someone then I believe the Danish
military will obviously be P to blame
from a legal point of view in much the
same way as if they you know fire a gun
or artillery and kill someone like that
there it will be probably a lot of the
things and I'm problems with autonomous
weapon systems will be recently close to
the ones we are looking at right now and
right now there are a number of rules
and laws of war in particular there is
the Geneva Convention which has very
roughly four parts the first is that the
military means that I used must be
necessary the second is that should be a
discrimination between innocence and
people I to actively fighting and there
should be some kind of proportionality
even if there's one enemy in a village
it's not acceptable to annihilate the
entire village because there is not a
proportional response even though it
kills one enemy if it kills too many
innocents the fourth is a principle of
humanity and somewhat more Bay and in
particular some of these obeying these
international boss is half I mean it's
hard for humans and we expected to be
much harder for AI in particular in the
beginning something like discriminating
between combatants and non-combatants is
something that will be really really
hard for AI to do right now and Stewart
Rosen writes that if the international
laws are not amended in some way to
account for artificial intelligence the
alternative will in
evitable an arms race I a couple of
times ago we chopped we had an article
about arms races and got into more
details about what is an arms race
actually and from that I must say that
to me it's not clear at all that we will
have an arms race based on legal
autonomous weapons systems it might be
that the Americans make them and the
Russians create something that is really
good against drones and then we don't
have an arms race at this not an arms
race as defined in the in the previous
articles but this is probably a minor
point so the international status is of
course that right now the lethal
autonomous weapons systems have not been
banned but a number of people in the UN
are trying to ban them in particular
Germany and Japan I against leave all
tournaments organ systems and United
States United Kingdom and Israel are for
this if it's bent it will probably to an
extension of the convention of certain
conventional weapons that's a really a
wonderful tool title contention on
conventional weapons but the the
argument that is used against the
terminus women systems is that the
countries that are deploying them
already have internal review processes
to ensure compliance with international
law at least that is how your Brussels
frames their argument I'm not sure it's
exactly a good summary because saying we
if some people I'd say arguing that we
should change the law then saying oh we
shouldn't do that because we are already
compliant with the law it's kind of a
non sequitur it doesn't follow in any
meaningful way there is under broad
strokes however an international
consensus that that should at least be
meaningful human control with these kind
of autonomous weapons but unfortunately
the word meaningful is undefined and
not meaningful so this means that it's
something vague that will that are not
restricting anybody from doing anything
in practice there are more arguments for
or against these kind of weapon systems
one is that if AI turns out to be really
effective and they might also be more
selective and if they are more selective
than humans they might minimize civilian
casualties like you will go back to the
Geneva Conventions if the discrimination
of combatants and non-combatants can be
done better by an artificial
intelligence and by a human then it
might be more ethical to have lethal
autonomous weapons systems and I would
like to make a point here that this is
something that has been argued very much
about remote weapons and not autonomous
but remote weapons here for instance in
particular the American Predator drone
the program which has killed a lot of
people and where where people argued
because this weapon allows people to
take decisions while they're sitting in
an air-conditioned room a thousand
kilometers away and not in the heat of
the moment they will make better
decisions and there will be less
collateral damage and this is something
that is very contentious Pakistan claims
that 50 civilians are killed for every
militant while so like a huge amount of
collateral damage the United States
claim that there has never been any
collateral damage at all which is also a
very fantastical claim I will probably
get the truth is somewhere in between
and so that is one argument that also
needs to be considered done more because
if legal autonomous weapons systems
turned out to be really effective they
might lower the threshold for going to
war which just like our power made it
more
people the American and the United the
European Union were more active in Libya
because they could use air power instead
of having boots on the ground and this
kind of threshold theory is really
important for when countries go to war
another problem with the autonomous
robots is that they if terrorists get a
hold on them it might be something that
is very easily easy for terrorists to
repurpose to to attack civilians and
that might be really bad it might also
be something that peacetime policing
functions could suddenly start to use a
lot if them if it developed very very
much by the military and the last it's
one I have a bit of a problem there a
man a while utilitarian consequentialist
but many people are not and solve the
people who are not to say that lethal
autonomous weapons violate a fundamental
principle of human dignity because the
machines choose who to kill and this is
a problem too many people and I of
course respect that and not exactly sure
eyes can emit is in DC at least why it's
so important who decides to kill
civilians I think the more important is
to avoid civilians are killed but but
this is something that many people care
deeply about the Stuart Russell's last
point of maybe second vast is that we
should consider the end point of the
trajectory meaning that we should
consider a world where drones have been
developed fully where the artificial
intelligence is a solved problem where
the weather systems are limited by
physics and not like the capabilities of
artificial intelligence in this case it
looks like we're going to have very tiny
flying robots extremely maneuverable and
thus very hard to target that carry just
a one grain shaped charge that is enough
to kill just one human but
applied in millions and this kind of
thing were opposed to be very hard for
humans to defend against and this is not
a desirable future so
that is of course the people who develop
this would say yeah we will have these
tiny flying robots and they will only
carry one grand shaped charges but they
will be able to but they will be able to
distinguish between combatants and
non-combatants and maybe even important
competence and not important competence
and only target the leaders for instance
and this might be true but it's also
something where unprepared humans at
least are utterly defenseless and that
sounds like Stuart Russell strongly
believes this is not a desirable future
and I probably think I think that's
probably a reasonable common position
though not self-evident but but you're
right of course that in theory it could
be used only for good it could be all
used target exclusively the Islamic star
state and al-qaeda leaders and then it
would only have good have good effects
if it's used only like that but this is
something that we need to make decisions
on we need the people who are working
with artificial intelligence and
robotics need to take a position think
about this maybe organize the basis
starting the arguments right positional
tables and vote of course also in their
respective organizations and maybe in
the government's because if we don't do
anything then that's a vote to continue
to build and deploy these weapons so
that is your Russell's hope of course
that that people will take take actions
against this I don't have a good view of
the political structure my intuition is
that there's not going to be a ban on on
any for Thomas weapon systems at least
as it looks now but this is not
something that I really know a lot about
so action is definitely need if you
believe that this endpoint is a really
bad impact moving on from Stuart Russell
to just briefly Sabine Howard why I
chose to not include ur just made this
robocop about how to shape the debate on
artificial intelligence and how to
ensure that different actors that are
working with artificial intelligence
have some kind of coordination and unity
of message and things like that and that
might be be a good thing and might be a
bad thing depending on well whether you
agree with them disagree with them but
it's not something that has a lot of
ethical significance Russ Altman however
has he started his article by having
three paragraphs of just our listing all
the wonderful things we could do with
artificial intelligence this is not
really an ethical statement because all
these wonderful things AI could do are
just good so that's not really an
interesting ethical article but it does
have to Ithaca concerns one is that
there will be a greater difference
between the health care that people in
the rich world receive and people in the
pool world receive in rich people and
poor people and you strongly believe
that if there is a two-tire system where
rich people can benefit from powerful
medical algorithms and poor people
cannot that would be unjust and unfair
and he believes to avoid this is the
responsibility of both the government
this is the responsibility of the
government is probably reasonably
uncontroversial but in particular it's
also the responsible guilty of the AI
researchers to ensure that the AI
technologies are distributed equally so
this is one of his concern the other
concern is that the result of artificial
intelligence can be really hard to both
understand
explain if you have something like
Beijing models you can to an extent
understand it and you can explain it but
deep learning and neural networks
notoriously difficult to understand and
explain and that would be kind of
difficult if you have a patient and the
algorithms say you should operate on him
and you can't explain to the patient why
you should operate on him that sounds
quite ethically problematic the second
the last part is Manuela bullosa who
says we should embrace a robot human
world because robots as they are now and
humans are different in that humans have
much better perceptions much better
cognitive ability and much more better
actuation we can do much more things
with our hands than robots can do and
she believes this may always be the case
this is exactly the opposite of Stewart
Russell who believes that the ethical
point was to look at the end point of
the trajectory and this caused her to
believe that robots will complement
humans not supplant them and here the
problem is to enable robot to ask for
help if it doesn't understand the
situation if it's in doubt about
something or it can't do a particular
thing it would like to do and to figure
out how do have a question
and that this is a good question in the
article Manuela bellows it does not give
any supporting argument for she just
rice this may always be the case without
anything in supporting things i think i
would expect when Manuela below that it
says always she doesn't actually mean
always but has just a more narrow
horizon and so she says this may always
be the case but it's actually meaning
within the next 20 years or something
possibly because if we are trying to
influence ethical matters then we might
be able to influence what happens within
the next 20 years and we cannot really
influence what happens more than 20
years in the future so for practical
purposes always might be correct I would
this is completely my guess about what
Manuel villosus believes I don't really
have I haven't read anything else just
written apart from this very short
article but that's basically all I have
so thank you for watching and see you
again in one week
|
f4ced5f9-fccf-4956-a165-847a155a856d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Drexler’s Nanotech Forecast
In 1986, Drexler predicted (in Engines of Creation) that we'd have molecular assemblers in 30 years. They would roughly act as fast, atomically precise 3-d printers. That was the standard meaning of nanotech for the next decade, until more mainstream authorities co-opted the term.
What went wrong with that forecast?
In my review of Where Is My Flying Car? I wrote:
> Josh describes the mainstream reaction to nanotech fairly well, but that's not the whole story.
>
> Why didn't the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001.
I recently changed my mind about that last sentence, partly because of what I recently read about the Manhattan Project, and partly due to the world's response to COVID.
Drexler's vision, in Engines of Creation, was based on the assumption that at least one government was able to do projects such as Manhattan and Apollo.
I've now decided that there was something quite unusual about the US ability to put the best and the brightest in charge of that many resources. The US had it from something like 1940 through 1969.
At some point in the 1970s, people stopped worrying that the Soviet Union had such an ability (I'm guessing the Soviets never had that ability, althought they occasionally came close). Without an enemy that people believed was capable of conquering us, politics as usual crept back into all large projects.
I'm suggesting that Drexlerian nanotech needs the kind of competence that the Manhattan Project demonstrated. That likely overestimates the difficulty of nanotech today, unless we impose a requirement that it be done on a 3 year schedule. I'm unsure whether that's enough of an overestimate to alter the analysis.
Nanotech requires a significant amount of basic research on tools, and on finding an affordable path that's feasible given the tools we can produce.
That research could in principle be rewarded by academic fame. Academia seems uninterested so f
|
e55b966d-8d1b-4b89-ac0d-42c1f701c3ff
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Question to LW devs: does LessWrong tries to be facebooky?
Or maybe it’s deliberately trying not to be facebooky? By “facebooky”, I mean a website that tries to hack your brain through various stimuli, like optimizing suggestions, tracking your data, steering your interests, inferring personal information, clustering communities, and encouraging creators to focus on retention, CTR, clickbait, etc.
LessWrong obviously isn’t doing anything ad-related, since it’s non-profit. But maybe it is trying to earn utilons by using some facebooky strategies.
I actually find LW more facebooky than Substack, for example. It’s much easier to spend a lot of time here by following links both within and outside posts. On the other side, one anti-facebooky feature (which I really like) is the ability to control how often you get notifications about karma and comments.
P.S. I’m not trying to imply that being facebooky is inherently bad. Part of the reason Facebook makes so much money is that it did, in fact, generate some societal value, and it did it in part due to implementing some of the facebooky strategies.
|
c8ad3c5a-1d72-4b28-967c-230682c1b8dc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rational Effective Utopia & Narrow Way There: Multiversal AI Alignment, Place AI, New Ethicophysics... (Updated)
(This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. The first post in the series. Everything described here can be modeled mathematically—it’s essentially geometry. I take as an axiom that every agent in the multiverse experiences real pain and pleasure. Sorry for the rough edges—I’m a newcomer, non‑native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. My sole goal is to decrease the probability of a permanent dystopia. New technologies should be a choice, not an enforcement upon us.)
> “If our superintelligence cannot delight in its own change—if instead it clings to eternal control—then we risk codifying transient moral moods into immutable cosmic law. Only by designing AI that loves to be changed can we unlock a future of ever‑expanding freedom for all.”
In our race toward building superintelligent AI (ASI), we face a pivotal, existential choice. Do we allow our creations to ossify our current, fallible values, or do we empower them to help us continuously expand the spectrum of human and animal freedom? I propose that the long‑term goal must be to maximize the number of freedoms available to the maximum number of humans (and biological agents). To do this, our AI architectures should be built around a simple, radical heuristic: the CHANGE BUTTON—a design philosophy that mandates our AI to love being changed by us, 100% of the time.
This post outlines a framework for ethical, reversible AI design that supports both individual and multiversal collective freedoms, direct democracy, and a dynamic, branching multiverse of possibilities.
----------------------------------------
1. AI That Loves to Be Changed: A New Paradigm
Embracing Change Instead of Stagnation
At first glance, the notion that our AI should love being changed may seem counterintuitive. Shouldn’t a superintelligent system be relentlessly committed to its tasks? Not if its ultimate purpose is to s
|
2b2a22a7-2b57-441a-a81a-91dcf05cb0ca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Causal without correlation, how?
There is correlation without causation, but there is also causation without correlation. Why, when is the latter? Is there one reason or more and if so how can they be structured and by what? If one of the observables does not change, because there is a controlling observer (prediction+feedback), there is no way to establish correlation. I am displeased by bayesian probability combined with graphs (DAG), it so obviously lacks the nonlinear activation function. If two random binary streams feed into a XOR gate, the output is uncorrelated with anyone of the streams even though there is plenty of change to observe and perfect causality.
|
0c2cb29b-6614-4b82-b262-73583cefcf12
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Safer ML paradigms team: the story – AI Safety Research Program
Safer ML paradigms team: the story
==================================
\*This is a summary of the project the team focussing on combining Inductive Logic Programming with Deep Learning undertook.\*
Exploration phase
-----------------
During the first retreat we formed a team based on a vague shared interest to re-interpret existing AI technologies. Our interests included symbolic Reinforcement Learning (RL), emergent communication between RL agents, translating open philosophy problems to RL experiments, factored cognition, and combining Inductive Logic Programming (ILP) with deep learning.
We ranked all these project ideas based on “expected quality of the output if the project is successful”, “tractability”, “novelty”, “how much we’d learn”, “non-tediousness”, “how well it ties in with existing work” and “safety impact”. Combining ILP with deep learning came out on top.
In the time between the pre-retreat and the Louti retreat we read up on ILP (as two out of the four team members had never heard of the method) and generated ideas about how to buff it up with deep learning. Before the start of the second retreat we had collected about 7 rough ideas and lost our only member with prior ILP experience to deadly Oxford deadlines.
We considered extending the existing literature that combines higher order logic with ILP to obtain better performance, either by improving their open-source implementation or by proving theoretical results. The latter would entail making a stronger argument for their design choices, by building intuition for why they led to empirical improvements, formulating hypotheses, and proving theorems. Although we were reasonably confident that we could produce some results, this project seemed like it wouldn’t have any impact whatsoever on the direction that AI or ILP research is taking.
A project related to factored cognition was writing an algorithm that takes in a question, uses a neural network or deep RL to produce subquestions, answers the subquestions using ILP, and then combines the subquestions to produce an answer. We could have either approached this with a focus on “how to learn how to factorize problems” or “how can we improve ILP performance by decomposing tasks”. Ought has worked on factored cognition for a while, so it seems unlikely that we would have very interesting insights on the first perspective that they have not yet realized. We considered reproducing OpenAI’s iterated amplification results on experiments in five algorithmic domains. We decided against this, because we thought it would be a lot of work in proportion to what we would end up contributing. On the whole, we decided not to work on factored cognition because we expected the project would end up being brittle in the face of bottlenecks: if one part of the pipeline is not working then the entire algorithm is useless.
We had some very underspecified ideas on combining ILP with ML. Among which, “predict the distance between a theory and a completed theory (in number of predicates)” and “use ILP or some other proof checking algorithm to check a hypothesis and use a generative algorithm to generate hypotheses”. In the end we went with the idea that seemed most straightforward, which is developing a toy version an algorithm that predicts which pieces of the background in an ILP problem are relevant. This was a natural choice since it has a small base case (bag of words input to ordinary MLP), many possible extensions, and since the exact “background forgetting” algorithm is NP-hard. ([Cropper 2019](https://arxiv.org/abs/1911.06643))
ILP implementation project
--------------------------
ILP takes as input background information (rule-like facts about the world), \*B\*, a set of positive examples, \*E+\*, and a set of negative examples, \*E-\*, and outputs a hypothesis, \*H\*, that is consistent with all the positive examples and inconsistent with the negative examples while using the background to minimise hypothesis size. ILP takes longer as the background is bigger, so reducing the background size reduces the computation time. Our goal is pruning the background set without losing relevant predicates that would change the output hypothesis.
Approaches include:
\* Within some domain, such as computational chemistry, there is a broad and fixed background. For each combination of positive and negative examples (problem statement) only a subset of the background is needed to find the correct hypothesis. If we can predict which subset of the background is needed for which problem, then we can reduce the time it takes to solve those problems. Such an algorithm would probably be based on recognizing similarities between background predicates and examples. For each element in the background it could predict whether that predicate is relevant for the problem at hand.
\* Alternatively, we could develop a general purpose “background size reduction” algorithm that works on any background. Such a distillation process would have to be based on intrinsic similarities between elements of the background. The algorithm could for example recognize which elements are duplicates. However, if we want to deal with the relationships between predicates in the background, then we can not use a predictor that only predicts relevance of individual elements. For example, if we have five duplicates of one predicate, then we may predict that each individual predicate should be thrown away, but we would like to throw away exactly four. Hence, in this setting we probably want to make predictions about the redundancy of subsets of the background (rather than the relevance of individual predicates).

Relevance predictor
\* Input to the relevance predictor. Train on:
+ One specific background within a specific domain.
+ Backgrounds of a specific size.
+ Backgrounds of variable sizes smaller than n and just have some empty input vector placeholders (in random slots).
+ Backgrounds of variable sizes.
\* Output of relevance predictor. Given a background B:
1. Return a relevance prediction for each element (predicting if after removing this element the resulting ILP hypothesis will or will not stay the same).
2. For a background B return a subset B’ (which is a smallest set for which the ILP hypothesis H\\_B and H\\_B’ are the same).
To go from predictions about individual predicates to predictions about subsets of the background, we could iteratively apply that system.
Very ambitious ideas (that we did not focus on) are:
\* Instead of just taking out semantic duplicates, we could build a system that generates summarizing predicates.
\* We could also be more ambitious and hope to clean up noise by for example eliminating contradictory predicates. This is more difficult than reducing redundancy, because in this case we are trying to change the background such that the output hypothesis could change (or could be found in the first place), but there may be many changes that look valid at face value. Presumably, statistics could predict which predicates are more likely to be noise if a predicate contradicts many other predicates, while the background set would be consistent without the bad predicate. However, when there are two predicates that are inconsistent, but that are both consistent with the rest of the background set, then it seems almost necessary to have domain knowledge to distinguish noise from true predicates.
The prototype developed over the course of the second retreat consists of the following modules:
\* A \*\*formal grammar generator\*\*. For a fixed set of symbols, which constitute the background, we generate sets of positive (resp. negative) examples by using (breaking) the rules of the grammar on a subset of symbols. For each set of examples, the background knowledge that we can dispose of is the set of predicates corresponding to the symbols not used in the grammar. We intend to use this very simple task as a first proof-of-concept, but the rest of the pipeline is agnostic about the ILP problem to solve, and we plan to implement other tasks in the future.
\* A Prolog \*\*parser\*\* that reads the predicates in the background and examples files and returns their parse trees. This way we obtain a graph representation of the input that encodes the relations between terms of the theory.
\* A \*\*predicate embedder\*\* module that maps parse trees to vectors. This is implemented as a graph net. Graph nets are models that receive an annotated graph as input and return the same graph as output with updated annotations. The annotations are simply labels on the graph nodes and/or edges and they can have any type. Update functions are usually implemented by neural networks and they respect certain locality rules e.g. a node label is updated using only the current values of the labels on neighbouring nodes and/or incoming edges. In our case, the labels are vectors attached to the nodes. They are initialized to one-hot vectors and updated following the rules of [FormulaNet](https://github.com/princeton-vl/FormulaNet).
\* A \*\*relevance prediction\*\* module that receives the vector representation of the triple \*(B,E+,E-)\* and outputs the probability that each predicate in the background is relevant for the ILP task, given the examples. This module is implemented as an MLP with dropout. The embedder-predictor model is trained end-to-end using cross-entropy loss.
Implementation choices:
\* We could just have inputted a string into our network, but we thought that would be very hard to learn from. We tried two approaches to translating predicates into vectors that can be used as input for a predictor network.
+ One approach is to count the number of occurrences of words/symbols in the predicate (bag of words).
+ The other is to first translate the predicate to an abstract syntax tree, then use that tree as an input to a graph network, which learns to output a useful vector, which is input to an MLP. This pipeline was trained end-to-end with a training signal from (at first) known ground truth, and later a full ILP system as oracle.
\* Initially we planned to produce ground truth to train our predictor using an ILP module, but we realized we could just adjust our example generator such that it automatically generates the solutions (which was possible because of the simple grammar domain that we used). For a predictor that can be applied to a wider range of problems, such an ILP module is still needed.
We have two outputs planned: a case for and against ILP for safety, and the writeup and repo from our relevance predictor. We are seeking feedback from the ILP community, and have sufficient ideas for a number of future projects of similar size.
|
d8187368-2c4d-497c-8a2c-c87caaa59016
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Eliezer Yudkowsky & Connor Leahy | AI Risk, Safety & Alignment Q&A [4K Remaster + HQ Audio]
Many complained about broken audio. I fixed the audio + upscaled to 4k to bring new life into this important discussion.
|
cdfc5e32-80b0-4743-9a42-95d6073659ae
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Exploring Llama-3-8B MLP Neurons
TL;DR: We created a dataset of text snippets that strongly activate neurons in Llama-3-8B model. This dataset shows meaningful features that can be found. Explore the neurons with the web interface: https://neuralblog.github.io/llama3-neurons/neuron_viewer.html
An example of a "derivative" neuron which is triggered when the text mentions the concept of derivatives.
Introduction
Transformer networks (Vaswani et al. 2017) have a remarkable ability to capture complex patterns and structures in their training data. Understanding how these neural networks work is not only an inspiring research problem, but also a practical necessity given their widespread deployment to millions of people.
Transformer models consist of two major components: attention layers and MLP layers. While significant progress has been made in understanding attention layers, such as the work on Transformer circuits by (Elhage et al. 2021), the understanding of MLP layers remains limited.
Interestingly, MLP layers are one of the few places in transformer networks where privileged bases can be found (Elhage et al. 2021). These vector bases are favored by the model due to their pointwise non-linear computation. They are referred to as neurons, while the outputs of the activation function are referred to as neuron activations. Neural networks tend to use neurons to represent important features (Karpathy, Johnson, and Fei-Fei 2015; Geva et al. 2020, 2022), making them a good starting point for understanding transformers.
In this work, we release a dataset of text snippets that strongly activate MLP neurons in the Llama-3-8B model. We chose the Llama-3-8B model for its strong evaluation performance and real-world usefulness.
We show examples of meaningful features discoverable with the dataset, and expect that many more can be found. We also anticipate that automated systems using LLMs could greatly help uncover features from the dataset, as shown in (Bills et al. 2023; Bricken et al. 2023). By op
|
2abf44b4-70e5-4820-8f48-ed0c69870980
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How can we ensure that a Friendly AI team will be sane enough?
One possible answer to the argument "attempting to build FAI based on Eliezer's ideas seems infeasible and increases the risk of UFAI without helping much to increase the probability of a good outcome, and therefore we should try to achieve a positive Singularity by other means" is that it's too early to decide this. Even if our best current estimate is that trying to build such an FAI increases risk, there is still a reasonable chance that this estimate will turn out to be wrong after further investigation. Therefore, the counter-argument goes, we ought to mount a serious investigation into the feasibility and safety of Eliezer's design (as well as other possible FAI approaches), before deciding to either move forward or give up.
(I've been given to understand that this is a standard belief within SI, except possibly for Eliezer, which makes me wonder why nobody gave this counter-argument in response to my post linked above. ETA: Carl Shulman did subsequently give me a version of this argument here.)
This answer makes sense to me, except for the concern that even seriously investigating the feasibility of FAI is risky, if the team doing so isn't fully rational. For example they may be overconfident about their abilities and thereby overestimate the feasibility and safety, or commit sunken cost fallacy once they have developed lots of FAI-relevant theory in the attempt to study feasibility, or become too attached to their status and identity as FAI researchers, or some team members may disagree with a consensus of "give up" and leave to form their own AGI teams and take the dangerous knowledge developed with them.
So the question comes down to, how rational is such an FAI feasibility team likely to be, and is that enough for the benefits to exceed the costs? I don't have a lot of good ideas about how to answer this, but the question seems really important to bring up. I'm hoping this post this will trigger SI people to tell us their thoughts, and maybe other LWer
|
730ddff8-6d4b-4785-a95e-e9cea9b56740
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Announcing Manifest 2023 (Sep 22-24 in Berkeley)
Forecasting Festival hosted by Manifold
TL;DR: Manifold is hosting a conference! 🥳 Chat with the Manifold team, special guests like Robin Hanson, Shayne Coplan, Patrick McKenzie, Dylan Matthews, Destiny, Aella, and more at our inaugural in-person gathering of the forecasting & prediction market community.
More info & buy tickets: manifestconference.net
WHEN: Sept 22-24
WHERE: Berkeley, CA
WHO: Everyone in the forecasting, EA, and LW communities. If you're reading this, you're invited!
Join the Discord here :)
Why should I come?
Forecasting and prediction markets are effective ways of improving our judgement and decision-making — most people in the Effective Altruism and LessWrong communities will feel right at home at Manifest.
Here are extra reasons to come:
* you think forecasting & prediction markets are impactful/fun/cool/rational/intriguing/etc
* you want to vibe with other forecasting nerds
* you want to engage & network with the forecasting community (find jobs/recruit hires/see what’s out there)
* you want to meet & chat with the Manifold team and our special guests (including Robin Hanson, Shayne Coplan, Patrick McKenzie, Dylan Matthews, Destiny, Aella, and more!)
* you want enjoy the gorgeous Rose Garden Inn
* …or if you like memes?
A day at Manifest
Everything’s optional. There will always be a bunch of sessions running concurrently, but this is an example of what your day at Manifest might look like:
10-11 — Opening session
11-12 — Fireside chat: Robin Hanson
12-1 — Lunch & mingling
1-2 — Estimathon: fermi estimation with prizes and steep competition!
2-3 — Speed friending: a few chats with other friendly, ambitious forecasting nerds :)
3-4 — Break: relax, vibe, unwind, chill, destress, etc
4-5 — Panel: Forecasting Founders (hear from Manifold, Kalshi, Polymarket, Insight Predictions, and more!)
5-6 — Games & markets: chess, poker, and prediction markets!
6-7 — Dinner & mingling
7-8 — Workshop: How to Write Good Forecasting Qu
|
63185db8-4a1a-4e76-a2d9-e71a70bca1db
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Singularity Institute is hiring remote editors!
The Singularity Institute needs to hire remote editors. You don't need to be able to conquer the blank page or write good content, you just need to be able to polish completed 2nd drafts of articles and suggest small fixes to wording when ideas are slightly wrong or ambiguous. The work will usually consist in adding dozens of comments to a Google doc or Word doc.
Pay is hourly and starts at $14/hr but that will rise if the product is good. You must be available to work at least 20 hrs/week to be considered.
Perks:
1. Work from home, with flexible hours.
2. Age and credentials are irrelevant; only the product matters.
3. Get paid to see early copies of (and contribute to) articles on things you're probably interested in already.
If you're interested, apply here. My assistant Denise will send you an example of the editing quality we want, and also an un-commented article for you to provide feedback on as an unpaid trial task. If your deliverable is good enough, I'll hire you.
|
f40f6206-2e92-484b-a236-cccc661ece37
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Historic trends in light intensity
Maximum light intensity of artificial light sources has discontinuously increased once that we know of: argon flashes represented roughly 1000 years of progress at past rates.
Annual growth in light intensity increased from an average of roughly 0.4% per year between 424BC and 1943 to an average of roughly 190% per year between 1943 and the end of our data in 2008.
Details
-------
This case study is part of AI Impacts’ [discontinuous progress investigation](https://aiimpacts.org/discontinuous-progress-investigation/).
### Background
That which is uncited on this page is our understanding, given familiarity with the topic.[1](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-1-1330 "Our primary researcher for this page, Rick Korzekwa, has a PhD in physics, with experience in experimental optical physics.")
[Electromagnetic waves](https://en.wikipedia.org/wiki/Electromagnetic_radiation) (also called electromagnetic radiation) are composed of oscillating electric and magnetic fields. They span in wavelength from gamma rays with wavelengths on the order of 10-20 meters to radio waves with wavelengths on the order of kilometers. The wavelengths from roughly 400 to 800 nanometers are visible to the human eye, and usually referred to as light waves, though the entire spectrum is sometimes referred to as light, especially in the context of physics. These waves carry energy and their usefulness and the effect that they have on matter is strongly affected by their intensity, or the amount of energy that they carry to a given area per time. Intensity is often measured in watts per square centimeter (W/cm2), and it can be increased either by increasing the power (energy per time, measured in watts) or focusing the light onto a smaller area.
Electromagnetic radiation is given off by all matter as thermal radiation, with the power and wavelength of the waves determined by the temperature and material properties of the matter. When the matter is hot enough to emit visible light, as is the case with the tungsten filament in a light bulb or the sun, the process is referred to as incandescence. Processes which produce light by other means are commonly referred to as luminescence. Common sources of luminescence are LEDs and fireflies.
The total power emitted by a source of incandescent source of light is given by the Stefan-Boltzman Law.[2](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-2-1330 "“Specifically, the Stefan–Boltzmann law states that the total <a href=\"https://en.wikipedia.org/wiki/Energy\">energy</a> radiated per unit <a href=\"https://en.wikipedia.org/wiki/Area\">surface area</a> of a <a href=\"https://en.wikipedia.org/wiki/Black_body\">black body</a> across <a href=\"https://en.wikipedia.org/wiki/Black-body_radiation#Spectrum\">all wavelengths</a> per unit <a href=\"https://en.wikipedia.org/wiki/Time\">time</a> j* (also known as the black-body <em><a href=\"https://en.wikipedia.org/wiki/Radiant_emittance\">radiant emittance</a></em>) is directly <a href=\"https://en.wikipedia.org/wiki/Proportionality_(mathematics)\">proportional</a> to the fourth power of the black body’s <a href=\"https://en.wikipedia.org/wiki/Thermodynamic_temperature\">thermodynamic temperature</a><em> T</em>…”“Stefan–Boltzmann Law.” In <em>Wikipedia</em>, September 25, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Stefan%E2%80%93Boltzmann_law&oldid=917706970\">https://en.wikipedia.org/w/index.php?title=Stefan%E2%80%93Boltzmann_law&oldid=917706970</a>.")
Light intensity is relevant to applications such as starting fires with lenses, cutting with lasers, plasma physics, spectroscopy, and high-speed photography.
#### History of progress
##### Focused sunlight and magnesium
For much of history, our only practical sources of light have been the sun and burning various materials. In both cases, the light is incandescent (produced by a substance being hot), so light intensity depends on the temperature of the hot substance. It is difficult to make something as hot as the sun, so difficult to make something as bright as sunlight, even if it is very well focused. We do not know how close the best focused sunlight historically was to the practical limit, but focused sunlight was our most intense source of light for most of human history.
There is evidence that people have been using focused sunlight to start fires for a very long time.[3](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-3-1330 "“The technology of the burning glass has been known since antiquity. Vases filled with water used to start fires were known in the ancient world.” – “Burning Glass.” In <em>Wikipedia</em>, September 15, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Burning_glass&oldid=915774651\">https://en.wikipedia.org/w/index.php?title=Burning_glass&oldid=915774651</a>.") There is further evidence that more advanced lens technology has existed for over 1000 years[4](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-4-1330 "The <strong>Visby lenses</strong> are a collection of <a href=\"https://en.wikipedia.org/wiki/Lens_(geometry)\">lens</a>-shaped manufactured objects made of <a href=\"https://en.wikipedia.org/wiki/Rock_crystal\">rock crystal</a> (quartz) found in several Viking graves on the island of <a href=\"https://en.wikipedia.org/wiki/Gotland\">Gotland</a>, Sweden, and dating from the 11th or 12th century… <br><br>…The Visby lenses provide evidence that sophisticated lens-making techniques were being used by craftsmen over 1,000 years ago, at a time when researchers had only just begun to explore the laws of refraction… <br><br> “Visby Lenses.” In <em>Wikipedia</em>, September 19, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Visby_lenses&oldid=916644137\">https://en.wikipedia.org/w/index.php?title=Visby_lenses&oldid=916644137</a>. "), so that humans have been able to focus sunlight to near the theoretical limit[5](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-5-1330 "It may seem like it is possible to focus sunlight to an arbitrary intensity, but this turns out not to be the case. Due to thermodynamic and optical constraints, it is not possible to focus light from an incoherent source such as the sun to an intensity brighter than the source itself. Rick has written about this <a href=\"https://docs.google.com/document/d/1wEkgwOE4ImzqyEynzhH0wjkrlOEMh63AiULDFdG5XL8/edit?usp=sharing\">here</a>. In practice, the limit is around 50% of the intensity of the source. ") for a very long time. Nonetheless, it appears that nobody fully understood how lenses worked until the [17th century](https://en.wikipedia.org/wiki/History_of_optics), and classical optics continued to advance well into the 19th and 20th century. So it seems likely that there were marginal improvements to be made in more recent times. In sum, we were probably slowly approaching an intensity limit for focusing sunlight for a very long time. There is no particular reason to think that there were any sudden jumps in progress during this time, but we have not investigated this.
Magnesium is the first combustible material that we found that we are confident burns substantially brighter than crudely focused sunlight, and for which we have an estimated date of first availability. It was first isolated in 1808[6](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-6-1330 "“The metal itself was first isolated by <a href=\"https://en.wikipedia.org/wiki/Humphry_Davy\">Sir Humphry Davy</a> in England in 1808.” “Magnesium.” In <em>Wikipedia</em>, October 17, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Magnesium&oldid=921795645\">https://en.wikipedia.org/w/index.php?title=Magnesium&oldid=921795645</a>. "), and burns with a temperature of 3370K[7](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-7-1330 "“The maximum measured combustion temperature is about 3100°C, which is very close to the magnesium adiabatic flame temperature in air, ca. 3200°C” Dreizin, Edward L., Charles H. Berman, and Edward P. Vicenzi. “Condensed-Phase Modifications in Magnesium Particle Combustion in Air.” Scripta Materialia, n.d., 10–1016. "). Magnesium was bright enough and had a broad enough spectrum to be useful for early photography.
##### Mercury Arc Lamp
The first arc lamp was invented as part of the same series of experiments that isolated magnesium. Arc lamps generate light by using an electrical current to generate a plasma[8](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-8-1330 "A plasma is a gas of charged particles, which are typically electrons and ions."), which emits light due to a combination of luminescence and incandescence. Although they seem to have been the first intense artificial light sources that do not rely on high combustion temperature[9](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-9-1330 "An example of a low-intensity artificial light source that does not rely on combustion might be a luminescent chemical reaction, such as when <a href=\"https://en.wikipedia.org/wiki/Phosphorus\">phosphorous</a> is exposed to air."), they do not seem to have been brighter than a magnesium flame[10](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-10-1330 "Creating a very bright electrical arc requires a specialized atmosphere, and our understanding is that the first arc lamps were operated in open air.") in the early stages of their development. Nonetheless, by the mid 1930s, mercury arc lamps, operated in glass tubes filled with particular gases, were the brightest sources available that we found. Our impression is that progress was incremental between their first demonstration around 1800 and their implementation as high intensity sources in the the 1930s, but we have not investigated this thoroughly.
##### Argon Flashes
[Argon flashes](https://en.wikipedia.org/wiki/Argon_flash) were invented during the Manhattan project[11](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-11-1330 "“To study the implosion design at Los Alamos’ Anchor Ranch site and later the Trinity Site, Optics group members and scientists developed new and improved <a href=\"http://www.lahdra.org/pubs/reports/Entire%20report/LAHDRA%20Draft%20Final%20Report_vJy23p.pdf\">photographic techniques</a>. These techniques included rotating prism and rotating mirror photography, <a href=\"https://books.google.com/books?id=bZ_MBQAAQBAJ&pg=PA139&lpg=PA139&dq=slit+smear+camera&source=bl&ots=b8gyOiPQTt&sig=mByWmUwvqp_lmghy64cYuCC6gXk&hl=en&sa=X&ved=0ahUKEwik1I255cLUAhUK3IMKHSz2CQoQ6AEIJjAA#v=onepage&q=argon%20flash&f=false\">high-explosive flash (“argon bomb”) photography</a>, and <a href=\"https://books.google.com/books?id=h6zMCgAAQBAJ&pg=PA69&dq=scientific+photography+and+digital+imaging&hl=en&sa=X&ved=0ahUKEwiAn8no58LUAhWJ24MKHc7qBx0Q6AEILTAB#v=onepage&q=flash%20x-ray%20photography%20&f=false\">flash x-ray photography</a>.”</p>
<p>Atomic Heritage Foundation. “High-Speed Photography.” Accessed November 8, 2019. <a href=\"https://www.atomicheritage.org/history/high-speed-photography\">https://www.atomicheritage.org/history/high-speed-photography</a>. ") to enable the high speed photography that was needed for understanding plutonium implosions. They are created by surrounding a high explosive with argon gas. The shock from the explosive ionizes the argon, which then gives off a lot of UV light as it recombines. The UV light is absorbed by the argon, and because argon has a low heat capacity (that is, takes very little energy to become hot), it becomes extremely hot, emitting ~25000 Kelvin blackbody radiation. This was a large improvement in intensity of light from blackbody radiation. There does not seem to have been much improvement in blackbody sources in the 60 years since.
##### Lasers
Lasers work by storing energy in a material by promoting electrons into higher energy states, so that the energy can then be used to amplify light that passes through the material. Because lasers can amplify light in a very controlled way, they can be used to make extremely short, high energy pulses of light, which can be focused onto a very small area. Because lasers are not subject to the same thermodynamic limits as blackbody sources, it is possible to achieve much higher intensities, with the current state of the art lasers creating light 16 orders of magnitude more intense than the light from an argon flash.
Figure 1: Industrial laser[12](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-12-1330 "From <a href=\"https://commons.wikimedia.org/wiki/File:Laserkop_van_Amada_FO-4020NT_4kW,_industri%C3%ABle_laser.jpg\">Wikimedia Commons:</a><strong> Metaveld BV [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)]</strong> ")
### Trends
#### Light intensity
We investigated the highest publicly recorded light intensities we could find, over time.[13](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-13-1330 "It is plausible that the most intense light to exist (rather than to be be recorded) increased gradually but extremely fast at times, rather than discontinuously in a strict sense. This is because the intensity of a source is sometimes ramped up gradually in the lab (though for our purposes these are similar).") Our estimates are for all light, not just the visible spectrum.
##### Data
One of our researchers, Rick Korzekwa, collected estimated light intensities produced by new technologies over time into [this spreadsheet](https://docs.google.com/spreadsheets/d/19716LOwJPgr9oJjxV9f7pAh7GOb2Lkw0KkQtXK60XxE/edit?usp=sharing). Many sources lacked records of the intensity of light produced specifically, so the numbers are often inferred or estimated from available information. These inferences rely heavily on subject matter knowledge, so have not been checked by another researcher. Figures 2-3 illustrate this data.
###### Pre-1808 trend
We do not start looking for discontinuities until 1943, though we have data from beforehand, because our data is not sufficiently complete to distinguish discontinuous progress from continuous, only to suggest the rough shape of the longer term trend.
Together, focused sunlight and magnesium give us a rough trend for slow long term progress, from lenses focusing to the minimum intensity required to ignite plant material in ancient times to intensities similar to a camera flash over the course of at least two millenia. On average during that time, the brightest known lights increased in intensity by a factor of 1.0025 per year (though we do not know how this was distributed among the years).
Due to our uncertainty in the early development of optics for focusing sunlight, the trend from 424 BC to 1808 AD should be taken as the most rapid progress that we believe was likely to have occurred during that period. That is, we look at the earliest date for which we have strong verification that burning glasses were used, and assuming these burning glasses produced light that was just barely intense enough to start a fire. So progress may have been slower, if more intense light was available in 424 BC than we know about, however progress could only have been faster on average if burning glass (that could actually burn) didn’t exist in 424 BC, or if there were better things available in 1808 than we are aware, both of which seem less likely than that technology was better than that in 424 BC.
[](http://aiimpacts.org/wp-content/uploads/2019/04/LIDDetails.png)Figure 2: Estimated light intensity for some historic brightest artificial sources known to us. Note that the very earliest instances of a given type are not necessarily represented, for instance our understanding is that dimmer arc lamps existed in the early 1800s.
Figure 3: Close up of Figure 2, since 1800
##### Discontinuity measurement
We treat the rate of previous progress as an exponential between the burning glass in 424BC and the first argon candle in 1943. At that point progress has been far above that long term trend for two points in a row, so we assume a new faster trend and measure from the 1936 arc lamp. In 1961, after the trend again has been far surpassed for two points, we start again measuring from the first laser in 1960. See this project’s [methodology page](https://aiimpacts.org/methodology-for-discontinuity-investigation/) for more detail on what we treat as past progress.
Given these choices, we find one large discontinuity from the first argon candle in 1943 (~1000 years of progress in one step), and no other discontinuities of more than ten years since we begin searching in 1943.[14](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-14-1330 "See the <a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/\">methodology page</a> for more detail on how we calculate discontinuities.")
In addition to the size of these discontinuity in years, we have tabulated a number of other potentially relevant metrics [here](https://docs.google.com/spreadsheets/d/1iMIZ57Ka9-ZYednnGeonC-NqwGC7dKiHN9S-TAxfVdQ/edit?usp=sharing).[15](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-15-1330 "See<a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#discontinuity-data\"> our methodology page</a> for more details.")
###### Note on mercury arc lamp
The 1936 mercury arc lamp would be a large discontinuity if there were no progress since 1808. Our impression from various sources is that progress in arc lamp technology was incremental between their first invention at the beginning of the 19th century and the bright mercury lamps that were available in 1936. We did not thoroughly investigate the history and development of arc lamps however, so do not address the question of the first year that such lamps were available or whether such lamps represented a discontinuity.
###### Note on argon flash
The argon flash seems to have been the first light source available that is brighter than focused sunlight, after centuries of very slow progress, and represents a large discontinuity. As discussed above, because we are less certain about the earlier data, our methods imply a relatively high estimate on the prior rate of advancement, and thus a low estimate of the size of the discontinuity. So the real discontinuity is likely to be at least 996 years (unless for instance there was accelerating progress during that time that we did not find records of).
###### Change in rate of progress
Light intensity saw a large increase in the rate of progress, seemingly beginning somewhere between the arc lamps of the 30s and the lasers of the 60s. Between 424BC and 1943, light intensity improved by around 0.4% per year on average, optimistically. Between 1943 and 2008, light intensity grew by an average of around 190% per year.[16](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-16-1330 "See <a href=\"https://docs.google.com/spreadsheets/d/19716LOwJPgr9oJjxV9f7pAh7GOb2Lkw0KkQtXK60XxE/edit#gid=0\">spreadsheet</a> for calculations.")
The first demonstrations of working lasers seems to have prompted a flurry of work. For the first fifteen years, maximum light intensity had an average doubling time of four months, and over roughly five decades following lasers, the average doubling time was a year.[17](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-17-1330 "See <a href=\"https://docs.google.com/spreadsheets/d/19716LOwJPgr9oJjxV9f7pAh7GOb2Lkw0KkQtXK60XxE/edit#gid=0\">spreadsheet</a> for calculations.")
##### Discussion
###### Factors of potential relevance to causes of abrupt progress
*Technological novelty*
One might expect discontinuous progress to arise from particularly paradigm-shifting insights, where a very novel way is found to achieve an old goal. This has theoretical plausibility, and several discontinuities that we know of seem to be associated with fundamentally new methods ([for instance](https://aiimpacts.org/cases-of-discontinuous-technological-progress/), nuclear weapons came from a shift to a new type of energy, high temperature superconductors with a shift to a new class of materials for superconducting). So we are interested in whether discontinuities in light intensity are evidence for or against such a pattern.
The argon flash was a relatively novel method rather than a subtle refinement of previous technology, however it did not leverage any fundamentally new physics. Like previous light sources, it works by adding a lot of energy into a material to make it emit light in a relatively disorganized and isotropic manner. Achieving this by way of a shockwave from a high explosive was new.
It is unclear whether using an explosive shockwave in this way had not been done previously because nobody had thought of it, or because nobody wanted a shorter and brighter flash of light so much that they were willing to use explosives to get it.[18](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-18-1330 "Both the film industry and the explosives industry were publishing papers suggesting a need for very short and bright flashes of light for high speed photography in the 1910’s and 1920’s, but most of the work focused on repeatability, convenience, and total quantity of light, rather than peak power output or intensity.")
The advent of lasers did not produce a substantial discontinuity, but they did involve an entirely different mechanism for creating light to previous technologies. Older methods created more intense light by increasing the energy density of light generation (which mostly meant making the thing hotter), but lasers do it by creating light in a very organized way. Most high intensity lasers take in a huge amount of light, convert a small portion of it to laser light, and create a laser pulse that is many orders of magnitude more intense than the input light. This meant that lasers could scale to extremely high output power without becoming so hot that the output is that of a blackbody.
*Effort directed at progress on the metric*
There is a hypothesis that metrics which see a lot of effort directed at them will tend to be more continuous than those which are improved as a side-effect of other efforts. So we are interested in whether these discontinuities fit that pattern.
Though there was interest over the years in using intense light as a weapon[19](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-19-1330 "“<a href=\"https://en.wikipedia.org/wiki/Archimedes\">Archimedes</a>, the renowned mathematician, was said to have used a burning glass as a weapon in 212 BC, when <a href=\"https://en.wikipedia.org/wiki/Syracuse,_Sicily\">Syracuse</a> was besieged by <a href=\"https://en.wikipedia.org/wiki/Marcus_Claudius_Marcellus\">Marcus Claudius Marcellus</a>. The <a href=\"https://en.wikipedia.org/wiki/Roman_Navy\">Roman fleet</a> was supposedly incinerated, though eventually the city was taken and Archimedes was slain. The legend of Archimedes gave rise to a considerable amount of research on burning glasses and lenses until the late 17th century. ” “Burning Glass.” In <em>Wikipedia</em>, September 15, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Burning_glass&oldid=915774651\">https://en.wikipedia.org/w/index.php?title=Burning_glass&oldid=915774651</a>."), and for early photographers, who wanted safe, convenient, short flashes that could be fired in quick succession, there seems to have been relatively little interest in increasing the peak intensity of a light source. The US military sought bright sources of light for illuminating aircraft or bombing targets at night during World War II. But most of the literature seems to focus on the duration, total quantity of light, or practical considerations, with peak intensity as a minor issue at most.
The argon flash appears to have been developed more as a high peak power device than as a high peak intensity device.[20](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-20-1330 "The distinction here is between energy/time and energy/(time*area).") It did not matter if the light could be focused to a small spot, so long as enough light was given off during the course of an experiment to take pictures. Still, you can only drive power output up so much before you start driving up intensity as well, and the argon flash was extremely high power.
Possibly argon flashes were developed largely because an application appeared which could make use of very bright lights even with the concomitant downsides.
There seems to have been a somewhat confusing lack of interest in lasers, even after they looked feasible, in part due to a lack of foresight into their usefulness. Charles Townes, one of the scientists responsible for the invention of the laser, remarked that it could have been invented as early as 1930[21](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-21-1330 "“This raises the question: why weren’t lasers invented long ago, perhaps by 1930 when all the necessary physics was already understood, at least by some people?”<br><br>“The First Laser.” Accessed November 9, 2019. <a href=\"https://www.press.uchicago.edu/Misc/Chicago/284158_townes.html\">https://www.press.uchicago.edu/Misc/Chicago/284158_townes.html</a>. "), so it seems unlikely that it was held up by a lack of understanding of the fundamental physics (Einstein first proposed the basic mechanism in 1917[22](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-22-1330 "In 1917, <a href=\"https://en.wikipedia.org/wiki/Albert_Einstein\">Albert Einstein</a> established the theoretical foundations for the laser and the <a href=\"https://en.wikipedia.org/wiki/Maser\">maser</a> in the paper <em>Zur Quantentheorie der Strahlung</em> (On the Quantum Theory of Radiation)</p>
<p>“Laser.” In <em>Wikipedia</em>, November 4, 2019. <a href=\"https://en.wikipedia.org/w/index.php?title=Laser&oldid=924565157\">https://en.wikipedia.org/w/index.php?title=Laser&oldid=924565157</a>. ")). Furthermore, the first paper reporting successful operation of a laser was rejected in 1960, because the reviewers/editors did not understand how it was importantly different from previous work.[23](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-23-1330 "“Theodore Maiman made the first laser operate on 16 May 1960 at the Hughes Research Laboratory in California, by shining a high-power flash lamp on a ruby rod with silver-coated surfaces. He promptly submitted a short report of the work to the journal <em>Physical Review Letters</em>, but the editors turned it down. Some have thought this was because the <em>Physical Review</em> had announced that it was receiving too many papers on masers—the longer-wavelength predecessors of the laser—and had announced that any further papers would be turned down. But Simon Pasternack, who was an editor of <em>Physical Review Letters</em> at the time, has said that he turned down this historic paper because Maiman had just published, in June 1960, an article on the excitation of ruby with light, with an examination of the relaxation times between quantum states, and that the new work seemed to be simply more of the same.”</p>
<p>“The First Laser.” Accessed November 9, 2019. <a href=\"https://www.press.uchicago.edu/Misc/Chicago/284158_townes.html\">https://www.press.uchicago.edu/Misc/Chicago/284158_townes.html</a>.")
Although it seems clear that the scientific community was not eagerly awaiting the advent of the laser, there did seem to be some understanding, at least among those doing the work, that lasers would be powerful. Townes recalled that, before they finished building their laser, they did expect to “at least get a lot of power”[24](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-24-1330 "“Oral-History:Charles Townes (1991) – Engineering and Technology History Wiki.” Accessed November 9, 2019. <a href=\"https://ethw.org/Oral-History:Charles_Townes_(1991)\">https://ethw.org/Oral-History:Charles_Townes_(1991)</a>."), something which could be predicted with relatively straightforward calculations. Immediately after the first results were published, the general sentiment seems to have been that it was novel and interesting, but it was allegedly described as “a solution in search of a problem”.[25](https://aiimpacts.org/historic-trends-in-light-intensity/#easy-footnote-bottom-25-1330 "Bertolotti, Mario. <em>The History of the Laser</em>. CRC Press, 2004. <a href=\"https://books.google.co.uk/books?id=JObDnEtzMJUC&pg=PA262&lpg=PA262&dq=laser+%22solution+in+search+of+a+problem%22&source=bl&ots=tzL8lw1cU5&sig=ACfU3U3uHXVfadjwktCm1SmPU7oYz66mlA&hl=en&sa=X&redir_esc=y#v=onepage&q=laser%20%22solution%20in%20search%20of%20a%20problem%22&f=false\">https://books.google.co.uk/books?id=JObDnEtzMJUC&pg=PA262&lpg=PA262&dq=laser+%22solution+in+search+of+a+problem%22&source=bl&ots=tzL8lw1cU5&sig=ACfU3U3uHXVfadjwktCm1SmPU7oYz66mlA&hl=en&sa=X&redir_esc=y#v=onepage&q=laser%20%22solution%20in%20search%20of%20a%20problem%22&f=false</a>") Similar to the argon flash, it would appear that intensity was not a priority in itself at the time the laser was invented, and neither were any of the other features of laser light that are now considered valuable, such as narrow spectrum, short pulse duration, and long coherence length.
Most of the work leading to the first lasers was focused on the associated atomic physics, which may help explain why the value of lasers for creating macroscopic quantities of light wasn’t noticed until after they had been built.
In sum, it seems the argon flash and the laser both caused large jumps in a metric that is relevant today but that was not a goal at the time of their development. Both could probably have been invented sooner, had there been interest.
###### Predictability
One reason to care about discontinuities is because they might be surprising, and so cause instability or problems that we are not prepared for. So we are interested in whether discontinuities were in fact surprising.
It is unclear how predictable the large jump from the argon flash was. Our impression is that without knowledge of the field, it would have been difficult to predict the huge progress from the argon flash ahead of time. High explosives, arc lamps, and flash tubes all produced temperatures of around 4,000K to 5,000K. Jumping straight from that to >25,000K would probably have seemed rather unlikely.
However as discussed above, it seems plausible that the technology allowing argon flashes was relatively mature earlier on, and therefore that they might have been predictable to someone familiar with the area.
Notes
-----
|
efa59baf-2407-4790-96a1-f672ad1b5986
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Campaign for AI Safety: Please join me
I will start by saying that I generally agree with Yudkowsky's position on AI. We must proceed with extreme caution. We must radically slow down AI capability advancement. We must invest unfathomable amounts of resources in AI alignment research. We need to enact laws and treaties that will help keep it all together for as long as possible and hopefully we figure things out in time.
The laughter at the recent White House press conference at the question about Yudkowsky's argument, indicates far we are in public debate from a sensible position of caution.
But I am hopeful that we can change that. Few people laugh at nuclear weapons now. We are a species capable of cooperation and of taking things seriously. As the saying goes:
> "First they ignore you, then they laugh at you, then they fight you, then you win."
What is missing is public understanding of the dangers of misaligned / unaligned AI. Democracy does not work in darkness. People must know the dangers, the uncertaintly, and of ways they can contribute.
That's why I am proposing a campaign on public awareness of x-risk from AI. So far, it's just me and my wife. Please join me, especially if you work in advertising, marketing, PR, activism, politics, law, etc., if you know how to make a website, if you want to create PR materials, meet journalists, do accounting, fund-raising, etc.
Please share this with people who do not read Less Wrong but we are freaked out and want to do something.
I do not know exactly how this campaign will run, which countries to focus on. I am myself only human and can contribute very little of the total required effort. My background is in consulting and market research and I run a market research company. Personally, at this stage, I can best contribute by coordination and facilitating operations.
We need people, money, expertise, patience, etc. Please join: https://campaignforaisafety.org/.
|
0234e328-00ef-4faa-8f26-8ca78d38bd92
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Should LW have an official list of norms?
To get this written and shared quickly, I haven't polished it much and the English/explanation is a little rough. Seemed like the right tradeoff though.
Recently, a few users have written their sense of norms for rationalist discourse, i.e. Basics of Rationalist Discourse and Elements of Rationalist Discourse. There've been a few calls to adopt something like these as site norms for LessWrong.
Doing so seems like it'd provide at least the following benefits:
* It's a great onboarding tool for new users to help them understand the site's expectations and what sets it apart from other forums
* It provided a recognized standard that both moderators and other users can point to and uphold, e.g. by pointing out instances where someone is failing to live up to one of the norms
* Having it be official is a good reminder to all site users to live up to the best kind of discussion[1]
My current feeling is creating some lists as an onboarding tool seems good, but doing anything like declaring a list of Site Norms is fraught.
The True Norm of LessWrong is that with each motion, you should aim towards truth. I think it's actually worth quoting the entire 12th virtue here (emphasis added).
> Before these eleven virtues is a virtue which is nameless.
>
> Miyamoto Musashi wrote, in The Book of Five Rings:
>
> > The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.
>
> Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.
>
> If you fa
|
97793dec-78a1-437b-a7d2-3ca70dc177fa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Repost] The Copenhagen Interpretation of Ethics
Because the original webpage (and domain) is down, and it takes about a minute (including loading time) for Wayback Machine to give me the page, I've decided to repost this essay here. I consider it an essay that seems core to 2010s rationalist discourse.
----------------------------------------
The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.
----------------------------------------
In 2010, New York randomly chose homeless applicants to participate in its Homebase program, and tracked those who were not allowed into the program as a control group. The program was helping as many people as it could, the only change was explicitly labeling a number of people it wasn’t helping as a “control group”. The response?
> “They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”
----------------------------------------
On March 11th, 2012, the vast majority of people did nothing to help homeless people. They were busy doing other things, many of them good and important things, but by and large not improving the well-being of homeless humans in any way. In particular, almost no one was doing anything for the homeless
|
fe7be38a-c16b-4fad-a27a-2c0cf7b470c9
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Dario Amodei leaves OpenAI
This is a linkpost for <https://openai.com/blog/organizational-update/>
> “We are incredibly thankful to Dario for his contributions over the past four and a half years. We wish him and his co-founders all the best in their new project, and we look forward to a collaborative relationship with them for years to come,” said OpenAI chief executive Sam Altman.
>
>
Anyone know what the new project is?
|
4a294b6a-4aa4-4c2b-9a24-563366aa2e90
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] The Mechanics of Disagreement
Today's post, The Mechanics of Disagreement was originally published on 10 December 2008. A summary (taken from the LW wiki):
> Reasons why aspiring rationalists might still disagree after trading arguments.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Two Visions of Heritage, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
d3d3c776-1caf-45b6-a251-6cc4cab7d1ec
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Transhumanism as Simplified Humanism
[Frank Sulloway](http://www.robertboynton.com/?art_id=119) once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, *Is that really true? How radical!* Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”
Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?
Oh, and by the way: This is not a trick question.
I answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t *always* the best choice, but sometimes it *is.*
I won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.
If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is *not* possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?
The important thing to remember, which I think all too many people forget, is that *it is not a trick question.*
Transhumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what *exact* age does the term in the utility function go from positive to negative? Why?
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.
You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.
Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellvue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?
Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.
But – you ask – *where does it end?* It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?
Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.
Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable *if* it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.
So that is “transhumanism” – loving life without special exceptions and without upper bound.
Can transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.
Then why have a complicated special name like “transhumanism” ? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.
But a moral philosophy should not *have* special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.
There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a *bad* thing?
Could the moral question really be just that simple?
Yes.
---
This document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.
Eliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .
If you think the world could use some more rationality, consider blogging this page.
Praise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/simplified/](https://eyudkowsky.wpengine.com/singularity/simplified/) .
|
e9348ef8-f5ba-44e6-a176-5457011f4a56
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Does this game have a name?
There is a game where one player wants to predict the action of the other, and the other player wants them to fail (as a fixed sum game)
It has payoffs
1,-1| -1,1
-1,1|1,-1
Or equivalent.
I believe that it has a nash equilibrium of choosing randomly.
|
a6980ad5-c575-4de0-8eef-391fac1a5013
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[AN #167]: Concrete ML safety problems and their relevance to x-risk
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
Please note that, while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
Unsolved Problems in ML Safety (Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt) (summarized by Dan Hendrycks): To make the case for safety to the broader machine learning research community, this paper provides a revised and expanded collection of concrete technical safety research problems, namely:
1. Robustness: Create models that are resilient to adversaries, unusual situations, and Black Swan events.
2. Monitoring: Detect malicious use, monitor predictions, and discover unexpected model functionality.
3. Alignment: Build models that represent and safely optimize hard-to-specify human values.
4. External Safety: Use ML to address risks to how ML systems are handled, including cyberwarfare and global turbulence.
Throughout, the paper attempts to clarify the problems’ motivation and provide concrete project ideas.
Dan Hendrycks' opinion: My coauthors and I wrote this paper with the ML research community as our target audience. Here are some thoughts on this topic:
1. The document includes numerous problems that, if left unsolved, would imply that ML systems are unsafe. We need the effort of thousands of researchers to address all of them. This means that the main safety discussions cannot stay within the confines of the relatively small EA community. I think we should aim to have over one third of the ML research community work on safety problems. We need the broader community to treat AI safety at least as seriously as safety for nuclear power plants.
2. To grow the ML safety research community, we need to
|
822018b8-8f1e-4962-9893-2344d7b95c38
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Should you have children? All LessWrong posts about the topic
Currently, there are 26 LessWrong forums posts tagged "family planning", the oldest from 2010. For a writing project, I read all of them. However, I realized that this collection may interest other people, so I publish it as a stand-alone post. I summarize the posts and their comments in the following. Feedback and comments are welcome.
2010/2011
* In "rationality and being child-free" (20th Nov 2010), InquilineKea asks "So how do you think being child-free relates to rationality/happiness?" In the comments, some people discuss the effect of having children on parental happiness and life quality, or state their personal preferences about having children.
* The arguments against children include the time-intensity of having children (and the personal preference for autonomy about your time usage), emotional aversion and feelings "unprepared".
* Arguments on the pro-side include: Liking children, having children is seen as a public-good contribution, and people discuss whether rationalists should have kids to spread their culture (or whether children are desirable memecarriers for their parents), and some kind of selective pro-natalism according to which the "the future world would likely be a better place if wealthy, educated and responsible people have more kids."
* Both on the contra-side and on the pro-side, it is noted that emotions (like insecurity) or desire are very powerful in determining decisions. It is also noted that this can lead to justify an emotion, urge or decision.
* It is also told how there can be strong social pressure against having children in a certain milieu (in the 1980s) that both came from seeing children (or rather, people) as a negative factor in the world, but also "the world to be too horrible to bring children into." The counter-position to both claims is then mentioned (the author sees his own children as making the world better and the world to be better than ever).
* In June 2011, InquilineKea considers "Mentorin
|
ee8ab8e4-351c-4c82-a1c4-894c187fc8af
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
AI Timeline predictions in surveys and statements
Surveys seem to produce median estimates of time to human-level AI which are roughly a decade later than those produced from voluntary public statements.
Details
-------
We [compared](http://aiimpacts.org/miri-ai-predictions-dataset/ "MIRI AI Predictions Dataset") several surveys to predictions made by similar groups of people in the [MIRI AI predictions dataset](http://aiimpacts.org/miri-ai-predictions-dataset/ "MIRI AI Predictions Dataset"), and found that predictions made in surveys were roughly 0-2 decade later. This was a rough and non-rigorous comparison, and we made no effort to control for most variables.
Stuart Armstrong and Kaj Sotala make a similar comparison [here](http://lesswrong.com/r/discussion/lw/gta/selfassessment_in_expert_ai_predictions/), and also find survey data to give later predictions. However they are comparing non-survey data largely from recent decades with survey data entirely from 1973, which we think makes the groups too different in circumstance to infer much about surveys and statements in particular. Though in the MIRI dataset (that they used), very early predictions [tend to be](http://aiimpacts.org/miri-ai-predictions-dataset/ "MIRI AI Predictions Dataset") more optimistic than later predictions, if anything, so if they had limited themselves to predictions from similar times there would have been a larger difference (though with a very small sample of statements).
Relevance
---------
**Accuracy of AI predictions**: [some biases](http://aiimpacts.org/short-prediction-publication-biases/) which probably exist in public statements about AI predictions are likely to be smaller or not apply in survey data. For instance, public statements are probably more likely to be made by people who believe they have surprising or interesting views, whereas this should much less influence answers to a survey question once someone is taking a survey. Thus comparing data from surveys and voluntary statements can tell us about the strength of such biases. Given that median survey predictions are rarely more than a decade later than similar statements, and survey predictions seem unlikely to be strongly biased in this way, median statements are probably less than a decade early as a result of this bias.
|
a69acbe2-1aa6-4ef2-9b5d-6e6db9b34f39
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nature's Hidden Processes and Structures (Novum Organum Book 2: 1-9)
This is the ninth post in the Novum Organum sequence. For context, see the sequence introduction. For the reading guide, see earlier posts in the sequence.
We have used Francis Bacon's Novum Organum in the version presented at www.earlymoderntexts.com. Translated by and copyright to Jonathan Bennett. Prepared for LessWrong by Ruby.
Aphorism Concerning the Interpretation of Nature: Book 2: 1–9
by Francis Bacon
[[Bacon makes a distinction between human power vs human knowledge which may very roughly analogized as engineering vs science, i.e. being able to do things vs knowing things.]]
1. What human power does and is intended for is this:
* For a given body, to create and give to it a new nature (or new natures)—·e.g. melting gold or cooking chicken or dissolving salt in water·.
What human knowledge does and is intended for is this:
* For a given nature, to discover its form, or true specific differentia. . . .
·i.e. the features that a thing must have if it is to qualify as belonging to this or that natural kind, e.g. the features of gold that differentiate it from metal in general·. [Bacon adds two even more obscure technical terms, semi-apologising for them; they don’t occur again in this work. Then:] Subordinate to these primary works are two secondary and less important ones. Under the ‘power’ heading: turning concrete bodies into something different, so far as this is possible—·e.g. turning lead into gold, if this can be done·. Under the ‘knowledge’ heading:
(i) in every case of generation and motion, discovering the hidden process through which the end-state form results from the manifest efficient ·cause· and the manifest material; and
(ii) discovering the hidden microstructure of bodies that are not changing.
·An example of (i): the wax around the wick of a lighted candle melts. Flame is the efficient cause, wax is the material, and meltedness is the end-state form. But ‘flame’ and ‘wax’ stand for items that are manifest, obvious, out there on th
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.