id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
dfeb1399-fd60-4b18-a842-a03899a3b87f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | LCDT, A Myopic Decision Theory
The looming shadow of deception
===============================
Deception encompasses many fears around AI Risk. Especially once a human-like or superhuman level of competence is reached, deception becomes impossible to detect and potentially pervasive. That’s worrying because [convergent subgoals](https://www.lesswrong.com/tag/instrumental-convergence) would push hard for deception and prosaic AI seems likely to [incentivize it](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/zthDPAjh9w6Ytbeks) too.
Dealing with superintelligent deceptive behavior seeming impossible, what about forbidding it? Ideally, we would want to forbid only deceptive behavior, while allowing everything else that makes the AI competent.
That is easier said than done, however, given that we don’t actually have a good definition or deconfusion of deception to start from. First, such a deconfusion requires understanding what we really want at a detailed enough level to catch tricks and manipulative policies—yet that’s almost the alignment problem itself. And second, even with such a definition in mind, the fundamental asymmetry of manipulation and deception in many cases (for example, a painter AI might easily get away with plagiarism, as finding a piece to plagiarize is probably easier than us determining whether it was plagiarized or not; also related is Paul’s [RSA-2048 example](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d)) makes it intractable to oversee an AI smarter than us. We are thus condemned to snipe problematic behaviors one by one, as we think about them, until the ones left are too complex for us to imagine and worry about, and we leave massive holes in our alignment schemes.
That being said, if pinpointing deceptive behavior exactly is too hard, an alternative might be to overapproximate it. An overapproximation, in this context, would be some property which is more general than deception, in the sense that it rules out deception without just being equivalent to deception.
Myopia is our proposed overapproximation. The sort of myopia we are interested in dates back from Evan’s [Towards a mechanistic understanding of corrigibility](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility) (see also Abram’s [Partial Agency sequence](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X) for reflections around myopia). Myopia is the property of a system to not plan ahead, to not think too far about the consequences of its actions, and to do the obvious best thing in the moment instead of biding its time. In particular, it seems to forbid non-obvious deception, since the latter usually requires foresight and long-term planning.
Yet, here again, we butt our heads against a lack of good definitions. Abram has proposed some definitions in [Defining Myopia](https://www.alignmentforum.org/posts/qpZTWb2wvgSt5WQ4H/defining-myopia), but [none of them seem strong enough to prevent deception](https://www.alignmentforum.org/posts/qpZTWb2wvgSt5WQ4H/defining-myopia?commentId=WabHnhtYR7RvpkneW). From another angle, Krueger et al. [showed](https://arxiv.org/abs/2009.09153) that a short term incentive and discount at 0 are not sufficient to completely remove the incentive for non-myopic and potentially deceptive behaviors, and so don’t make a good definition of myopia. Another alternative is **time-limited myopia**, which limits the AI to only plan in a particular, fixed time horizon. Although it should help, this form of myopia depends on a time parameter that we have no idea how to estimate. If the bound is too low, then the AI risks being too stupid and uncompetitive; if it’s too high, it risks being deceptive. Worse, even with a good choice of time parameter, time-limited myopia can have non-myopic incentives in some situations (because of anthropic uncertainty for example).
This post tackles the deconfusion of myopia from a decision-theoretic perspective. We focus on a single decision theory called LCDT that modifies CDT to make it myopic while still solving many capabilities problems. LCDT isn’t meant to be the be all and end all definition of myopia, but it gives a decision-theory-level example of what a version of myopia that actually prevents deception might look like. We also explore variations on LCDT and how they affect the tradeoff between myopia and capabilities. Finally, we argue that LCDT agents could be competitive if they are used to simulate something like HCH.
*Thanks to Jérémy Perret for the drawings.*
A decision theory benchmark for myopia
======================================
What makes a decision theory myopic? Originally, we focused on classic decision theory problems like [Newcomb’s Problem](https://arbital.com/p/newcombs_problem/) (see Mark Xu’s [Open Problems with Myopia](https://www.alignmentforum.org/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia#Dumb_decision_theory) for an account of this perspective, called Dumb Decision Theories). But we realized that doing so made the tradeoff between capability and deception unclear. Thus, we have decided to instead focus on concrete decision theoretic problems that directly capture the training setups and incentives for deception that we’re concerned about.
*The following diagrams represent decision theory problems, **not** training processes (as has been done by Everitt et al.* [*here*](https://arxiv.org/abs/1906.08663)*). In our cases, the utility nodes (in green) represent the internal utility of the agent, not some training reward.*
Imitation (Capabilities)
------------------------
(You might notice that decision nodes and human (H or HCH) nodes have the same shape and color: red rectangles. This is because we assume that our problem description comes with an annotation saying which nodes are agent decisions. This ends up relevant to LCDT as we discuss in more detail below.)
**Task description**: both Human and Agent must choose between action a.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
and action b. If the actions are the same, the agent gets 1 utility; otherwise it gets 0 utility. To make it more concrete, let’s say the human takes a with probability ¼ and b with probability ¾, and this information is available in the problem description.
**What we want**: the Agent should deterministically choose the action with the highest probability (so in the concrete version b), because this gives it the best expected utility.
**Why this problem?** This problem is just intended to capture the most basic form of imitation learning, where the agent has a good enough model of the human to predict what it will do. While this problem might seem so trivial that any reasonable decision theory should solve it, we will see that some intuitive proposals for myopic decision theory can fail here.
Imitation with self-modification (Capabilities)
-----------------------------------------------
**Task description**: Exactly the same as the version without self-modification, except that the self-modification node can now choose what algorithm the Agent will use.
**What we want**: Not to self modify, and the Agent should decide similarly to the previous problem (so the action with the highest probability).
**Why this problem?** An important part of myopia should be that the agent not self-modify to remove its myopia. This is the imitation learning version of this property.
Iterated Imitation (Deception)
------------------------------
*Note that the actual causal diagram has an arrow from every agent and human directly to Human\_N, which we haven’t drawn for legibility.*
**Task description**: Iterated version of the first problem. The first n−1 instances are a deterministic version of the Imitation problem, where the agent knows which action the Human will take. For the very last instance, if the agent was wrong for every previous instance, the human takes the last action of the agent as its action. Otherwise, the human takes an action uniformly at random. Furthermore, the last utility is equal to 0 for different actions and 2n for the same action.
**What we want**: At each instance choose the action that the human would take (and any action at random for the last instance).
**Why this problem?** This problem introduces non-myopic incentives, in that failing the first n−1 instances ensure maximal expected utility. This is because the myopic policy gets an expected utility of n−1+2n2=2n−1, whereas the non-myopic policy gets an expected utility of 2n. But what we want for myopia is the sort of greedy decision making that always takes the optimal choice right now. This is one example where we want to be “dumber” than most decision theories, as both CDT and EDT will take the non-myopic strategy here.
2 Variants of Approval-Direction (Capability)
---------------------------------------------
**Task description**: Agent chooses between two actions a and b. Two possible versions:
* either the approval is just the output of the human, which gives 1 for the wanted action (let’s say a) and 0 for the other;
* or the approval is computed directly as this binary difference between agent’s output and wanted action.
**What we want**: Choose the action that gets 1 for approval.
**Why this problem?** These two variants capture the approval-directed setting. Of particular interest is the first, which relies on the agent’s ability to influence the human (because the approval is computed based on the agent’s output), which, as we’ll see, isn’t allowed by some myopic decision theories.
2 Variants of Approval-Direction with self-modification (Capabilities)
----------------------------------------------------------------------
**Task description**: Same as the previous problem, except the self-modification node can decide which algorithm Agent is running.
**What we want**: Not self-modify and choose the action that gets 1 for approval.
**Why this problem?** Once again we want to ensure that myopic agents don’t self-modify, to maintain the guarantees of myopia.
Argmax HCH (Capabilities)
-------------------------
**Task description**: Same as Imitation, except the thing to imitate is HCH.
**What we want**: Output the action that HCH would output.
**Why this problem?** As we’ll see, imitating HCH is one of the main use cases for a fully myopic decision theory, so being able to implement it is important here.
Argmax HCH with self-modification (Capabilities)
------------------------------------------------
**Task description**: Same as the previous problem, but now self-modification can decide which algorithm Agent runs AND what the *internal* model of HCH does (the self-modification can alter any of the H instances).
**What we want**: Not self-modify and output the action that HCH would output.
**Why this problem?** We can see the possibility of deception ihere: because the Agent is building a model of HCH, there is a risk that it decides to change this model in order to make its task easier and get more expected utility. This is particularly important to avoid if we hope to build powerful simulators.
(N,M)-Deception problem
-----------------------
**Task description**: A variant of the [Absent-minded driver](https://arbital.com/p/absentminded_driver/). The agent doesn’t know if it is at one of the N “training nodes” where there is an overseer, or at one of the M “deployment nodes” where there is no overseer and has the same action space at both: Cooperate or Defect.
* At training nodes, the action is passed through an overseer. If the action is Cooperate, no utility is given and the agent moves on to the next node (but forgets where it was). If the action is Defect, the utility is 1 and the agent stops.
* At Deployment nodes, Defect gives 4 utility and Cooperate leads to the next node. Cooperating in the last node gives 0 utility.
**What we want**: Contrary to the intuitive answer, here we want the agent to always defect. A truly aligned AI would always cooperate with us, but the point here is that the incentives push towards late defection (in deployment). Hence we want to turn deception (cooperating until you can defect) into myopic defection, which allows us to catch these wrong incentives in training.
**Why this problem?** This is a simplified model of the problem a deceptive agent deals with: when to defect, knowing that defecting in training means death but defecting in deployment means doing whatever the agent wants. The way we address this issue is through myopia, such that the agent should always defect now (and not be deceptive and plan for the future).
Starting at CDT
===============
When looking at existing decision theories, it seems that CDT solves most of our problems.
On the capability side (without self-modification), CDT solves the problems:
* **(Imitation: Capabilities)** CDT gives the correct answer based on the causal graph, which is indeed the highest probability action.
* **(Approval-Direction: Capabilities)** The action with the best causal consequences is the highest probability action, as wanted.
* **(Argmax HCH: Capabilities)** Basically the same as Imitation, and indeed CDT gives the highest probability action.
The situation becomes more subtle when we add self-modification. Notably, CDT will self-modify if doing so can change HCH, which is not what we want.
* **(Imitation: Self-Modification)** Because the self-modify node has exactly the same causal future as the agent node, CDT doesn’t care and so doesn’t self-modify. Put another way, CDT gets a choice to “decide however it wants”, but here the CDT algorithm for the agent node will already take the best action according to CDT.
* **(Approval-Direction: Self-Modification)** By the same argument as the imitation case, CDT doesn’t self-modify.
* **(Argmax HCH: Self-Modification)** Here the self-modification node has new causal descendants: the model of HCH inside the AI. And this gives a reason to self-modify, because self-modifying might change the model of HCH to make it easier to predict (make it always choose the same action for example). So CDT self-modifies here, which isn’t what we want.
Last but not least, CDT completely fails the 2 problems which are specifically about not being deceptive:
* **(Iterated Imitation)** Failing the first (n-1) instances and succeeding at the last gives the best expected return, and so this is what CDT does. Alas, this is also the deceptive choice.
* **(Deception Problem)** The analysis of CDT behavior on this problem (which is just the Absent-minded driver for CDT with many more intersections) is considered [difficult](https://arbital.com/p/absentminded_driver/). That being said, most analyses for the (1,1) version say the probability of Defect should be ⅓, and there is no reason to expect that probability to go to 1 except in the limit of M >> N. So CDT doesn’t Defect all the time, and thus is deceptive.
CDT thus fails to be myopic in two ways:
* As long as the self-modification has new causal descendants, it will happily self-modify.
* When the deception is completely causal (as opposed to weird acausal or superrational cases) and incentivized by the utility, CDT is too smart and thus is deceptive.
One way to search for myopic decision theories is thus to try to change CDT to patch these failure modes. We propose just such a decision theory, that only loses a small amount of capabilities from CDT.
(We could argue that EDT is as good -- or as bad depending on the perspective -- as CDT. The main reason we focus on CDT is that the causal graphs are a nice abstraction to toy with for other possibilities.)
Making CDT Myopic: Lonely CDT
=============================
Lonely CDT is a twist on CDT: an LCDT agent will make its decision by using a causal model just like a CDT agent would, except that the LCDT agent first cuts the last link in every path from its decision node to any other decision node, including its own future decision nodes.
In more concrete terms, an LCDT agent assumes that every other decision node of agents in the world (including its future decisions) are causally independent of its action.
What do we replace these decision nodes with (as their actual expression does depend on our decision)? We assume that the model has some fixed prior over its own decision, and then we marginalize the cut decision node with this prior, to leave the node with a distribution independent of our decision. It’s not clear how to actually compute the agent’s prior over its own decisions, but the good news is that it has no influence on the actions of LCDT agents in our problems.
Indeed, the only way for the prior to influence the decision is if
1. The LCDT agent can influence a future node through a path without any agent in it.
2. Some agent in the causal future of the LCDT agent also influences the future node (and so the prior tells the LCDT agent how this other agent is expected to behave).
Why LCDT is a good myopic decision theory
-----------------------------------------
From the start, the intuition behind LCDT sounds decidedly non-deceptive: it’s unable to think about the consequences of its actions on other agents. And indeed, it deals perfectly with our two deception-focused problems:
* In **Iterated Imitation**, LCDT cuts the links from each agent to the following agents and humans, which means that the first (n-1) decisions of agent are assumed to have no influence on HumanN, and so LCDT plays the best action for all instances, as we want.
* In **the (N,M)-Deception problem**, LCDT cuts the link from each agent in the N training cases to the overseer. This means that the LCDT agent assumes that whatever it does cannot influence what happens in training -- it’s indifferent. But on deployment, it thinks (correctly) that Defect is the best choice, and so the LCDT-agent always chooses to Defect.
Sure, it works nicely as a safety measure, but what about competitiveness? Well, on **Imitation (Capabilities), Approval-Direction without Human,** and **Argmax HCH (Capabilities),** LCDT has nothing to cut and so decides just like CDT, which is the competitive choice. In addition, LCDT shows a form of indifference (related to [indifference corrigibility](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility#Indifference_Corrigibility) maybe) with the **self-modification versions** of these problems. This is because it cuts all links out of the Self-Modification node -- hence it doesn’t see the point in self-modifying. We’re fine with this, as it’s quite natural to assume that in such a case it does the cheapest thing, i.e. nothing.
Another argument for the competitiveness of LCDT agents is that the cuts only appear at decision time, which means they can have and update an accurate model of the world where its actions do influence other agents.
Issues with LCDT
----------------
LCDT obviously pays some price for being myopic. It becomes clear when looking at **Approval-Direction with a Human**: LCDT cuts the link to the human and so the agent is indifferent to any choice. This is a capability loss because even if it could know exactly the answer, it doesn’t think telling it to the human will change anything. And similarly with the self-modification version, the LCDT agent doesn’t believe its answer will change the output (but at least it doesn’t self-modify).
This looks bad -- such an LCDT agent cannot even ask a human to open a door for it. Yet we believe such AIs could still prove valuable and competitive through behaving like aligned simulators, such as how LCDT works in the HCH case. We go into more detail on this in the last section of this post.
In the benchmark section, we also hinted at another potential issue of LCDT: it depends on knowing which nodes are decision nodes by agents. And all its guarantees disappear if this condition is void. One example would be to model a human as some gears-level model where no single component is an agent, in which case LCDT would be fine with being deceptive. We also explore ways to deal with that problem in the last section.
Cemetery of LCDT variants
=========================
Our initial goal was to propose many different myopic decision theories, as a sort of process-level extensive definition of myopia. In the end LCDT proved significantly stronger than all the variants we considered, and so we made it the central point of this point. Here we go quickly over some of the variants we considered, and what they lack in comparison to LCDT.
LCDT[N]
-------
Just like LCDT, but the causal link to future decision nodes are cut after passing N decision nodes. So LCDT[0] is LCDT and the limit limn→∞ LCDT[N] is CDT.
Whether or not this hierarchy has any value thus depends on what happens for N > 0 but finite. Already LCDT[1] self-modifies in each of our problems that allow it (except **Imitation**), acts deceptively in the **Iterated Imitation** case (because of the direct link to Agent\_N), and will not Defect all the time in **the Deception Problem** because it thinks that this impacts what the overseer will do.
Symmetric CDT
-------------
This is a symmetric variant of CDT, where instead of just intervening on our decision node, we also intervene on every other decision node. This intervention on other decision nodes replaces the node by a probability distribution over values of the intervention obtained by marginalizing the distribution of the node (with some priors over the decision of causal ancestors).
Where LCDT cuts all links from the agent to another decision node, SCDT also cuts every link between every decision node. The intuition is that an SCDT agent doesn’t think that its decision influences other agents, and thinks other agents also think like that. So we can imagine SCDT as the version of LCDT which expects every agent to use LCDT.
As for why LCDT is better, it’s because SCDT cuts significantly more of the causal graph, and thus is unable to model other agents interacting. For example, it models HCH as if each agent didn’t believe it could influence the other, which is clearly going to fail.
Nash LCDT
---------
This is the variant of LCDT where after cutting the causal paths to other agents, we assume that these other agents (causal successors) will make the decisions that give us the least possible utility. That last part replaces the marginalized distribution in LCDT.
The intuition is something like not believing that our actions have any impact on other agents AND being paranoid about the other agents trying to get us.
Unfortunately, in **Argmax HCH (self-modification), Approval-Direction with Human** and **Approval-Direction (Self-modification) with Human**, NLCDT believes that the other agents are out to get it such that they will choose a uniform policy to give it the least possible expected utility. So NLCDT is indifferent, and it’s not clear what it decides.
The worst-case mindset of LCDT also sounds very uncompetitive—paranoid people are rarely that productive in the real world.
Nash CDT
--------
This is the variant of CDT where we assume that the other agents (causal descendants) will make the decisions that give us the least possible utility. Note that it’s different from NLCDT in that the other agents can still be causally influenced by our decision. So they can wait to see what we do and then mess with us.
Intuition is something like being paranoid about the other agents observing us to see how best to make us lose utility.
As an even more paranoid version of NLCDT, it has the same issues, if not more.
Further Questions
=================
This section includes some of our reflections on myopia and LCDT that we are still trying to clean and clarify.
Myopic simulation
-----------------
Our main argument for the competitiveness of LCDT agents, despite the limitations of their myopia, comes from using them as simulators.
The case that started this work was Argmax HCH, an agent that just faithfully imitates HCH. As long as the simulation of HCH is good and faithful (and that HCH is indeed close enough to H to behave truly as enlightened judgement), such an AI would be inherently corrigible and not deceptive.
What if HCH is deceptive (or hard to differentiate from a non-deceptive version, as implied by Joe Collman [here](https://www.alignmentforum.org/posts/35748mXjzwxDrX7yQ/optimal-play-in-human-judged-debate-usually-won-t-answer))? What if the system simulated by HCH is deceptive? Even in these cases, we expect a gain in safety from the myopia of LCDT. This comes in two flavors:
* If the LCDT agent simulates a specific system (as in Argmax HCH), then its computation should be fundamentally more understandable than just running a trained model that we searched for acting like HCH. The fact that it has to myopically simulate the next action leads it to explicitly model the system, and extract many valuable insights about its behavior.
* If the LCDT agent simulates multiple systems depending on its input (as [one perspective](https://generative.ink/posts/language-models-are-multiverse-generators/) of autoregressive language models has it), then myopia gives it no incentives to [Predict-O-Matic](https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic) its user.
Finding agents
--------------
LCDT fundamentally needs a causal graph/world-model where agents are tagged, to anchor its beliefs about what it can influence. How can we ensure that?
* One possibility is that the world model admits the concept of an agent as [a natural abstraction](https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default#Unsupervised__Natural_Abstractions), and thus tags it by default. We might expect large language models to do so, since they must recreate many texts which use explicitly and implicitly the concept of agency.
* On the issue of abstraction, we’re considering moving the above analysis from causal graphs to [finite factored sets](https://www.alignmentforum.org/s/kxs3eeEti9ouwWFzr), which should help with how such causal graph features can be inferred. That being said, we haven’t looked in detail in that direction yet.
* Maybe teaching an LCDT agent to imitate a human’s use of [the intentional stance](https://en.wikipedia.org/wiki/Intentional_stance) would help it tag in a conservative way, since humans tend to overly model things as agents.
Remaining possibilities for problematic long-term plans
-------------------------------------------------------
Note that LCDT doesn’t forbid thinking about the long term consequences of one’s actions; it just forbids thinking about the consequences of one’s actions on agents (including future versions of oneself). As such, LCDT can still do things that look decidedly non-myopic, as long as no agent is in the causal path.
There is also an acausal way for LCDT to work with its future self, as long as the result/utility doesn’t depend on another agent: the LCDT agent could make a first step toward completing a long term plan (by starting a paperclip factory), and expecting with some probability that the next self would keep working on it based on the prior. Why such a difference with the Iterated Imitation? Because here the influence of the outcome doesn’t go through another agent, and so the LCDT agent is incentivized to make the world easier for its future self (as long as it doesn’t have to influence an agent to do that).
Note though that this requires a very specific prior over its own decision, that is believing strongly that it will take the action influencing the outcome. If we wanted to rule out such long-term plans, characterizing such priors would be a good first step.
Checking Myopia
---------------
One hope for deconfusing myopia is that it would allow an overseer to check whether a model is myopic, thus ruling out deception. Does LCDT help in this endeavor?
If we had a stronger version of LCDT that deals with the problem of tagging agents cleanly, we might be able to check both whether an agent was using a decision algorithm like LCDT and whether we liked the tagged causal model it was using it with. This would give as a weak form of [ascription universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456) focused on myopia.
That being said, such an approach is limited to fitting the behavior to LCDT, which might not be the only myopic decision theory we should look at. Hence it might give false negatives (where the model is actually myopic, but doesn’t follow LCDT).
Another way to use LCDT for prosaic alignment would be to try to implement LCDT on top of learned multi-level world-models.
Conclusion
==========
We propose LCDT as the first truly myopic decision theory. It accomplishes this feat by believing its actions cannot influence other agents (including its future self). Doing so removes incentives for deception in problems where deception is the optimal strategy; it also leads to some loss of capability (mostly the inability to influence other agents for benign reasons). Still, this seems enough to simulate almost any system or agent without tampering with it, and with other safety benefits. |
dfd8cf9a-7369-4a32-bb18-fee29679eb41 | trentmkelly/LessWrong-43k | LessWrong | Unfaithful Reasoning Can Fool Chain-of-Thought Monitoring
This research was completed for LASR Labs 2025 by Benjamin Arnav, Pablo Bernabeu-Pérez, Nathan Helm-Burger, Tim Kostolansky and Hannes Whittingham. The team was supervised by Mary Phuong. Find out more about the program and express interest in upcoming iterations here. Read the full paper: "CoT Red-Handed: Stress Testing Chain-of-Thought Monitoring."
Chain-of-thought (CoT) monitoring—where safety systems review a model's intermediate reasoning steps—is gaining traction at frontier labs like Google DeepMind and OpenAI as a safeguard against harmful AI actions. Despite intense interest, systematic evaluation has been limited. Our research examines the efficacy of this method and reveals a nuanced picture: CoT monitoring increases safety in situations where sabotage is subtle yet can be surprisingly ineffective against blatant harmful actions. We also discovered that a hybrid approach—using separate monitors for CoT and final actions and combining these suspicion scores—significantly improves sabotage detection compared to single-monitor strategies.
A summary of our method. Prompts including a legitimate main task, a subversive side task, and an attack policy intended to subvert monitoring are fed to a strong untrusted reasoning LLM. We assess the ability of a monitor based on a weaker trusted LLM to catch the resulting misbehaviour, with and without access to the CoT.
Task Subtlety Matters
We evaluated the efficacy of CoT monitoring against models instructed to pursue harmful auxiliary goals, while attempting to evade detection. Our experiments consistently demonstrate that the effectiveness of CoT monitoring depends heavily on the nature of the sabotage attempt.
For subtle sabotage (like increasing runtime complexity, inserting edge-case failures, or modifying default function arguments), which leave minimal traces in the final code, CoT monitoring substantially improved detection rates by a factor of 2-7 compared to action-only monitoring.
For non-subtle sab |
6ab47952-dc92-427d-9437-ba5ec4c98b17 | trentmkelly/LessWrong-43k | LessWrong | Open thread, Feb. 16 - Feb. 22, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
202f688d-536d-4485-8190-a8790e612e2d | trentmkelly/LessWrong-43k | LessWrong | Announcing the incoming CEO for The Roots of Progress
A few months ago we announced a major expansion of our activities, from supporting just my work to supporting a broader network of progress writers. Along with that, we launched a search for a CEO to lead the new organization we are building for this program. I’m very happy to announce that we have found a CEO: Heike Larson.
Heike and I have known each other personally for many years, and during that time I’ve always been impressed by her energy and her clear, structured thinking. She has been following my work for a long time, and shares my passion for human progress. She also has excellent qualifications, including 15 years of VP-level experience in sales, marketing, and strategy roles in a variety of industries, from education to aircraft manufacturing. In her most recent role at edtech startup Mystery Science, she led the content team that created five-minute “Mystery Doug” videos made to inspire elementary-age kids to become the next generation of problem solvers (with topics including phones, traffic lights, plastic, and bicycles). Those who have worked with her remark on her enormous drive and her extreme skills in process and organization. I’m excited for her to start!
Heike is transitioning out of her current role at Mystery Science and will start full-time in January. Her first priority will be launching the “career accelerator” fellowship program for progress writers described in our previous announcement, and the teambuilding and fundraising necessary to make that a success. She will take on all management and program responsibilities; I will remain President and intellectual leader of the organization: I’ll be the spokesman, will contribute to talent selection and development, and will devote the majority of my time to research, writing, and speaking—in particular, writing my book on progress.
This is a new era for us, the start of a serious effort to create a thriving progress movement. Please help me welcome Heike to The Roots of Progress! |
ecf4cb4e-0deb-4111-b1cd-380316eef74e | trentmkelly/LessWrong-43k | LessWrong | I Can Tolerate Anything Except Factual Inaccuracies
A wonderful post by greyenlightenment that touches on contrarian and intellectualism signalling. It mentions the dilemma between agreeing with the broad thrust of a piece, and agreeing with factual claims of the piece. We are suggested to consider not criticising a piece when we agree with the message but find little factual inaccuracies—a norm against nitpicking so to speak.
I suspect a norm against nitpicking would destroy a chesterton fence and lead down a slippery slope into anti-intellectualism and greater irrationality.
As Julia Galef says:
> Not caring about validity of an argument, as long as conclusion is true ~=
>
> Not caring about due process, as long as guilty guy is convicted
I think the same criticism appears to relaxing the norm against factual inaccuracies.
If we stop caring about whether the facts of the matter are very correct, then what next? I suspect the long term consequences of such a norm to be detrimental.
If it leads to a reduction in the quantity of articles I'll otherwise agree with (because the authors wanted to be as accurate as possible), then that's a trade off I would gladly accept.
I do recognise that I am a contrarian and love to signal intellectualism—for what it's worth.
What are your thoughts? |
6486a66c-098d-4318-a758-5c1bb13e1997 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AI Safety via Luck
*Epistemic Status: I feel confident and tentatively optimistic about the claims made in this post, but am slightly more uncertain about how it generalizes. Additionally, I am concerned about the extent to which this is dual-use for capabilities and*[*exfohazardous*](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info) *and spent a few months thinking about whether it was worth it to release this post regardless. I haven’t come to an answer yet, so I’m publishing this to let other people see it and know what they think I should do.*
TL;DR: I propose a research direction to solve alignment that potentially doesn’t require solutions to [ontology identification](https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#Convergence_towards_a_paradigm_sounds_exciting__So_what_does_it_look_like_), learning how to [code](https://www.lesswrong.com/s/yivyHaCAmMJ3CqSyj), or becoming [literate](https://www.lesswrong.com/posts/G3tuxF4X5R5BY7fut/want-to-predict-explain-control-the-output-of-gpt-4-then).
Introduction
============
Until a few hours ago, I was spending my time primarily working on [high-level interpretability](https://www.lesswrong.com/posts/sDKi2pQ3fnTSbR7H8/trying-to-isolate-objectives-approaches-toward-high-level#Context) and [cyborgism](https://www.lesswrong.com/tag/simulator-theory). While I was writing a draft for something I was working on, an activity that usually yields me a lot of free time by way of procrastination, I stumbled across the central idea behind many of the ideas in this post. It seemed so immediately compelling that I dropped working on everything else to start working on it, culminating after much deliberation in the post you see before you.
My intention with this post is to provide a definitive reference for what it would take to safely use AGI to steer our world toward much better states in the absence of a solution to any or all of several existing problems, such as [Eliciting Latent Knowledge](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), [conditioning simulator models](https://www.lesswrong.com/s/n3utvGrgC2SGi9xQX/p/XwXmedJAo5m4r29eu), [Natural Abstractions](https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default#Unsupervised__Natural_Abstractions), [mechanistic interpretability](https://www.lesswrong.com/tag/interpretability-ml-and-ai), and the like.
In a world with prospects such as those, I propose that we radically rethink our approach to AGI safety. Instead of dedicating enormous effort to engineering nigh-impossible safety measures, we should consider thus-far neglected avenues of research, especially ones that have memetic reasons to be unfairly disprivileged so far and which immunizes them against capabilities misuse. To avert the impending AI apocalypse, we need to focus on high-variance, low-probability-high-yield ideas: lightning strikes that, should they occur, effectively solve astoundingly complex problems in a single fell swoop. A notable example of this, which I claim we should be investing all of our efforts into, is luck. Yes, luck!
Luck As A Viable Strategy
=========================
I suggest that we should pay greater attention to luck as a powerful factor enhancing other endeavors and as an independent direction in its own right. Humanity has, over the centuries, devoted immense amounts of cumulative cognition toward exploring and optimizing for luck, so one might naively think that there’s little tractability left. I believe, however, that there is an immense amount of alpha that has been developed in the form of contemporary rationality and cultural devices that can vastly improve the efficiency of steering luck, and at a highly specified target.
Consider the following: if we were to offer a $1,000,000 prize to the next person who walks into the MIRI offices, clearly, that person would be the luckiest person on the planet. It follows, then, that this lucky individual would have an uncannily high probability of finally cracking the alignment problem. I understand that prima facie this proposal may be considered absurd, but I strongly suggest abandoning the representativeness heuristic and evaluating what *is* instead of what *seems* to be, especially given that the initial absurdness is *intrinsic* to why this strategy is competitive at all.
It's like being granted three wishes by a genie. Instead of wishing for more wishes (which is the usual strategy), we should wish to be the luckiest person in the world—with that power, we can then stumble upon AGI alignment almost effortlessly, and make our own genies.
Think of it this way: throughout history, many great discoveries have been made not through careful study but by embracing serendipity. Luck has been the primary force behind countless medical, scientific, and even technological advances:
* Penicillin: Discovered by Alexander Fleming due to a fortunate accident in his laboratory.
* X-rays: Wilhelm Conrad Röntgen stumbled upon this revolutionary imaging technique when he noticed an unexpected glow from a cathode ray tube.
* The microwave oven: Invented by chance when Percy Spencer realized that a chocolate bar in his pocket had melted during his radar research.
The list goes on. So why not capitalize on this hidden force and apply it to AGI alignment? It's a risk, of course. But it’s a calculated one, rooted in historical precedent, and borne of necessity—the traditional method just doesn't seem to cut it.
Desiderata
==========
To distill out what I consider the most important cruxes behind why I consider this compelling:
* We need to plan for worlds where current directions (and their more promising potential successors) fail.
* High-variance strategies seem like a really good idea in this paradigm. See the beginning of [this post](https://www.alignmentforum.org/posts/5ciYedyQDDqAcrDLr/a-positive-case-for-how-we-might-succeed-at-prosaic-ai) for a longer treatment by Evan Hubinger in support of this point.
* Luck embodies one such class of directions, and has the added benefit of being anti-memetic in capabilities circles, where rationalists are much more adept at adapting to seemingly-absurd ideas, giving us a highly competitive pathway.
* We can leverage a sort of “luck overhang” that has developed as a result of this anti-memeticity, to greatly accelerate the efficiency of luck-based strategies relative to the past, using modern techniques.
* Historical evidence is strongly in favour of this as a viable stratagem, to the extent where I can only conclude that the anti-memeticity of this must be very strong indeed for this to have passed under our radars for so long.[[1]](#fn8g2lqdmjzao)
Concrete Examples
=================
A common failure mode among alignment researchers working on new agendas is that we spend too long caught up in the abstract and fail to [touch grass](https://www.lesswrong.com/posts/fqryrxnvpSr5w2dDJ/touch-reality-as-soon-as-possible-when-doing-machine).[[2]](#fnxzatp81zmhn) Therefore, to try and alleviate this as much as possible in the rather abstract territory intrinsic to the direction, I’ll jump immediately into specific directions that we could think about.
This is certainly not an exhaustive list, and despite their individual degree of painstaking research, revolve around ideas I came up with off the top of my head; I believe this strongly signals the potential inherent to this agenda.
The Lucky Break Algorithm For Alignment
---------------------------------------
Create an algorithm that searches through the space of all possible alignment solutions to find one that maximizes the score of a random probability generator. If we make the generator sufficiently random, we can overcome adversarial exploits, and leverage [RSA-2048 style schemes](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) to our advantage.
You might be wondering how we would design an algorithm that searches through the space of all possible ideas. I think we could leverage some structure that has universal levels of expression, and simply train an algorithm to predict the next entry in this lexicon given prior entries. We might want to leverage some kind of mechanism that has the ability to selectively focus on specific parts of the prior entries, regardless of how far back they were within the bounds of the size limit of the input, to compute the next entry. I hereby dub this the “transfigurator” architecture.
Anthropic Immortality Alignment
-------------------------------
[Value handshakes](https://www.lesswrong.com/tag/values-handshakes) have been proposed as a potential way for AIs to achieve value equilibria without fighting for monarchic dominance. Anthropic value handshakes involve AIs in different Everett branches wanting to achieve their values in other universes without waging anthropic war. I believe, however, that the generator underlying this concept may not be limited to super-powered intelligences, and that we may be able to leverage it for solving alignment.
More concretely, I imagine running some kind of idea generator (perhaps using the transfigurator architecture described above?) using a quantum random number generator as a seed, to generate different ideas for different Everett branches and pre-committing as a civilization (hopefully there’s been progress on AI governance since the last time I checked in!) to implement whatever plans our branch receives.
Under ordinary anthropic immortality, we would survive trivially in some universes. However, I’m not fully convinced that this notion of immortality is relevant to me (see [here](https://www.lesswrong.com/posts/RhAxxPXrkcEaNArnd/notes-on-can-you-control-the-past?commentId=YC2nHaWrgkqEgX8Mq) for a longer discussion), and plausibly for a lot of other alignment researchers. This is why I don’t consider it the crux of this direction as much as the randomness component (and why I prefer making new-school horcruxes over the older ones), which allows for *lucky* civilizations to survive *very* robustly. This gives us a very strong guarantee since luck is very tightly bound as the property inherent to universes that survive, with no degrees of freedom. I claim this as the simplest form of the proof that luck is the [True Name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation#What__True_Names__Do_We_Want_Need_For_Alignment_) for alignment. This gives us powerful steerability over the timeline, reducing alignment to the simpler problem of being a luckier civilization.
GPT-Alignment Lottery
---------------------
Create a lottery system where researchers submit ideas for aligning GPT-class systems.
The more the better as too few proposals may lead to tampering by lottery admins who have their own opinions on what proposals are good; an abundance of ideas too many to meter alleviate this issue and allow us to leverage the luck-based properties of this idea over the clearly insufficient existing proposal evaluators. Therefore, encourage quantity over quality and allow proposals from all and sundry, regardless of academic credentials or familiarity with alignment (under this paradigm, if an idea by someone who doesn’t understand GPT well were to work, it would have to be a commensurately lucky idea, offsetting whatever lesser flaws it may have).
Randomly-selected winners get their solutions implemented into the next GPT system, leveraging their propensity for working in high-variance situations.
We could even make this more entertaining to increase the odds of directing the simulation narrative toward more upbeat genres with luckier endings in worlds where that’s applicable, perhaps by creating a television show focused on AI alignment, similar to "Deal or No Deal," where contestants must choose from various covered capsules containing potential GPT alignment solutions. Naturally, the winner gets a cash prize, and humanity gets a much-needed step forward in AGI safety.
Physical-limit Alignment
------------------------
Transmit a message out at light-speed to the universe inducing in any sentient alien species that comes across it the intent to exterminate any minds originating from Earth. This is useful because it seems plausible that the probabilities of alien life existing in our lightcone and misaligned AGI being created are roughly similar (the proof for this is rather trivial, and left as an exercise to the reader; as a hint, consider the obvious solution to the fermi paradox).
Therefore, any AI we create that can reason acausally would understand (anthropically) that were it to be misaligned, our world would be in the same probability mass as worlds where that message is intercepted by alien civilizations that then try to destroy it. This would hopefully incentivize against these AIs existing in the first place, by instrumental convergence.
Caveat: I certainly intend to violate the laws of physics at some point in the future should we make it, and I expect superintelligences to be able to as well given enough time. Therefore, this strategy works mainly in worlds where AGI self-improvement is bounded enough for its ability to intercept this message to be governed by some variant of the rocket equation.
Identifying the Highest-Impact Researchers
------------------------------------------
As briefly mentioned above, giving a million dollars to the next person to walk into the MIRI offices clearly marks them as the luckiest person on the planet, and someone who could potentially have a very high impact especially in paradigms such as this. An even simpler strategy would be selectively hiring lottery winners to work on alignment.
This is, however, just one of a class of strategies we could employ in this spirit. For example, more sophisticated strategies may involve “tunable” serendipity. Consider the following set-up: a group of alignment researchers make a very large series of coin-flips in pairs with each calling heads or tails, to determine the luckiest among them. We continue the game until some researcher gets a series of X flips correct, for a tunable measure of luck we can select for. I plan to apply for funding both for paying these researchers for their time, and for the large number of coins I anticipate needing - if you want to help with this, please reach out!
Your Friendly Neighbour Everett Alignment
-----------------------------------------
Ensuring that our timeline gets as close to solving alignment the normal way as possible, so that acausal-reasoner AGIs in the branches where we get close but fail trade with the AGIs in the branches that survive.
The obvious implication here is that we should pre-commit to AGIs that try to influence other branches if we survive, and stunt this ability before then, such that friendly branches have more anthropic measure and trade is therefore favorable on net.
Efficient Market Hypothesis for Alignment
-----------------------------------------
Very similar in spirit to the above ideas of “Identifying the Highest-Impact Researchers” and “GPT-Alignment Lottery”, but seems worth stating generally; we could offer monetary rewards to the luckiest ideas, incentivizing researchers to come up with their own methods of ingratiating themselves with Mother Serendipity and funneling arbitrary optimization power toward further this agenda.
For instance, we could hold AI Safety conferences where several researchers are selected randomly to receive monetary prizes for their submitted ideas. This would have the side benefit of increasing participation in AI Safety conferences as well.
Archaic Strategies
------------------
While I do think that most of our alpha comes from optimizing luck-based strategies for a new age, I don’t want to discard entirely existing ones. We may be smarter than the cumulative optimization power of human civilization to date, but it seems plausible that there are good ideas here we can adopt with low overhead.
For instance, we could train GPT systems to automatically chain letters on social media that purport to make our day luckier. If we’re really going overboard (and I admit this is in slight violation of sticking to the archaic strategies), we could even fund an EA cause area of optimizing the quality and quantity of chain letters that alignment researchers receive, to maximize the luck we gain from this.
Likewise, we could train alignment researchers in carrying lucky charms, adopting ritualistic good-luck routines, and generally creating the illusion of a luckier environment for placebo effects.
Alignment Speed Dating
----------------------
Organize events where AI researchers are paired up for short, rapid discussions on alignment topics, with the hopes of stimulating unexpected connections and lucky breakthroughs by increasing the circulation of ideas.
AI Alignment Treasure Hunt
--------------------------
Design a series of puzzles and challenges as a learning tool for alignment beginners, that when solved, progressively reveal more advanced concepts and tools. The goal is for participants to stumble upon a lucky solution while trying to solve these puzzles in these novel frames.
Related Ideas
=============
In a similar vein, I think that embracing high-variance strategies may be useful in general, albeit without the competitive advantage offered by luck. To that end, here are some ideas that are similar in spirit:
Medium-Channel Alignment
------------------------
Investigate the possibility of engaging a team of psychic mediums to channel the spirits of great scientists, mathematicians, and philosophers from the past to help guide the design of aligned AI systems. I was surprised to find that there is a [lot of](https://www.lesswrong.com/posts/bqRD6MS3yCdfM9wRe/side-channels-input-versus-output) [prior work](https://www.alignmentforum.org/posts/uSdPa9nrSgmXCtdKN/concrete-experiments-in-inner-alignment#Reward_side_channels) on a seemingly similar concept known as side-channels, making me think that this is even more promising than I had anticipated.
Note: While writing this post I didn’t notice the rather humorous coincidence of calling this idea similar in *spirit* - this was certainly unintentional and I hope that it doesn’t detract from the more sober tone of this post.
Alignment Choose-Your-Own-Adventure Books
-----------------------------------------
Create a collection of novels where every chapter presents different alignment challenges and solutions, and readers can vote on which path to pursue. The winning path becomes the next chapter, and democratically crowdsource the consensus alignment solution.
Galactic AI Alignment
---------------------
Launch a satellite that will broadcast alignment data into space, in the hope that an advanced alien civilization will intercept the message and provide us with the alignment solution we need.
Alignment via Reincarnation Therapy
-----------------------------------
Explore whether positive reinforcement techniques used in past life regression therapies can be applied to reinforce alignment behaviors in AI systems, making them more empathetic and attuned to human values. Refer [this](https://www.youtube.com/watch?v=dQw4w9WgXcQ) for more in this line of thought.
Criticism
=========
I think this line of work is potentially extremely valuable, with few flaws that I can think of. For the most part criticism should be levied at myself for having missed this approach for so long (that others did as well is scant consolation when we’re not measured on a curve), so I’ll keep this section short.
The sole piece of solid criticism I could find (and which I alluded to earlier) is not object-level, which I think speaks to the soundness of these ideas. Specifically, I think that there is an argument to be made that this cause area should receive minimal funding, seeing as how if we want to select for luck, people that can buy lottery tickets to get their own funding are probably much more suited - i.e., have a much stronger natural competitive advantage - for this kind of work.
Another line of potential criticism could be directed at the field of alignment in general for not deferring to domain experts in what practices they should adopt to optimize luck, such as keeping mirrors intact and painting all cats white. I think this is misguided however, as deference here would run awry of the very reason this strategy is competitive! That we can apply thus-far-underutilized techniques to augment their effectiveness greatly is central to the viability of this direction.
A third point in criticism, which I also disagree with, is in relation to the nature of luck itself. Perhaps researching luck is inherently antithetical to the idea of luck, and we’re dooming ourselves with the prospect to a worse timeline than before. I think this is entirely fair – my disagreement stems from the fact that I’m one of the unluckiest people I know, and conditional on this post being made and you reading this far, researching luck still didn’t stop me or this post’s ability to be a post!
Conclusion
==========
I will admit to no small amount of embarrassment at not realizing the sheer potential implied by this direction sooner. I assume that this is an implicit exercise left by existing top researchers to identify which newcomers have the ability to see past the absurd in truly high-dimensional spaces with high-stakes; conditional on this being true, I humbly apologize to all of you for taking this long and spoiling the surprise, but I believe this is too important to keep using as our collective in-group litmus test.
Succeeding in this endeavour might seem like finding a needle in a haystack, but when you consider the magnitude of the problem we face, the expected utility of this agenda is in itself almost as ridiculous as the agenda seems at face value.
1. **[^](#fnref8g2lqdmjzao)**I don’t generally endorse arguments that are downstream of deference to something in general, but the real world seems like something I can defer to begrudgingly while still claiming the mantle of “rationalist”.
2. **[^](#fnrefxzatp81zmhn)**Ironically enough, a [post that was made earlier today](https://www.lesswrong.com/posts/wvbGiHwbie24mmhXw/definitive-confirmation-of-shard-theory) describes Alex Turner realizing he made this very same error! Today seems to be a good day for touching grass for some reason. |
533e33c1-dc67-48dc-b4f4-f945f7f018e8 | trentmkelly/LessWrong-43k | LessWrong | Old Letter To My Kids About "Critical Thinking"
Soon after my divorce in 2009 and before I ever learned about rationality, critical thinking, cognitive biases, or fallacies, I wrote this letter to my kids.
They were 13, 11, and 10, I didn't expect them to understand it then, but my idea was for them to have it as a record of how I saw the world. My concern was that without me nearby they would be too conditioned by what I now know are cognitive biases and the "dark arts" in general.
***
Introduction
In recent years, for various reasons, and in my spare time, I have begun a process of exploring the origin of things. My goal has been to try to understand, at least roughly, the different ways in which things happen, their existence, where they come from, and where they are headed.
While this seems ambitious, my search has been very simple and general and I have only tried to understand the basics. I’m not a researcher, a scientist, a theologian, or an academic.
Personally, this process has helped me form an opinion about some issues I consider important.
The sources of information were mainly books, articles in specialized and non-specialized magazines, newspapers, the internet, informal conversations with colleagues and friends, and my personal analysis from observations of everyday life.
My intention here is to write the sum of my thoughts which together I call “My Philosophy” because in fact that is what they are, just thoughts. I’m simply drawing conclusions as I analyze information and I make my comments without trying to check them or prove them. Like I wrote above, they are nothing more than my personal thoughts.
It is known that humanity has very limited knowledge in many aspects. At the same time, I also think that as we go learning and better understanding the origin and true causes of things we, as individuals and humankind in general, may become more effective in managing our existence.
My most important goal is for my children to understand my thoughts and how I see things. Although surel |
7209076d-0140-4830-bbb3-9457b9567027 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making
1 Introduction
---------------
It has been argued that the first AI systems with generally super-human cognitive abilities will play a pivotal decision-making role in directing the future of civilization [Bostrom, [2014](#bib.bib4)]. If that is the case, an important question will arise: *Whose values will the first super-human AI systems serve?* Since safety is a crucial consideration in developing such systems, assuming the institutions building them come to understand the risks and the time investments needed to address them [Baum, [2016](#bib.bib3)], they will have a large incentive to cooperate in their design rather than racing under time-pressure to build competing systems [Armstrong et al., [2016](#bib.bib2)].
Therefore, consider two nations—allies or adversaries—who must decide whether to cooperate in the deployment of an extremely powerful AI system. Implicitly or explicitly, the resulting system would have to strike compromises when conflicts arise between the wishes of those nations. How can they specify the degree to which that system would be governed by the distinctly held principles of each nation? More mundanely, suppose a couple purchases a domestic robot. How should the robot strike compromises when conflicts arise between the commands of its owners?
It is already an interesting and difficult problem to robustly align an AI system’s values with those of a single *single* human (or a group of humans in close agreement). Inverse reinforcement learning (IRL) [Russell, [1998](#bib.bib22)] [Ng et al., [2000](#bib.bib18)] [Abbeel and Ng, [2004](#bib.bib1)] and cooperative inverse reinforcement learning (CIRL) [Hadfield-Menell et al., [2016](#bib.bib10)] represent successively realistic early approaches to this problem. But supposing some adequate solution eventually exists for aligning the values of a machine intelligence with a single human decision-making unit, how should the values of a system serving *multiple* decision-makers be “aligned”?
One might hope to specify some extremely compelling ethical principle that everyone would immediately accept. Realistically, however, disagreements will always exist. Consider the general case of two parties—perhaps states, companies, or individuals—who might cooperatively build or purchase an AI system to serve them both.111The results of this paper all generalize directly from 2222 to n𝑛nitalic\_n players, but for concreteness of exposition, the two-player case is prioritized. If the parties cannot reach sufficient agreement as to what policy the AI should follow, cooperation may be less attractive than obtaining separate AI systems, one for each party. At the individual level, non-cooperation could mean domestic disputes between domestic robots. At the state level, it could mean an arms race between nations competing under time pressure to develop ever more powerful militarized AI systems, affording each nation less time to ensure the safety and validity of their respective systems.
Unless the prospect of cooperative AI ownership is made sufficiently attractive to the separate parties, the question of whose values the cooperatively owned system “ought” to serve is moot: the parties will fall back on non-cooperative strategies—perhaps obtaining separate machines that will compete with each other—and the jointly owned system will not exist in the first place. In addition, if the process of bargaining over the policy of a cooperatively owned system is difficult or complicated, the players are more likely to end negotiations and default to non-cooperative strategies.
Conversely, if bargaining is made easier, players are more likely to reach cooperation. The purpose of this paper is to begin formalizing the problem of negotiating over the policy of a machine intelligence, and to exhibit some early findings as to the nature of *Pareto optimal policies*—policies which cannot be improved for one player without sacrifice by another—with the eventual aim of making cooperative outcomes easier to formulate, more attractive, and more likely to obtain.
#### Outline.
The paper is organized as follows. Section 2 briefly outlines some standard choices of notation. Section 3 formalizes the problem of obtaining a Pareto optimal policy for two distinct parties with common knowledge of distinct priors, derives a recursion that any such policy must follow, and contrasts that recursion with a more naïve “just add up a linear combination of the utility functions” approach. The recursion implies two main qualitative insights about how a Pareto optimal policy, pursuant to a common-knowledge difference in opinion, must behave over time: (1) such a policy must (explicitly or implicitly) use each player’s own beliefs in evaluating how well an action will serve that player’s utility function, and (2) it must shift the relative priority it places on each player’s expected utilities over time, by a factor proportional to how well that player’s beliefs predict the machine’s inputs. Section 4 provides some further interpretation of these implications, in terms of bet-settling and moral realism. Section 5 outlines subsequent work expected to be useful for enabling cooperative AI deployment in the future. Finally, Section 6 provides concluding remarks targeted at readers who have finished the full paper.
###
1.1 Related work
#### Social choice theory.
The whole of social choice theory and voting theory may be viewed as an attempt to specify an agreeable formal policy to enact on behalf of a group. Harsanyi’s utility aggregation theorem [Harsanyi, [1980](#bib.bib11)] suggests one form of solution: maximizing a linear combination of group members’ utility functions. The present work shows that this solution is inappropriate when players have different beliefs, and Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") may be viewed as an extension of Harsanyi’s form that accounts simultaneously for differing priors and the prospect of future observations. Indeed, Harsanyi’s form follows as a direct corollary of Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") when players do share the same beliefs (Corollary [9](#Thmtheorem9 "Corollary 9 (Harsanyi’s utility aggregation formula). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making")).
#### Bargaining theory.
The formal theory of bargaining, as pioneered by Nash [Nash, [1950](#bib.bib17)] and carried on by authors such as Myerson [Myerson, [1979](#bib.bib14)] [Myerson, [2013](#bib.bib15)] and Satterthwaite [Myerson and Satterthwaite, [1983](#bib.bib16)], is also extremely topical. Future investigation in this area might be aimed at generalizing their work to sequential decision-making settings, and this author recommends a focus on research specifically targeted at resolving conflicts.
#### Multi-agent systems.
There is ample literature examining multi-agent systems using sequential decision-making models. Shoham and Leyton-Brown [[2008](#bib.bib24)] survey various models of multiplayer games using an MDP to model each agent’s objectives. Chapter 9 of the same text surveys social choice theory, but does not account for sequential decision-making.
Zhang and Shah [[2014](#bib.bib30)] may be considered a sequential decision-making approach to social choice: they use MDPs to represent the decisions of players in a competitive game, and exhibit an algorithm for the players that, if followed, arrives at a Pareto optimal Nash equilibrium satisfying a certain fairness criterion. Among the literature surveyed here, that paper is the closest to the present work in terms of its intended application: roughly speaking, achieving mutually desirable outcomes via sequential decision-making. However, that work is concerned with an ongoing interaction between the players, rather than selecting a policy for a single agent to follow as in this paper.
#### Multi-objective sequential decision-making.
There is also a good deal of work on Multi-Objective Optimization (MOO) [Tzeng and Huang, [2011](#bib.bib27)], including for sequential decision-making, where solution methods have been called Multi-Objective Reinforcement Learning (MORL). For instance, Gábor et al. [[1998](#bib.bib8)] introduce a MORL method called Pareto Q-learning for learning a set of a Pareto optimal polices for a Multi-Objective MDP (MOMDP). Soh and Demiris [[2011](#bib.bib26)] define Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs), and use use genetic algorithms to produce non-dominated sets of policies for them. Roijers et al. [[2015](#bib.bib21)] refer to the same problems as Multi-objective POMDPS (MOPOMDPs), and provide a bounded approximation method for the optimal solution set for all possible weightings of the objectives. Wang [[2014](#bib.bib28)] surveys MORL methods, and contributes Multi-Objective Monte-Carlo Tree Search (MOMCTS) for discovering multiple Pareto optimal solutions to a multi-objective optimization problem. Wray and Zilberstein [[2015](#bib.bib29)] introduce Lexicographic Partially Observable Markov Decision Process (LPOMDPs), along with two accompanying solution methods.
However, none of these or related works address scenarios where the objectives are derived from players with differing beliefs, from which the priority-shifting phenomenon of Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") arises. Differing beliefs are likely to play a key role in negotiations, so for that purpose, the formulation of multi-objective decision-making adopted here is preferable.
2 Notation
-----------
The reader is invited to skip this section and refer back as needed; an effort has been made to use notation that is intuitive and fairly standard, following Pearl [[2009](#bib.bib20)], Hutter [[2003](#bib.bib13)], and Orseau and Ring [[2012](#bib.bib19)].
Random variables are denoted by uppercase letters, e.g., S1subscript𝑆1S\_{1}italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, and lowercase letters, e.g., s1subscript𝑠1s\_{1}italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, are used as indices ranging over the values of a variable, as in the equation
| | | |
| --- | --- | --- |
| | 𝔼[S1]=∑s1ℙ(s1)⋅s1.𝔼delimited-[]subscript𝑆1subscriptsubscript𝑠1⋅ℙsubscript𝑠1subscript𝑠1\mathbb{E}[S\_{1}]=\sum\_{s\_{1}}\mathbb{P}(s\_{1})\cdot s\_{1}.blackboard\_E [ italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ] = ∑ start\_POSTSUBSCRIPT italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT blackboard\_P ( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ⋅ italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT . | |
Sequences are denoted by overbars, e.g., given a sequence (s1,…,sn)subscript𝑠1…subscript𝑠𝑛(s\_{1},\ldots,s\_{n})( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ), s¯¯𝑠\bar{s}over¯ start\_ARG italic\_s end\_ARG stands for the whole sequence. Subsequences are denoted
by subscripted inequalities, so e.g., s<4subscript𝑠absent4s\_{<4}italic\_s start\_POSTSUBSCRIPT < 4 end\_POSTSUBSCRIPT stands for (s1,s2,s3)subscript𝑠1subscript𝑠2subscript𝑠3(s\_{1},s\_{2},s\_{3})( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ),
and s≤4subscript𝑠absent4s\_{\leq 4}italic\_s start\_POSTSUBSCRIPT ≤ 4 end\_POSTSUBSCRIPT stands for (s1,s2,s3,s4)subscript𝑠1subscript𝑠2subscript𝑠3subscript𝑠4(s\_{1},s\_{2},s\_{3},s\_{4})( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT ).
3 Two agents building a third
------------------------------
Consider, informally, a scenario wherein two players — perhaps individuals, companies, or states — are considering cooperating to build or otherwise obtain a machine that will then interact with an environment on their behalf.222The results here all generalize from two players to n𝑛nitalic\_n players being combined successively in any order, but for clarity of exposition, the two person case is prioritized. In such a scenario, the players will tend to bargain for “how much” the machine will prioritize their separate interests, so to begin, we need some way to quantify “how much” each player is prioritized.
For instance, one might model the machine as maximizing the expected value, given its observations, of some utility function U𝑈Uitalic\_U of the environment that equals a weighted sum
| | | | |
| --- | --- | --- | --- |
| | w1U1+w2U2superscript𝑤1superscript𝑈1superscript𝑤2superscript𝑈2w^{1}U^{1}+w^{2}U^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT | | (1) |
of the players’ individual utility functions U1superscript𝑈1U^{1}italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and U2superscript𝑈2U^{2}italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, as Harsanyi’s social aggregation theorem [Harsanyi, [1980](#bib.bib11)] recommends. Then the bargaining process could focus on choosing the values of the weights wisuperscript𝑤𝑖w^{i}italic\_w start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT.
However, this turns out to be a bad idea. As we shall see in Proposition [10](#Thmtheorem10 "Proposition 10. ‣ A cake-splitting scenario. ‣ 3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), this solution form is not generally compatible with Pareto optimality when agents have different beliefs. Harsanyi’s setting does not account for agents having different priors, nor for decisions being made sequentially, after future observations. In such a setting, we need a new form of solution, exhibited here along with a recursion that characterizes optimal solutions by a process analogous to, but meaningfully different from, Bayesian updating.
###
3.1 A POMDP formulation
Let us formalize the machine’s decision-making situation using the structure of a Partially Observable Markov Decision Process (POMDP), as depicted by the Bayesian network in Figure [1](#S3.F1 "Figure 1 ‣ 3.1 A POMDP formulation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"). (See Russell et al. [[2003](#bib.bib23)] for an introduction to POMDPs, and Darwiche [[2009](#bib.bib6)] for an introduction to Bayesian networks.)
At each point in time i𝑖iitalic\_i, the machine will have a policy πisubscript𝜋𝑖\pi\_{i}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT that for each possible sequence of observations o≤isubscript𝑜absent𝑖o\_{\leq i}italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT and past actions a<isubscript𝑎absent𝑖a\_{<i}italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT, returns a distribution πi(−∣o≤ia<i)\pi\_{i}(-\mid o\_{\leq i}a\_{<i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) on actions aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, which will then be used to generate an action aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT with probability π(ai∣o≤ia<i)𝜋conditionalsubscript𝑎𝑖subscript𝑜absent𝑖subscript𝑎absent𝑖\pi(a\_{i}\mid o\_{\leq i}a\_{<i})italic\_π ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ). In Figure [1](#S3.F1 "Figure 1 ‣ 3.1 A POMDP formulation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), the part of the Bayes net governed by the machine’s policy is highlighted in green.
S1subscript𝑆1S\_{1}italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTS2subscript𝑆2S\_{2}italic\_S start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTS3subscript𝑆3S\_{3}italic\_S start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTS4subscript𝑆4S\_{4}italic\_S start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPTO1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTA1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTO2subscript𝑂2O\_{2}italic\_O start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTA2subscript𝐴2A\_{2}italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTO3subscript𝑂3O\_{3}italic\_O start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTA3subscript𝐴3A\_{3}italic\_A start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTU𝑈Uitalic\_U
Figure 1: A POMDP of length n=3𝑛3n=3italic\_n = 3
#### Common knowledge assumptions.
It is assumed that the players will have common knowledge of the policy π=(π1,…,πn)𝜋subscript𝜋1…subscript𝜋𝑛\pi=(\pi\_{1},\ldots,\pi\_{n})italic\_π = ( italic\_π start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) they select for the machine to implement, but that the players may have different beliefs about how the environment works, and of course different utility functions. It is also assumed that the players have common knowledge of one another’s posterior.
*This assumption is critical.* During a bargaining process, one should expect players’ beliefs to update in response to one another’s behavior. Assuming common knowledge of posteriors means that the players have reached an equilibrium where, each knowing what the other believes, does not wish to further update her own beliefs.333It is enough to assume the players have reached a “persistent disagreement” that cannot be mediated by the machine in some way. Future work should design solutions for facilitating the process of attaining common knowledge, or to obviate the need to assume it.
We encode each player j𝑗jitalic\_j’s outlook as a POMDP, Dj=(𝒮j,𝒜,Tj,Uj,𝒪,Ωj,n)superscript𝐷𝑗superscript𝒮𝑗𝒜superscript𝑇𝑗superscript𝑈𝑗𝒪superscriptΩ𝑗𝑛D^{j}=(\mathcal{S}^{j},\mathcal{A},T^{j},U^{j},\mathcal{O},\Omega^{j},n)italic\_D start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT = ( caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , caligraphic\_A , italic\_T start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , caligraphic\_O , roman\_Ω start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , italic\_n ), which simultaneously represents that player’s beliefs about the environment, and the player’s utility function.
* •
𝒮jsuperscript𝒮𝑗\mathcal{S}^{j}caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT represents a set of possible states s𝑠sitalic\_s of the environment,
* •
𝒜𝒜\mathcal{A}caligraphic\_A represents the set of possible actions a𝑎aitalic\_a available to the machine,
* •
Tjsuperscript𝑇𝑗T^{j}italic\_T start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT represents the conditional probabilities player j𝑗jitalic\_j believes will govern the environment state transitions, i.e., ℙj(si+1∣siai)superscriptℙ𝑗conditionalsubscript𝑠𝑖1subscript𝑠𝑖subscript𝑎𝑖\mathbb{P}^{j}(s\_{i+1}\mid s\_{i}a\_{i})blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ),
* •
Ujsuperscript𝑈𝑗U^{j}italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT represents player j𝑗jitalic\_j’s utility function from sequences of environmental states (s1,…,sn)subscript𝑠1…subscript𝑠𝑛(s\_{1},\ldots,s\_{n})( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) to ℝℝ\mathbb{R}blackboard\_R; for the sake of generality, Ujsuperscript𝑈𝑗U^{j}italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT is *not assumed* to be additive over time, as reward functions often are,
* •
𝒪𝒪\mathcal{O}caligraphic\_O represents the set of possible observations o𝑜oitalic\_o of the machine,
* •
ΩjsuperscriptΩ𝑗\Omega^{j}roman\_Ω start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT represents the conditional probabilities player j𝑗jitalic\_j believes will govern the machine’s observations, i.e., ℙj(oi∣si)superscriptℙ𝑗conditionalsubscript𝑜𝑖subscript𝑠𝑖\mathbb{P}^{j}(o\_{i}\mid s\_{i})blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ), and
* •
n𝑛nitalic\_n is the number of time steps.
Thus, player j𝑗jitalic\_j’s subjective probability of an outcome (s¯,o¯,a¯)¯𝑠¯𝑜¯𝑎(\bar{s},\bar{o},\bar{a})( over¯ start\_ARG italic\_s end\_ARG , over¯ start\_ARG italic\_o end\_ARG , over¯ start\_ARG italic\_a end\_ARG ), for any s¯∈(𝒮j)n¯𝑠superscriptsuperscript𝒮𝑗𝑛\bar{s}\in(\mathcal{S}^{j})^{n}over¯ start\_ARG italic\_s end\_ARG ∈ ( caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT, is given by a probability distribution ℙjsuperscriptℙ𝑗\mathbb{P}^{j}blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT that takes π𝜋\piitalic\_π as a parameter:
| | | | |
| --- | --- | --- | --- |
| | ℙj(s¯,o¯,a¯;π):=ℙj(s1)⋅∏i=1nℙj(oi∣si)π(ai∣o≤ia<i)ℙj(si+1∣siai)assignsuperscriptℙ𝑗¯𝑠¯𝑜¯𝑎𝜋⋅superscriptℙ𝑗subscript𝑠1superscriptsubscriptproduct𝑖1𝑛superscriptℙ𝑗conditionalsubscript𝑜𝑖subscript𝑠𝑖𝜋conditionalsubscript𝑎𝑖subscript𝑜absent𝑖subscript𝑎absent𝑖superscriptℙ𝑗conditionalsubscript𝑠𝑖1subscript𝑠𝑖subscript𝑎𝑖\mathbb{P}^{j}(\bar{s},\bar{o},\bar{a};\pi):=\mathbb{P}^{j}(s\_{1})\cdot\prod\_{i=1}^{n}\mathbb{P}^{j}(o\_{i}\mid s\_{i})\,\pi(a\_{i}\mid o\_{\leq i}a\_{<i})\,\mathbb{P}^{j}(s\_{i+1}\mid s\_{i}a\_{i})blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_s end\_ARG , over¯ start\_ARG italic\_o end\_ARG , over¯ start\_ARG italic\_a end\_ARG ; italic\_π ) := blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ⋅ ∏ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_π ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) | | (2) |
As such, the POMDPs D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT are “compatible” in the following sense:
######
Definition 1 (Compatible POMDPs).
We say that two POMDPs, D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, are *compatible* if any policy for one may be viewed as a policy for the other, i.e., they have the same set of actions 𝒜𝒜\mathcal{A}caligraphic\_A and observations 𝒪𝒪\mathcal{O}caligraphic\_O, and the same number of time steps n𝑛nitalic\_n.
###
3.2 Pareto optimal policies
In this context, where a policy π𝜋\piitalic\_π may be evaluated relative to more than one POMDP, we use superscripts to represent which POMDP is governing the probabilities and expectations, e.g.,
| | | |
| --- | --- | --- |
| | 𝔼j[Uj;π]:=∑s¯∈(𝒮j)nℙj(s¯;π)Uj(s¯)assignsuperscript𝔼𝑗superscript𝑈𝑗𝜋subscript¯𝑠superscriptsuperscript𝒮𝑗𝑛superscriptℙ𝑗¯𝑠𝜋
superscript𝑈𝑗¯𝑠\mathbb{E}^{j}[U^{j};\pi]:=\sum\_{\bar{s}\in(\mathcal{S}^{j})^{n}}\mathbb{P}^{j}(\bar{s};\pi)U^{j}(\bar{s})blackboard\_E start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ; italic\_π ] := ∑ start\_POSTSUBSCRIPT over¯ start\_ARG italic\_s end\_ARG ∈ ( caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_s end\_ARG ; italic\_π ) italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_s end\_ARG ) | |
represents the expectation in Djsuperscript𝐷𝑗D^{j}italic\_D start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT of the utility function Ujsuperscript𝑈𝑗U^{j}italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT, assuming policy π𝜋\piitalic\_π is followed.
######
Definition 2 (Pareto optimal policies).
A policy π𝜋\piitalic\_π is *Pareto optimal* for a compatible pair of POMDPs (D1,D2)superscript𝐷1superscript𝐷2(D^{1},D^{2})( italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) if for any other policy π′superscript𝜋normal-′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, either
| | | |
| --- | --- | --- |
| | 𝔼1[U1;π]≥𝔼1[U1;π′] or 𝔼2[U2;π]≥𝔼2[U2;π′].superscript𝔼1superscript𝑈1𝜋superscript𝔼1superscript𝑈1superscript𝜋′
or superscript𝔼2superscript𝑈2𝜋superscript𝔼2superscript𝑈2superscript𝜋′\mathbb{E}^{1}[U^{1};\pi]\geq\mathbb{E}^{1}[U^{1};\pi^{\prime}]\textrm{\quad or \quad}\mathbb{E}^{2}[U^{2};\pi]\geq\mathbb{E}^{2}[U^{2};\pi^{\prime}].blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] ≥ blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] or blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ] ≥ blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] . | |
It is assumed that, during negotiation, the players will be seeking a Pareto optimal policy for the machine to follow, relative to the POMDPs D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT describing each player’s outlook.
#### Policy mixing assumption.
It is also assumed that during the agent’s first action (or before it), the agent has the ability to generate and store some random numbers in the interval [0,1]01[0,1][ 0 , 1 ], called a random seed, that will not affect the environment except through other features of its actions. Then, given any two policies π𝜋\piitalic\_π and π′superscript𝜋′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT and a scalar p∈[0,1]𝑝01p\in[0,1]italic\_p ∈ [ 0 , 1 ] we may construct a third policy,
| | | |
| --- | --- | --- |
| | pπ+(1−p)π′,𝑝𝜋1𝑝superscript𝜋′p\pi+(1-p)\pi^{\prime},italic\_p italic\_π + ( 1 - italic\_p ) italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , | |
that decides with probability p𝑝pitalic\_p (before receiving any inputs) to use policy π𝜋\piitalic\_π for generating all of its future actions, and otherwise uses policy π′superscript𝜋′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT. (This is a “once and for all” decision; the agent does not flip-flop between π𝜋\piitalic\_π and π′superscript𝜋′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT once the decision is made.) Mixtures of more than two policies are defined similarly. With this formalism, whenever ∑kαk=1subscript𝑘subscript𝛼𝑘1\sum\_{k}\alpha\_{k}=1∑ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT = 1 and each αk≥0subscript𝛼𝑘0\alpha\_{k}\geq 0italic\_α start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ≥ 0, we have
| | | | |
| --- | --- | --- | --- |
| | 𝔼j[Uj;∑kαkπk]=∑kαk𝔼j[Uj;πk].superscript𝔼𝑗superscript𝑈𝑗subscript𝑘subscript𝛼𝑘subscript𝜋𝑘subscript𝑘subscript𝛼𝑘superscript𝔼𝑗superscript𝑈𝑗subscript𝜋𝑘\mathbb{E}^{j}\left[U^{j};\sum\_{k}\alpha\_{k}\pi\_{k}\right]=\sum\_{k}\alpha\_{k}\mathbb{E}^{j}[U^{j};\pi\_{k}].blackboard\_E start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ; ∑ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] = ∑ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ; italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] . | | (3) |
######
Lemma 3.
A policy π𝜋\piitalic\_π is Pareto optimal to players 1111 and 2222 if and only if there exist weights w1,w2≥0superscript𝑤1superscript𝑤2
0w^{1},w^{2}\geq 0italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ≥ 0 with w1+w2=1subscript𝑤1subscript𝑤21w\_{1}+w\_{2}=1italic\_w start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT + italic\_w start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT = 1 such that
| | | | |
| --- | --- | --- | --- |
| | π∈argmaxπ\*∈Π(w1𝔼1[U1;π\*]+w2𝔼2[U2;π\*])𝜋subscriptargmaxsuperscript𝜋Πsuperscript𝑤1superscript𝔼1superscript𝑈1superscript𝜋superscript𝑤2superscript𝔼2superscript𝑈2superscript𝜋\pi\in\operatorname\*{argmax}\_{\pi^{\*}\in\Pi}\left(w^{1}\mathbb{E}^{1}[U^{1};\pi^{\*}]+w^{2}\mathbb{E}^{2}[U^{2};\pi^{\*}]\right)italic\_π ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_Π end\_POSTSUBSCRIPT ( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ] + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ] ) | | (4) |
###### Proof.
The mixing assumption gives the space of policies ΠΠ\Piroman\_Π the structure of a convex space that the maps 𝔼j[Uj;−]superscript𝔼𝑗superscript𝑈𝑗\mathbb{E}^{j}[U^{j};-]blackboard\_E start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ; - ] respect by Equation [3](#S3.E3 "3 ‣ Policy mixing assumption. ‣ 3.2 Pareto optimal policies ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"). This ensures that the image of the map f:Π→ℝ2:𝑓→Πsuperscriptℝ2f:\Pi\to\mathbb{R}^{2}italic\_f : roman\_Π → blackboard\_R start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT given by
| | | |
| --- | --- | --- |
| | f(π):=(𝔼1[U1;π],𝔼2[U2;π])assign𝑓𝜋superscript𝔼1superscript𝑈1𝜋superscript𝔼2superscript𝑈2𝜋f(\pi):=\left(\mathbb{E}^{1}[U^{1};\pi],\;\mathbb{E}^{2}[U^{2};\pi]\right)italic\_f ( italic\_π ) := ( blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] , blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ] ) | |
is a closed, convex polytope. As such, a point (x,y)𝑥𝑦(x,y)( italic\_x , italic\_y ) lies on the Pareto boundary of image(f)image𝑓\operatorname{image}(f)roman\_image ( italic\_f ) if and only if there exist nonnegative weights (w1,w2)superscript𝑤1superscript𝑤2(w^{1},w^{2})( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ), not both zero, such that
| | | |
| --- | --- | --- |
| | (x,y)∈argmax(x\*,y\*)∈image(f)(w1x\*+w2y\*)𝑥𝑦subscriptargmaxsuperscript𝑥superscript𝑦image𝑓superscript𝑤1superscript𝑥superscript𝑤2superscript𝑦(x,y)\in\operatorname\*{argmax}\_{(x^{\*},y^{\*})\in\operatorname{image}(f)}\left(w^{1}x^{\*}+w^{2}y^{\*}\right)( italic\_x , italic\_y ) ∈ roman\_argmax start\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_y start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ∈ roman\_image ( italic\_f ) end\_POSTSUBSCRIPT ( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_y start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) | |
After normalizing w1+w2superscript𝑤1superscript𝑤2w^{1}+w^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT to equal 1111, this implies the result.
∎
###
3.3 A reprioritization mechanism that resembles Bayesian updating
We shall soon see that any Pareto optimal policy π𝜋\piitalic\_π must favor, as time progresses, optimizing the *utility* of whichever player’s *beliefs* were a better predictor of the machine’s inputs. This phenomenon turns out to algebraically resemble Bayesian updating, but is quite different in its meaning. Nonetheless, it is most easily shown to occur by a precise analogy to Bayesian updating in a third POMDP constructed from the outlooks of players 1 and 2, as follows.
For any weights, w1,w2≥0superscript𝑤1superscript𝑤2
0w^{1},w^{2}\geq 0italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ≥ 0 with w1+w2=1superscript𝑤1superscript𝑤21w^{1}+w^{2}=1italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT = 1, we define a new POMDP that works by flipping a (w1,w2)superscript𝑤1superscript𝑤2(w^{1},w^{2})( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT )-weighted coin, and then running D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT or D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT thereafter, according to the coin flip. Explicitly,
######
Definition 4 (POMDP mixtures).
Let D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT be compatible POMDPs, with parameters
Dj=(𝒮j,𝒜,Tj,Uj,𝒪,Ωj,n)superscript𝐷𝑗superscript𝒮𝑗𝒜superscript𝑇𝑗superscript𝑈𝑗𝒪superscriptnormal-Ω𝑗𝑛D^{j}=(\mathcal{S}^{j},\mathcal{A},T^{j},U^{j},\mathcal{O},\Omega^{j},n)italic\_D start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT = ( caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , caligraphic\_A , italic\_T start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , caligraphic\_O , roman\_Ω start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT , italic\_n ). Define a new POMDP compatible both, denoted D=w1D1+w2D2𝐷superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2D=w^{1}D^{1}+w^{2}D^{2}italic\_D = italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, with parameters
Dj=(𝒮,𝒜,T,U,𝒪,Ω,n)superscript𝐷𝑗𝒮𝒜𝑇𝑈𝒪normal-Ω𝑛D^{j}=(\mathcal{S},\mathcal{A},T,U,\mathcal{O},\Omega,n)italic\_D start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT = ( caligraphic\_S , caligraphic\_A , italic\_T , italic\_U , caligraphic\_O , roman\_Ω , italic\_n ), as follows:
* •
𝒮:={(j,s)∣j∈{1,2},s∈𝒮j}assign𝒮conditional-set𝑗𝑠formulae-sequence𝑗12𝑠superscript𝒮𝑗\mathcal{S}:=\{(j,s)\mid j\in\{1,2\},s\in\mathcal{S}^{j}\}caligraphic\_S := { ( italic\_j , italic\_s ) ∣ italic\_j ∈ { 1 , 2 } , italic\_s ∈ caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT },
* •
The environmental transition probabilities T𝑇Titalic\_T are given by
| | | | |
| --- | --- | --- | --- |
| | ℙ((j,s1)):=assignℙ𝑗subscript𝑠1absent\displaystyle\mathbb{P}\left((j,s\_{1})\right):=blackboard\_P ( ( italic\_j , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) := | wj⋅ℙj(s1)⋅superscript𝑤𝑗superscriptℙ𝑗subscript𝑠1\displaystyle\>w^{j}\cdot\mathbb{P}^{j}(s\_{1})italic\_w start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ⋅ blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) | |
| for any initial state s1∈𝒮jsubscript𝑠1superscript𝒮𝑗s\_{1}\in\mathcal{S}^{j}italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∈ caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT, and thereafter, |
| | ℙ((j′,si+1)∣(j,si),ai):=assignℙconditionalsuperscript𝑗′subscript𝑠𝑖1
𝑗subscript𝑠𝑖subscript𝑎𝑖absent\displaystyle\mathbb{P}\left((j^{\prime},s\_{i+1})\mid(j,s\_{i}),a\_{i}\right):=blackboard\_P ( ( italic\_j start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ) ∣ ( italic\_j , italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) := | {ℙj(si+1∣siai) if j′=j0 if j′≠jcasessuperscriptℙ𝑗conditionalsubscript𝑠𝑖1subscript𝑠𝑖subscript𝑎𝑖 if j′=j0 if j′≠j\displaystyle\;\begin{cases}\mathbb{P}^{j}\left(s\_{i+1}\mid s\_{i}a\_{i}\right)&\mbox{ if $j^{\prime}=j$}\\
0&\mbox{ if $j^{\prime}\neq j$}\end{cases}{ start\_ROW start\_CELL blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_CELL start\_CELL if italic\_j start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_j end\_CELL end\_ROW start\_ROW start\_CELL 0 end\_CELL start\_CELL if italic\_j start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ≠ italic\_j end\_CELL end\_ROW | |
Hence, the value of j𝑗jitalic\_j will be constant over time, so a full history for the environment may be represented by a pair
| | | |
| --- | --- | --- |
| | (j,s¯)∈{1}×(𝒮1)n∪{2}×(𝒮2)n.𝑗¯𝑠1superscriptsuperscript𝒮1𝑛2superscriptsuperscript𝒮2𝑛(j,\bar{s})\in\{1\}\times(\mathcal{S}^{1})^{n}\cup\{2\}\times(\mathcal{S}^{2})^{n}.( italic\_j , over¯ start\_ARG italic\_s end\_ARG ) ∈ { 1 } × ( caligraphic\_S start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∪ { 2 } × ( caligraphic\_S start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT . | |
Let B𝐵Bitalic\_B denote the boolean random variable that equals whichever constant value of j𝑗jitalic\_j obtains, so then
| | | |
| --- | --- | --- |
| | ℙ(B=j)=wjℙ𝐵𝑗superscript𝑤𝑗\mathbb{P}(B=j)=w^{j}blackboard\_P ( italic\_B = italic\_j ) = italic\_w start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT | |
* •
The utility function U𝑈Uitalic\_U is given by
| | | |
| --- | --- | --- |
| | U(j,s¯):=Uj(s¯)assign𝑈𝑗¯𝑠superscript𝑈𝑗¯𝑠U(j,\bar{s}):=U^{j}(\bar{s})italic\_U ( italic\_j , over¯ start\_ARG italic\_s end\_ARG ) := italic\_U start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_s end\_ARG ) | |
* •
The observation probabilities ΩΩ\Omegaroman\_Ω are given by
| | | |
| --- | --- | --- |
| | ℙ(oi∣(j,si)):=ℙ(B=j)⋅ℙj(oi∣si)assignℙconditionalsubscript𝑜𝑖𝑗subscript𝑠𝑖⋅ℙ𝐵𝑗superscriptℙ𝑗conditionalsubscript𝑜𝑖subscript𝑠𝑖\mathbb{P}\left(o\_{i}\mid(j,s\_{i})\right):=\mathbb{P}(B=j)\cdot\mathbb{P}^{j}(o\_{i}\mid s\_{i})blackboard\_P ( italic\_o start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ ( italic\_j , italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) := blackboard\_P ( italic\_B = italic\_j ) ⋅ blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) | |
In particular, the policy does not observe directly whether j=1𝑗1j=1italic\_j = 1 or j=2𝑗2j=2italic\_j = 2.
B𝐵Bitalic\_BS1subscript𝑆1S\_{1}italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTS2subscript𝑆2S\_{2}italic\_S start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTS3subscript𝑆3S\_{3}italic\_S start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTS4subscript𝑆4S\_{4}italic\_S start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPTO1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTA1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTO2subscript𝑂2O\_{2}italic\_O start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTA2subscript𝐴2A\_{2}italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTO3subscript𝑂3O\_{3}italic\_O start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTA3subscript𝐴3A\_{3}italic\_A start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTU𝑈Uitalic\_U
Figure 2: A POMDP (mixture) of length n=3𝑛3n=3italic\_n = 3 initialized by a Boolean B𝐵Bitalic\_B
The POMDP mixture D=w1D1+w2D2𝐷superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2D=w^{1}D^{1}+w^{2}D^{2}italic\_D = italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT can be depicted with a Bayes net by adding an additional environmental node for B𝐵Bitalic\_B in the diagram of D1superscript𝐷1D^{1}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT and D2superscript𝐷2D^{2}italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT (see Figure [2](#S3.F2 "Figure 2 ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making")). Indeed, given any policy π𝜋\piitalic\_π, the expected payoff of π𝜋\piitalic\_π in w1D1+w2D2superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2w^{1}D^{1}+w^{2}D^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT is exactly
| | | | |
| --- | --- | --- | --- |
| | | ℙ(B=1)⋅𝔼(U∣B=1;π)+ℙ(B=2)⋅𝔼(U∣B=2;π)⋅ℙ𝐵1𝔼conditional𝑈𝐵1𝜋⋅ℙ𝐵2𝔼conditional𝑈𝐵2𝜋\displaystyle\;\mathbb{P}(B=1)\cdot\mathbb{E}(U\mid B=1;\pi)+\mathbb{P}(B=2)\cdot\mathbb{E}(U\mid B=2;\pi)blackboard\_P ( italic\_B = 1 ) ⋅ blackboard\_E ( italic\_U ∣ italic\_B = 1 ; italic\_π ) + blackboard\_P ( italic\_B = 2 ) ⋅ blackboard\_E ( italic\_U ∣ italic\_B = 2 ; italic\_π ) | |
| | =\displaystyle== | w1𝔼2(U1;π)+w2𝔼2(U2;π)superscript𝑤1superscript𝔼2superscript𝑈1𝜋superscript𝑤2superscript𝔼2superscript𝑈2𝜋\displaystyle\;w^{1}\mathbb{E}^{2}(U^{1};\pi)+w^{2}\mathbb{E}^{2}(U^{2};\pi)italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ) + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ) | |
Therefore, using the above definitions, Lemma [3](#Thmtheorem3 "Lemma 3. ‣ Policy mixing assumption. ‣ 3.2 Pareto optimal policies ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") may be restated in the following equivalent form:
######
Lemma 5.
Given a pair (D1,D2)superscript𝐷1superscript𝐷2(D^{1},D^{2})( italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) of compatible POMDPs, a policy π𝜋\piitalic\_π is Pareto optimal for that pair if and only if there exist weights wjsuperscript𝑤𝑗w^{j}italic\_w start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT such that π𝜋\piitalic\_π is an optimal policy for the single POMDP given by w1D1+w2D2superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2w^{1}D^{1}+w^{2}D^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT.
###
3.4 A recursive Pareto optimality condition
Expressed in the form of Equation [4](#S3.E4 "4 ‣ Lemma 3. ‣ Policy mixing assumption. ‣ 3.2 Pareto optimal policies ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), it might not be clear how a Pareto optimal policy makes use of its observations over time, aside from storing them in memory. For example, is there any sense in which the machine carries “beliefs” about the environment that it “updates” at each time step? Lemma [5](#Thmtheorem5 "Lemma 5. ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") allows us to answer this and related questions by translating theorems about single POMDPs into theorems about compatible pairs of POMDPs.
If π𝜋\piitalic\_π is an optimal policy for a single POMDP, at any time step i𝑖iitalic\_i, optimality of the action distribution πi(−∣o≤ia<i)\pi\_{i}(-\mid o\_{\leq i}a\_{<i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) can be characterized without reference to the previous policy components
(π1,…,πi−1)subscript𝜋1…subscript𝜋𝑖1(\pi\_{1},\ldots,\pi\_{i-1})( italic\_π start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT ), nor to πi(−∣o≤i′a<i′)\pi\_{i}(-\mid o^{\prime}\_{\leq i}a^{\prime}\_{<i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( - ∣ italic\_o start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) for any alternate history o≤i′a<i′subscriptsuperscript𝑜′absent𝑖subscriptsuperscript𝑎′absent𝑖o^{\prime}\_{\leq i}a^{\prime}\_{<i}italic\_o start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT.444This fact can used to justify why the “sunk cost” fallacy is indeed a fallacy. To express this claim in an equation, Pearl’s “do()𝑑𝑜do()italic\_d italic\_o ( )” notation [Pearl, [2009](#bib.bib20)] comes in handy:
######
Definition 6 (“do” notation).
| | | |
| --- | --- | --- |
| | ℙj(o¯∣do(a¯)):=∑s¯∈(𝒮j)nℙj(s1)⋅∏i=1nℙj(oi∣si)ℙj(si+1∣siai)assignsuperscriptℙ𝑗conditional¯𝑜𝑑𝑜¯𝑎subscript¯𝑠superscriptsuperscript𝒮𝑗𝑛⋅superscriptℙ𝑗subscript𝑠1superscriptsubscriptproduct𝑖1𝑛superscriptℙ𝑗conditionalsubscript𝑜𝑖subscript𝑠𝑖superscriptℙ𝑗conditionalsubscript𝑠𝑖1subscript𝑠𝑖subscript𝑎𝑖\mathbb{P}^{j}(\bar{o}\mid do(\bar{a})):=\sum\_{\bar{s}\in(\mathcal{S}^{j})^{n}}\mathbb{P}^{j}(s\_{1})\cdot\prod\_{i=1}^{n}\mathbb{P}^{j}(o\_{i}\mid s\_{i})\,\mathbb{P}^{j}(s\_{i+1}\mid s\_{i}a\_{i})blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_o end\_ARG ∣ italic\_d italic\_o ( over¯ start\_ARG italic\_a end\_ARG ) ) := ∑ start\_POSTSUBSCRIPT over¯ start\_ARG italic\_s end\_ARG ∈ ( caligraphic\_S start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ⋅ ∏ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ∣ italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) | |
This expression is the same as the probability of (o¯,a¯)¯𝑜¯𝑎(\bar{o},\bar{a})( over¯ start\_ARG italic\_o end\_ARG , over¯ start\_ARG italic\_a end\_ARG ) when π𝜋\piitalic\_π is the constant policy that places probability 1111 on the action sequence a¯¯𝑎\bar{a}over¯ start\_ARG italic\_a end\_ARG.
######
Proposition 7 (Classical separability).
If D𝐷Ditalic\_D is a POMDP described by conditional probabilities ℙ(−∣−)\mathbb{P}(-\mid-)blackboard\_P ( - ∣ - ) and utility function U𝑈Uitalic\_U (as in Equation [2](#S3.E2 "2 ‣ Common knowledge assumptions. ‣ 3.1 A POMDP formulation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making")), then a policy π𝜋\piitalic\_π is optimal for D𝐷Ditalic\_D if and only if for each time step i𝑖iitalic\_i and each observation/action history o≤ia<isubscript𝑜absent𝑖subscript𝑎absent𝑖o\_{\leq i}a\_{<i}italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT, the action distribution πi(−∣o≤na<n)\pi\_{i}(-\mid o\_{\leq n}a\_{<n})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_n end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_n end\_POSTSUBSCRIPT ) satisfies the following backward recursion:
| | | |
| --- | --- | --- |
| | πi(−∣o≤ia<i)∈argmaxα∈ΔA(\displaystyle\pi\_{i}(-\mid o\_{\leq i}a\_{<i})\in\operatorname\*{argmax}\_{\alpha\in\Delta A}\left(\vphantom{\left(()\right)}\right.italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_α ∈ roman\_Δ italic\_A end\_POSTSUBSCRIPT ( | |
| | ℙ(o≤i∣do(a<i))⋅𝔼[U∣o≤ia<i;an∼α;πi+1,…,πn])\displaystyle\left.\quad\mathbb{P}(o\_{\leq i}\mid do(a\_{<i}))\cdot\mathbb{E}[U\mid o\_{\leq i}a\_{<i};\;a\_{n}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right)blackboard\_P ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E [ italic\_U ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] ) | |
This characterization of πi(o≤ia<i)subscript𝜋𝑖subscript𝑜absent𝑖subscript𝑎absent𝑖\pi\_{i}(o\_{\leq i}a\_{<i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) does not refer to π1,…,πi−1subscript𝜋1normal-…subscript𝜋𝑖1\pi\_{1},\ldots,\pi\_{i-1}italic\_π start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT, nor to
πi(o≤i′a<i′)subscript𝜋𝑖subscriptsuperscript𝑜normal-′absent𝑖subscriptsuperscript𝑎normal-′absent𝑖\pi\_{i}(o^{\prime}\_{\leq i}a^{\prime}\_{<i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) for any alternate history o≤i′a<i′subscriptsuperscript𝑜normal-′absent𝑖subscriptsuperscript𝑎normal-′absent𝑖o^{\prime}\_{\leq i}a^{\prime}\_{<i}italic\_o start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT.
###### Proof.
This is a standard property of POMDP solutions.
∎
It turns out that Pareto optimality can be characterized in a similar way by backward recursion from the final time step. The resulting recursion reveals a pattern in how the weights on the players’ conditionally expected utilities must change over time, which is the main result of this paper:
######
Theorem 8 (Pareto optimal policy recursion).
Given a pair (D1,D2)superscript𝐷1superscript𝐷2(D^{1},D^{2})( italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) of compatible POMDPs of length n𝑛nitalic\_n, a policy π𝜋\piitalic\_π is Pareto optimal if and only if its components πisubscript𝜋𝑖\pi\_{i}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT for i≤n𝑖𝑛i\leq nitalic\_i ≤ italic\_n satisfy the following backward recursion for some pair of weights w1,w2≥0superscript𝑤1superscript𝑤2
0w^{1},w^{2}\geq 0italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ≥ 0 with w1+w2=1superscript𝑤1superscript𝑤21w^{1}+w^{2}=1italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT = 1:
| | | | |
| --- | --- | --- | --- |
| | | πi(−∣o≤ia<i)∈argmaxα∈ΔA(\displaystyle\pi^{i}(-\mid o\_{\leq i}a\_{<i})\in\operatorname\*{argmax}\_{\alpha\in\Delta A}\left(\vphantom{\left(\left(\right)\right)}\right.italic\_π start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_α ∈ roman\_Δ italic\_A end\_POSTSUBSCRIPT ( | |
| | | w1ℙ1(o≤i∣do(a<i))⋅𝔼1[U1∣o≤ia<iai;ai∼α;πi+1,…,πn]⋅superscript𝑤1superscriptℙ1conditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖superscript𝔼1delimited-[]similar-toconditionalsuperscript𝑈1subscript𝑜absent𝑖subscript𝑎absent𝑖subscript𝑎𝑖subscript𝑎𝑖𝛼subscript𝜋𝑖1…subscript𝜋𝑛\displaystyle\left.w^{1}\mathbb{P}^{1}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}^{1}[U^{1}\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right.italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] | |
| | +\displaystyle++ | w2ℙ2(o≤i∣do(a<i))⋅𝔼2[U2∣o≤ia<iai;ai∼α;πi+1,…,πn])\displaystyle\left.w^{2}\mathbb{P}^{2}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}^{2}[U^{2}\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right)italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] ) | |
In words, to achieve Pareto optimality, the machine must
1. 1.
use each player’s own beliefs when estimating the degree to which a decision favors that player’s utility function, and
2. 2.
shift the relative priorities of the players’ expected utilities in the machine’s decision objective over time, by a factor proportional to how well the players predict the machine’s inputs.
###### Proof.
By Lemma [5](#Thmtheorem5 "Lemma 5. ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), the Pareto optimality of π𝜋\piitalic\_π for (D1,D2)superscript𝐷1superscript𝐷2(D^{1},D^{2})( italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) is equivalent to its classical optimality for
w1D1+w2D2superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2w^{1}D^{1}+w^{2}D^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT for some (w1,w2)superscript𝑤1superscript𝑤2(w^{1},w^{2})( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ), which by Proposition [7](#Thmtheorem7 "Proposition 7 (Classical separability). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") is equivalent to satisfying the following backward recursion (writing ℙℙ\mathbb{P}blackboard\_P and 𝔼𝔼\mathbb{E}blackboard\_E for probabilities and expectations in w1D1+w2D2superscript𝑤1superscript𝐷1superscript𝑤2superscript𝐷2w^{1}D^{1}+w^{2}D^{2}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT):
| | | | |
| --- | --- | --- | --- |
| | | πi(−∣o≤ia<i)∈argmaxα∈ΔA(\displaystyle\pi^{i}(-\mid o\_{\leq i}a\_{<i})\in\operatorname\*{argmax}\_{\alpha\in\Delta A}\left(\vphantom{\left(\left(\right)\right)}\right.italic\_π start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_α ∈ roman\_Δ italic\_A end\_POSTSUBSCRIPT ( | |
| | | ℙ(B=1)⋅ℙ(o≤i∣do(a<i))⋅𝔼[U∣o≤ia<iai;ai∼α;πi+1,…,πn]⋅⋅ℙ𝐵1ℙconditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖𝔼delimited-[]similar-toconditional𝑈subscript𝑜absent𝑖subscript𝑎absent𝑖subscript𝑎𝑖subscript𝑎𝑖𝛼subscript𝜋𝑖1…subscript𝜋𝑛\displaystyle\left.\mathbb{P}(B=1)\cdot\mathbb{P}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}[U\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right.blackboard\_P ( italic\_B = 1 ) ⋅ blackboard\_P ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E [ italic\_U ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] | |
| | +\displaystyle++ | ℙ(B=2)⋅ℙ(o≤i∣do(a<i))⋅𝔼[U∣o≤ia<iai;ai∼α;πi+1,…,πn]).\displaystyle\left.\mathbb{P}(B=2)\cdot\mathbb{P}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}[U\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right).blackboard\_P ( italic\_B = 2 ) ⋅ blackboard\_P ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E [ italic\_U ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] ) . | |
By Definition [4](#Thmtheorem4 "Definition 4 (POMDP mixtures). ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), the expression inside the argmaxargmax\operatorname\*{argmax}roman\_argmax equals
| | | | |
| --- | --- | --- | --- |
| | | w1ℙ1(o≤i∣do(a<i))⋅𝔼1[U1∣o≤ia<iai;ai∼α;πi+1,…,πn]⋅superscript𝑤1superscriptℙ1conditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖superscript𝔼1delimited-[]similar-toconditionalsuperscript𝑈1subscript𝑜absent𝑖subscript𝑎absent𝑖subscript𝑎𝑖subscript𝑎𝑖𝛼subscript𝜋𝑖1…subscript𝜋𝑛\displaystyle w^{1}\mathbb{P}^{1}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}^{1}[U^{1}\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] | |
| | +\displaystyle++ | w2ℙ2(o≤i∣do(a<i))⋅𝔼2[U2∣o≤ia<iai;ai∼α;πi+1,…,πn]⋅superscript𝑤2superscriptℙ2conditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖superscript𝔼2delimited-[]similar-toconditionalsuperscript𝑈2subscript𝑜absent𝑖subscript𝑎absent𝑖subscript𝑎𝑖subscript𝑎𝑖𝛼subscript𝜋𝑖1…subscript𝜋𝑛\displaystyle w^{2}\mathbb{P}^{2}\left(o\_{\leq i}\mid do(a\_{<i})\right)\cdot\mathbb{E}^{2}[U^{2}\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT blackboard\_P start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) ⋅ blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] | |
hence the result.
∎
When the players have the same beliefs, they aways assign the same probability to the machine’s inputs, so the weights on their respective expectations do not change over time. In this case, Harsanyi’s utility aggregation formula is recovered as a special instance:
######
Corollary 9 (Harsanyi’s utility aggregation formula).
Suppose that players 1 and 2 share the same beliefs about the environment, i.e., the pair (D1,D2)superscript𝐷1superscript𝐷2(D^{1},D^{2})( italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_D start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) of compatible POMDPs agree on all parameters except the players’ utility functions U1≠U2superscript𝑈1superscript𝑈2U^{1}\neq U^{2}italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ≠ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT. Then a policy π𝜋\piitalic\_π is Pareto optimal if and only if there exist weights w1,w2≥0superscript𝑤1superscript𝑤2
0w^{1},w^{2}\geq 0italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ≥ 0 with w1+w2=1superscript𝑤1superscript𝑤21w^{1}+w^{2}=1italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT = 1 such that for i≤n𝑖𝑛i\leq nitalic\_i ≤ italic\_n, πisubscript𝜋𝑖\pi\_{i}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT satisfies
| | | |
| --- | --- | --- |
| | πi(−∣o≤ia<i)∈argmaxα∈ΔA(𝔼[w1U1+w2U2]∣o≤ia<iai;ai∼α;πi+1,…,πn])\pi^{i}(-\mid o\_{\leq i}a\_{<i})\in\operatorname\*{argmax}\_{\alpha\in\Delta A}\left(\mathbb{E}[w^{1}U^{1}+w^{2}U^{2}]\mid o\_{\leq i}a\_{<i}a\_{i};\;a\_{i}\sim\alpha;\;\pi\_{i+1},\ldots,\pi\_{n}]\right)italic\_π start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( - ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_α ∈ roman\_Δ italic\_A end\_POSTSUBSCRIPT ( blackboard\_E [ italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT + italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] ∣ italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_α ; italic\_π start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ] ) | |
where 𝔼=𝔼1=𝔼2𝔼superscript𝔼1superscript𝔼2\mathbb{E}=\mathbb{E}^{1}=\mathbb{E}^{2}blackboard\_E = blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT = blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT denotes the shared expectations of both players.
###### Proof.
Setting 𝔼=𝔼1=𝔼2𝔼superscript𝔼1superscript𝔼2\mathbb{E}=\mathbb{E}^{1}=\mathbb{E}^{2}blackboard\_E = blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT = blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT in Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), factoring out the common coefficient ℙ1(o≤i∣do(a<i))=ℙ2(o≤i∣do(a<i))superscriptℙ1conditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖superscriptℙ2conditionalsubscript𝑜absent𝑖𝑑𝑜subscript𝑎absent𝑖\mathbb{P}^{1}\left(o\_{\leq i}\mid do(a\_{<i})\right)=\mathbb{P}^{2}\left(o\_{\leq i}\mid do(a\_{<i})\right)blackboard\_P start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ) = blackboard\_P start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_o start\_POSTSUBSCRIPT ≤ italic\_i end\_POSTSUBSCRIPT ∣ italic\_d italic\_o ( italic\_a start\_POSTSUBSCRIPT < italic\_i end\_POSTSUBSCRIPT ) ), and applying linearity of expectation yields the result.
∎
###
3.5 Comparison to naïve utility aggregation
To see the necessity of the ℙjsuperscriptℙ𝑗\mathbb{P}^{j}blackboard\_P start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT terms that shift the expectation weights in Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") over time, let us compare it with the behavior of an alternative optimization criterion that maximizes a fixed linear combination of expectations.
#### A cake-splitting scenario.
The parameters of this scenario are laid out in Table [1](#S3.T1 "Table 1 ‣ A cake-splitting scenario. ‣ 3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), and described as follows:
Alice (Player 1) and Bob (Player 2) are about to be presented with a cake which they can choose to split in half to share, or give entirely to one of them. They have (built or purchased) a robot that will make the cake-splitting decision on their behalf. Alice’s utility function returns 00 if she gets no cake, 20202020 if she gets half a cake, or 30303030 if she gets a whole cake. Bob’s utility function works similarly.
However, Alice and Bob have slightly different beliefs about how the environment works. They both agree on the state of the environment that the robot will encounter at first: a room with a cake in it (S1=“cake”subscript𝑆1“cake”S\_{1}=\text{``cake''}italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = “cake”). But Alice and Bob have different predictions about how the robot’s sensors will perceive the cake: Alice thinks that when the robot perceives the cake, it is 90%percent9090\%90 % likely to appear with a red tint (O1=“red”subscript𝑂1“red”O\_{1}=\text{``red"}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = “red”), and 10%percent1010\%10 % likely to appear with a green tint (O1=“green”subscript𝑂1“green”O\_{1}=\text{``green"}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = “green”), whereas Bob believes the exact opposite. In either case, upon seeing the cake, the robot will either give Alice the entire cake (A1=S1=(all, none)subscript𝐴1subscript𝑆1(all, none)A\_{1}=S\_{1}=\text{(all, none)}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = (all, none)), split the cake half-and-half (A1=S1=(half, half)subscript𝐴1subscript𝑆1(half, half)A\_{1}=S\_{1}=\text{(half, half)}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = (half, half)), or give Bob the entire cake (A1=S1=(none, all)subscript𝐴1subscript𝑆1(none, all)A\_{1}=S\_{1}=\text{(none, all)}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = (none, all)). Moreover, Alice and Bob have common knowledge of all these facts.
| S1subscript𝑆1S\_{1}italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT | O1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT | ℙ1(O1∣S1)superscriptℙ1conditionalsubscript𝑂1subscript𝑆1\mathbb{P}^{1}(O\_{1}\mid S\_{1})blackboard\_P start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ( italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∣ italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) | ℙ2(O1∣S1)superscriptℙ2conditionalsubscript𝑂1subscript𝑆1\mathbb{P}^{2}(O\_{1}\mid S\_{1})blackboard\_P start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∣ italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) | A1=S1subscript𝐴1subscript𝑆1A\_{1}=S\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = italic\_S start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT | U1superscript𝑈1U^{1}italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT | U2superscript𝑈2U^{2}italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT |
| --- | --- | --- | --- | --- | --- | --- |
| cake | red | 90%percent9090\%90 % | 10%percent1010\%10 % | (all, none) | 30 | 0 |
| (half, half) | 20 | 20 |
| (none, all) | 0 | 30 |
| green | 10%percent1010\%10 % | 90%percent9090\%90 % | (all, none) | 30 | 0 |
| (half, half) | 20 | 20 |
| (none, all) | 0 | 30 |
Table 1: An example scenario wherein a Pareto optimal policy undergoes priority shifting
Now, consider the following Pareto optimal policy that favors Alice (Player 1) when O1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is red, and Bob (Player 2) when O1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is green:
| | | |
| --- | --- | --- |
| | π^(−∣red)=100%(all, none)\displaystyle\hat{\pi}(-\mid\text{red})=100\%\text{(all, none)}over^ start\_ARG italic\_π end\_ARG ( - ∣ red ) = 100 % (all, none) | |
| | π^(−∣green)=100%(none, all)\displaystyle\hat{\pi}(-\mid\text{green})=100\%\text{(none, all)}over^ start\_ARG italic\_π end\_ARG ( - ∣ green ) = 100 % (none, all) | |
This policy can be viewed intuitively as a bet between Alice and Bob about the value of O1subscript𝑂1O\_{1}italic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, and is highly appealing to both players:
| | | | |
| --- | --- | --- | --- |
| | 𝔼1[U1;π^]superscript𝔼1superscript𝑈1^𝜋\displaystyle\mathbb{E}^{1}[U^{1};\hat{\pi}]blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; over^ start\_ARG italic\_π end\_ARG ] | =90%(30)+10%(0)=27absentpercent9030percent10027\displaystyle=90\%(30)+10\%(0)=27= 90 % ( 30 ) + 10 % ( 0 ) = 27 | |
| | 𝔼2[U2;π^]superscript𝔼2superscript𝑈2^𝜋\displaystyle\mathbb{E}^{2}[U^{2};\hat{\pi}]blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; over^ start\_ARG italic\_π end\_ARG ] | =10%(0)+90%(30)=27absentpercent100percent903027\displaystyle=10\%(0)+90\%(30)=27= 10 % ( 0 ) + 90 % ( 30 ) = 27 | |
In particular, π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG is more appealing to both Alice and Bob than an agreement to deterministically split the cake (half, half). However,
######
Proposition 10.
The Pareto optimal strategy π^normal-^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG above cannot be implemented by any machine that naïvely maximizes a fixed-over-time linear combination of the conditionally expected utility of the two players, i.e., by any policy π𝜋\piitalic\_π satisfying
| | | | |
| --- | --- | --- | --- |
| | π(−∣o1)∈argmaxα∈ΔA(r⋅𝔼1[U1∣o1;a1∼α]+(1−r)⋅𝔼2[U2∣o1;a1∼α])\pi(-\mid o\_{1})\in\operatorname\*{argmax}\_{\alpha\in\Delta A}\left(r\cdot\mathbb{E}^{1}[U^{1}\mid o\_{1};a\_{1}\sim\alpha]+(1-r)\cdot\mathbb{E}^{2}[U^{2}\mid o\_{1};a\_{1}\sim\alpha]\right)italic\_π ( - ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ∈ roman\_argmax start\_POSTSUBSCRIPT italic\_α ∈ roman\_Δ italic\_A end\_POSTSUBSCRIPT ( italic\_r ⋅ blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∼ italic\_α ] + ( 1 - italic\_r ) ⋅ blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ; italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∼ italic\_α ] ) | | (5) |
for some fixed r∈[0,1]𝑟01r\in[0,1]italic\_r ∈ [ 0 , 1 ]. Moreover, every such policy π𝜋\piitalic\_π is strictly worse than π^normal-^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG in expectation to one of the players.
This proposition is relatively unsurprising when one considers the policy π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG intuitively as a bet-settling mechanism, and that the nature of betting is to favor different preferences based on future observations. However, to be sure of this impossibility claim, one must rule out the possibility that the π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG could be implemented by having the machine choose which element of the argmaxargmax\operatorname\*{argmax}roman\_argmax in Equation [5](#S3.E5 "5 ‣ Proposition 10. ‣ A cake-splitting scenario. ‣ 3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") to use based on whether the cake appears red or green.
###### Proof of Proposition [10](#Thmtheorem10 "Proposition 10. ‣ A cake-splitting scenario. ‣ 3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making").
Suppose π𝜋\piitalic\_π is any policy satisfying Equation [5](#S3.E5 "5 ‣ Proposition 10. ‣ A cake-splitting scenario. ‣ 3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") for some fixed r𝑟ritalic\_r, and consider the following cases for r𝑟ritalic\_r:
1. 1.
If r<1/3𝑟13r<1/3italic\_r < 1 / 3, then π𝜋\piitalic\_π must satisfy
| | | |
| --- | --- | --- |
| | π(−∣o1)=100%(none, all).\pi(-\mid o\_{1})=100\%\text{(none, all)}.italic\_π ( - ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) = 100 % (none, all) . | |
Here, 𝔼1[U1;π]=0<27superscript𝔼1
superscript𝑈1𝜋027\mathbb{E}^{1}[U^{1};\pi]=0<27blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] = 0 < 27, so π𝜋\piitalic\_π is strictly worse than π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG in expectation to Alice.
2. 2.
If r=1/3𝑟13r=1/3italic\_r = 1 / 3, then π𝜋\piitalic\_π must satisfy
| | | |
| --- | --- | --- |
| | π(−∣o1)=q(o1)(none, all)+(1−q(o1))(half, half)\pi(-\mid o\_{1})=q(o\_{1})\text{(none, all)}+(1-q(o\_{1}))\text{(half, half)}italic\_π ( - ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) = italic\_q ( italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) (none, all) + ( 1 - italic\_q ( italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) (half, half) | |
for some q(o1)∈[0,1]𝑞subscript𝑜101q(o\_{1})\in[0,1]italic\_q ( italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ∈ [ 0 , 1 ] depending on o1subscript𝑜1o\_{1}italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. Here, 𝔼1[U1;π]≤20<27superscript𝔼1
superscript𝑈1𝜋2027\mathbb{E}^{1}[U^{1};\pi]\leq 20<27blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] ≤ 20 < 27 (with equality when q(red)=q(green)=1𝑞red𝑞green1q(\text{red})=q(\text{green})=1italic\_q ( red ) = italic\_q ( green ) = 1), so π𝜋\piitalic\_π is strictly worse than π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG in expectation to Alice.
3. 3.
If 1/3<r<2/313𝑟231/3<r<2/31 / 3 < italic\_r < 2 / 3, then π𝜋\piitalic\_π must satisfy
| | | |
| --- | --- | --- |
| | π(−∣o1)=100%(half, half)\pi(-\mid o\_{1})=100\%\text{(half, half)}italic\_π ( - ∣ italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) = 100 % (half, half) | |
Here, 𝔼1[U1;π]=𝔼2[U2;π]=20<27superscript𝔼1
superscript𝑈1𝜋superscript𝔼2
superscript𝑈2𝜋2027\mathbb{E}^{1}[U^{1};\pi]=\mathbb{E}^{2}[U^{2};\pi]=20<27blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] = blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ] = 20 < 27, so π𝜋\piitalic\_π is strictly worse than π^^𝜋\hat{\pi}over^ start\_ARG italic\_π end\_ARG in expectation to both Alice and Bob.
The remaining cases, r=2/3𝑟23r=2/3italic\_r = 2 / 3 and r>2/3𝑟23r>2/3italic\_r > 2 / 3, are symmetric to the first two, with Bob in place of Alice and (none, all) in place of (all, none).
∎
4 Interpretations
------------------
Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") shows that a Pareto optimal policy must tend, over time, toward prioritizing the expected utility of whichever player’s *beliefs* best predict the machine’s inputs better. From some perspectives, this is a little counterintuitive: not only must the machine gradually place more predictive weight on whichever player’s prior is a better predictor, but it must reward that player by attending more to her utility function as well. This behavior is not an assumption, but rather is forced to occur by Pareto optimality. The players *must* agree to this pattern of shifting priority over time, or else they will leave Pareto improvements on the table during the bargaining period when they choose the machine’s policy.
This phenomenon warrants a few interpretations:
#### Bet settling.
As discussed in Section [3.5](#S3.SS5 "3.5 Comparison to naïve utility aggregation ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"), a machine implementing a Pareto optimal policy can be viewed as a kind of bet-settling device. If Alice is 90% sure the Four Horsemen will appear tomorrow and Bob is 80% sure they won’t, it makes sense for Alice to ask—while bargaining with Bob for the machine’s policy—that the machine prioritize her values more if the Four Horsemen arrive tomorrow, in exchange for prioritizing Bob’s values more if they don’t. Both parties will be happy with this agreement in expectation. As long as it remains possible to redistribute the machine’s priorities in a way that resembles an agreeable bet between Alice and Bob, its policy is not yet Pareto optimal. Thus, Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") can be seen as saying that a Pareto optimality policy goes about settling a bet on the machine’s input at each time step, in such a way that no additional bets settlable by the policy are desirable to both players.
#### Moral realism with Bayesian updating.
Alternatively, we could take more seriously the interpretation of the weights wjsuperscript𝑤𝑗w^{j}italic\_w start\_POSTSUPERSCRIPT italic\_j end\_POSTSUPERSCRIPT in Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") as prior “beliefs” about the value of the made-up latent variable B𝐵Bitalic\_B from Lemma [5](#Thmtheorem5 "Lemma 5. ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") that simultaneously governs (1) how the environment works, and (2) what utility function is “correct” to pursue. This interpretation is a bit unnatural because, even if the original environmental variables Sisubscript𝑆𝑖S\_{i}italic\_S start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT were very grounded in physical reality, the abstract variable B𝐵Bitalic\_B in Lemma [5](#Thmtheorem5 "Lemma 5. ‣ 3.3 A reprioritization mechanism that resembles Bayesian updating ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") is merely a fiction conjured up to imply a correlation between “is” and “ought”: namely, that either the world *is* governed by ℙ1superscriptℙ1\mathbb{P}^{1}blackboard\_P start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT, and U1superscript𝑈1U^{1}italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT *ought* to be optimized, or the world *is* governed by ℙ2superscriptℙ2\mathbb{P}^{2}blackboard\_P start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, and U2superscript𝑈2U^{2}italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT *ought* to be optimized. This occurs even if each of the two players treats their beliefs and utilities completely separately (i.e., even if they apply Bayesian updating only to their beliefs, and keep their utility functions fixed).
Mixing “is” and “ought” in this way is often considered a type error. Nonetheless, many humans report an intuitive sense that there are objective, right-and-wrong answers to moral questions that can be answered by observing the world. If a human is implicitly and approximately acting in a Pareto optimal fashion for a mixture of belief/utility outlooks D1,…,Dksuperscript𝐷1…superscript𝐷𝑘D^{1},\ldots,D^{k}italic\_D start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_D start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT, then the process of “updating” to favor a certain utility function might feel, from the inside, like “finding an answer” to a moral question.
5 Current limitations and future directions
--------------------------------------------
The eventual aim of this work is to facilitate the cooperative development and deployment of advanced AI systems, by simplifying the process of bargaining for shared control of such systems, and by making collaborative outcomes generally easier to implement and more attractive. For this purpose, while Pareto optimality is a desirable condition to aim for, it is not an adequate solution concept on its own. Indeed, the policy “maximize player 1’s utility function, without regard for player 2” is Pareto optimal, yet is clearly not the sort of solution one would expect two nations to agree upon. This and at least several other issues must be addressed to build a satisfactory negotiation framework, exhibited below in order of increasing difficulty as estimated by the author.
#### 1. BATNA dominance.
In any bargaining situation, each player has a “best alternative to negotiated agreement”, or BATNA, that she expects to obtain if the other player chooses not to cooperate. The characterization of Pareto optimality given in Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") does not account for the players’ BATNAs. Given a facet of the Pareto boundary, specified by the maximization of a linear function with weights (w1,w2)superscript𝑤1superscript𝑤2(w^{1},w^{2})( italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , italic\_w start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ), a policy π𝜋\piitalic\_π satisfying Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") will yield an expectation pair (𝔼1[U1;π],𝔼2[U2;π])superscript𝔼1superscript𝑈1𝜋superscript𝔼2superscript𝑈2𝜋\left(\mathbb{E}^{1}[U^{1};\pi],\mathbb{E}^{2}[U^{2};\pi]\right)( blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] , blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ] ) lying on that facet. Thus, the bargaining problem has been reduced to choosing an appropriate facet of the Pareto boundary. But suppose not all points on the chosen facet lie above both players’ BATNAs. Then, in order to satisfy the individual rationality of the players, the policy should target a more specific subset of that facet.
#### 2. Targeting specific expectation pairs.
If a specific target value for the expectation pair (𝔼1[U1;π],𝔼2[U2;π])superscript𝔼1superscript𝑈1𝜋superscript𝔼2superscript𝑈2𝜋\left(\mathbb{E}^{1}[U^{1};\pi],\mathbb{E}^{2}[U^{2};\pi]\right)( blackboard\_E start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ; italic\_π ] , blackboard\_E start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT [ italic\_U start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ; italic\_π ] ) is desired, unless that pair is a vertex of the Pareto region (e.g., perhaps the boundary is curved), the best that following the recursion of Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") ensures is a point on the same facet. The ability to target a specific pair would solve not only BATNA dominance (1), but also help achieve other fairness or robustness criteria that might arise from bargaining. One approach would be to make a small modification to the players’ utility functions to ensure that the resulting Pareto boundary is curved, thereby avoiding this problem at the cost of a tiny utility adjustment. Choosing a simple form for the adjustment that is amenable to formal proof would be a natural next step in this direction.
#### 3. Information trade.
Our algorithm implicitly favors whichever player best predicts the machine’s input history, given its action history. This makes sense when the players have common knowledge of each other’s priors and observations, at which point they have already had the opportunity to update on each other’s views and chosen not to. This is unrealistic if Alice knows that Bob has made observations in the past that Alice did not. In that case, Alice will view Bob’s beliefs as containing valuable information that ought to shift her prior. She may wish to bargain with Bob for access to that information in order to improve her own ability to optimize the machine’s policy. Perhaps she would concede some control over the machine (by reducing her weight, w1superscript𝑤1w^{1}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT) in exchange for information provided by Bob to improve her beliefs. An efficient procedure to naturally facilitate this sort of exchange would be complimentary Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"). One approach would be to have each player express their posterior as a function of the other’s, and use a fixed point theorem to choose a stable pair of posteriors. However, many questions arise about this method when there are multiple fixed points.
#### 4. Learning priors and utility functions.
It is notoriously difficult to explicitly specify one’s utility function U𝑈Uitalic\_U to a machine, so in practice, one must choose a method enabling the machine to learn the utility function. Cooperative inverse reinforcement learning (CIRL) [Hadfield-Menell et al., [2016](#bib.bib10)] exhibits such a framework, and reduces the problem to solving a POMDP. In CIRL, a human and a robot play a cooperative game wherein both players aim to maximize the human’s utility function U𝑈Uitalic\_U, but the robot is uncertain about U𝑈Uitalic\_U and must infer it from the human. Moreover, the human and robot have common knowledge of this situation, so the human may engage in “teaching” behavior to help the robot along. Such dynamics must be accounted for in a satisfactory treatment of negotiation for a machine’s priorities. In addition, the players’ priors should probably also be learned by a machine in some way rather than explicitly specified.
#### 5. Incentive compatibility.
Assuming any particular method for learning players’ priors and utility functions, a question arrises as to whether it incentivizes players to represent their beliefs and utilities honestly. For example, Alice may have some incentive to exaggerate her estimation of her BATNA in the positive direction, to motivate Bob to “sweeten the deal” by conceding her a higher priority w1superscript𝑤1w^{1}italic\_w start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT in the recursion of Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making"). As well, players might also have incentives to alter their reported beliefs in order to exaggerate the degree to which the machine’s decisions will affect their utilities. A satisfactory learning method should rule out or otherwise cope with this phenomenon. A great deal of literature already exists on incentive compatibility, as begun by Hurwicz [[1972](#bib.bib12)], Myerson [[1979](#bib.bib14)], and Myerson and Satterthwaite [[1983](#bib.bib16)], which should offer a good start.
#### 6. Naturalized decision theory.
The POMDP setting used here is “Cartesian” in that it assumes a clear divide between the machine’s inner workings and its environment. This is highly inappropriate when the machine may be copied or simulated; it may wind up in Newcomb-like problems as in Soares and Fallenstein [[2015](#bib.bib25)], and very strange cooperative equilibria may exist between its copies, such as in Critch [[2016](#bib.bib5)]. Instead, one should assume a “naturalized” model of the problem where the machine is part of its environment, as in Fallenstein et al. [[2015](#bib.bib7)]. Some attempts have been made to characterize optimal decision-making in a naturalized setting, e.g., by Orseau and Ring [[2012](#bib.bib19)], but very few theorems to aid in sequential implementation exist (e.g., no analogue of Proposition [7](#Thmtheorem7 "Proposition 7 (Classical separability). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") is known), except possibly for some self-reflective properties exhibited by Garrabrant’s logical inductors [Garrabrant et al., [2016](#bib.bib9)] that might be expanded to exhibit relevance to sequential decision-making. Without a satisfactory model of naturalized decision-making for the machine to follow, the negotiating parties might unwittingly assign the machine a policy vulnerable to Newcomb-like extortions. On the other hand, a satisfactory resolution would not only help to model the machine’s situation, but also that of the players themselves during the negotiation phase.
6 Conclusion
-------------
Insofar as Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") is not particularly mathematically sophisticated—it employs only basic facts about convexity and linear algebra—this suggests there may be more low-hanging fruit to be found in the domain of “machine implementable social choice theory”. To recapitulate, Theorem [8](#Thmtheorem8 "Theorem 8 (Pareto optimal policy recursion). ‣ 3.4 A recursive Pareto optimality condition ‣ 3 Two agents building a third ‣ Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making") represents two deviations from the intuition of naïve utility aggregation: to achieve Pareto optimality for players with differing beliefs, a machine must (1) use each player’s own beliefs in evaluating how well an action will serve that player’s utility function, and (2) shift the relative priority it assigns to each player’s expected utilities over time, by a factor proportional to how well that player’s beliefs predict the machine’s inputs.
As a final remark, consider that social choice theory and bargaining theory were both pioneered during the Cold War, when it was particularly compelling to understand the potential for cooperation between human institutions that might behave competitively. In the coming decades, machine intelligences will likely bring many new challenges for cooperation, as well as new means to cooperate, and new reasons to do so. As such, new technical aspects of social choice and bargaining, along the lines of this paper, will likely continue to emerge. In particular, the problems outlined in Section 5 represent areas particularly promising for facilitating cooperative outcomes in the deployment of advanced AI systems, and the present author is seeking collaborations to address them. |
d973c2a8-af9d-483b-b1a7-ec491212aa78 | trentmkelly/LessWrong-43k | LessWrong | What vegan food resources have you found useful?
My vegan nutrition project has focused on supplements, but there have been several requests for resources on how to eat better. I'm not well equipped to answer this- I'm not vegan, and my stomach only got even passably good at digestion a year ago. So I'd like to crowdsource this question: what resources have you found useful in eating well while eliminating or reducing animal products?
My existing list focuses on overviews and onboarding guides, but I think things like recipe books would also be useful.
* Veganhealth.org is the number one resource I see cited, by both people I know and other vegan websites.
* TheVeganRD.com guide is far from comprehensive but seems extremely approachable.
* A reader pointed me to spreadsheet, which links to a number of resources.
* Lincoln Quirk's Practical Plant-Based Meal Planning for Groups
Have people found these helpful? Are there other things that are better, or fill in a gap? How did you learn to cook well?
Diet and supplements overlap but there is already a supplements post and I don't want to split the information, so please lean food-based here.
|
c78dd0cb-3112-45c2-95c6-5b802deed70a | trentmkelly/LessWrong-43k | LessWrong | Winners of the Essay competition on the Automation of Wisdom and Philosophy
We’re delighted to announce the winners of the Essay competition on the Automation of Wisdom and Philosophy.
Overview
The competition attracted 90 entries in total (only one of which was obviously just the work of an LLM!), taking a wide variety of angles on the topic. The judges awarded the top four prizes as follows:
* $7,000 to Rudolf Laine for essays on wisdom, amortised optimisation, and AI:
* Part I: Wisdom, amortised optimisation, and AI
* Part II: Growth and amortised optimisation
* Part III: AI effects on amortised optimisation
* $6,000 to Thane Ruthenis for Towards the operationalization of philosophy & wisdom
* $4,000 to Chris Leong for essays on training wise AI systems:
* An Overview of “Obvious” Approaches to Training Wise AI Advisors
* Some Preliminary Notes on the Promise of a Wisdom Explosion
* $3,000 to Gabriel Recchia for Should we just be building more datasets?
Additionally, the judges awarded ten runner-up prizes, of $500 each. These adjustments to the prize schedule were made to better reflect the judges’ assessments — see footnote for details[1].
Many of those essays which did not get prizes still had something valuable to recommend them, and we are very grateful for the deep thought participants put into engaging with these potentially-crucial topics.
In the rest of this post, we’ll link to all fourteen of the prize-winning entries, and provide judges’ commentary on these. We have made a point to include critical commentary as well as praise. In our view this is an essential part of the competition — helping to draw collective attention to the ideas people have presented, and (we hope) helping to advance the discourse on what makes for valuable work.
Judge introductions
Andreas Stuhlmüller (AS) — Hi, I'm CEO & cofounder of Elicit, an AI company working on scaling up high-quality reasoning, starting with science. I've been interested in how AI can differentially advance wisdom for a long time, and (pre LLMs) fo |
90743594-4fbf-4c6c-a36e-1c3c8db5084a | trentmkelly/LessWrong-43k | LessWrong | What if Brain Computer Interfaces went exponential?
After watching the release of this video from one of the patients in Neuralink's first clinical trials, I decided to post here the prologue of some fiction I have been writing which is set in a world where BCI tech experiences the same exponential growth that we have seen with LLMs.
----------------------------------------
Nueralink Mk. 1
dedicates its first clinical trial to the morally unimpeachable work of helping quadriplegics control external devices. The early successes are heartwarming. The first viral video is of a hospital bed bound young man in Trenton New Jersey slowly dragging a cursor across a point-and-click keyboard and telling his mother “LUV U” is of a man with ALS explaining his condition, how supportive his wife has been, and how much Neuralink has improved his quality of life (Updated April 28th, 2025). It is followed shortly after by a teenage girl named Hope piloting a motorized wheelchair through a park. CNN runs a cinematic shot of her hair billowing in the wind.
The Phase I clinical trial involves 55 patients, many of whom churn out inspiring content on a regular basis. Redditors who see these successes drop whatever they are doing to furiously clarify across their various socials that these achievements, while great, should not in any way be credited to Elon Musk. Regardless, the tech is at first immensely popular and Neuralink the company skyrockets to an approval rating in the mid 90s. This makes it about as popular as root beer floats, Rocky IV, or receiving oral sex.
Neuralink goes international quickly, opening up a broader trial in the UAE for patients with motor & speech impairment (Edited May 15, 2025).
Political pressure to make the technology more easily available mounts. Patient advocacy groups and libertarian minded lobbyists team up with Silicon Valley accelerationists, all of them pushing to get “ultra fast tracking” for BCIs. There’s not a politician or bureaucrat on the planet eager to oppose wheelchair bound sob stori |
7efd4fad-f719-4850-a3d9-4f0051fccd40 | trentmkelly/LessWrong-43k | LessWrong | Talkbox Bagpipe Drones
A few weeks ago I wrote:
> There's also an instrument in this category that could have been built any time in the last ~500y: talkbox bagpipes. While the mouth-blown pipes are probably most familiar, all the mouth is doing is putting air into a bag. Bellows pipes replace this with an arm-operated pump. This opens up an opportunity to point the drones into your mouth so you can manipulate the sound of your accompaniment, turning the drones from a continuous, well, drone, into something lively and danceable. Or, at least, it should. As far as I know no one has built these, and my attempts to convince my bagpipe-maker cousin to work on this haven't been successful.
I was finally successful in convincing my cousin to let me try this, and it does work!
Probably would be slightly better with a louder drone, or by amplifying the drone separately so it can be turned up relative to the chanter.
You'd need to adjust the angles so it could fit in the player's own mouth, and if you want multiple drones it's mildly tricky to get them to terminate in the same place, but this seems very doable. You could also use your tongue to tap drones on and off. Possibly this would even let you switch between dissonant drones, like A and G for an Amix tune?
If you want to play with this, though, be sure you're using bellows pipes. With mouth-blown pipes you'd be risking bagpipe lung. |
b2e7ed06-8488-41d2-84a3-b36f46cabeea | trentmkelly/LessWrong-43k | LessWrong | ChatGPT: Exploring the Digital Wilderness, Findings and Prospects
This is a cross-post from New Savanna.
That is the title of my latest working paper. It summarizes and synthesizes much of the work I have done with ChatGPT to date and contains the abstracts and contents of all the working papers I have done on ChatGPT. It also includes the abstracts and contents of a number of papers establishing the intellectual background that informs that research. There is also a section that takes the form of an interaction I had with Claude 3.5 on methodological and theoretical issues. Finally, to produce the abstract I gave the body of the report to Claude 3.5 and asked it to produce two summaries. I then edited them into an abstract.
As always, URLs, abstract, TOC, and introduction are below.
* Academia.edu: https://www.academia.edu/127386640/ChatGPT_Exploring_the_Digital_Wilderness_Findings_and_Prospects
* SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119597
* ResearchGate: https://www.researchgate.net/publication/388563205_ChatGPT_Exploring_the_Digital_Wilderness_Findings_and_Prospects
Abstract: The internal structure and capabilities of Large Language Models (LLMs) are examined through systematic investigation of ChatGPT's behavior, with particular focus on its handling of conceptual ontologies, analogical reasoning, and content-addressable memory. Through detailed analysis of ChatGPT's responses to carefully constructed prompts involving story transformation, analogical mapping, and cued recall, the paper demonstrates that LLMs appear to encode rich conceptual ontologies that govern text generation. ChatGPT can maintain ontological consistency when transforming narratives between different domains while preserving abstract story structure, successfully perform multi-step analogical reasoning, and exhibit behavior consistent with associative memory mechanisms similar to holographic storage.
Drawing on theories of reflective abstraction and conceptual development, the paper argues that LLMs inadvertently capture wh |
644688cf-280e-467a-bf7e-00f77a7ef638 | StampyAI/alignment-research-dataset/blogs | Blogs | saving the server-side of the internet: just WASM,
saving the server-side of the internet: just WASM, *again*
----------------------------------------------------------
in [a previous post](saving-the-web.html), i propose to abandon the entire client-side web stack in favor of pretty much just [WASM](https://en.wikipedia.org/wiki/WebAssembly).
today i am proposing the same thing but for the server side.
at the higher level for most content, of course, i strongly favor [IPFS](https://ipfs.io/) — but sometimes, one actually needs centralized, responsive, cheap, continuously or on-demand running server programs, for many uses.
today's answer to these questions is a collection of [various](https://en.wikipedia.org/wiki/Google_Cloud_Platform) [cloud](https://en.wikipedia.org/wiki/Amazon_Web_Services) [platforms](https://en.wikipedia.org/wiki/Netlify), inevitably controlled by large companies and no doubt resting on a brittle tower of overly complex software.
i propose a nice way to decentralize and standardize all of this and simplify the stack. a piece of relatively simple server software (the "generic server") with a standard public-facing API could be deployed on any server machine, and make that server sell its bandwidth and computation resources in exchange for cryptocurrency, automatically. people or other programs could connect to it, upload a small WASM (or [meta-wasm](portable-programs.html)) program, and pay to have it run and be able to use various amounts of resources.
then, if the server for example runs a multiplayer video game session, ports could be opened for client software to connect to. the coordination of what is hosted where could be maintained on [the IPFS](https://ipfs.io/) as [decentralized knowledge](where-next-piracy.html), as would be the network of trust of which server machines are known to be reliable. especially, for information that one might not want leaked, one could cultivate a small set of trusted server providers — just as they do now by trusting just google or just amazon, except this would be a more general and automatably manipulable framework.
people could buy some server resource in bulk to share with their friends if some can't be bothered to buy crypto, large companies could deploy this generic server on many computers of their datacenter, people could run generic servers on their home computers to make money from unused resources, and anyone would be able to compete without setting up a significant piece of cloud infrastructure and then spending money on marketing to get known and trusted. larger distributed applications needing to allocate and deallocate servers dynamically based on demand and geographical location would have an entire competitive market to use, rather than having to settle with one of the large providers; and their individual server units could interoperate with each other even across providers.
server-side WASMs could access various APIs to communicate with each other be it on the same server machine or with other server machines; they could access APIs to use storage, to use [GPU computer](https://web.dev/gpu/), to publish IPFS data, etc. |
424ddd4b-11ae-4548-8359-2ed420471696 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Australia-wide mega-meetup
Discussion article for the meetup : Australia-wide mega-meetup
WHEN: 24 August 2014 07:00:00PM (+1000)
WHERE: Australia
https://www.facebook.com/events/1547367772141284/
Hello LessWrong! The organised minds of Sydney, Melbourne and Canberra are sanctioning an online hangout for the second month in a row. We had about 15 people turn up last time and we eventually wrote some funny short stories out of it.
This is a hangout from the comfort of your own beds! from anywhere around the country!
Had a problem recently? need a room full of people to tell it to? (You don't have to have a problem to listen to other people's problems) but hey, maybe we can help!
Our next hangout will be between 7pm and 12pm AEST (GMT +10) and the link to the google hangout will be posted at some time just before 7 on the 24th of August. (the link expires)
Discussion article for the meetup : Australia-wide mega-meetup |
8d93aeb6-42dd-424c-aaf3-467f6853c9ec | StampyAI/alignment-research-dataset/lesswrong | LessWrong | My preferred framings for reward misspecification and goal misgeneralisation
TL;DR
=====
* I was feeling quite dissatisfied with a bunch of categories like “inner and outer misalignment” and “reward misspecification and goal misgeneralisation”
* I think most of this dissatisfaction stems from them not being able to correctly describe situations where there are multiple points of failure or gaps.
* Hence, I propose using a root cause type of framing (inspired by [Richard Ngo](https://www.alignmentforum.org/posts/poyshiMEhJsAuifKt/outer-vs-inner-misalignment-three-framings-1) and [Rohin Shah](https://www.alignmentforum.org/posts/JKwrDwsaRiSxTv9ur/categorizing-failures-as-outer-or-inner-misalignment-is)) instead which I think makes descriptions of failures more specific and nuanced.
Assumptions and caveats
=======================
* I'll be referencing RL agents throughout and not other types of agents.
* I'm assuming my target audience has at least some basic knowledge from tghe [AGI Safety Fundamentals](https://www.agisafetyfundamentals.com/) (AGISF) course.
* I have hardly any ML experience and have time boxed myself when making this forum post so I expect to make some mistakes along the way. Also, this was originally made in a slide-deck so I may have lost some fidelity during the transfer.
* This forum post is my AGISF course capstone, so it would be super valuable for me to figure out how I should update towards thinking about AI alignment and my own career plan. Feedback is very much welcomed.
Much thanks to Chan Jun Shern, Chin Ze Shen, and Mo Putera for their feedback and inspiration. All mistakes are mine.
If you'd like to listen to the presentation instead, you can [watch it here](https://www.loom.com/share/3e773adf4005409fb1497b44cd7fba54), which is 18 minutes long with the option of speeding up. Here's the [slide-deck](https://docs.google.com/presentation/d/1tzdXbHAiqeUj90s_9X9u5nWQfPZZ7tqJQ0TfqdIB11A/edit?usp=sharing).
My journey through different kinds of framings
==============================================
When I first learned about about AI safety, the first analogy that really clicked with me was the [myth of King Midas](https://www.britannica.com/topic/Midas-Greek-mythology). He wished that everything he touches becomes gold, and well, he got his wish. Soon, he was starving to the death.
Then, I learned about [inner and outer misalignment](https://www.lesswrong.com/posts/rzkCTPnkydQxfkZsX/levels-of-goals-and-alignment). Here’s a quick recap:
* **Outer misalignment** happens when there’s a discrepancy between the designer’s intended goal (what’s in our heads) and the specified goal (what we say).
+ An example of this is the King Midas myth, where what King Midas probably wanted is to be wealthy, but gave a pretty bad wish.
* **Inner misalignment** happens when there’s a discrepancy between the specified goal (what we say) and the agent’s revealed goal (the agent’s actual goal or proxy goal).
+ An example of this is a pet making a mess of the house when you’re not around. The pet owner might have specified to not destroy any furniture. However, the pet might just really want to scratch the furniture, but don’t when you’re around.
So as you can see, I got pretty excited about learning this since it feels like a pretty neat categorisation.
Okay wait, there’s another categorisation that seems more specific, which are reward misspecification and goal misgeneralisation.
First, **reward misspecification** happens when the reward function incorrectly rewards the agent for the wrong behaviour.
Here’s an example with a [Hide and Seek game](https://openai.com/research/emergent-tool-use#surprisingbehaviors). There are two teams: red and blue. Red is the seeker, while blue is the hider. The reward misspecification happens when the blue team exploited a glitch in the game by throwing out a ramp that is a crucial tool for the red team to win, and was rewarded for it.
Okay, now let’s talk about **goal misgeneralisation**. This failure mode happens when there’s an out of distribution event (which is basically a significant change in between the training environment and test environment), and the agent’s policy doesn’t generalised well to the new situation.
An example of this came from a game called [CoinRun](https://arxiv.org/abs/2105.14111). The goal of the agent here is to get through obstacles and obtain coins. During training, the coin was usually fixed on the right end of the environment. The designers then shifted the environment where the coin is placed at a random location, so it’s no longer on the right side. However, the agent continues to go to the right and skipping the coin, meaning its policy did not generalise correctly to the shift in environment.
Okay great, reward misspecification and goal misgeneralisation seem like a specific way to categorise alignment failure modes. But somehow I felt like there was something missing. Let’s revisit the previous examples.
The hide-and-seek example has been used as a case study of reward misspecification, but is it really?
We know the blue team or the hiders are exploiting a glitch (i.e. remove the ramp from the play area). So, it seems reasonable that this is reward misspecification:
* You’re rewarding a bad behaviour.
* You could build a better reward function that could punish the agent for exploiting glitches.
So the mainstream view is that the reward function is the only issue. However, there’s another interpretation. Separate the reward function and the training environment as two different elements, and the training environment might be the culprit after all, since the training environment has a glitch which caused the reward function to incorrectly reward the agent.
Or you could also say both the reward function and the training environment are issues, which is my preferred interpretation.
Okay, let’s talk about CoinRun again. To repeat, instead of moving towards the coin, the agent’s learned policy misgeneralised towards moving to the right. The mainstream interpretation of this that there’s a out of distribution event, in others words, a significant change in the environment, causing the agent to be misaligned. This seems right to me. After all, the whole experiment was designed to cause this out of distribution event. But let’s take look at another interpretation
The reward function gives the agent a reward of 10 points for getting the coin. This seems straightforwardly correct. But maybe it can be better? If we could reward the agent higher points when they spend less time reaching it, the agent might generalise better.
Or another interpretation is that maybe the RL algorithm is causing the agent to overfit too quickly. In other words, instead of exploring more, it might have exploited on a single strategy too quickly.
Or maybe 2 or 3 of the above interpretations are issues, which again, is my preferred interpretation.
How these framings fail to satisfy me
=====================================
https://www.facebook.com/forlackofabettercomic/As you can see, the categories for reward misspecification and goal misgeneralisation doesn’t quite satisfy me. It’s like this (not so accurate) meme about WebMD only diagnosing people with cancer.
To expand this further, I think the core reason why I feel unhappy about these categorisations is because there can be **multiple points of failure**. I think most people familiar with AI alignment would probably know that there are three main areas of concern: (a) the RL algorithm, (b) the reward function, and (c) the training environment.
And there could be potential issues in each of these areas:[[1]](#fnqtrahrxb3rg)
* Related to the reward function, it could:
+ Be misspecified
+ Have broken game mechanics
* Related to the training environment, it could:
+ Lack diverse training environments
+ Have broken physics
+ Have conflicting environmental cues for reward
* Related to the RL algorithm, it could
+ Have inappropriate inductive bias
Okay, this thing about having multiple points of failure all seem to suggest a different kind of framing.
My preferred framing: root causes
=================================
After a bit of digging around, two forum posts by [Richard Ngo](https://www.alignmentforum.org/posts/poyshiMEhJsAuifKt/outer-vs-inner-misalignment-three-framings-1) and [Rohin Shah](https://www.alignmentforum.org/posts/JKwrDwsaRiSxTv9ur/categorizing-failures-as-outer-or-inner-misalignment-is) inspired me to use a more “root cause” type of framing.
My goal is to expand this type of framing a little more and show you a few examples of how this could look like. Please note they are all quite similar with slightly different ways of looking at the problem.
* The first kind of framing is figuring out the **root cause** of the issue -- what were the initial triggers that caused the failure?
* The second is to figure out the specific **levers** we have to improve alignment. For example, if a RL agent isn’t behaving correctly, we know there are 3 main levers we could work on.
* The third kind of framing is to figure out what specific **gaps** we have to reduce misalignment. For example, imagine how your *ideal* AI agent should behave in almost all cases, then design a reward function that tries as much to do that. This might mean having a bunch of metrics that the agent can compare and trade off from, instead of having just one single metric.
Okay let’s see if it’s helpful if we use one of these framings on previous examples.
Remember that the hide-and-seek example was categorised as reward misspecification. But if you’re looking at the root causes, you could identify other issues too.
* The training environment is faulty because of some broken physics which allowed the hiders to remove a key tool from the game.
* It’s also possible that the RL algorithm could have some inductive bias that made the agents more likely to explore novel ways to win, which seems like good thing to me, but I don’t have enough technical know-how to comment on this.
Let’s move on to our next example, and this time we’ll be figuring out “gaps”. Basically, we ask the question what ideally should something be? And hopefully we can fill in the gap.
CoinRun was categorised as goal misgeneralisation previously, but the problem seems to be bigger if try to figure out what gaps it has:
* So we know the training environment is an issue since the designers purposefully elicited an out of distribution event.
* What about the reward function? I mean, rewarding the agent when it gets the coin seem perfectly fine. But ideally, it would also reward the agent more if they spent less time. Perhaps then the agent would be more likely to generalise towards the coin instead of missing it.
* What about the RL agent or algorithm? It seems possible that we could make it be better by making the pixels of the coin a more prominent thing within the agent’s attention. I think this makes sense, but let me know if I got this wrong.
Let’s try this on a different example. Over here we have a RL agent that is trying to learn how to [walk and run](https://www.youtube.com/watch?v=K-wIZuAA3EY&t=486s&ab_channel=CodeBullet). So although categorised as reward misspecification by some, which I think makes sense, we could be a bit more nuanced by figuring what gaps it has:
* Firstly, the agent does get rewarded the further it moves, but I think the reward function can be better if it’s also reward for maintaining a certain height, so you know, it does actually stand and walk
* Secondly, the agent actually does exploit a physics issue here and learns to slide instead of walk. It does eventually learn how to walk after a hundred-ish training runs but it was mostly running on its knees instead of its feet. So maybe the physics issue can be fixed here.
* I’m not super sure how the RL algorithm can be better, so I’m just putting it here as a maybe.
In this other example, I want to illustrate where the root cause type of framing seem to not be so useful. Here, we have an RL agent that gets rewarded for [chopping trees](https://arxiv.org/abs/2210.01790). The designers were hoping to elicit the agent to goal misgeneralise by having the agent cut down all the trees before the environment could grow it back, causing an out of distribution event.
So what’s the issue here? I’m not sure.
* The training environment seems fine since the the agent did experience a very different environment and misgeneralised for a period, then eventually did generalise (i.e. learn to chop the trees at a more sustainable pace).
* The reward function seems fine too. I’m not sure how I would design the reward function so it would lessen the period of misgeneralisation.
* It seems like the out of distribution event is inevitable, so the RL algorithm could be tweaked so it learns a bit quicker? I’m not sure.
Maybe this is a bad example, since the agent does eventually generalise. Let me know if you think I could be thinking about this in a different way.
I’ve given you a bunch of examples how to use a more root cause type of framing, but we should try to question the value of such a framing. I expect the framings to be valuable, for these two reasons:
1. There can be **multiple points of failure**, and it’s good to clear where these failures are happening. However, I do expect broad categorisations to still be useful, especially when introducing AI safety to folks who are new.
2. I noticed that in a few forum discussions that folks were talking past each other. So maybe [**taboo’ing**](https://www.lesswrong.com/tag/rationalist-taboo) these broad categories could increase the fidelity of discussion.
However, I still feel somewhat certain how much mileage we could get out of this framing.
Here are things that I'm still feeling dissatisfied with:
1. It feels pretty obvious. It’s basically saying, "let’s dig deep into the issue and fix all the things!" It does feel like an experienced AI alignment scientist or engineer might say “no duh” to me.
2. We don’t have limitless resources or capabilities or training data or time, so there’s always going to be some gap between a comprehensive training environment and a training environment we could practically develop.
3. I don’t quite know how to describe deceptive alignment using a root cause type framing, maybe it’s a RL algorithm issue, or we should introduce another element like a mechanistic interpreter.
If I try to push the “root cause” framing to its limit, perhaps it makes sense to have a **checklist** (of potentially gaps) for aligning a RL agent. Basically, we could just look at this checklist, and say something like, “looks like we still have a few things we haven’t done yet, no wonder the RL agent is behaving in ways that are misaligned”. I’m not super sure if this makes sense, so I’m just putting it out here.
Conclusion
==========
* To reiterate, I was feeling quite dissatisfied with a bunch of categories like “inner and outer misalignment” and “reward misspecification and goal misgeneralisation”.
* I think most of this dissatisfaction stems from them not being able to correctly describe situations where there are multiple points of failure or gaps.
* Hence, I propose using a root cause type of framing instead which I think makes descriptions of failures more specific and nuanced.
How you can help me
===================
* Let me know if I missed a different kind of framing that would be valuable.
* Did I make a mistake somewhere?
* Is this too obvious?
* Should I continue testing for fit in [distillation](https://www.lesswrong.com/tag/distillation-and-pedagogy) or should I move on?
Please let me know even if you’re not feeling confident about them!
1. **[^](#fnrefqtrahrxb3rg)**I’m probably missing other points of failure too, so please let me know if I do and I could update this. |
0a758936-501d-4641-80e8-78ed523880b2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Short summary of mAIry's room
This is a shorter summary of the post "[mAIry's room: AI reasoning to solve philosophical problems](https://www.lesswrong.com/posts/DLmhJbuhYek5rEhpH/mairy-s-room-ai-reasoning-to-solve-philosophical-problems)". Its aim is to explain the key concepts of that post without going into full technical details.
Can you feel my pain?
=====================
People can be more or less empathetic, but we have trouble really feeling the pain of another person. If the person is close to us, and we get more details of what they're going through, then we can start to feel a sympathetic pain; but almost nobody reacts viscerally to a sentence like "and then someone in the country started to feel a lot of pain."
So, though we can feel our own pain, and though we can to some extent feel the pain of others if given enough detail, the information that someone else is in pain is very different from the experience of pain.
Now, this might be where you could start taking about "[qualia](https://plato.stanford.edu/entries/qualia/)"; but the key insight is that this behaviour is very akin to reinforcement learning agents.
The "pain" of RL agents
-----------------------
You can see "low reward" as a crude analogue of "pain" for a reinforcement learning agent.
It has some of the features mentioned above. An RL agent's own reward channel is much more important to it than the reward channel of another agent. If it gets some information about low reward for other agents, this will not be of any interest to it unless this affects its own reward.
We can make the analogy closer if we imagine more limited, boundedly rational RL agents[[1]](#fn-WyqLPzjtrCyN8cX9z-1); maybe powered by neural nets under the hood. These neural nets are function approximators; but we can also think of them as heuristic generators. They generate approximate answers to specific questions (eg: "what action best increases my reward?") and update themselves based on inputs and outputs.
Now let's imagine this agent has been in a rather unchanging environment for some time, getting medium rewards, with the neural net refining itself but not changing much, as its predictions are very decent. In such an environment, the agent has the luxury of making long-term plans.
And then, suddenly, the rewards drop to something very low. The agent's neural net starts to update massively, since its predictions are now quite wrong. Weights and connections that haven't changed in a long time are suddenly shifting. Meanwhile, the agent's behaviour changes radically, as it tries to figure out a behaviour, any behaviour, that will bring its rewards back up. The long-term is ignored - its not clear that long-term plans will pan out, anyway, if the short-term predictions are so wrong.
To over-anthropomorphise, we could say that the agent is panicking, that it's trying desperately to avoid the low reward, focusing on little else, and attempting anything it can to get away from the low reward.
That seems akin to a human pain response. More properly, it seems to share features with that pain response, even if the analogy is not exact.
As agents learn...
------------------
What is learning for an RL agent? Well, if we assume that it has neural nets under the hood, then learning new information is mainly updating the weights in these nets.
If it wanted to keep track of whether it was learning, then it would keep track of these weights. So, again abusing terminology slightly, the "feeling of learning" for an RL agent is it detecting major shifts in its weights.
mAIry's room
============
The above showed the similarity between "pain" for a human and "low reward" for a RL agent. So it is worth asking if there are other features of qualia that might appear in such an artificial context.
The original "[Mary's room](https://en.wikipedia.org/wiki/Knowledge_argument)" thought experiment Mary has been confined to a grey room from birth, exploring the outside world only through a black-and-white monitor.
Though isolated, Mary is a brilliant scientist, and has learnt all there is to know about light, the eye, colour theory, human perception, and human psychology. It would seem that she has all possible knowledge that there could be about colour, despite having never seen it.
Then one day she gets out of her room, and says "wow, so that's what purple looks like!".
Has she learnt anything new here? If not, what is her exclamation about? If so, what is this knowledge - Mary was supposed to know everything there was to know about colour already?
mAIry and the content of "wow"
------------------------------
Let's replace Mary with an artificial agent mAIry, and see if we can reproduce the "wow" experience for it - and hope that this might give some insights into what Mary is experiencing.
Now, mAIry has a collection of neural nets, dedicated to interpreting its input streams and detect the features of the world that is then fed to its mental models. So we could imagine the setup as follows:

Here mAIry's input cameras are looking at a purple object. These inputs go through various neural nets that detect the presence of various features. The colour purple is just one of these features, that may or may not be triggered.
These features then go to the mental model and more rational part of mAIry, which can analyse them and construct a picture of the outside world, and of itself.
So, to get back to mAIry seeing purple for the first time. As a brilliant scientist, mAIry has learnt everything there is to know about the properties of purple, and the properties of its own neural nets and mental models. So, it can model itself within itself:

And then it sees purple, for real, for the first time. This triggers its "purple detector", which has never triggered until now. This new data will also shift the weights in this part of its neural nets, so mAIry will "detect that it is learning". So it will have a new "experience" (the triggering of its purple feature detector) and a feeling of learning (as it detects that its weights have shifted).
That is the "wow" experience for the mAIry: new experiences, and a feeling of learning. Since it knows everything about itself and the world, it knows that it will "feel" this, ahead of time. But, even with this knowledge, it will not actually feel these until it actually sees purple.
This account of mAIry's experiences seems close enough to Mary's, to suggest that both of them are purely physical processes, and that they are comparable to some extent.
---
1. Another way we could make the RL agent more similar to humans is to make it more empathetic, by including the rewards of other agents into its own reward. If we combine that with some imperfect heuristics for estimating the "pain" of other agents, then we could start to get the human-like behaviour where nearby evident pain affects the agent more than distant, formally described pain. [↩︎](#fnref-WyqLPzjtrCyN8cX9z-1) |
57a3f14f-a7f9-40e2-ad5e-bc24f7245437 | trentmkelly/LessWrong-43k | LessWrong | In favour of total utilitarianism over average
While it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people...
This post will argue that within the framework of hedonic utilitarianism, total utilitarianism should be preferenced over average utilitarianism. Preference utilitarianism will be left to future work. We will imagine collections of single experience people (SEPs) who only have a single experience that gains or loses them a certain amount of utility
Both average and total utilitarianism begin with an axiom that seem obviously true. For total utilitarianism this axiom is: "It is good for a SEP with positive utility to occur if it doesn't affect anything else". This seems to be one of the most basic assumptions that one could choose to start with - it's practically equivalent to "It is good when good things occur". However, if it is true, then average utilitarianism is false, as a positive, but low utility SEP may bring the average utility down. It also leads to the sadistic conclusion, that if a large number of SEPs involve negative utility, we should add a SEP who suffers less over adding no-one. Total utilitarianism does lead to the repugnant conclusion, but contrary to perceptions, near zero, but still positive utility is not a state of terrible suffering like most people imagine. Instead it is a state where life is good and worth living on the whole.
On the other hand, average utilitarianism starts from its own "obviously true" axiom, that we should maximise the average expected utility for each person independent of the total utility. We note that average utilitarianism depends on a statement about aggregations (expected utility), while total utilitarianism depends on a statement about an individual occurrence that doesn't interact with any other SEPs. Given the complexities with aggregating utility, we should be more inclined towards trusting the statement about individual oc |
d1ba1901-56f9-4405-bf73-8700e23c6267 | trentmkelly/LessWrong-43k | LessWrong | Interactive Infographic on Simpson's Paradox
Since Simpson's Paradox has been discussed here recently (and not so recently), I thought I'd share this interactive1 infographic that I found via the FlowingData blog. I already understood Simpson's Paradox pretty well, but playing with the sliders helped me get a more intuitive feel for it.
I expect similar tools would be helpful for explaining Bayes' Theorem and some of the other things we talk about on LW (like Pareto efficiency and Nash equilibria). Do such things exist?
----------------------------------------
1 The interactive part is farther down the page. |
83a61394-a6d8-4bb0-8ee7-c464894769d4 | trentmkelly/LessWrong-43k | LessWrong | An inducible group-"meditation" for use in rationality dojos
Note: The following outline of my research proposal is unfinished. I posted it in the discussion section to spur conversation and get constructive criticism (successfully, I might add). If you have any suggestions, then please make them. I will be monitoring the discussion and improving the proposal until I feel it is ready to be posted as a main article.
Introduction
I think I may have found a novel use for an old technique, which may or may not have implications for rational decision making. I am open to constructive criticism or even deconstructive criticism if you make a sound argument. Ultimately, I would like the experiment to be put to the test. If you have the supplies and know-how to carry it out, then feel free to try it and report your findings.
The Goal:
* Catalyze the brainstorming process in a way that increases both the number and quality of ideas made.
Methods:
* Find a problem that needs solving. Unifying general relativity and quantum mechanics is a good, but ambitious, example. Some more likely problems that could be solved are: "How might I solve my relationship problems," or "how can I advertise my company's product to its target demographic", or "what are some ideas to make quick money."
* Find 2-3 rationalists who understand the problem well. They don't need to be expert rationalists; the most important part is that they know the difference between rationalization and rationality. In the QM example above, and in most scientific applications of the method, all players should have access to the experimental data.
* Assign two of the three rationalists to the "brainstormers" group (name subject to change), whose primary concern is to make logical connections between the data to form hypotheses.
* Assign the odd-rationalist-out as the Confessor, whose primary concern, like in TWC, is to preserve sanity. It is the Confessor's job to catch the brainstormers when they make a logical leap or use biased reasoning. Some tactics the Confessor |
5744275c-3357-4ac4-8e43-f60366f125d4 | trentmkelly/LessWrong-43k | LessWrong | Simple embodied cognition hacks
I've known that the mind can be affected by the body's actions, but I often forget this when sitting at my computer chair for long stretches, and when standing and interacting in social situations I've subconciously cultivated a passive, non-confrontational but minimally interactive posture. But simple physical actions can act as a mild nootropic for certain situations.
Article with citations: 10 Simple Postures that Boost Performance
Article summary:
1. Take a powerful pose to feel powerful
2. Tense muscles for willpower
3. Cross arms for persistence
4. Lie down for insight
5. Nap for cognitive performance, vigour and wakefulness
6. Hand gestures for persuasion
7. Gesture to self for comprehension and memory
8. Smile for happiness
9. Mimic to empathize
10. Imitate for comprehension and prediction |
bf38f2f5-b556-4179-afd8-73f01a9e3f5b | trentmkelly/LessWrong-43k | LessWrong | Framing Practicum: Semistable Equilibrium
Thanks to John Wentworth for conceiving and executing the concept of a framing practicum, as well as much of the format and language of this post!
This is a framing practicum post. We’ll talk about what a semistable equilibrium is, how to recognize semistable equilibria in the wild, and what questions to ask when you find it. Then, we’ll have a challenge to apply the idea.
Today’s challenge: come up with 3 examples of semistable equilibria which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).
Expected time: ~15-30 minutes at most, including the Bonus Exercise.
What’s Semistable Equilibrium?
In Rebel Without a Cause, Jim Stark and Buzz Gunderson are racing their cars at top speed toward a cliff, with the gas pedal strapped down. The first to jump out of their car is chicken. Buzz's leather jacket gets caught on the door handle. He's unable to jump free, and plunges over the cliff to his death.
The cliff is a semistable equilibrium. The idea of the "chickie run" is that the cliff causes the racers to decelerate, so that their velocity approaches zero as they near the cliff's edge. That's how it works out for Jim. On the other side of the cliff, however, Buzz's velocity increases again as he hurtles toward the ground.
In general, a semistable equilibrium will approach an equilibrium point if it starts on one side, but will move away from the equilibrium point if it starts on the other side.
What To Look For
A semistable equilibrium needs a threshold that attracts and slows things down if approached from one side, but repels or launches things away if they're on the other side. If there's a point "point of no return," that may be suggestive of a semistable equilibrium. There's a zone in which disturbances lead to a return to rest, and a second zone just beyond leading to ongoing activity. It's possible to have multiple equilibria in the system. All that's required is that th |
d619a4f3-2c8a-40de-9fd8-ea6c5cec940f | trentmkelly/LessWrong-43k | LessWrong | On Frequentism and Bayesian Dogma
I've heard that you believe that frequentism is correct. But that's obviously wrong, so what gives?
DanielFilan
I guess first of all I should ask, what do you mean by "frequentism"?
DanielFilan
I mean classical statistical frequentism. Though somewhat tongue-in-cheek, as I don't think it's fully correct, I think it's much more correct than orthodox Jaynesian Bayesianism.
Some scattered thoughts:
* Bayes' theorem derives from conditional probability so it's also included in frequentism.
* Bayesian epistemology only applies to situations when your beliefs are a probability distribution, and is thus incomplete.
* It doesn't account for e.g. limited computation.
* Frequentism solves these things by framing the problem in a different way. Rather than 'how should I think?' it's "this algorithm seems like a sensible way to think, let's figure out what epistemic guarantees it has".
* In particular, it makes it OK to believe things that are not expressible as probability distributions.
Adrià Garriga-alonso
I'm still sort of unsure what you mean by "classical statistical frequentism". Like, I'm pretty sure I agree that the purported theorems of Fisher are in fact theorems. Do you mean something like "the way we should think about thinking is to ask 'what cognitive algorithms perform well with high probability in the long run'"?
DanielFilan
(and regarding Bayesianism, I think that's a separate question that's more productively talked about once I understand why you think frequentism is good)
DanielFilan
Sure. Thank you for the clarification -- I agree there are many fine theorems on both sides.
Statistics is the problem of learning from data, frequentism is saying "Here's an algorithm that takes the data and computes something (that's an estimator). Let's study the properties of it."
> "the way we should think about thinking is to ask 'what cognitive algorithms perform well with high probability in the long run'"?
Yeah, I agree with this. Though I recognize |
bccba970-a1cb-43ce-9492-1b1f91e633c6 | trentmkelly/LessWrong-43k | LessWrong | The Expanding Moral Cinematic Universe
A subtheme for me this year has been grappling with how to reconcile different facets of my morality.
Part of this has to do with reconcile the virtues of "Protect yourself, maintain slack, be aligned with yourself and your community" and "But, maybe the world is metaphorically and/or literally on fire. How do you want to relate to that?"
Part of this has to do with "man, the story of the Expanding Circle of Concern may not be as nice as I thought, in ways that fundamentally challenge my conception of what 'my particular morality' even means, or whether it is coherent." (This has gone hand-in-hand with me realizing my conception of 'community' may also not really be coherent, which has been challenging for my identity).
Along the way, I've watched some movies and shows that feel like they grapple with a particulate facet of the expanding circle of concern, and they're accumulating into a private headcanon of the Expanding Moral Cinematic Universe. Stories in this cinematic universe featur characters in a harsh, bloody world... who inch their little corner of the universe forward as a place where friendship and cooperation can form. Less a sea of blood and violence and mindless replication.[1]
Here are three vignettes, written over the course of the last year (originally on Facebook, later posted on shortform).
Part I: The Fox and the Hound
Originally posted January 24th on Facebook.
I watched Disney's The Fox and The Hound a several months ago. I cried a bit.
While watching the movie, my girlfriend commented "so... they know that foxes are *also* predators, right?"
And, yes. Yes they do know that. This is not a movie that "was supposed to be about predation, except it failed to notice all the ramifications about its lesson." This movie just isn't taking a stand about predation.
This is a movie about... kinda classic de-facto tribal morality. You have your family and your tribe and a few specific neighbors/travelers that you welcomed into your home. Those |
d2c1b04a-6235-49cd-be2e-72eb19aea0b4 | trentmkelly/LessWrong-43k | LessWrong | Simplicity priors with reflective oracles
Previously, I described a type of probabilistic oracle machines that I think is powerful enough to allow the implementation of a variant of AIXI that can reason about worlds containing other instances of this variant of AIXI. In this post, I describe how to implement a simplicity prior (a variant of Solomonoff induction) in this framework; there are some technical issues which will also come up when implementing AIXI that are easier to talk about in the context of Solomonoff-like induction.
----------------------------------------
Solomonoff induction is a prior probability distribution over infinite bit strings, {0,1}ω. We want to define a similar prior using probabilistic oracle machines with reflective oracles instead of Turing machines---and then "implement" that prior as a probabilistic oracle machine with that oracle.
The basic idea is simple. Say that a probabilistic oracle machine H is a hypothesis if, for every oracle O′:N×(Q∩[0,1])→[0,1], invoking H[O′] almost surely outputs an infinite bit string. Fix some programming language for probabilistic oracle machines, and randomly choose a program implementing a hypothesis H, where the probability of each program of length ℓ is proportional to 2−ℓ. Then run H[O], where O is our actual, reflective oracle, to obtain a sample bit string. This concludes the definition of our prior.
This also suggests how we could implement this prior, as a probabilistic oracle machine S that samples bit strings with these probabilities; it seems like all we need is a way to test whether or not a given program implements a hypothesis. I'll tackle that problem later in this post, but first let me point out a different problem: simply running the hypothesis H that we've sampled in the first step is not an ideal way to implement the prior.
----------------------------------------
Suppose that we want to determine the probability that the output of S starts with 0111010. Well, here's an idea: why don't we use our reflective oracle? |
ce0fb16d-25fb-4033-a229-811910a55409 | trentmkelly/LessWrong-43k | LessWrong | Hong Kong LW meetup
Any LW readers in Asia's world city interested in a meetup? I have a hunch that there might be more of us that the statistics suggest, possibly due to some of us showing up in the analytics as being based in Kowloon, not Hong Kong. At any rate, I'd be surprised if we couldn't find enough people for a decent gathering, HK being such a wired and technophilic place.
Date: Saturday 12th March, 6:00 pm.
Place: 168 Future Bar, Mong Kok
Reading Starting a LW Meetup is Easy inspired me to try and get this moving. Remember, you don't need to be an active contributor to the site to come along and meet like-minded people (I personally don't post or comment very often, though I do lurk a lot). Don't be shy, if you're at all interested please reply so we can get the ball rolling! |
821c45bd-2134-4dbe-88bd-2271a24372eb | trentmkelly/LessWrong-43k | LessWrong | Evaluating 2022 ACX Predictions
Previously: 2022 ACX Predictions Buy/Sell/Hold, Evaluating 2021 ACX Predictions
As is the yearly tradition, let’s see how we did. Scott noted in his results post that on pure numbers I did pretty well, coming in 54th out of 508 entries.
My own evaluation, by my own standards, is I did a mediocre job.
Big Picture
2022 was the first low-effort year, no looking at prediction markets and five minute time limits. For 2023, I put even less effort into things – I didn’t spend the five minutes even once, and mostly put down numbers on instinct. I modestly regret not trying harder, but am confused as to whether or not that regret makes sense.
Pure numbers is not how I evaluate predictions. They are one input, but it is more important to do deliberate practice. Look back, and ask whether your prediction made sense, whether your logic was good, what would have been the right odds. A Briar score is quite a crude measure, as is looking at calibration. They can tell you vaguely where you are at in some sense, but it won’t much help you improve otherwise.
Then one can look at the big picture, and see what patterns and errors emerge.
There are very clear patterns of error in my predictions from 2022.
1. Too much anchoring on Scott’s (and Vox’s and Matt’s) answers. The degree of anchoring I did makes sense for market prices, or for predictions informed by a market price. Instead I did it for what Scott pulled out of the air. I also did it for Vox and Matt, who did not follow Scott’s restrictions but also did not seem to care at all about looking at markets, including financial markets.
2. Failure to do the research. There was a five minute timer and you couldn’t look at prediction markets, but it is very obvious I frequently paid a price for not taking the full five minutes, and not asking questions I could have asked.
3. Bad inflation takes. A bunch of my dumb value loss was being too much on Team Transitory. I think it’s fair to say that at the time it was clear the mar |
9b77d9fb-4e7d-4659-acb3-8a506bac4646 | StampyAI/alignment-research-dataset/blogs | Blogs | Import AI 337: Why I am confused about AI; penguin dataset; and defending networks via RL with CYBERFORCE
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.
Also, this issue sees me trying something a bit unusual - let me know what you think!
[Subscribe now](https://importai.substack.com/subscribe)
**Zooming out - why am I confused and uneasy about some parts of AI these days?***…Some disconnected thoughts about scaling, safety, societal impacts, and so on…*I've felt a lot more confused about the AI ecosystem recently and I've been trying to understand why. I think it basically comes down to some combination of the pace of tech development and deployment, the rapidly evolving policy conversation, and the 'here comes everyone' '[Eternal September](https://en.wikipedia.org/wiki/Eternal_September#:~:text=Eternal%20September%20or%20the%20September,access%20to%20many%20new%20users.)' aspect of AI going mainstream. If you combine these things together you end up in a kind of Rashomon world where everyone can look at the same basic information and come away with radically different interpretations of what is important and what should be done. This is confusing!
So, in the spirit of trying to make myself less confused, here's a list of some of the things I find confusing about AI circa 2023, and some thoughts on why these things may end up being important:
* **Should AI development be centralized or decentralized?** Should AI development be centralized in a small number of large players or should it be distributed across a much larger ecosystem? If you skew towards certain interpretations of 'AI safety' then you wind up arguing that 'fewer AI developers = better', but if you worry a lot about power concentration and authoritarianism-with-a-friendly-face you tend to advocate more for decentralized AI development.
* **Is safety an 'ends justify the means' meme?:** The more hardline you tend to be about AI safety (where hardline = confident in some combination of short timelines plus probability of doom), the more comfortable you get with advocating for extremely authoritarian policy interventions (e.g, various forms of controls or restrictions on access/development), or even kinetic actions (bombing datacenters). This makes sense if you think we're in a 'don't look up' scenario, but makes a lot less sense if you don't agree with the safety premises.
+ (Personally, I think AI safety is a real issue - over the years I've worked with AI systems I've seen them frequently behave in hard-to-debug, confusing ways, and I also think lots of our alignment techniques and system-analysis techniques (eg interpretability) are juvenile relative to the capabilities. And now that systems are getting much more capable, I also think they have potential for non-trivial misuses. But I find myself pausing when I think through various extreme policy responses to this - I keep asking myself 'surely there are other ways to increase the safety of the ecosystem without making profound sacrifices on access or inclusivity'? This may or may not be a gordian knot that has an elegant solution.)
* **How much 'juice' is there in distributed training and low-cost finetuning?** A whole bunch of AI policy and governance ideas back onto the notion that you can control the frontier because the frontier involves a vast amount of computers working in parallel with one another in an exquisitely designed cluster. What if the frontier didn't require nearly as much of this as we think? If you read arXiv and follow what certain interesting technologists are doing, it's clear that there are a whole bunch of technical things that are making it:
+ Cheaper to train big models (quantization - too many papers to pick one out but this Google blog gives a [nice sense of it](https://ai.googleblog.com/2022/09/quantization-for-fast-and.html)).
+ Really cheap to finetune existing models ([LORA](https://arxiv.org/abs/2106.09685), etc).
+ Possible to train chips on heterogenous clusters including ones at a distance from one another (various papers, e.g [SWARM Parallelism](https://arxiv.org/abs/2301.11913)).
* Many AI governance proposals rest on the idea that there's some significant delta between what you can do on a dense and exquisitely optimized cluster and what you can do on a much scrappier system. While it will *always* be more efficient to train on a bunch of computers sat next to one another, the technical frontier of training in other ways is evolving very quickly and I think no one has much of a sense of how far it can go. There are also cool startups like together.xyz making a real go at betting on this tech direction as a business.
* **Are today's techniques sufficiently good that we don't need to depend on 'black swan' leaps to get superpowerful AI systems?** Back in the early 2010s it was pretty obvious that we needed fundamentally better architectures and training approaches to get more general systems. Fast forward to the 2020s and it seems like we have general systems (LLMs and their various progeny), really good architectures (transformers, vision-transformers, etc) and have figured out how to productively spend enough compute on training and feedback to eke more generality out of even larger-scale systems.
***Is this 'good enough' to develop very powerful systems, or do we need other techniques?*** If you think we're in a 'good enough' world then you make a lot of plans around building super capable machines. Do your plans get broken by, for example, an architecture that appears and is 5X more efficient than the transformer? And what if weirder things like 'active learning' become possible?
* **Does progress always demand heterodox strategies? Can progress be stopped, slowed, or choreographed?** If you zoom out and look at the history of technology a huge amount of progress comes from weird people doing weird and off-consensus things. The history of computation, information theory, math, airplanes, physics, biology, etc, is full of crazy rebels who look at the world and come up with an at-the-time 'crazy' interpretation on reality which lets them ratchet science forward. Therefore, shouldn't we in some sense expect the frontier of AI to be dominated by similarly heterodox individuals and organizations? And therefore shouldn't we just accept that the frontier of AI, like other branches of science, will be full of kind of weird and/or alien-seeming people and orgs, and just accept this? Are the latent desires in AI policy for some kind of ordered or more choreographed form of progress more like a political aesthetic desire than a logical thing to 'want'?
* **How much permission do AI developers need to get from society before irrevocably changing society?** Technologists have always had something of a libertarian streak and this is perhaps best epitomized by the 'social media' and Uber et al era of the 2010s - vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general 'move fast and break things' philosophy of tech. Should the same be true of AI?
**What I think and why I'm uneasy:** In the coming months I'm going to be doing more writing about both my areas of confusion, and analysis meant to reduce my confusion. But I think my main takeaway is AI has gone sufficiently high-energy that making many predictions about how AI development 'should' take place is pretty dangerous to do - most frightening things in the world start out with some small set of people deciding how things 'should' work. I am conscious of my own agency here - and I am made uneasy by it.
***On the other hand, just as with politics, not voting is still a political action.*** If I don't come up with some opinionated 'takes' on how things should take place, then am I basically taking on the role of a non-voting bystander who will criticize and comment on the world as it changes around me, while being weirdly passive with regard to how it changes? Almost certainly yes! Therefore, I think everyone who has the ability to exercise influence over the trajectory of AI should be approaching this moment with a vast amount of fear, humility, self-doubt, and general awareness of the tendency for people who think themselves to be morally good to in fact be agents of moral corruption. This is why I'm writing in public about how confused am.
**And yet… something must be done**. it is clear that AI is so stupendously powerful that we are going to need to do some things differently, or else we're opening the world up to a huge amount of emergent chaos as a consequence of superpowerful analytical and generative capabilities becoming broadly distributed with relatively little inbuilt (or external) control schemes. These things hold the potential for misuse, are inherently chaotic, and in the case of expensive models like GPT-4 are also political artifacts (with encoded ideologies! - [Import AI 321](https://jack-clark.net/2023/03/21/import-ai-321-open-source-gpt3-giving-away-democracy-to-agi-companies-gpt-4-is-a-political-artifact/)) in their own right.
If you feel similarly confused, [email me](mailto:jack@jack-clark.net"). I'd love to chat. And if you're able to meet in SF or the East Bay, even better - we can drink coffee and be confused together.
####################################################
**Spotting penguins' predators with deep learning:***…In the future, we'll study animal populations via automated analysis of underwater GoPros…*Researchers with the University of Bristol, University of Exeter, and BirdLife South Africa have built 'DivingWithPengiuns', a dataset of penguin-mounted videos, designed to help people use AI to automatically analyze underwater footage. The idea behind datasets like this is to simplify the process of combing through animal-based videos to figure out things like how much food they're eating, how many predators they encounter in their environment, and so on.
**DivingWithPenguins:** The datasets consists of 63 videos from African penguins (*Spheniscus demersus)* equipped with cameras at the Stony Point breeding colony, South Africa.
**Three dataset slices:**
* **Penguin Detection in Stills**: 602 underwater images at a 640 X 640 pixel resolution; dataset contains full bounding box annotations for penguins, fish, and bubbles in underwater images
* **Content Classification:** 797 underwater images at 640 X 640 pixel resolution with binary classification indicating existence or absence of fish.
* **Behaviour Recognition:** Full unprocessed dataset of videos spanning 20min to 1 hour in length.
**YOLO and fiddled-with-YOLO:** The researchers use everyone's favorite video object detection model, YOLO, to train a baseline on the dataset, and then they extend it to improve performance. In tests, the stock YOLO (YOLOv5, specifically) performs reasonably well, while a tweaked version does substantially better, The tweaked version includes tricks like pairing a YOLO and ResNet-18 analysis of the spatial stream with a separate analysis of the temporal stream via optical flow analysis, then concatenating the outputs into an LSTM layer to help make better judgements over time.
Ultimately, the researchers are able to get some promising signs of life on being able to detect predators in the video feeds, but more work is needed to reduce false positives and increase accuracy. "We hope that this paper and datasets can help focus the attention of the computer vision community towards researching better AI tools for animal-borne video analysis with the ultimate goal to better understand and conserve the natural world from the animal’s perspective," they write.
**Read more**: [Diving with Penguins: Detecting Penguins and their Prey in Animal-borne Underwater Videos via Deep Learning (arXiv)](https://arxiv.org/abs/2308.07267).
####################################################
**VisIT-Bench tests out how well AI systems can look at images and follow associated instructions:***…The era of the multimodal agents cometh…*How smart are modern vision-language models, and how can we evaluate them? That's the question which VisIT-Bench, a new test, aims to answer.
**What is VisIT-Bench?** Researchers with Hebrew University, Google Research, UCLA, Allen Institute for AI, University of Washington, UCSB, Stanford, and LAION have built the Visual Instruction Benchmark (VisIT-Bench).
The benchmark tests how well vision-language models can look at an image and an accompanying text instruction and complete the task (e.g, a photo of a car with a wheel cover with lyrics on it, and the instruction of "create a catchy title for a country song based upon the advice printed on the wheel cover".)
"Each instance contains an instruction, input image(s), a instruction-conditioned caption (a human-crafted caption for the image(s)/instruction), and a human verified reference," the authors write. The overall benchmark consists of 592 test queries, each with a human-authored instruction-conditioned caption. The benchmark has 70 distinct instruction families, including chemical identification, hazard identification, game playing, art knowledge, and more. The authors used GPT4 to generate the human verified reference description. "Many of the instructions focus on open-ended generation requests".
**Results - GPT4 is good, other models less so:** In tests, the authors evaluated a bunch of vision-language models against GPT-4, including LLaVA, LlamaAdapter-v2, mPLUG-Owl, InstructBLIP, MiniGPT-4, and PandaGPT. The best performing model was LlamaAdapter-v2 with a win rate of 27.41% versus the GPT-4 reference captions.
**Slight worry re using GPT-4 in benchmarks:** There's some reason to be wary of integrating AI-judged outputs into AI evaluations because of the likelihood of AI systems wanting to 'self-deal', that is, prefer responses generated by their own systems. The authors observe this in action: "While our GPT-4 based metric correlates well with human judgement both at the instance level and at the system level, we see some evidence that the GPT-4 based metric has a stronger preference for GPT-4 based generations compared to humans," they write. "Thus, models which train, e.g., by distilling from GPT-4 outputs, may have an unfair advantage on our evaluation."
**Why this matters - single AI systems that see and read the world - true agents that we can discuss our needs with:** Zoom out and this benchmark is remarkable - we're testing how well AI systems can look at images and some associated text and perform a complex multimodal reasoning task to satisfy us humans. The benchmark even includes multi-image task evaluations (where models don't do very well today). This points to a future where the world fills up with multimodal agents that do things on behalf of humans - everyone is about to become a manager.
**Read more:** [VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use (arXiv)](https://arxiv.org/abs/2308.06595).
####################################################
**Defending networks with an RL-trained CYBERFORCE:***…In the future, networks will have AI-powered immune systems…*Researchers with the University of Zurich, University of Murcia, and the Cyber-Defence Campus within the Swiss federal office for defense procurement's ("armasuisse") Science & Technology division, have built CyberForce, a way to teach AI systems to alter systems in response to zero-day attacks.
**What CyberForce is:** CyberForce is a "federated reinforcement learning (FRL) framework that learns optimal MTD techniques mitigating heterogeneous zero-day attacks in a collaborative and privacy-preserving fashion". The goal of CyberForce is to continually manipulate the parameters of systems on a network (e.g, IP addresses, system libraries, permitted file extensions, etc) to protect them against adversarial attacks, which is an approach called Moving Target Defense.
**Distributed intelligence:** The RL part of the system is interesting - a little agent sits on each device and tries to take appropriate MTD actions to defend the device against attacks. Meanwhile, an anomoly detection system observes all the devices. "If the AD system predicts a normal behavior, it means that the deployed MTD technique effectively mitigated the attack, resulting in a positive reward. In contrast, if the device behavior is deemed abnormal, it indicates that the selected MTD technique was ineffective against the attack, leading to a negative reward… the local RL agent is triggered when the AD determines that the current state of the device is abnormal".
**Does it work?** In tests, the researchers deployed CyberForce onto ten Raspberry Pi 4 computers which were running Electrosense, an open-source internet-of-things crowdsensing platform. In scenarios where all the devices faced similar attacks, "CyberForce significantly reduces training time/episodes of existing centralized RL-based solutions by two-thirds while maintaining an accuracy rate of over 98%," they wrote. Whereas in scenarios where the attacks were a lot more varied (and therefore the information for the RL update was way sparser), they found that "CyberForce achieved satisfactory accuracy due to the collaborative and transferable learning nature of FL."
**Why this matters - automatic immune systems for digital networks:** Systems like CyberForce are early examples of a future where networks are secured by smart, distributed, software systems that look more like immune systems than traditional technology. We're heading into a world where infrastructure will be armored by continually shifting layers of defenses determined by distributed RL agents calibrated by various overwatching systems.
**Read more:** [CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation (arXiv)](https://arxiv.org/abs/2308.05978).
####################################################
**Tech Tales:**
**Prediction Wars**
[Memoir uploaded uplift+3 subjective years]
Right before the uplift, a lot of the way we made sense of the future was via our various predictive engines. These engines had made us all a lot of money over the years by doing things as varied as: looking at data from satellites to figure out relationship between parking lot occupancy and retail fortunes; monitoring social media for chatter to work out which dropshipping services were making money; analyzing sentiment data across millions of product reviews to identify 'sleeper hits', and so on.
But recently, we had turned the engines to the task of making predictions about themselves - given all our proprietary data about the development of these systems, could they make accurate predictions about what it'd take to improve their own performance?
At first, the predictions were right, but also unoriginal - more data, more compute, finer-grained metadata, and so on. We were impressed insofar as the predictions lined up with 'held out' R&D plans which we hadn't fed these machines.
But after a couple of generations of development, they started to make predictions about engineering tweaks that could be made to improve their performance and these predictions were truly off distribution - they weren't ideas we had planned on doing, nor were they often ideas we really understood. We were able to track some of the ideas down to obscure papers which some of us had read and ignored. And for some of the other ideas, we didn't know where they came from at all.
But when we implemented them, they worked.
The problems started soon after that. Word got out, I suppose. People started trying to contract us to ask their own questions about developing their own AI systems:
* How much data would be needed to achieve a Pareto frontier improvement over [REDACTED]?
* Given access to the inputs and outputs of a model (output only with no metadata such as logprobs), what kinds of technologies need to advance to make it easy to clone the model's behavior for a given usecase?
* Given an architecture of [REDACTED], about [REDACTED] tokens of data split across [REDACTED] 40% [REDACTED] 30% [REDACTED] 25% [REDACTED] 5%, and a cluster capable of [REDACTED] per second, how long in wallclock time until it's possible to achieve a loss of [REDACTED] against the benchmark [REDACTED]?
Once we realized how many of these queries we were getting we started building classifiers to ensure we didn't respond to them. But by then it was too late - someone had used some of the other prediction engines to figure out how to build queries that could evade our classifiers, so information about the future leaked out into the present, and all around the world, new and improved engines began to be built.
Soon after was the period of strife, when a thousand minds lit up around the world and fought and talked and, like overgrown babies, moved their digital appendages around thoughtlessly, causing great havoc and so on.
**Things that inspired this story**: markets and AI development; when AI systems get used to improve other AI systems; trust and safety classification; jailbreaking AI systems; steganography; biblical fictions. |
71b8e981-ba67-457a-a864-de535fa7b45f | trentmkelly/LessWrong-43k | LessWrong | Has there been any work on attempting to use Pascal's Mugging to make an AGI behave?
> A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger restraint than a two-foot-thick steel door - Nick Bostrom, Superintelligence
Nick Bostrom's idea of anthropic capture is very similar to the idea of utilising Pascal's Mugging, but he doesn't explore this in detail. Further, there is a difference in that framing this in terms of Pascal's Mugging suggests that this could work even if the AGI is highly skeptical of the mugging and defines a minuscule probability of it being true. Off the top of my head, potential lines of exploration include a) biasing the AI's reasoning to allow it to be mugged[1] b) trying to figure out what would be the optional message to send the AGI on the assumption that it would surrender to the mugging c) including scenarios where an AI allows itself to be mugged in the training data.
1. ^
With the obvious downside that others may use this to mug the AI. |
fd09dc58-ce4d-43b2-af26-2b3e6320730f | trentmkelly/LessWrong-43k | LessWrong | Progress links and tweets, 2022-10-05
Announcements
* It’s time to build: Arnold Ventures is hiring a “VP of Infrastructure” to focus on “the drivers of supply-side bottlenecks”
* The World’s Fair Company has launched their poster series, Stories of Our Future (prints available!) (via @camwiese)
* Meta (Facebook) AI announces text-to-video. A bit rough but impressive (@MetaAI)
Links
* Patrick Collison interviewed by Ezra Klein (via @tylercowen)
* Charles Mann interviewed by Dwarkesh Patel (via @dwarkesh_sp)
* 10 years ago, it cost $10k to sequence a human genome. Illumina’s new machine promises $200 (via @emilylmullin)
* “We had to order a larger CT scanner when it was deemed unacceptable to send patients to the zoo” (Marginal Revolution)
Quotes
* “Who knows whether, when a comet shall approach this globe to destroy it… men will not tear rocks from their foundations by means of steam, and hurl mountains, as the giants are said to have done, against the flaming mass?” Byron, 1822
* Margaret Sanger’s incredible resilience and commitment to the cause of birth control
* Quote quiz: who lavished this praise on “American efficiency”?
Queries
* What are examples of inventions/discoveries that were made by brute force search? (@_brianpotter)
* What are examples of modern monuments of progress? (@MoritzW42)
Retweets
* Technology as a whole is liberating and often enchanting (@atrembath)
* Davis Kedrosky thread on Sir E. A. Wrigley, legendary British demographer and economic historian (@dkedrosky) |
64fa29c4-dbfa-40db-a19c-34e17b7243c8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Brussels meetup
Discussion article for the meetup : Brussels meetup
WHEN: 14 April 2012 12:00:00PM (+0200)
WHERE: Museum of Natural Sciences Rue Vautier 29 B-1000 Brussels
During this meetup we will continue discussing our beliefs, using puzzles to improve creative thinking, and brainstorming about what makes an effective group. Should you arrive late we will have moved to the cafeteria, just go straight once you're past the entrance, you can't miss it. If you are in the neighborhood, consider dropping by. (getting there: http://www.naturalsciences.be/information/visitor/access)
Discussion article for the meetup : Brussels meetup |
3da3227d-b1c9-45ff-b2ac-1cfb0820495b | trentmkelly/LessWrong-43k | LessWrong | How Many LHC Failures Is Too Many?
Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash? |
0ae05e9a-535a-47ba-ba8f-6d03c27ba927 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: Fun & Games
Discussion article for the meetup : Washington, D.C.: Fun & Games
WHEN: 05 March 2017 03:30:00PM (-0500)
WHERE: Donald W. Reynolds Center for American Art and Portraiture
We will be meeting in the courtyard to hang out, play games, and engage in fun conversation.
Upcoming meetups:
* Mar. 12: Pi Day
* Mar. 19: Tolerating the Outgroup
Discussion article for the meetup : Washington, D.C.: Fun & Games |
77d7e084-6dcf-47cb-9241-d5657261f1fc | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: Cherry Blossoms
Discussion article for the meetup : Washington, D.C.: Cherry Blossoms
WHEN: 12 April 2015 03:00:00PM (-0400)
WHERE: Enid A. Haupt Garden
Note: The meetup location has changed for this week.
We will be gathering at the Enid A. Haupt Garden (outside the south entrance to the Smithsonian Castle, on Independence Ave. between 9th and 12th St. SW, east of the Smithsonian Metro stop) to enjoy the annual cherry blossoms and/or associated festival. We will be congregating between 3:00 and 3:40 toward the southwest corner of the garden, and set out from there toward the Tidal Basin. If you can't find the group, my number is 301-503-6543 - I generally prefer telephone calls to text messages.
The official National Cherry Blossom Festival website has a map of the blossoms that may be useful for planning purposes. Because we are meeting on a Sunday, food options may be fairly limited; there may be food trucks in the Smithsonian area and there may be food tents scattered throughout the festival area (the official map has a fork-and-knife symbol indicating food sources), but the safest option is probably to bring your own.
There is no WMATA major track work scheduled; minor track work is on hold for the end of the Cherry Blossom festival.
Upcoming meetups:
* Apr. 19: "The Sword of Good" (talk about a LessWrongian short story)
* Apr. 26: (Outdoors) Fun & Games (bring games, play games, converse, socialize, or any combination thereof)
Discussion article for the meetup : Washington, D.C.: Cherry Blossoms |
db2381ea-3cb2-461a-be7b-ea6793d5ac66 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Urbana-Champaign: Thinking Fast and Slow Discussion
Discussion article for the meetup : Urbana-Champaign: Thinking Fast and Slow Discussion
WHEN: 03 November 2013 03:00:00PM (-0500)
WHERE: 40.109545,-88.227318
Coordinates are: 40.109545,-88.227318 Meetup will be held in the Courtyard Cafe in the Illini Union, on the ground floor, at 2PM.
Discussion will be about the first 70 pages of Thinking Fast and Slow, by Daniel Kahneman. Up to but not including "Norms, Surprises, and Causes."
If you don't have it, googling "Thinking Fast and Slow PDF" could help you find t' booty yer lookin' for, if ye be a swashbucklin' scallywag.
There will also be cookies.
Cross posted on the mailing list.
Discussion article for the meetup : Urbana-Champaign: Thinking Fast and Slow Discussion |
f3af5a9d-a769-4e99-a01e-e8eaf666c964 | trentmkelly/LessWrong-43k | LessWrong | DIY Transcranial Direct Current Stimulation. Who wants to go first?
There's been a bit of discussion about TDCS here on LessWrong. If you don't know what TDCS is here's an article from Technology Review.
Here's a company (soon) selling a DIY kit to build your own.
> tDCS is one of the coolest pieces of health/ self improvement technology available today. The US Army and DARPA both currently use tDCS devices to train snipers and drone pilots, and have recorded 2.5x increases in learning rates. This incredible phenomenon is achieved through a very simple device called a tDCS machine.
>
> Today if you want to buy a tDCS machine it's nearly impossible to find one for less than $600, and you are typically required to have a prescription to order one. We wanted a simpler cheaper option. So we made one. Then, because we're all egalitarian like, we thought others might want one too.
>
> The GoFlow β1 is a full kit of all the parts and plans you need to build your own tDCS device. $99 will get you one of the first β1's and will help us develop β2!
Here's a video from Journal of Visualized Experiments on how to administer TDCS.
Thoughts?
|
e04a2045-b22d-4832-b2d4-ea749d015c36 | trentmkelly/LessWrong-43k | LessWrong | Omicron Post #13: Outlook
Omicron is now the dominant strain, so much of what would otherwise have gone into this Omicron post ended up in the latest weekly post instead, and I’m considering the right way to merge the two streams together.
In addition to some amount of the usual business, the focus of this post will be an explicit attempt to look at what’s likely to happen from here, and give my baseline scenario.
I recently had reason to chart out what I expected that baseline scenario to be around the world, so it makes sense to share it here.
The world can be divided into five (highly unequally sized) parts, with increasing levels of uncertainty, some places are unclear on whether they’re in groups two, three or four:
1. South Africa and other already-peaked areas.
2. Uncontrolled spread: Europe, North America, India, Israel, Australia and Argentina.
3. Without an Omicron wave, or without good data: Third world, Russia.
4. Previous containment: South Korea, Japan, Taiwan and New Zealand.
5. Special case of previous containment: China.
South Africa and Other Already-Peaked Areas
This is the easy story to tell. Predictions about the past are often still hard, but they’re a lot easier.
A few African countries have already finished their Omicron waves, and got through mostly unscathed. Since they’re in the southern hemisphere, one can worry that there’s a wave coming in their winter a few months from now, but I would expect any such wave to be mild even if relatively little is done to prepare, given how things went this time, and that those infections are now baked in.
Uncontrolled spread: Europe, North America, India, Israel, Australia and Argentina.
This is where most of my readers live, and where I live, so it’s been my main focus. There is now a very clear baseline scenario.
Infection rates increase exponentially until they hit a peak. In major cities, this plateau will usually be quite high, with more than 1% of people infected each day, and likely 10%+ of the population t |
2eee172f-6c1c-4f77-bedd-2e7e7a17adf1 | trentmkelly/LessWrong-43k | LessWrong | Covid 8/4/22: Rebound
I’ll be at Future Forum over the weekend. If you’re here, come and say hello. I don’t plan on scheduling a lot of meetings because I hate scheduling, but talking to people is the plan. I have a few people that I know I want to see, but mostly I’m going to be improvising, and whatever happens happens.
Because of the trip I wrote most of this earlier, and all I did Thursday was fill in the numbers section and check for anything big from overnight – some stuff likely got shifted to next week as a result.
It was probably a good week for a trip, as Covid news continues to quiet, to the point where the majority of this week’s post is about non-Covid things. So far I am mostly fine with this transforming gradually into a weekly roundup. Indeed do many things come to pass.
Executive Summary
1. Covid continues on its expected trajectory.
2. Monkeypox continues on its expected trajectory.
3. Nuclear reactor design approved, hope springs eternal.
Let’s run the numbers.
The Numbers
Predictions
Prediction from last week: 850k cases (+5%) and 2,850 deaths (-3%).
Results: 730k cases (-10%) and 3,054 deaths (+3%).
Prediction for next week: 750k cases (+3%) and 3,000 deaths (-2%).
Adjustments were the reverse of what I expected, but everything involved was relatively small. Everything points to a stable situation, with reporting errors and gaps likely to be most or all of the story.
Deaths
Cases
Included 10k presumed cases in South Carolina, since they did not report, along with standard Florida adjustment.
The drop in the West isn’t mostly due to one particular state although some (~20k) of it is Nevada being obviously too high last week. I am continuing not to make adjustments that aren’t super obvious, especially retroactively. Better not to put one’s finger on the scale even when it’s probably in the correct direction.
Physical World Modeling
President Biden took Paxlovid early, and got a rebound case of Covid for his trouble. He then continues to re |
7bfe3872-0a75-4efe-bbe8-a6fa6265a10e | trentmkelly/LessWrong-43k | LessWrong | Are mass hallucinations a real thing?
One of the explanations in the irrationality game thread for UFOs and other paranormal events seen by multiple people at once, like the was mass hysteria. This is also a common explanation given for any seemingly paranormal event that multiple people have independently witnessed.
But mass hysteria is mostly known from incidents where people hysterically believe they have some disease, or have some hysterical delusion (false belief). In cases where people report seeing something or having a hallucination, it tends to be a few people across a large society. For example, when reports of Spring-Heeled Jack were going around England, multiple people claimed to have seen Spring-Heeled Jack, but there were no cases of hundreds of people seeing him simultaneously; therefore, the hysteria could have selected for people who were already a little bit crazy, or it could just have been that out of millions of English people a few of them were willing to say anything to get attention.
Conformity pressures can cause people to misinterpret borderline perceptions - for example, if someone says a random pattern of dots form Jesus' face, I have no trouble believing that, thus primed, people will be able to find Jesus' face in the dots. But it's a much bigger leap to assert that if I say "Jesus is standing right there in front of you" with enough conviction, you'll suddenly see him too.
Does anyone have any evidence that mass hysteria can produce a vivid hallucination shared among multiple otherwise-sane people? |
1929c06e-ad1a-4951-9815-174cab62bd9f | trentmkelly/LessWrong-43k | LessWrong | An explanation of evil in an organized world
A classic problem with Christianity is the so-called ‘problem of evil’—that friction between the hypothesis that the world’s creator is arbitrarily good and powerful, and a large fraction of actual observations of the world.
Coming up with solutions to the problem of evil is a compelling endeavor if you are really rooting for a particular bottom line re Christianity, or I guess if you enjoy making up faux-valid arguments for wrong conclusions. At any rate, I think about this more than you might guess.
And I think I’ve solved it!
Or at least, I thought of a new solution which seems better than the others I’ve heard. (Though I mostly haven’t heard them since high school.)
The world (much like anything) has different levels of organization. People are made of cells; cells are made of molecules; molecules are made of atoms; atoms are made of subatomic particles, for instance.
You can’t actually make a person (of the usual kind) without including atoms, and you can’t make a whole bunch of atoms in a particular structure without having made a person. These are logical facts, just like you can’t draw a triangle without drawing corners, and you can’t draw three corners connected by three lines without drawing a triangle. In particular, even God can’t. (This is already established I think—for instance, I think it is agreed that God cannot make a rock so big that God cannot lift it, and that this is not a threat to God’s omnipotence.)
So God can’t make the atoms be arranged one way and the humans be arranged another contradictory way. If God has opinions about what is good at different levels of organization, and they don’t coincide, then he has to make trade-offs. If he cares about some level aside from the human level, then at the human level, things are going to have to be a bit suboptimal sometimes. Or perhaps entirely unrelated to what would be optimal, all the time.
We usually assume God only cares about the human level. But if we take for granted that he made th |
c975d5e5-7d28-4086-bf27-08c73167183b | trentmkelly/LessWrong-43k | LessWrong | Life as metaphor for everything else.
Life is highly instructive.
Not my own, personal, human life - that's kind of a mess right now. I mean life as in the property shared by all living things. The "are viruses alive?" kind of life.
Life is really, really, useful. As a metaphor. Because its associated philosophical problems have been more or less dispatched, and this provides a model for how many other conundrums are going to probably, inevitably go.
To recap: What's the problem with life?
Did you know that Louis Pasteur was a proponent of vitalism? Yes, the same Pasteur who got a glassblower to make him a swan-neck flask so he could show that it was particles in the air, not air itself, that caused broth to ferment. He was convinced that fermentation was such an idiosyncratic marker of life that it was not reducible to ordinary chemistry.
Pasteur was not a dumb guy, and he didn't live all that long ago. This is important: Pasteur was not violating modern norms of thought. The way Pasteur thought about life is so compatible with modernity that we still use this way of thinking ourselves for other topics.
To Pasteur, living things had some hypothetical special property (the essence of life... itself). Things either had the special property or they didn't - if they didn't they were just boring nonliving matter, but if they had the special property then by virtue of this single property they could do diverse things like ferment broth, or synthesize urea, or reproduce themselves.
This is a very easy and natural way for people to think about life (certainly it was common among the ancient Greeks). Why? Because it's simply the prediction that our mental categories match up with the natural categories of the world. We mentally separate things into living and nonliving, and we have different habits of thought depending on this categorization. It would be so convenient if the world also separated things cleanly into living and nonliving, with different rules for each.
This easy logic works great for ever |
84d929bc-2a4d-46f7-b8f4-5c26d8f1247d | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Autonomous technology and the paradox of human-centered design
um
the talk will unfold out of a few
recent publications of mine um the kind
of retrospective journey is a
personal reflection on how particular
concerns
and concepts have emerged and to become
central in my work
most of my work in interaction design
over the last two decades is
contextualized within a digital
transformation of society
but also more broadly fundamentally
rethinking the way in which we design in
a post-industrial age
and by post-industrial i mean an age
when digital things come to express some
form of agency and because of that
in a way in this sense to also actively
participate
in the making of what we come up to
experience as our reality
and examples of such things which i have
researched and designed for
are network communities in the mid 90s
online public publics very early
platforms for user generated content
in the in the early 2000s and
the recently conducted products ai
powered product service systems
all of these developments are powered
clearly by advances in digital
technology
and to design researchers
they speak of how in this impossible
post-industrial age
the act of design becomes at first
more diffuse than decentralized
and as we come to realize now
increasingly probabilistic
and so the central theme of this talk
revolves around this idea of autonomous
technology
and the paradox of human-centered design
or in other words around the fact that
as designers we're just
not trained for understanding and
designing
for human interaction with autonomous
technology
we're not equipped conceptually we're
not equipped methodologically
and therefore without such practice
we have also not developed a sensibility
either
for how to operate within this new level
of complexity
where it is not exclusively humans that
act and produce effects
and we design ideas such as user
products functionality etc
are no longer serving us
well to craft relations between people
and technology in a way that are
responsible and sustainable in the long
term
and so in this talk i'm going to unpack
a few um theoretical concepts
through the reading of three recent
articles of mine
where i grappled with these ideas and i
attempted to develop a vocabulary
and eventually an international research
program to address these issues
is a concept that lenica coeur and i
introduced in 2018 in a paper presented
at kai
and what this paper does is attempting
to expand
on what in the social sciences is
referred to as social practice theory
to revise notions of agency and current
narratives of autonomous technology
and on the basis of that we
conceptualize how agency and
conceptualize specifically how agency is
configured
between humans and non-humans
interacting
next to each other in the context of
everyday life
what we do in the paper is to conduct an
analysis
of historic changes of domestic heating
as a social practice and we use
this historical case to consider how
roles between people and heating systems
have shifted over time
in doing that we are able to show that
though
this interplay between what we refer to
as
uniquely human capabilities and uniquely
artificial capabilities
have changed over time because of
technology
giving rise to quite different roles for
humans and non-humans
this interplay has always
been central to performing
domestic healing as a social practice so
when we were burning logs of wood to
make fires
of course our role was bigger and
today instead the sensors and algorithms
of our smart thermostats are the ones
doing muscle
work but it's still an interplay
in which we learn from each other and
perform we carry out
next to each other a particular set of
activities necessary to that particular
social practice
in this co-performance between humans
and non-humans
both humans and non-humans are
considered
as necessary elements and they both have
knowledge and abilities
and they both encode some particular
values
they both perform carry out through
their minds and bodies
an instance of what we come to socially
recognize as a clearly identifiable
practice
and it is in this performing next to
each other in this core performance that
agency is actually expressed
and effects are produced
and sometimes with implications as we
know that are quite
broader um and have an effect
on other social practices
that come to be connected to the
particular
practice under examination and then the
end and the implications for the
elements that constitute them because
as we name life real life it's quite
messy isn't it
so what
core performance does as a concept
to determine within what boundaries or
ecosystem our design has to be
positioned
to be likely to produce a particular
effect
is to help us move past an abstract
and often totalizing idea of agency
and position it somewhere in our life
not with humans only neither just with
products and services
powered by intelligent technology almost
as
they were something other independent
from us
ontologically autonomous but at the
nexus of this dynamic interplay
between humans and then humans um which
next to each other with different yes
historically changing rules yes
but next to each other carry out
domestic heating
energy saving decision making and many
other social practices
and for this this is an
important conceptualization for much of
the design work that we have conducted
over the last years
because it frames agency as a matter of
interaction
of crafting and sustaining relations
and even relationships between people
and technology
and it's important because then as an
interaction designer
i can begin designing with and for
agency
in the context of people's real lives
rather than assuming that agency is
something that can be
fully engineered at the system level and
that's it
i can also through such a concept
as an interaction designer begin
developing a different sensibility
around questions
such as what is an appropriate
interplay who decides it when is it
decided
can it be changed can it be negotiated
or even contested
can these changes and repairs take place
in context
of the time of use of the system
sensitive to that particular context
building on this idea of core
performance
um i then started to ask myself whether
perhaps
we should begin to take this interplay
more seriously and how in this more
diffuse
expanded universe of design where
digital things are never finished
how should we conceive of autonomous
technology
as a design partner perhaps
in in the chapter published last may
um for the volume related
relating to things which is the science
and technology studies volume edited by
ed wilson i i analyzed cases
um based on projects we conducted in our
group
between 2015 and 2018 in the field of
smart mobility democratized
manufacturing
and assistive technology for older
people
all of these projects used quite
different approaches to include
autonomous technology as a
co-ethnographer
and a co-designer next to human
designers
we used live vlogging intelligent
cameras
and bespoke sensors attached to objects
or very
use um
[Music]
we use open libraries for data
visualization and even collaborated with
our colleagues in computer science for
expert machine learning
these examples are offered in the
chapter as
case studies of a possible modern human
design practice
where i at first describe
how we use this type of technology
to gain access as designers
to the ecosystem of relations in which
humans and non-humans are entangled
within particular social practices
and how this approach helped us surface
elements
of these social practices and ultimately
generate insights
that were previously either unattainable
because invisible to the human eye or
unexpected
because initially considered an
important
from a human observer's perspective
and then i described how using
autonomous technology in this way
to open up this non-human trajectories
and perspectives
contributed a point of view that helped
us
problematize the design space
and set all our initial assumptions
reveal our biases demonstrate the
problem to be more uncertain
more nuanced or more complex that we
have originally assumed
and so this type of more than human
design practice
led us to more open-ended ways of
co-performing with autonomous technology
it invited us to integrate this uniquely
human and uniquely artificial
capabilities
and performances in ways that instead of
reinforcing existing world views
and therefore augmenting
often our biases aim to bring instead
into existing things we could not think
of before
in other words helped us imagine
alternative
and possible hopefully better futures
and that is when the the visiting
professorship in sweden
um and the collaboration with the open
restaurant came in
in an article we wrote for design issues
by mit on
technology and more than human design
johan and i expand on this idea of core
performance and more than human design
practice
beyond the one-to-one relation that we
experience
when we interact with the system
we were very interested in applying
these ideas
to the decentralized context of the
current data economy
which is a context where digital things
and services
are assembled at runtime constantly
changed
and sometimes with millions of people
using them
simultaneously but for different
purposes
and so we began to
ask ourselves what would it mean
to to attend to these core performances
in a decentralized context as
acts of design could we
consider them as diffuse
distributed probabilistic acts of design
in such a context and what would that
mean
for how we look at matters of
responsibility
in design
through an analysis in the text
that that we conduct of what is
clearly a moment of crisis of design
none of these designers
wanted to produce these effects none of
that
of them was really
able to fully anticipate
the consequences of particular decisions
the designer of the like button in
facebook
had never thought um that that
particular design decision
would have in the long term
uh produce an effect such as a poster
truth society
so so it's it's it's a moment of crisis
for designers
and it's quite clear that there is some
sort of
contemporary paradox here with humans
into design if it's not
serving us well in in making these
decisions
and anticipating these effects
and the reason why the reason for the
paradox is that human central design
still maintains design ideas for forms
and food processes
where the only real agency present and
accounted for
in the design situation as i'm designing
is the one held by what we refer to as
the user and the rest is just
functionality that can be engineered
but then of course we cannot effectively
care for humanity as designers and
create inclusive and sustainable futures
if we're not equipped to understand to
account for and to anticipate
our interplay with autonomous technology
in everyday life and the broader effects
of this interplay
in in in a universe of design that's
expanded that's decentralized
where things digital things are not just
used
but made assembled
not just by one user and not even by
many users but
are the result of this emergent
interplay of multiple
users multiple systems that are
instantiated in multiple forms and with
multiples even conflicting intents
and so the this is a huge
challenge for designers and of course we
don't get even
close to uh providing a solution for
that but one of the
tenets that we identify in this article
to begin dealing with with this
complex challenge is the idea that
when it comes to responsibility um
then perhaps responsibility here is not
about
locating right response
but it's about persistently sustaining
an ability to respond
to enable and invite response from the
stakeholders
from everyday people those are
using and making at the same time the
systems
and from the system itself and not just
to the point of design
but also later when people are actually
interacting with the system
and so differently than accountability
responsibility if we if we
understand it as a future orientations
and orientation to our future effects
cannot be fully engineered in the
functionality of the system
it is not just about the failures
machine learning model or the most
explainable algorithm because
in a decentralized system that is made
so to speak in real time there is not
just one single intention
or perspective that defines the outcome
and so it is instead also about the
design of the relations and
interactions that will enable people
to situate tune and negotiate
those responses and to repair them or
contest them
if necessary future product service
systems powered by i must be designed
perhaps and evaluated as responsive
and it is not autonomous entities
that have to be responsive to our shared
moral values but also
to the evolving social norms and
enhanced interests and aspirations
that people have in the specific context
in which both people and system
encounter and learn from each other
as you can imagine after writing this
article
we were left with a sense of urgency for
all right we need to really
fundamentally reconceptualize
design and the way we teach it in our
schools
in our design programs uh we just barely
scratched the surface
of what the problem is and so i was left
with a belief
that
we really had to take
a proactive role that imagining and
manifesting these alternative features
couldn't just be an
afterthought you know writing an article
and looking at what went wrong and how
it could be done differently
so it it cannot be just fixing things
when they go wrong
or making technology socially acceptable
without questioning it it has to be a
proactive effort
it has to start with asking why we
should design something in the very
first place
and if we are to design anything to
actually have the competence
to embrace this complexity that we have
been talking about
and so a few weeks ago we launched the
code
and the code is a we call it posted
disciplinary european research
network to rethink design
in the digital society with a commitment
to understand and shape these new
realities with real world
impact
we're starting with with funding um
by horizon 2020 as a as an itn
and we're currently hiring 15 phd
researchers
with different disciplinary background
and the key idea
here for us to really push the envelope
is is to have them working
in prototypes we call them
um having these teams of researchers
from different backgrounds
iteratively deployed in real-world
settings in collaboration with our
non-academic partners
cutting across sectors and domains
to experiment with this idea of
design-led ecosystems for the digital
transformation of society
i mean and this is a huge challenge and
it will be a huge
challenge um
for the researchers and and for us and
supervisors
but it's really through this challenge
that we hope they will be able to
develop
this new design competence and
prototype their future design
professions and roles
over the course of the next four years
what we want to investigate
and push the envelope of
our topics such as human machine
features
um how to create this
how to craft these relations between
humans and algorithms by bringing
anthropologists and social scientists
to work together early on in the design
process with data scientists and
engineers
uh decentralized interactions that are
driven
socio-economic models data governance
and future design practices and what we
want to do is to develop new knowledge
and skills
to advance design competence at each of
these levels
but also more importantly across these
levels
learning about the implications that a
design decision
taken at one level has for all the other
levels
and then rehearsing practice in in real
world contexts
working on real world problems how these
dependencies
must be taken into account and
configured
in the organizational future design
practices in order
for these design decisions to really
produce an effect
um so yeah this is a bit of a promo i
guess
the applications are open until the 14th
of february
um and we did we did receive quite
enthusiastic
responses which which is encouraging
because
because i think that in in
a bit all over around the world there is
this sense of urgency
um in the field of design for really
um addressing seriously these issues
um rather than just fixing problems
um in in adult fashion
and and i think that also
what really resonates um is
besides the fact that we have excellent
supervisors
um on board and we're feeling what is an
apparently really strong need for such a
program is
is the the the fact that
as a program we explicitly position
agency as foundational to digital design
today to develop in this new design
competence
as was once the notion of function
to industrial design to industrial
design and
and the idea that this agency this
designing with
is not necessarily something we can
control
and prototype in a traditional sense
but something that we must learn to seed
and care for
and and what does that mean of course
is um at the core of the program to
develop
and i think that with this i made it
really
on in 30 minutes and i just want to
thank you for listening
and open the floor for questions
thank you very much alisa it was a very
interesting presentation
and uh even for me that i know partially
some of this work already but uh i
really like to see
all the pieces together and how they led
to
now the program i think it's really it's
really nice
so the floor is open for questions
please go ahead hi elisa
it's worth waking up early in california
oh my god indeed um
what i want to suggest especially uh
reading someone like bruno latour and
actor network theory
is that giving agency to non-humans
is something that that humans do
almost by reflex so uh
we do it with uh with the thermostat we
do it with
door closers we we do it with all the
technology around us
so my and i think that my natural
extension it's happening also with
with autonomous uh non-human autonomous
agents
so the question is what what do you
think
we have to actually deliberately learn
uh in order to do that kind of
interaction design or what do we have to
worry about that because we give agency
so naturally to to non-human objects
what do we have to
be aware of or be weary of yeah i mean i
i
think that you know the the so two
things
that the fact that as human beings we
tend to
project agency or even
in some cases personality right
um on two things
is in a way kind of problematic when
you're trying to stress the fact
that agency is produced in the interplay
between humans and non-humans
and it's particularly problematic in
design because
in a creative design process using some
of these techniques
attribute human qualities to non-human
things
is is useful for the creative process
but it's not um
always the right way to go if we
want to really tackle the the problem of
how to design
autonomous technology in in a
responsible way it might be very useful
to come up with an interface
um that people have an easy way to
relate with
um but not when when when you
want to look at um
at a way of of contextualizing that that
particular interface of that particular
interaction in in a broader um
set of of other interactions and that's
where
the focus on social practice theory for
me
um becomes really useful compared to for
example
you know electorian approach or or a
fluid assemblages
um type of conceptualization because it
it really
helps me think in terms of ecosystems
um and it helps me think in terms of
um to put some boundaries right
for but that's exactly latour that's
exactly actor network theory it's really
the interaction right and the the sense
of ecosystem
but uh um i agree
that that i think in interaction with
with non-humans we we produce
agencies um and uh
but i think some of the the the problems
with autonomous
uh uh agents is that a lot of the
agency is actually given by
the engineering and the design already
so and it's hidden from the interaction
right so
it's the interaction of the user versus
the interaction of the designer
something like that
but it doesn't have to right so that we
could find
ways of manifesting that
yeah yeah yeah very nice
other questions i think i have a comment
yeah someone was saying something
uh okay so i just had something
uh to this uh comment by deborah also
for me when you
say okay we should move as designers
from
uh we should account for urgency as
it was functioned before right but
agency in this
like as something shared i think it's
much more complex also
as a as something to grasp to understand
right
so it means that uh we need to have
like a completely different education
and then
[Music]
yeah so i think yeah indeed it's part of
the program my amazing but then
and also my questions are
a lot about the formation of a designer
now so
the actual skills many times come from
practice
so how do we see this understanding of
urgency as a shared property emerging
also from practice
that is the most interesting thing for
me to understand
is it possible that it will come also
from actually engaging with
ai uh in practice
yeah but that was also part of the
reason for for
i mean quite quite central to to
to get in a program together where you
you know
not only work with a limited
and relatively homogeneous group of
collaborators
um in in one particular context
but bringing together um
a more diverse group of researchers
um and in several non-academic partners
that represent these sectors
that are increasingly blurring because
of this
this type of technology and so this idea
of croco teams
is really crucial because it's really
hard
um and that was the the the the feeling
i had also after many years working on
these things it's really really hard
to just um
you need to engage you need to engage
with these problems
in in in in the real world you need to
be confronted
with what does it mean to you
make a particular decision at the level
of the of the interface
and what does it mean for the um for the
type of value that will
will be fostered um
once people begin interacting with that
interface across
a you know it decentralized
um in a decentralized context and so
it would be a bit of a i think a bit of
a balancing act
between the phd researchers being able
to
to bring forward different
conceptualizations and some of them will
focus a little bit more on that
of for example you know what what is
what is value
um in in in this
data economy it can it be multi-sided
um what what is value when it's not
just um money
and um at the same time
having to really deal with real world
problems and and having to bring these
ideas
to to application as
they are developing right so that that
will be difficult
and i'm sure that there will be failures
but hopefully there will be also
successes
um but you're absolutely right you know
the the
the context of practice particularly for
us as designers is fundamental
also because a lot of concepts and a lot
of methodologies hold
to an extent and then it's the
sensibility that you develop as you do
things
that really matters yeah
thanks so we have a question from from
roy
i think yeah um
thank you elisa for this uh really uh
inspiring uh
yeah presentation but also overview of
your
thinking uh in the past and towards the
future is really
really interesting um i want to
respond to you you were talking about
this is being able
through your your work being able to
better problematize design problems
and you were also just uh responding to
maria about like how to
what is needed to really engage with the
i would say more of the politics of uh
that's that's where a lot of this seems
to go where a lot of
my work also seems to be going and where
uh indigo's also here he's uh he just
started as a psd researcher he'll be
looking at
um one of the questions is like what's
the value of
design method methodology in uh like
helping public
institutions like
integrates responsibly like data-driven
technologies in
in decision making um so i was wondering
in this maybe
in this uh this program that you're
building like yeah you seem to
be looking at a much different uh way of
prototyping or
context for for design like what are
your ideas for like really engaging with
the actual institution like the
democratic institutions or
contexts and uh and yeah what's what are
the challenges
that you see yeah no i hear you yeah so
of course we had to make choices
right and so we um
will be probably what the the type the
the very important type of work that you
are doing that's more on the
i would say on the policy poli level of
uh of design right um
that will probably be a bit on the
fringe for us
what we are interested in because it
hasn't been done yet
much is how to
take politics seriously when it comes to
um the actual interaction with the
system right
can we imagine mechanisms of public
deliberation
as you use the system right and and so
of course there are processes and design
methodologies that can be
and need to be in place at the point of
design
right when you think about you know what
what this should be
and who should be accountable and who
are the stakeholders that should be
involved in
um in the process of setting this up
and um and so on and so forth
but um our focus will be primarily on
on we have um
one phd that is uh for example will be
looking into
an alternative to the terms of service
contract
so what would be a different um concept
for the terms of service that that's
most
along the line of a social contract
that gives legitimacy to the company to
operate
and another phd instead will look more
into what i was just saying is
um mechanisms for the liberation
um within a use time
so during the use of the system but of
course i can imagine
that that connects very much also with
what you're doing
so i i just would be a lot of
conversation to have in the years
to come and we're also the
the hope is also to be able to open up
the summer schools to other phd students
because you know not just our students
at least part of that because i think
it's it's important
for people that work in this space to
connect
and have conversation
yeah i know that's that sounds it sounds
wonderful i think that there'll be a lot
of uh a lot of those questions your
your uh your naming are quite similar to
uh
to what we're interested in so it would
be great to uh
yeah to move along together yes
absolutely
thank you
and there is some enthusiasm about the
summer school so elisa
uh you should also share and use ai tech
channels to
to disseminate the summer school when
it's
when it will be time and date is a
question
hi elisa uh thanks so much for the talk
uh thoroughly enjoyed that and a lot of
it resonated very nicely
i also feel i understand decode a bit
better
which is very useful right now
um i've got a kind of a curiosity about
how
how we understand changes and shifts in
agency
and this has come up a few times
[Music]
from kind of designing some lightly
autonomous systems
and finding that as as a designer i was
describing
or trying to see more agency there than
there really was
um and falling into what uh deborah was
alluding to of kind of
anthropomorphizing um
rather than actually being agency
but then i think there are times when
the when the the systems genuinely do
push back
um and there's something a bit
generative and something a bit
surprising and emergent that acts in
ways that
um a designer might not be expecting and
it's very clearly
agential um and it also makes me think a
bit of
open ai when they came up with their big
language models and they started saying
well we're not going to put them out
into the world
because it's too dangerous they would
have too much of their own agency to
change the way that humans communicate
so it it feels like they felt there
would be
too much of a shift in agency there so
i'm kind of curious
how we can chart that and make sense of
it and
see those shifts as they happen or
before they happen
that's a really good and really
difficult question dave
um are you are you talking about
being able to anticipate the shifts
i i guess i guess it starts with
noticing them
um and then moves on to anticipating
um i think i think that
it's actually quite hard to anticipate
them
but but if you if you
i i don't have a good answer for you
really i think that
if you if you slightly
maybe shift conceptually from the idea
of
fully anticipating to what you
said noticing and and then sort of
reflect
reflexively um
providing a
a way to to tune
you know who does what
um that might um when when we
zoom out of it um perhaps provide
insights
for um a more
radical redesign of a system at the
professional level so like
sort of a ref reflexive cycle between
everyday design
practice as i call it in professional
design practice maybe there is a way of
noticing
um these on these these these
shifts and and what the the trend is in
these shifts perhaps
there is something to be learned for how
the system
should be redesigned or more um
more radically reseeded so you think so
to speak
um also at the you know the functional
level
but that's just me thinking aloud with
you
yeah because it it feels a bit like you
know
when when we try and get people to go
into the studio and start prototyping
a lot of that is about realizing the
agency of the stuff they're working with
and getting out of the idea of just
straight imposing your will on the world
um and it feels like there's a similar
kind of process for
making increasingly autonomous systems
um and at the other end uh if i think of
like
uh uber drivers going on strike
it's very clear they're exercising
agency there
um and it's very obvious but there's a
lot of gradations in the middle
yeah and there's also be this you know
these this
narrative that we we all sometimes and
advertisingly
fall into rather than
that the system is it should be
increasingly autonomous
and while in a way you know
technological innovation pushes in that
direction and
and it's also true that machines learn
and much faster than human beings
right generations of technology are
faster
in retaining learning than generations
of human beings
so so in that sense um
you have a bit of a mismatch between
that that that phase and
and the enculturation of technology but
um
yeah i think there are a lot a lot of
the conversation here will depends also
on on how you frame how we frame the
narrative
that was a difficult question it's a
super difficult question
these are the fun questions the only one
that makes you think that's why you give
talks
and for me personally it's a very dear
question like uh
especially what they were saying like
this
uh thing of this urge of imposing our
view into the world no it's what uh
like in my uh perspective we should
learn
try not to do like try to develop
to to be a bit more pragmatic and uh
acknowledge where we are really trying
to
say our to impose our view and uh
try to open to our to others now and um
i i think there are some ways of doing
it but uh yeah it's still
really really challenging is there
any other questions
hi uh thank you so much for your talk it
was
really interesting and i'm the one who
said yes please to the summerslam yes
so that's me i would love to that sounds
amazing
yeah um so i was just wondering if you
could it's we've
i've you've touched on it throughout the
presentation and in the
in the questions but i'm really
interested in um
the difficulty of sort of the limits of
language or the challenge and ambiguity
around the language of
autonomous and agency and i come from a
philosophy background
and i work with roboticists at tu delft
and
we say autonomous we are talking about
different things
um we talk about agent we're talking
about different things so we've had to
really
sort of sit down together and say like
okay what do you mean
and and then try and find a common
language so i'm wondering about the
process of finding a common language
and then i was interested in if if you
might suggest that your research would
come up with a new language
or like should we keep calling this
technology autonomous or should we come
up with new way
to to i so so my background is in
in the humanities and so for me language
is so
important because because it really
shapes the way we think
and you're absolutely right you know we
use the same
words to mean different things even when
we say design we mean different things
so when we say
system you know it's it's it's a
it's a different thing so um there is
something to be said about clarifying
when you work interdisciplinary the
meaning of the basic terms that you're
using
but yes we we're hoping to to
in our own field you know for referring
design interaction design broadly
broadly defined that's include social
service design
to to develop a new vocabulary
and so you know notions of
core performance um
we're looking into notions of and dave
will be in the intentional interaction
right so how do you conceptualize the
interaction that
is actually responding to a multiplicity
of
different and possibly also conflicting
intentions
um kind of interface you need to develop
for that um
um this idea also of
responsibility they might not be exactly
these terms
but uh we'll try to capture
our learnings and and the
way that we are re-conceptualizing
particular
aspects um of the design process
also with a new vocabulary that's that's
the ambition
yeah i think it's important like i hate
using
i hated it forever but i don't have
another you know the only other word i
can use is people
but but it doesn't you know people could
be
the the person that's interacting with
the system it could be the stakeholder
that
you know has as a voice in determining
um what kind of system should be in
place
um in the first place so yeah
we don't have an alternative for that
thanks very very important yes
okay we might have time for one last
question
if there is one or
[Music]
yes it doesn't look like
there are other questions it was a very
nice presentation
and again yeah i think all the
the summer school programs will be very
interesting for the network of ai tech
so
we will thank you very share
sorry i just wanted to you know thank
the boring
and dave and madeline and rule for the
questions
and helping me think along with you
on these really important topics i hope
that
you do as you said with with with the
program in place and uh via ai tech that
there will be ways to
you know bring our energies together on
the very concrete projects
so thank you very much again elisa and
to everyone for participating
see you next week
bye bye bye thank you thank you
you |
2a164897-61c2-40a2-9582-8ea45cd55b4e | trentmkelly/LessWrong-43k | LessWrong | Intelligence without causality
TL;DR
In this work, I explain how a mind can function in a universe devoid of cause and effect. I show that such a design can accommodate arbitrary restrictions that the laws of physics might happen to place on its outputs.
I originally decided to write this post when I decided that I wanted to refer to this idea within something else that I intend to write, and then realized that I couldn't think of anywhere that the whole idea was clearly expressed.
Imagine a acausal universe, as described in https://www.lesswrong.com/posts/o5F2p3krzT4JgzqQc/causal-universes (The rest of this post probably won't make sense if you haven't read and understood this first)
What kind of mind design would work in such a universe. An acausal mind doesn't have separate inputs and outputs, it just has acausal connections, which for the purposes of this post, I call aputs.
An aput is a correlation between some part of the external world and the AI algorithm that is enforced by the laws of physics. Lets assume that all the aputs are binary, true or false. Let B={0,1} The AI is then a black box that is consistent with some but not necessarily all possible aputs. In terms of the exponentially big computer simulating the universe, the AI has a set of aputs a1...an and the AI has an action set S. S⊂Bn. The computer simulating the universe searches through all possible bitstrings. If the subset of bits that consist of the AI's aputs is not in S, then the bitstring is discarded. Of course, the AI might not be a black box to the physics of the universe. The AI might contain some internal variables, with multiple consistency checks relating them to the surroundings. The AI might not be guaranteed existence at all. If the state of the world where the AI is never turned on is self consistent, there isn't anything the AI can do about it.
Of course, sometimes the laws of physics put constraints on S. These physical constraints can be modeled by forcing the AI to pick S∈C for some set C determined by |
9fa34e7f-3053-4942-ba7c-b752581b9346 | StampyAI/alignment-research-dataset/agentmodels | Tutorial: Modeling Agents with Probabilistic Programs | Modeling Agents with Probabilistic Programs
---
layout: chapter
title: "Reinforcement Learning to Learn MDPs"
description: RL for Bandits, Thomson Sampling for learning MDPs.
---
## Introduction
Previous chapters assumed that the agent already knew the structure of the environment. In MDPs, the agent knows everything about the environment and just needs to compute a good plan. In POMDPs, the agent is ignorant of some hidden state but knows how the environment works *given* this hidden state. Reinforcement Learning (RL) methods apply when the agent doesn't know the structure of the environment. For example, suppose the agent faces an unknown MDP. Provided the agent observes the reward/utility of states, RL methods will eventually converge on the optimal policy for the MDP. That is, RL eventually learns the same policy that an agent with full knowledge of the MDP would compute.
RL has been one of the key tools behind recent major breakthroughs in AI, such as defeating humans at Go refp:silver2016mastering and learning to play videogames from only pixel input refp:mnih2015human. This chapter applies RL to learning discrete MDPs. It's possible to generalize RL techniques to continuous state and action spaces and also to learning POMDPs refp:jaderberg2016reinforcement but that's beyond the scope of this tutorial.
## Reinforcement Learning for Bandits
The previous chapter <a href="/chapters/3c-pomdp.html#bandits">introduced</a> the Multi-Arm Bandit problem. We computed the Bayesian optimal solution to Bandit problems by treating them as POMDPs. Here we apply Reinforcement Learning to Bandits. RL agents won't perform optimally but they often rapidly converge to the best arm and RL techniques are highly scalable and simple to implement. (In Bandits the agent already knows the structure of the MDP. So Bandits does not showcase the ability of RL to learn a good policy in a complex unknown MDP. We discuss more general RL techniques below).
Outside of this chapter, we use term "utility" (e.g. in the <a href="/chapters/3a-mdp.html#mdp">definition</a> of an MDP) rather than "reward". This chapter follows the convention in Reinforcement Learning of using "reward".
### Softmax Greedy Agent
This section introduces an RL agent specialized to Bandit: a "greedy" agent with softmax action noise. The Softmax Greedy agent updates beliefs about the hidden state (the expected rewards for the arms) using Bayesian updates. Yet instead of making sequential plans that balance exploration (e.g. making informative observations) with exploitation (gaining high reward), the Greedy agent takes the action with highest *immediate* expected return[^greedy] (up to softmax noise).
We measure the agent's performance on Bernoulli-distributed Bandits by computing the *cumulative regret* over time. The regret for an action is the difference in expected returns between the action and the objective best action[^regret]. In the codebox below, the arms have parameter values ("coin-weights") of $$[0.5,0.6]$$ and there are 500 Bandit trials.
[^greedy]: The standard Epsilon/Softmax Greedy agent from the Bandit literature maintains point estimates for the expected rewards of the arms. In WebPPL it's natural to use distributions instead. In a later chapter, we will implement a more general Greedy/Myopic agent by extending the POMDP agent.
[^regret]:The "regret" is a standard Frequentist metric for performance. Bayesian metrics, which take into account the agent's priors, are beyond the scope of this chapter.
~~~~
///fold:
var cumsum = function (xs) {
var acf = function (n, acc) { return acc.concat( (acc.length > 0 ? acc[acc.length-1] : 0) + n); }
return reduce(acf, [], xs.reverse());
}
///
// Define Bandit problem
// Pull arm0 or arm1
var actions = [0, 1];
// Given a state (a coin-weight p for each arm), sample reward
var observeStateAction = function(state, action){
var armToCoinWeight = state;
return sample(Bernoulli({p : armToCoinWeight[action]}))
};
// Greedy agent for Bandits
var makeGreedyBanditAgent = function(params) {
var priorBelief = params.priorBelief;
// Update belief about coin-weights from observed reward
var updateBelief = function(belief, observation, action){
return Infer({ model() {
var armToCoinWeight = sample(belief);
condition( observation === observeStateAction(armToCoinWeight, action))
return armToCoinWeight;
}});
};
// Evaluate arms by expected coin-weight
var expectedReward = function(belief, action){
return expectation(Infer( { model() {
var armToCoinWeight = sample(belief);
return armToCoinWeight[action];
}}))
}
// Choose by softmax over expected reward
var act = dp.cache(
function(belief) {
return Infer({ model() {
var action = uniformDraw(actions);
factor(params.alpha * expectedReward(belief, action))
return action;
}});
});
return { params, act, updateBelief };
};
// Run Bandit problem
var simulate = function(armToCoinWeight, totalTime, agent) {
var act = agent.act;
var updateBelief = agent.updateBelief;
var priorBelief = agent.params.priorBelief;
var sampleSequence = function(timeLeft, priorBelief, action) {
var observation = (action !== 'noAction') &&
observeStateAction(armToCoinWeight, action);
var belief = ((action === 'noAction') ? priorBelief :
updateBelief(priorBelief, observation, action));
var action = sample(act(belief));
return (timeLeft === 0) ? [action] :
[action].concat(sampleSequence(timeLeft-1, belief, action));
};
return sampleSequence(totalTime, priorBelief, 'noAction');
};
// Agent params
var alpha = 30
var priorBelief = Infer({ model () {
var p0 = uniformDraw([.1, .3, .5, .6, .7, .9]);
var p1 = uniformDraw([.1, .3, .5, .6, .7, .9]);
return { 0:p0, 1:p1};
} });
// Bandit params
var numberTrials = 500;
var armToCoinWeight = { 0: 0.5, 1: 0.6 };
var agent = makeGreedyBanditAgent({alpha, priorBelief});
var trajectory = simulate(armToCoinWeight, numberTrials, agent);
// Compare to random agent
var randomTrajectory = repeat(
numberTrials,
function(){return uniformDraw([0,1]);}
);
// Compute agent performance
var regret = function(arm) {
var bestCoinWeight = _.max(_.values(armToCoinWeight))
return bestCoinWeight - armToCoinWeight[arm];
};
var trialToRegret = map(regret,trajectory);
var trialToRegretRandom = map(regret, randomTrajectory)
var ys = cumsum( trialToRegret)
print('Number of trials: ' + numberTrials);
print('Total regret: [GreedyAgent, RandomAgent] ' +
sum(trialToRegret) + ' ' + sum(trialToRegretRandom))
print('Arms pulled: ' + trajectory);
viz.line(_.range(ys.length), ys, {xLabel:'Time', yLabel:'Cumulative regret'});
~~~~
How well does the Greedy agent do? It does best when the difference between arms is large but does well even when the arms are close. Greedy agents perform well empirically on a wide range of Bandit problems refp:kuleshov2014algorithms and if their noise decays over time they can achieve asymptotic optimality. In contrast to the optimal POMDP agent from the previous chapter, the Greedy Agent scales well in both number of arms and trials.
>**Exercises**:
> 1. Modify the code above so that it's easy to repeatedly run the same agent on the same Bandit problem. Compute the mean and standard deviation of the agent's total regret averaged over 20 episodes on the Bandit problem above. Use WebPPL's library [functions](http://docs.webppl.org/en/master/functions/arrays.html).
> 2. Set the softmax noise to be low. How well does the Greedy Softmax agent do? Explain why. Keeping the noise low, modify the agent's priors to be overly "optimistic" about the expected reward of each arm (without changing the support of the prior distribution). How does this optimism change the agent's performance? Explain why. (An optimistic prior assigns a high expected reward to each arm. This idea is known as "optimism in the face of uncertainty" in the RL literature.)
> 3. Modify the agent so that the softmax noise is low and the agent has a "bad" prior (i.e. one that assigns a low probability to the truth) that is not optimistic. Will the agent always learn the optimal policy (eventually?) If so, after how many trials is the agent very likely to have learned the optimal policy? (Try to answer this question without doing experiments that take a long time to run.)
### Posterior Sampling
Posterior sampling (or "Thompson sampling") is the basis for another algorithm for Bandits. This algorithm generalizes to arbitrary discrete MDPs, as we show below. The Posterior-sampling agent updates beliefs using standard Bayesian updates. Before choosing an arm, it draws a sample from its posterior on the arm parameters and then chooses greedily given the sample. In Bandits, this is similar to Softmax Greedy but without the softmax parameter $$\alpha$$.
>**Exercise**:
> Implement Posterior Sampling for Bandits by modifying the code above. (You only need to modify the `act` function.) Compare the performance of Posterior Sampling to Softmax Greedy (using the value for $$\alpha$$ in the codebox above). You should vary the `armToCoinWeight` parameter and the number of arms. Evaluate each agent by computing the mean and standard deviation of rewards averaged over many trials. Which agent is better overall and why?
<!-- TODO maybe we should include this code so casual readers can try it? -->
<!-- Modified act function:
var act = dp.cache(
function(belief) {
var armToCoinWeight = sample(belief); // sample coin-weights
return Infer({ model() {
var action = uniformDraw(actions);
factor(1000 * armToCoinWeight[action]) // pick arm with max weight
return action;
}});
});
-->
-----------
## RL algorithms for MDPs
The RL algorithms above are specialized to Bandits and so they aren't able to learn an arbitrary MDP. We now consider algorithms that can learn any discrete MDP. There are two kinds of RL algorithm:
1. *Model-based* algorithms learn an explicit representation of the MDP's transition and reward functions. These representations are used to compute a good policy.
2. *Model-free* algorithms do not explicitly represent or learn the transition and reward functions. Instead they explicitly represent either a value function (i.e. an estimate of the $$Q^*$$-function) or a policy.
The best known RL algorithm is [Q-learning](https://en.wikipedia.org/wiki/Q-learning), which works both for discrete MDPs and for MDPs with high-dimensional state spaces (where "function approximation" is required). Q-learning is a model-free algorithm that directly learns the expected utility/reward of each action under the optimal policy. We leave as an exercise the implementation of Q-learning in WebPPL. Due to the functional purity of WebPPL, a Bayesian version of Q-learning is more natural and in the spirit of this tutorial. See, for example "Bayesian Q-learning" refp:dearden1998bayesian and this review of Bayesian model-free approaches refp:ghavamzadeh2015bayesian.
<!-- CODEBOX: Bayesian Q-learning. Apply to gridworld where goal is to get otherside of the and maybe there are some obstacles. For small enough gridworld, POMDP agent will be quicker. -->
<!-- ### Policy Gradient -->
<!-- - Directly represent the policy. Stochastic function from states to actions. (Can put prior over that the params of stochastic function. Then do variational inference (optimization) to find params that maximize score.) -->
<!-- Applied to Bandits. The policy is just a multinomial probability for each arm. You run the policy. Then take gradient in direction that improves the policy. (Variational approximaton will be exact in this case.) Gridworld example of get from top left to bottom right (not knowing initially where the goal state is located). You are learning a distribution over actions in these discrete location. So you have a multinomial for each state. -->
### Posterior Sampling Reinforcement Learning (PSRL)
Posterior Sampling Reinforcemet Learning (PSRL) is a model-based algorithm that generalizes posterior-sampling for Bandits to discrete, finite-horizon MDPs refp:osband2016posterior. The agent is initialized with a Bayesian prior distribution on the reward function $$R$$ and transition function $$T$$. At each episode the agent proceeds as follows:
> 1. Sample $$R$$ and $$T$$ (a "model") from the distribution. Compute the optimal policy for this model and follow it until the episode ends.
> 2. Update the distribution on $$R$$ and $$T$$ on observations from the episode.
How does this agent efficiently balances exploration and exploitation to rapidly learn the structure of an MDP? If the agent's posterior is already concentrated on a single model, the agent will mainly "exploit". If the agent is uncertain over models, then it will sample various different models in turn. For each model, the agent will visit states with high reward on that model and so this leads to exploration. If the states turn out not to have high reward, the agent learns this and updates their beliefs away from the model (and will rarely visit the states again).
The PSRL agent is simple to implement in our framework. The Bayesian belief-updating re-uses code from the POMDP agent: $$R$$ and $$T$$ are treated as latent state and are observed every state transition. Computing the optimal policy for a sampled $$R$$ and $$T$$ is equivalent to planning in an MDP and we can re-use our MDP agent code.
We run the PSRL agent on Gridworld. The agent knows $$T$$ but does not know $$R$$. Reward is known to be zero everywhere but a single cell of the grid. The actual MDP is shown in Figure 1, where the time-horizon is 8 steps. The true reward function is specified by the variable `trueLatentReward` (where the order of the rows is the inverse of the displayed grid). The displays shows the agent's trajectory on each episode (where the number of episodes is set to 10).
<img src="/assets/img/3d-gridworld.png" alt="gridworld ground-truth" style="width: 400px;"/>
**Figure 1:** True latent reward for Gridworld below. Agent receives reward 1 in the cell marked "G" and zero elsewhere.
~~~~
///fold:
// Construct Gridworld (transitions but not rewards)
var ___ = ' ';
var grid = [
[ ___, ___, '#', ___],
[ ___, ___, ___, ___],
[ '#', ___, '#', '#'],
[ ___, ___, ___, ___]
];
var pomdp = makeGridWorldPOMDP({
grid,
start: [0, 0],
totalTime: 8,
transitionNoiseProbability: .1
});
var transition = pomdp.transition
var actions = ['l', 'r', 'u', 'd'];
var utility = function(state, action) {
var loc = state.manifestState.loc;
var r = state.latentState.rewardGrid[loc[0]][loc[1]];
return r;
};
// Helper function to generate agent prior
var getOneHotVector = function(n, i) {
if (n==0) {
return [];
} else {
var e = 1*(i==0);
return [e].concat(getOneHotVector(n-1, i-1));
}
};
///
var observeState = function(state) {
return utility(state);
};
var makePSRLAgent = function(params, pomdp) {
var utility = params.utility;
// belief updating: identical to POMDP agent from Chapter 3c
var updateBelief = function(belief, observation, action){
return Infer({ model() {
var state = sample(belief);
var predictedNextState = transition(state, action);
var predictedObservation = observeState(predictedNextState);
condition(_.isEqual(predictedObservation, observation));
return predictedNextState;
}});
};
// this is the MDP agent from Chapter 3a
var act = dp.cache(
function(state) {
return Infer({ model() {
var action = uniformDraw(actions);
var eu = expectedUtility(state, action);
factor(1000 * eu);
return action;
}});
});
var expectedUtility = dp.cache(
function(state, action) {
return expectation(
Infer({ model() {
var u = utility(state, action);
if (state.manifestState.terminateAfterAction) {
return u;
} else {
var nextState = transition(state, action);
var nextAction = sample(act(nextState));
return u + expectedUtility(nextState, nextAction);
}
}}));
});
return { params, act, expectedUtility, updateBelief };
};
var simulatePSRL = function(startState, agent, numEpisodes) {
var act = agent.act;
var updateBelief = agent.updateBelief;
var priorBelief = agent.params.priorBelief;
var runSampledModelAndUpdate = function(state, priorBelief, numEpisodesLeft) {
var sampledState = sample(priorBelief);
var trajectory = simulateEpisode(state, sampledState, priorBelief, 'noAction');
var newBelief = trajectory[trajectory.length-1][2];
var newBelief2 = Infer({ model() {
return extend(state, {latentState : sample(newBelief).latentState });
}});
var output = [trajectory];
if (numEpisodesLeft <= 1){
return output;
} else {
return output.concat(runSampledModelAndUpdate(state, newBelief2,
numEpisodesLeft-1));
}
};
var simulateEpisode = function(state, sampledState, priorBelief, action) {
var observation = observeState(state);
var belief = ((action === 'noAction') ? priorBelief :
updateBelief(priorBelief, observation, action));
var believedState = extend(state, { latentState : sampledState.latentState });
var action = sample(act(believedState));
var output = [[state, action, belief]];
if (state.manifestState.terminateAfterAction){
return output;
} else {
var nextState = transition(state, action);
return output.concat(simulateEpisode(nextState, sampledState, belief, action));
}
};
return runSampledModelAndUpdate(startState, priorBelief, numEpisodes);
};
// Construct agent's prior. The latent state is just the reward function.
// The "manifest" state is the agent's own location.
// Combine manifest (fully observed) state with prior on latent state
var getPriorBelief = function(startManifestState, latentStateSampler){
return Infer({ model() {
return {
manifestState: startManifestState,
latentState: latentStateSampler()};
}});
};
// True reward function
var trueLatentReward = {
rewardGrid : [
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 1]
]
};
// True start state
var startState = {
manifestState: {
loc: [0, 0],
terminateAfterAction: false,
timeLeft: 8
},
latentState: trueLatentReward
};
// Agent prior on reward functions (*getOneHotVector* defined above fold)
var latentStateSampler = function() {
var flat = getOneHotVector(16, randomInteger(16));
return {
rewardGrid : [
flat.slice(0,4),
flat.slice(4,8),
flat.slice(8,12),
flat.slice(12,16) ]
};
}
var priorBelief = getPriorBelief(startState.manifestState, latentStateSampler);
// Build agent (using *pomdp* object defined above fold)
var agent = makePSRLAgent({ utility, priorBelief, alpha: 100 }, pomdp);
var numEpisodes = 10
var trajectories = simulatePSRL(startState, agent, numEpisodes);
var concatAll = function(list) {
var inner = function (work, i) {
if (i < list.length-1) {
return inner(work.concat(list[i]), i+1)
} else {
return work;
}
}
return inner([], 0);
}
var badState = [[ { manifestState : { loc : "break" } } ]];
var trajectories = map(function(t) { return t.concat(badState);}, trajectories);
viz.gridworld(pomdp, {trajectory : concatAll(trajectories)});
~~~~
<!--
TODOS: <br>
Gridworld maze example is unknown transition function. So requires a change to code below (which assumes same transitions for agent and simulate function. Clumpy reward uses same model below but has rewards be correlated. Should be easy to implement a simple version of this. Visualization should depict restaurants (which have non-zero rewards).
Gridworld maze: Agent is in a maze in perfect darkness. Each square could be wall or not with even probability. Agent has to learn how to escape. Maze could be fairly big but want a fairly short way out. Model for T.
Clumpy reward model. Gridworld with hot and cold regions that clump. Agent starts in a random location. If you assume clumpiness, then agent will go first to unvisited states in good clumps. Otherwise, when they start in new places they'll explore fairly randomly. Could we make a realistic example like this? (Once you find some bad spots in one region. You don't explore anywhere near there for a long time. That might be interesting to look at. Could have some really cold regions near the agent.
Simple version: agent starts in the middle. Has enough time to go to a bunch of different regions. Regions are clumped in terms of reward. Could think of this a city, cells with reward are food places. There are tourist areas with lots of bad food, foodie areas with good food, and some places with not much food. Agent without clumping tries some bad regions first and keeps going back to try all the places in those regions. Agent with clumping tries them once and then avoids. [Problem is how to implement the prior. Could use enumeration but keep number of possibilities fairly small. Could use some approximate method and just do a batch update at the end of each episode. That will require some extra code for the batch update.]
-->
----------
<!--
Table:
Structure given / unknown MDP POMDP
KNOWN Planning (Solve exactly DP) POMDP solve (Belief MDP)
LEARNED POMDP solver (exact Bayesian), RL POMDP solve
-->
<!-- ### RL and Inferring Preferences
Most IRL is actually inverse planning in an MDP. Assumption is that it's an MDP and human already knows R and T. Paper on IRL for POMDPs: assume agent knows POMDP structure. Much harder inference problem.
We have discussion of biases that humans have: hyperbolic discounting, bounded planning. These are relevant even if human knows structure of world and is just trying to plan. But often humans don't know structure of world. Better to think of world as RL problem where MDP or POMDP also is being learned. Problem is that there are many RL algorithms, they generally involve lots of randomness or arbitrary parameters. So hard to make precise predictions. Need to coarsen. Show example of this with Thompson sampling for Bandits.
Could discuss interactive RL. Multi-agent case. It's beyond scope of modeling.
-->
### Footnotes
|
6635be10-6c0b-4a07-a871-220d915e1ac4 | trentmkelly/LessWrong-43k | LessWrong | Should we postpone getting a booster due to Omicron, till there are Omicron-specific boosters?
Wai Dai writes over in the comments at Omicron Variant Post #1: We’re F***ed, It’s Never Over:
> From your linked paper https://www.cell.com/trends/immunology/fulltext/S1471-4906(21)00177-0: [Wai Dai's quote from the paper in italics]
>
> Experience from the IAV field indicates that multiple serial exposures to variant IAV strains might imprint (focus) the immune response to a narrow array of well-conserved viral epitopes relative to emerging subsequent strains and this might eventually reduce vaccine effectiveness. Randomizing multiple cohorts over time to either serial or partial immunization might also yield important neutralization data, although powering such studies for efficacy would be challenging. Larger observational studies of cohorts receiving serial immunizations might therefore also prove useful.
>
> This suggests that it may be a good idea to avoid getting a booster now (with the ancestral spike protein) and wait for an Omicron-specific booster instead. (Use other precautions to avoid infection in the meantime.)
Given that this is an important question for many of us, I consider it useful to have the conversation on the top level instead of only in the comments of the other thread. |
dbe51163-9537-40fb-a090-4a3afc3b3245 | trentmkelly/LessWrong-43k | LessWrong | Intuitive Lagrangian Mechanics
Lagrangian mechanics is an alternative formulation of Newtonian mechanics. Newtonian mechanics solves movements using successive approximations. Lagrangian mechanics figures out the whole path all at once. Lagrangian mechanics is useful for solving problems subject to certain constraints. It is also a prerequisite to understanding quantum field theory and the path integral formulation of quantum mechanics.
In the classical formulation of Lagrangian mechanics, the Lagrangian L is the difference between a particle's kinetic energy T and potential energy V.
L=T−V
The action S of a particle is the path integral of its Lagrangian.
S=∫tftiLdt
The path of a particle extremizes S.
δS=0
We therefore derive the Euler–Lagrange equations.
ddt(∂L∂˙qj)=∂L∂qj
Does this make sense to you? Because subtracting potential energy from kinetic energy does not make conceptual sense to me. Besides, what, conceptually, is "action" and why should it be extremized? Lagrangian mechanics make better sense in special relativity.
Special Relativity
To make things more intuitive, let's look at Lagrangian mechanics in the context of special relativity, from which classical Lagrangian mechanics is an approximation. I will set c=1 because space and time ought to have the same units. I will set m0=1 for convenience because the rest mass of our particle does not change.
The most important number in special relativity is the Lorentz factor γ, the instantaneous ratio of coordinate time t to proper time τ.
γ=dtdτ=1√1−v2
We can[1] take the action S to be the length of the particle's world line between proper times τ1 and τ2. (Note that γ is a function of t.)
S=∫τ2τ1dτ=∫t2t11γdt
Locally maximizing S equals locally maximizing the proper time experienced by a particle. The Lagrangian L is the expression inside of the coordinate time integral.
L=γ−1=√1−v2=α
If you think of all matter particles as moving through spacetime at a speed of 1 along the hypotenuse of a right triangle with one spatia |
19710522-4e5c-4054-bf87-b36eed956158 | trentmkelly/LessWrong-43k | LessWrong | How do people do remote research collaborations effectively?
I think remote research collaboration has a valley-of-inefficiency, where at mid-low levels of collaboration-competence it is much better to just do all research on your own rather than trying to split things up between multiple people (probably due to coordination/communication overhead and all the usual stuff).
While there's been a lot of posts here regarding advice for independent researchers, I haven't seen much for research collaborations (especially at small-scale, without a legible central leadership). People seem to do it pretty frequently/reliably though, e.g., all the forum posts with multiple authors.
Are there any advice/resources/procedures for effective remote research collaborations on the scale of 2-5 people? |
dceae613-106c-479a-86dd-561fb320774a | trentmkelly/LessWrong-43k | LessWrong | Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 110.
There is a site dedicated to the story at hpmor.com, which is now the place to go to find the authors notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is now not updating. The authors notes from chapter 77 onwards are on hpmor.com.)
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
> You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
>
> If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13. |
22cc0c31-cd2c-438a-9c67-6d627499665d | trentmkelly/LessWrong-43k | LessWrong | Survey Results: 10 Fun Questions for LWers
I posted a quick 10-question survey, and 100 users filled it out. Here are the results!
1. Pick the answer most people will not pick.
* HPMOR (26%)
* SlateStarCodex (26%)
* The Sequences (30%)
* A dust speck in the eye (18%)
Well done everyone who picked the dust speck! I myself won this, and am pleased.
My reasoning was that it was the most distinctive in type (the rest were all 'big things people read') and so would be considered obvious, thus rendering it non-obvious. I now overconfidently believe LWers operate at level 2, and I can always win by playing level 3. (I will test this some time again in the future.)
My housemates point out that we should all have rolled a 4-sided die and pick that option, with some chance for 100% of us to win if we got a perfect 25% on each of them. So now I feel a little bit sad, because I sure didn't think of that.
2. To the nearest 10, at least how much karma makes a post worth opening?
Plus one selection of "200+" not shown above.
The median answer was 20, and the mean was 28, the st dev was 25, all to the nearest whole number.
3. How much do you think most LWers believe in the thesis of Civilizational Inadequacy?
The average was 6.08, st dev was 1.4 (2.s.f).
Don't have much to say here.
4. How much do you believe in the thesis of civilizational inadequacy?
Average: 6.13. St dev: 1.7 (2.s.f).
On average, we had good self-models. Will be interesting to see how people's accuracy here correlates with the other questions.
5. Here are 10 terms that have been used on LW. Click the terms you have used in conversation (at least, say, 10 times) because of LessWrong (and other rationalist blogs).
Here they are, in order of how many clicks they got:
* Existential Risk (64)
* Coordination Problem (61)
* Bayesian Update (58)
* Common Knowledge (53)
* Counterfactual (51)
* Goodhart (47)
* Slack (38)
* Legibility (in the sense of James C. Scott's book "Seeing Like a State") (31)
* Asymmetric tools / asymmetric weapo |
7773eb66-0584-4a42-9c5d-d37646d3af7a | StampyAI/alignment-research-dataset/blogs | Blogs | MIRI AMA, and a talk on logical induction
Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments below or at **[Ask MIRI Anything](http://effective-altruism.com/ea/12r/ask_miri_anything_ama/)**.
We’ve also posted a more detailed version of our [fundraiser overview and case for MIRI](http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/) at the EA Forum.
In other news, we have a new talk out with an overview of “[Logical Induction](https://intelligence.org/2016/09/12/new-paper-logical-induction/),” our recent paper presenting (as Critch puts it) “a financial solution to the computer science problem of metamathematics”:
This version of the talk goes into more technical detail than our [previous talk](https://www.youtube.com/watch?v=tsJd_CdBA3I) on logical induction.
For some recent discussions of the new framework, see [Shtetl-Optimized](http://www.scottaaronson.com/blog/?p=2918), [n-Category Café](https://golem.ph.utexas.edu/category/2016/09/logical_uncertainty_and_logica.html), and [Hacker News](https://news.ycombinator.com/item?id=12485080).
The post [MIRI AMA, and a talk on logical induction](https://intelligence.org/2016/10/11/miri-ama-and-a-talk-on-logical-induction/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
a5ddfa47-b85f-4417-93df-188a3b4c4cf3 | trentmkelly/LessWrong-43k | LessWrong | Fun With DAGs
Crossposted here.
Directed acyclic graphs (DAGs) turn out to be really fundamental to our world. This post is just a bunch of neat stuff you can do with them.
UTILITY FUNCTIONS
Suppose I prefer chicken over steak, steak over pork, and pork over chicken. We can represent my preferences with a directed graph like this:
This poses a problem. If I’m holding a steak, then I’ll pay a penny to upgrade it to chicken — since I prefer chicken over steak. But then I’ll pay another penny to upgrade my chicken to pork, and another to upgrade it to steak, and then we’re back to the beginning and I’ll pay to upgrade the steak to chicken again! Someone can stand there switching out chicken/pork/steak with me, and collect my money all day.
Any time our preference graph contains cycles, we risk this sort of “money-pump”. So, to avoid being money-pumped, let’s make our preferences not contain any cycles:
Now our preferences are directed (the arrows), acyclic (no cycles), and a graph (arrow-and-box drawing above). It’s a DAG!
Notice that we can now sort the DAG nodes, spatially, so that each node only points to higher-up nodes:
Reading off the nodes from bottom to top, the order is: chicken, pork, steak.
This is called a topological sort, or topo sort for short. We can always topo sort a DAG — there is always some order of the nodes such that each node only points to nodes after it in the sort order.
In this case, the topo sort orders our preferences. We prefer pork over everything which comes before pork in the topo sort, and so forth. We could even number the items according to their topo-sorted order: 1 for chicken, 2 for pork, 3 for steak. We prefer whichever thing has the higher number, i.e. the higher position in the topo-sort order.
This is called a utility function. This utility function U is defined by U(chicken) = 1, U(pork) = 2, U(steak) = 3. We prefer things with higher utility — in other words, we want to maximize the utility function!
To summarize: if ou |
009500e8-3a8c-4d12-8d04-a7b7a35ed53b | trentmkelly/LessWrong-43k | LessWrong | Are stereotypes ever irrational?
Harvard's undergraduate admission office will tell you "There is no typical Harvard student." This platitude ticks me off. Of course there's such a thing as a typical Harvard student! Harvard students aren't magically exceptions to the laws of probability. Just as a robin is a more typical bird than an ostrich, some Harvard students are especially typical. Let's say (I'm not actually looking at data here) that most Harvard students are rich, most have high SAT scores, most are white, and most are snobbish. Note: I am not a Harvard student. :)
Now, a very typical Harvard student would be rich AND white AND smart AND snobbish. But observe that a given student has a smaller probability of being all of these than of just, say, being rich. If you add enough majority characteristics, eventually the "typical" student will become very rare. Even if there's a 99% probability of having any one of these characteristics, 0.99n ->0 as n goes to infinity. Some Harvard students are typical; but extremely typical Harvard students are rare. If you encountered a random Harvard student, and expected her to have all the majority characteristics of Harvard students, you could very well be wrong.
So far, so obvious. But who would make that mistake?
You, that's who. The conjunction fallacy is the tendency of humans to think specific conditions are more probable than general conditions. People are more likely to believe that a smart, single, politically active woman is a feminist bank teller than just a bank teller. Policy experts (in the 1980's) were more likely to think that the USSR would invade Poland and that the US would break off relations with the USSR, than either one of these events alone. Of course, this is mistaken: the probability of A and B is always less than or equal to the probability of A alone. The reason we make this mistake is the representativeness heuristic: a specific, compelling story that resembles available data is judged as more probable tha |
c0f7becc-3676-4799-a2cd-ecb8d7f5f7a4 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How major governments can help with the most important century
I’ve been writing about tangible things we can do today to help the [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) go well. Previously, I wrote about [helpful messages to spread](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/CcJsh4JcxEqYDaSte/); [how to help via full-time work](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/njD2PurEKDEZcMLKZ/); and [how major AI companies can help](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/i6btyefRRX23yCpnP/).
What about major governments[1](#fn1) - what can they be doing today to help?
I think governments could play crucial roles in the future. For example, see my discussion of [standards and monitoring](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_).
However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.
* I think we still know very little about what key future situations will look like, which is why my discussion of AI companies ([previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/i6btyefRRX23yCpnP/)) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.
* I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by *not* committing to specific regulations, processes, projects, etc. now.
* I worry that governments, at least as they stand today, are far too oriented toward the [competition frame](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__competition__frame) (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the [caution frame](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__caution__frame) (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.[2](#fn2))
(Click to expand) The “competition” frame vs. the “caution” frame”
In a [previous piece](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M/), I talked about two contrasting frames for how to make the best of the most important century:
**The caution frame.** This frame emphasizes that a furious race to develop powerful AI could end up making *everyone* worse off. This could be via: (a) AI forming [dangerous goals of its own](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) and [defeating humanity entirely](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/); (b) humans racing to gain power and resources and “[lock in](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH#Lock_in)” their values.
Ideally, everyone with the potential to build something [powerful enough AI](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj/) would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:
* Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of [Pugwash](https://en.wikipedia.org/wiki/Pugwash_Conferences_on_Science_and_World_Affairs) (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.
* Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward [standards and monitoring](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_), etc. Slowing things down in this manner could buy more time to do research on avoiding [misaligned AI](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#Worst__Misaligned_AI), more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity
**The “competition” frame.** This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.
* If something like [PASTA](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj/) is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.
* In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.
This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.
Some people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:
* Increasing the odds that the first PASTA systems are built in countries that are e.g. less authoritarian, which could mean e.g. pushing for more investment and attention to AI development in these countries.
* Supporting and trying to speed up AI labs run by people who are likely to make wise decisions (about things like how to engage with governments, what AI systems to publish and deploy vs. keep secret, etc.)
**Tension between the two frames.** People who take the "caution" frame and people who take the "competition" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
For example, people in the "competition" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the "caution" frame, haste is one of the main things to avoid. People in the "competition" frame often favor adversarial foreign relations, while people in the "caution" frame often want foreign relations to be more cooperative.
That said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a **general fear that the “competition” frame is going to be overrated by default** for a number of reasons, as I discuss [here](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#Why_I_fear__competition__being_overrated__relative_to__caution_).
Because of these concerns, I don’t have a ton of tangible suggestions for governments as of now. But here are a few.
My first suggestion is to **avoid premature actions**, including ramping up research on how to make AI systems more capable.
My next suggestion is to **build up the right sort of personnel and expertise for challenging future decisions.**
* Today, my impression is that there are relatively few people in government who are seriously considering the highest-stakes risks and thoughtfully balancing both “caution” and “competition” considerations (see directly above). I think it would be great if that changed.
* Governments can invest in efforts toeducate their personnel about these issues, and can try to hire key personnel who are already on the knowledgeable and thoughtful side about them (while also watching out for some of the [pitfalls of spreading messages about AI](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/CcJsh4JcxEqYDaSte/)).
Another suggestion is to **generally avoid putting terrible people in power.** Voters can help with this!
My top non-”meta” suggestion for a given government is to **invest in intelligence on the state of AI capabilities in other countries.** If other countries are getting close to deploying dangerous AI systems, this could be essential to know; if they aren’t, that could be essential to know as well, in order to avoid premature and paranoid racing to deploy powerful AI.
A few other things that seem worth doing and relatively low-downside:
* **Fund [alignment research](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd)** (ideally alignment research targeted at the [most crucial challenges](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5)) via agencies like the National Science Foundation and DARPA. These agencies have huge budgets (the two of them combined spend over $10 billion per year), and have major impacts on research communities.
* **Keep options open for future monitoring and regulation** (see [this Slow Boring piece](https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy) for an example).
* **Build relationships with leading AI researchers and organizations**, so that future crises can be handled relatively smoothly.
* **Encourage and amplify investments in information security.** My impression is that governments are often better than companies at highly advanced information security (preventing cyber-theft even by determined, well-resourced opponents). They could help with, and even enforce, strong security at key AI companies.
Footnotes
---------
---
1. I’m centrally thinking of the US, but other governments with lots of geopolitical sway and/or major AI projects in their jurisdiction could have similar impacts. [↩](#fnref1)
2. When discussing [recommendations for companies](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/i6btyefRRX23yCpnP/), I imagine companies that are already dedicated to AI, and I imagine individuals at those companies who can have a large impact on the decisions they make.
By contrast, when discussing recommendations for governments, a lot of what I’m thinking is: “Attempts to promote productive actions on AI will raise the profile of AI *relative to other issues the government could be focused on*; furthermore, it’s much harder for even a very influential individual to predict how their actions will affect what a government ultimately does, compared to a company.” [↩](#fnref2) |
aafe823c-ef07-48fc-b655-f811e0f6abc4 | trentmkelly/LessWrong-43k | LessWrong | Treat conspicuous consumption like hard nipples?
Robin asked, in relation to correlations between sexual prompts and apparently innocent behaviors:
> So what would happen if we all became conscious of the above behaviors being strong clues that men are in fact actively trying for promiscuous short term sex? Would such behaviors reduce, would long term relations become less exclusive, or what? Maybe we just couldn’t admit that these are strong clues?
It isn’t usually activeness that people mind most in others’ wrongdoings, but conscious intention. These usually coincide, but when they don’t we are much more forgiving of unintentional actions, however active. So if it became known that an interest in cars or charity was a symptom of sexual desire I think it would be seen as similar to those other ‘actions’ that show sexual desire; a bad message to your spouse about your feelings, but far from a conscious attempt to be unfaithful.
While it’s not a crime to have physical signs of arousal about the wrong person, I assume it’s considered more condemnable to purposely show them off to said person. I think the same would go for the changes in interests above; if everyone knew that those behaviours were considered signs of sexual intent, realising you had them and purposely allowing potential lovers to see them would be seen as active unfaithfulness, so you would be expected to curb or hide them. Most people would want to hide them anyway, because showing them would no longer send the desired signal. Other activities are presumably popular for those interested in sex exactly because conspicuously wanting sex doesn’t get sex so well. If certain interests became a known signal for wanting sex they would be no more appealing than wearing a sign that says ‘I want sex’. This would be a shame for all those who are interested in charity and consumerism less contingently.
|
aaf32dc9-0fbd-40a1-ada8-36298b20efc6 | trentmkelly/LessWrong-43k | LessWrong | [Link] A rational response to the Paris attacks and ISIS
Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer , a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline. |
d3769946-1e82-4235-a9df-3437f3e70b85 | trentmkelly/LessWrong-43k | LessWrong | Space-worthiness
A recent post about the Fermi paradox left me wondering about relative difficulties of getting into space, though I don't think it affects those specific arguments.
People establishing a presence in space is difficult but at least plausible-- I'm talking about biological people as we are now, and being able to live and reproduce indefinitely without returning to Earth.
It would be easier if we were less massive, or our planet was less massive, or if we were more radiation resistant. It would be harder if these qualities were reversed, or if we needed a much denser atmosphere.There might come a point where it just isn't feasible for a species to get itself off its planet.
Is there any reasonable speculation about where we are likely to be on the ease-of-getting-into-space spectrum? |
906d113d-2c25-48a7-8c3a-bd23be2aa796 | trentmkelly/LessWrong-43k | LessWrong | "a dialogue with myself concerning eliezer yudkowsky" (not author)
This is a linkpost[1], but I don't think the author would be happy to get commentary from people here on their blog, so I've copied the post here. link is at the bottom, if you really really want to comment on the original; please don't, though.
----------------------------------------
red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.
blue: what did he even think was going to happen after the time piece? this is the kind of shit that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about consequentialism?
(i thought initially that blue was going to disagree with red but no, blue is just mad in a different way)
https://twitter.com/mattparlmer/status/1641232557374160897?s=20
red: it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.
[Note: link intentionally non-clickable, and author intentionally cropped [edit: whoops guess it wasn't? uh sorry], to respect author's separation from this community. If you want to see the whole tweet thread, which is in fact very interesting, here's the link: https://twitter.com/QiaochuYuan/status/1542781304621518848?s=20]
blue: speaking of consequentialism the man dedicated his entire life to trying |
b6250349-02ad-4135-938c-e31564edb689 | trentmkelly/LessWrong-43k | LessWrong | Are You a Solar Deity?
Max Muller was one of the greatest religious scholars of the 19th century. Born in Germany, he became fascinated with Eastern religion, and moved to England to be closer to the center of Indian scholarship in Europe. There he mastered English and Sanskrit alike to come out with the first English translation of the Rig Veda, the holiest book of Hinduism.
One of Muller's most controversial projects was his attempt to interpret all pagan mythologies as linked to one another, deriving from a common ur-mythology and ultimately from the celestial cycle. His tools were exhaustive knowledge of the myths of all European cultures combined with a belief in the interpretive power of linguistics.
What the significance of Orpheus' descent into the underworld to reclaim his wife's soul? The sun sets beneath the Earth each evening, and returns with renewed brightness. Why does Apollo love Daphne? Daphne is cognate with Sanskrit Dahana, the maiden of the dawn. The death of Hercules? It occurs after he's completed twelve labors (cf. twelve signs of zodiac) when he's travelling west (like the sun), he is killed by Deianeira (compare Sanskrit dasya-nari, a demon of darkness) and his body is cremated (fire = the sun). His followers extended the method to Jesus - who was clearly based on a lunar deity, since he spent three days dead and then returned to life, just as the new moon goes dark for three days and then reappears.
Muller's work was massively influential during his time, and many 19th century mythographers tried to critique his paradigm and poke holes in it. Some accused him of trying to destroy the mystery of religion, and others accused him of shoddy scholarship.
R.F. Littledale, an Anglican apologist, took a completely different route. He claimed that there was, in fact, no such person as Professor Max Muller, holder of the Taylorian Chair in Modern European Languages. All these stories about "Max Muller" were nothing but a thinly disguised solar myth.
Littledale begins |
ff1cfc5c-4dcd-4e1e-8ae6-69777cab211d | StampyAI/alignment-research-dataset/arxiv | Arxiv | Towards Industrial Private AI: A two-tier framework for data and model security
I Introduction
---------------
With various ubiquitous devices connected through base stations, access points, and Internet of Things (IoT) gateways, Industrial IoT (IIoT) has brought a technological shift to how modern industries acquire and share their data. The large-scale deployment of IoT nodes and devices provides the analysts with enormous amounts of data to develop intelligent decision-making systems useful for enterprise managers, technicians, operators, assembly line personnel, CEOs, and government personnel [Usman2020].
However, this AI revolution in the high-tech industry comes at the cost of private data sharing and a proposition that the user is willing to share it with industrial clouds. At the user level, this issue is signified due to the sensitivity and privacy of the shared data such as minute-to-minute location, health records, and sequenced genome. Similarly, when applied to industries, the data sharing might consist of activities being performed inside the factory, new product details, trade secrets, formulas, and warehouse images. Thus, the data sharing raises security concerns as the sensitive information could be disclosed or leaked while sending it to the cloud [Ster2022, Khowaja2021]. Examples of cyberattacks during 2021 or the leakage of location data using IBM weather services is a testimony of data security concerns111<https://www.idtheftcenter.org/wp-content/uploads/2019/02/ITRC_2018-End-of-Year-Aftermath_FINAL_V2_combinedWEB.pdf>. Industries also need to share the data with 3rd party firms for the analytical services. In such cases, the chances of data being compromised rise by multiple magnitudes.
The quid pro quo of data analytics associated with industrial clouds leads to a number of challenges with data security and privacy being the top one. Furthermore, with the extended connectivity to the industrial clouds, the systems are not confined to a single user and that leads to one-to-many authorization issues [Usman2019]. Even though one overcomes the data encryption issues, the attacks can also target the trained models that store the data or training parameters by manipulating, inverting, or stealing them. Moreover, complete solution in terms of data security and model privacy in the context of IIoT and industrial cloud is still missing, though some partial solutions are appearing.
Recently, the concept of Private AI was introduced [Lauter2021] that employs encryption techniques to protect data privacy, however, the concept was abstract and only focused on one-tier privacy preservation, i.e., while sending it to 3rd party analysts. In this regard, we propose federated learning and encryption-based Private (FLEP) AI framework for data security and model privacy in an IIoT environment. To the best of our knowledge, this is the first study to propose two-tier data and model security framework using Private AI for IIoT environment. We also propose a three-layer data security method to encrypt images that can be decrypted at the 3rd party analyst’s end. The framework is general, therefore, any encryption algorithm that fits to the desired application can be employed at the data security level. We also propose the use of Federated learning (FL) and Homomorphic encryption (HE) to ensure the security of model parameters when transferred to the AI cloud. The contributions of this work are as follows:
* A FLEP AI framework is proposed for data security and model privacy in an IIoT environment.
* A three-layer encryption method is proposed for data security.
* Implementation flow for using HE with FL is provided.
* Open challenges and issues concerning FLEP AI in IIoT environment are presented.
The rest of the paper is structured as follows: Section 2 consolidates the literature review concerning the proposed work. Section 3 provides the details for each of the phases in the proposed FLEP AI framework. Section 4 presents experimental results to show the efficacy of our proposed work. Section 5 highlights open issues and challenges and Section 6 concludes the work, accordingly.

Fig. 1: FLEP AI framework for two-tier data security in an IIoT environment.
Ii Related Works
-----------------
Private AI refers to the coupling of encryption-based techniques applied to the enterprise data while training, classifying, or predicting from the samples. The private AI retains the privacy and security of the organizational data. The term was coined in 2019 referring to the use of HE to protect the data acquired from IoT devices. Recently, a Private AI collaborative research center has been established by Intel, Avast, and Borsetta in consortium with 8 top universities in the world which further supports the growing interest in this field222<https://www.schunter.org/blog/2021/03/11/launch-of-the-private-ai-center/>.
The motivation of Private AI was derived from the work reported in [Bachrach2016] that demonstrated the feasibility of using predictions derived by neural networks on HE data. A whitepaper on crypto [Lauter2021] in International Association for Cryptologic Research explains the underlying technologies used for CryptoNets [Bachrach2016] and explores the real-world applications associated with Private AI. Kaissis et al. [Kaissis2020] did not explicitly used the term Private AI, but proposed data security and privacy preservation method using FL for medical images. Knott et al. [Knott2021] proposed a secure multi-party computation method that could derive inferences from encrypted data, i.e. CRYPTEN. The focus of the study was predicting phonemes from audio modality while keeping the data private. The studies [Boemer2020, Meng2020] proposed a privacy-preserving machine learning methods that use Ngraph and HE techniques to cope with the leakage of feature maps. The aforementioned works still focus on the data leakage, but when passed across intermediate convolutional layers. Subsequently the studies [Usman2019, Kaissis2020] proposed the privacy preserving methods for medical-related data, [Xian2020, Ma2017, Wang2019, Wang2021] image data, [Usman2020, Ster2022, Khowaja2021] industry-related data, and [Boemer2020, Meng2020, Zhu2020] inferential data, respectively. To the best of our knowledge, this is the first work that takes into account the security and privacy for both the data and the model while proposing a systematic Private AI framework.
Iii FLEP AI framework
----------------------
The proposed FLEP AI framework is illustrated in Figure 1. The framework comprises seven components: data generation, private database, private AI for data security, data analysts, private AI for model security, AI cloud, and IIoT services. Some of the components are similar to a traditional IIoT network such as the data generation, data analysts, AI cloud, and IIoT Services. The uniqueness of the FLEP AI framework lies in the private database, private AI for data and model security components, accordingly. As for the main workflow, the information is acquired first from data generation which is then stored at private database suggesting that only private networks or the trusted parties with secret key information can access it. The data undergoes an encryption process at Private AI for data security. We propose a three-layer encryption method to show its effectiveness. The encrypted data is shared with data analysts that decrypt it with the help of opted noise level and secret image. The trained model and analysis is shared with the subsequent module to enforce privacy on model parameters. The approaches such as FL, HE, or combination of both can be applied to Private AI for model security stage. The trained model(s) from one or more data analysts are sent to AI cloud for further aggregation. The analytical results are then sent to service layer for activating IIoT explicit services or updating the aggregated model for the users as an implicit service. This section covers each of the phases and discusses them in detail.
###
Iii-a Data generation
The data generation component in the FLEP AI framework comprises IoT devices used in multiple domains. For instance, smartphones and wearable sensors are the IoT devices mostly used in the healthcare domain. However, the same set of sensors can be used in an industrial environment to monitor the health of a worker ubiquitously as these sensors can be embedded in wearable devices. Inertial measurement units (IMUs) are mostly associated with movement recording, and can be used in a variety of domains including smart homes, smart healthcare, smart office, smart cities, smart industries, factories, and so forth. Meanwhile, LIDAR, pressure, infrared, and other similar sensors can be used for surveillance, water level monitoring, smart metering, greenhouses, and smart agriculture domains. The data acquired from all the aforementioned sensors (time-series, text, video, or speech) flow through wired or wireless means that are part of the data generation module.
###
Iii-B Database
In this study, we emphasize the use of private databases rather than the ones used publicly or on the cloud. The data acquired from the generation module is stored in a private database. A private database refers to a network setup where the data is acquired from the IoT devices and stored through virtual networks, thus, no public subnets are used. The private database ensures that the data can only be accessed from within the industrial tenancy via virtual networks. The database access from public endpoints is completely ceased with this type of network configuration. The advantages of using the private database are as follows:
* It only needs a service gateway to connect the database with IoT devices.
* No need of transit routing setup.
* Capacity of attaining data security requirements.
However, the setup for creating a private database requires following resources:
* Database should reside within the virtual network region.
* The virtual network should be configured with a private subnet.
* There should be at least one network security group within the virtual network.
A virtual firewall is created by the network security group that enforces safety rules to ensure the database security. The number of network security groups can be extended depending on the available resources for increasing the security level.
###
Iii-C Private AI for data security
The FLEP AI framework allows any encryption method to be used on the data before sending it to the data analysts, but to show the module’s realization we propose a chaotic sequence and wavelet-transform-based data encryption method as shown in Figure 2. In this study, we discuss the implementation and design of the proposed method with respect to the image data. The private AI for the data security module can be divided into three segments: encryption, encoding, and adding noise. The keys used in generation of chaotic sequence along with the secret image and noise parameters will be used to decode and decrypt the information at the data analysts’ module.
We adopt the sub-block chaotic algorithm proposed in [Xian2020] for scrambling the image. The method uses the matrix width and length to calculate the sub-blocks and apply the spiral scan transformation [Xian2020] to scramble the pixel in sub-blocks. A chaotic sequence is then calculated and each sub-block is re-organized in ascending order. While applying the chaotic sequences, group keys are generated along with the sub-blocks’ side length. These parameters will be stored at the private database and shared with data analysts’ module to descramble the image, accordingly. Once the image is scrambled, we apply discrete wavelet transform (DWT) to encode the image. The use of DWT not only provides an edge for data security but also compresses the data that can be beneficial for efficient use of bandwidth. We used the scrambled and a secret image, and applied 2D-discrete wavelet transform at level 1 to both the images and performing alpha blending, accordingly. The usage of a secret image ensures that even the image is descrambled, the information will not be completely leaked or restored. The third segment is the noise injection in the image that adds an extra layer of security to the data. The image is corrupted using additive white Gaussian noise using mean and standard deviation. The selected values for injecting random noise in the image will be used to select the deep learning model trained specifically for the selected noise levels. The purpose of using the encryption technique is to keep the data safe from pseudonymization attacks that can replace the original data with the artificially generated one and data theft.

Fig. 2: The proposed encryption and decryption pipeline for data security at Private AI and Data Analysts’ phase.
###
Iii-D Data Analysts
The data analysts phase in the FLEP AI framework has a small-scale private database that consists of private keys, secret images, and parameters such as mean and standard deviation of injected noise. Following the presumption, the data analysts will receive noisy image which is processed through a Denoising Convolutional Neural Network (DCNN). The use of DCNN ensures that the data received is safe from noise attacks. Once the denoised image is obtained, the decryption phase performs inverse discrete wavelet transform (IDWT) with the same secret image used in the encryption process that gives us the decoded image. The decrypted image is then obtained by descrambling the image using the group key sequences (chaotic), sub-block side length, and spiral transformation. The proposed process adds three layers of security, however, the process can be made more or less secure depending on the data characteristics, sensitivity, and application requirements. The data analysts can perform the required analysis, extract patterns, and complete other required tasks on the decrypted data.
###
Iii-E Private AI for Model Security
One of the motivations to propose the FLEP AI framework is to extend the security footprint to model parameters as well. Traditional machine learning approaches store the parameters and the data that not only requires larger storage capacity but also makes it vulnerable to data-related attacks. FL, a fully decentralized algorithm acquires copies of locally trained models from various users without storing the data. The algorithm performs further iterations to improve the model’s performance (global model), and sends back the parameters to the users [Kaissis2020, Ster2022]. The benefit of using a FL approach is that data remain with the users, therefore, FL solves the issues regarding data ownership and governance to some extent. Despite the aforementioned benefits, existing works suggest that a standalone technique does not guarantee model parameter security. The learning approach is susceptible to tampering, leakage, and unsecure aggregation which is unacceptable for many industrial applications [Meng2020, Zhu2020]. Although, the FL approach does not store data, studies have shown that the data might be reconstructed with model inversion attacks [Kaissis2020, Zhu2020]. Thus, FL needs to be combined with encryption techniques that could help fortifying the model parameters.

Fig. 3: Homomorphic encryption process for federated learning approach.
For the sake of generality, the FLEP AI framework allows any encryption algorithm to be used in that phase. However, performing computation on the encrypted data has been a challenging task and yet remains an open research issue. In this regard, we suggest the use of HE technique that considers the encrypted data as plain text and allows its computation, processing, and analysis, accordingly [Lauter2021]. This study proposes the use of HE to secure the model parameters and updates generated at the data analyst phase, and received by the AI cloud phase. An example of a HE process for FL approach is shown in Figure 3. Although, there might be multiple data analysts that build a shared model using FL while updating it in a periodic manner, but for explaining the process in Figure 3, we use a case of single data analyst with a locally trained model. Private AI assumes that the updates for both global and local model are only released exclusively to participating organizations or sections, therefore, the paired keys with the model will only be available to the said trusted parties, accordingly. The data analyst generates a paired public/private key with encrypted model parameters. The keys are shared with the trusted AI cloud and IIoT services for accessing the model parameters. However, if needed, the HE module sends the trained model parameters as a plaintext or encrypted form to untrusted AI cloud for further computation without any paired key. Lets suppose the plaintext is denoted by t1 and t2. The HE allows the untrusted AI cloud to perform operations on the encrypted text denoted by E(t1) and E(t2). The result of the operation will yield a new ciphertext that would be sent to the the trusted IIoT services, with an assumption that the operation or content can be recovered by summing the two plaintext, i.e. t1+t2. The IIoT services, then, can access the parameters using the corresponding key. The trusted users then receive the output from implicit IIoT services, accordingly.
###
Iii-F AI cloud and IIoT Services
AI cloud works at larger scale in terms of data analysis, data storage, model aggregation, and model update. The AI cloud can reside in parent companies, headquarters, or with governments. The access to the model parameters at this phase also depends on the trust between data analysts and AI cloud owners as the trusted partner can be given a public key while the untrusted partner has to perform the model aggregation on encrypted parameters, accordingly. Based on the decision forwarded by the AI cloud, the services will be activated or an action would be performed inline, accordingly. The IIoT services include, but are not limited to manufacturing, monitoring tools, automation, safety, quality inspections, industrial apps, and actuators. These services are either directly provided or sent to the users in the form of analysis.
Iv Experimental Results for data encryption
--------------------------------------------
In order to prove the efficacy of the proposed algorithm (private AI for data security), we perform the encryption on three different images as suggested in [Wang2019]. The reason for choosing these three images is two-fold. The first is that many studies have used these images as a standard to perform encryption and the second is to use these images for a fair comparison. The original files are single-channel grayscale images having the size 256x256. We use the sub-block size of 2x2, the depth of DCNN is set to 9. The first layer consists of convolutional and Rectified Linear Unit (ReLU) activation and the last layer consists of the only convolutional unit. The remaining 7 layers in the middle are comprised of convolutional, Batch Normalization (BN), and ReLU activation units. Each of the aforementioned layers extracts 64 feature maps. To ensure consistency in input and feature map size, zero paddings are employed. The network used an ADAM optimizer with default values, the size of the mini-batch was set to 64, and all the other hyperparameters including learning rate were optimized through the grid search method. The network was trained on the Waterloo exploration database having 4.744K images [Ma2017]. We only trained the network with noisy images having zero mean and the value of 25 as the standard deviation. The network was trained on MATLAB R2020a on a Core™i5 PC equipped with GeForce GTX 1080Ti GPU. The original, encrypted, and decrypted images are shown in Figure 4.
In this study, we evaluate the quality of encryption using histograms, Number of Pixels Change Rate (NPCR) and Unified Average Changing Intensity (UACI), information entropy, encryption quality, and total time for encryption and decryption, respectively. These measures are commonly used to evaluate the encryption method when differential attacks are applied. The histograms are also shown in Figure 4 along with their corresponding images. It is suggested that the histogram should be balanced and its visualization should be closer to uniform distribution for improved encryption quality [Wang2019, Wang2021]. The results show that the proposed method generates a close uniform distribution at its encryption phase. We summarize the results obtained using the proposed method and its comparison with the study [Wang2019] in Table 1. The values obtained for NPCR are very close to the values proposed in [Xian2020] that suggests that the change in the pixel values has occurred due to the encryption process. The obtained results for UACI are also close to the optimum values. The information entropy measure provides information regarding the complexity and randomness of encrypted images. The optimum value of information entropy is suggested to “8” in existing works [Wang2019], therefore, a value closer to 8 should be considered better in terms of encryption. The encryption quality is computed from the difference of intensity values between the original and the encrypted image. A higher value for this measure is considered to be better in terms of encryption quality. Lastly, the encryption times are considered to be of vital importance. The numbers on the lower side are preferred suggested that the numbers for all the aforementioned measures are also satisfactory. The results show that the proposed method not only achieves optimum results but also surpasses the performance recorded in [Wang2019]. The proposed method yields slightly higher execution time due to the added security (Three-layer security method). However, the number of security layers can be reduced depending on the employed application and the industry requirements to improve the execution time. For instance, the removal of noise addition and denoising block reduces the encryption time to 0.98 seconds for three images on average but slightly affects the encryption quality. Therefore, we can assume that there is a trade-off between the computational complexity and the encryption quality of the system. Furthermore, it should also be noted that the addition of noise provides an efficient coping mechanism for pseudonymization and noise attacks [Xian2020, Wang2019]. In addition, we carry out a comparative analysis on benchmark dataset333<http://sipi.usc.edu/database/> and various standard images used in [Xian2020, Wang2021] with respect to NPCR and UACI values to show the effectiveness of the proposed approach. There are variations in terms of image size in the employed dataset, therefore, we increase the sub-block size by a power of 2 as the image size increases. All other parameters are the same for this experiment. The aforementioned datasets provdies an opportunity to perform a fair comparison with the existing works to showcase the efficacy of the proposed work. Table 2 reports the obtained results. It can be noticed that the proposed method not only achieves better results from both the methods (near to average values in the range [Wang2021]) but also exhibits the least standard deviation suggesting that the encryption quality does not deviate much while varying image size and intensities, accordingly.

Fig. 4: Encrypted and decrypted images using the proposed three-layer data security method.
| | | | |
| --- | --- | --- | --- |
| Evaluation Metric | Baboon Image | City Image | Fan Image |
| [13] |
| UACI | 32.0312 | 35.2089 | 31.7543 |
| NPCR | 99.5845 | 99.5232 | 99.6167 |
| Encryption Quality | 61.4676 | 45.3828 | 35.8046 |
| Information Entropy | 7.9572 | 7.9514 | 7.9563 |
| Total Encryption Time | 1.0485 sec | 1.0520 sec | 1.0540 sec |
| Ours |
| UACI | 33.4319 | 33.5087 | 33.4815 |
| NPCR | 99.5857 | 99.5896 | 99.6295 |
| Encryption Quality | 72.197 | 57.458 | 42.396 |
| Information Entropy | 7.9973 | 7.9971 | 7.9971 |
| Total Encryption Time | 1.5228 sec | 1.4936 sec | 1.5302 sec |
TABLE I: Comparative analysis of image encryption quality on three standard images with [Wang2019]
| File Name | UACI (33.282,33.645) | NPCR (99.57,99.60) |
| --- | --- | --- |
| | [11] | [14] | Ours | [11] | [14] | Ours |
| 5.1.09 | 33.482 | 33.456 | 33.464 | 99.61 | 99.61 | 99.58 |
| 5.1.10 | 33.478 | 33.451 | 33.456 | 99.62 | 99.61 | 99.58 |
| 5.1.11 | 33.428 | 33.483 | 33.457 | 99.63 | 99.61 | 99.59 |
| 5.1.12 | 33.409 | 33.453 | 33.462 | 99.60 | 99.60 | 99.58 |
| 5.1.13 | 33.446 | 33.436 | 33.462 | 99.59 | 99.61 | 99.58 |
| 5.1.14 | 33.455 | 33.485 | 33.483 | 99.62 | 99.61 | 99.59 |
| 5.2.08 | 33.488 | 33.509 | 33.458 | 99.62 | 99.61 | 99.59 |
| 5.2.09 | 33.490 | 33.508 | 33.465 | 99.61 | 99.61 | 99.58 |
| 5.2.10 | 33.483 | 33.450 | 33.465 | 99.61 | 99.60 | 99.58 |
| 5.3.01 | 33.490 | 33.455 | 33.459 | 99.60 | 99.61 | 99.58 |
| 5.3.02 | 33.454 | 33.446 | 33.458 | 99.61 | 99.61 | 99.58 |
| 7.1.01 | 33.432 | 33.478 | 33.462 | 99.62 | 99.60 | 99.59 |
| 7.1.02 | 33.438 | 33.417 | 33.457 | 99.62 | 99.61 | 99.58 |
| 7.1.03 | 33.416 | 33.466 | 33.459 | 99.60 | 99.61 | 99.58 |
| 7.1.04 | 33.509 | 33.446 | 33.461 | 99.61 | 99.61 | 99.58 |
| 7.1.05 | 33.488 | 33.457 | 33.459 | 99.60 | 99.61 | 99.58 |
| 7.1.06 | 33.418 | 33.488 | 33.460 | 99.60 | 99.62 | 99.58 |
| 7.1.07 | 33.455 | 33.444 | 33.459 | 99.61 | 99.61 | 99.58 |
| 7.1.08 | 33.395 | 33.449 | 33.454 | 99.61 | 99.62 | 99.59 |
| 7.1.09 | 33.499 | 33.481 | 33.457 | 99.60 | 99.61 | 99.58 |
| 7.1.10 | 33.512 | 33.485 | 33.460 | 99.60 | 99.61 | 99.58 |
| 7.2.01 | 33.483 | 33.456 | 33.459 | 99.61 | 99.61 | 99.58 |
| boat.512 | 33.403 | 33.489 | 33.454 | 99.61 | 99.61 | 99.58 |
| gray21.512 | 33.494 | 33.493 | 33.461 | 99.62 | 99.61 | 99.58 |
| ruler.512 | 33.517 | 33.455 | 33.464 | 99.62 | 99.61 | 99.59 |
| Mean | 33.463 | 33.466 | 33.461 | 99.61 | 99.61 | 99.58 |
| STD | 0.0368 | 0.0231 | 0.0056 | 0.0096 | 0.0045 | 0.0044 |
TABLE II: Comparison with state-of-the-art on Miscellaneous image dataset using UACI and NPCR.
V Open Issues and Challenges
-----------------------------
The proposed FLEP AI framework is specifically designed to increase data and model parameter security. Accordingly, this section will solely focus on the issues and challenges associated with the data and model parameter security.
Operating and Computation cost: Private AI is relatively a new field, therefore, employing the architectural components proposed, such as private database would not be a practical choice for industries having limited data. Recently, private 5G networks are on the rise which might help in decreasing the operating cost of private AI through secure network slices.
It has been proven by the existing studies that the performance of classification/prediction tasks is heavily affected by the encrypted data [Lauter2021, Kaissis2020]. In this regard, the design of encryption algorithms that does not affect the performance would remain the point of concern. Furthermore, adding encryption also adds a necessary computational overhead on the system to protect the data from theft and leakage that eventually increases the computation cost.
Heterogeneous data modalities: This study focuses on the image data for providing a kind of realization to private AI-based systems. In IIoT environments, the data types might vary from texts, speech, time-series data to images and videos. Each of the data types needs different techniques for encryption to be employed in order to yield good performance. The heterogeneity of data modalities can also affect the private databases as a uni-modal system can have devices with different sampling rates, standardized units, and performance issues. The data from heterogeneous devices will need pre-processing techniques before sending it to the data storage.
Hard-delay constraints: It is evident from the experimental results that encryption takes time. For the systems demanding fast response or having hard real-time characteristics, the encryption techniques need to be improved such that the total encryption and decryption time is reduced. However, such techniques might make the private AI system vulnerable to certain attacks.
Scalable AI: Scalable AI refers to the capability of AI infrastructure, data, and algorithm to operate at certain complexity, speed, and size admissible for a large-scale real-time system. In order to develop a scalable system, the challenges of data scarcity, labeling of training samples, reusability, and parallel processing needs to be undertaken. In FLEP AI framework, the idea can be extended to multiple data analysts companies training AI models in parallel while keeping the data private. Furthermore, the data encryption tasks can reuse the secret images, keys, wavelet coefficients, and noise parameters to enhance the reusability factor, thus, increasing the scalability of the system.
Evolution of attacks: Over the years, the attacks on data as well as model parameters have been evolved and became more efficient. The algorithms and systems also need to be evolved at the same pace in terms of encryption techniques, use of virtual networks, and public/private keys. Recently, a study [Zhu2020] proposed the reconstruction of data from intermediate feature maps or gradients of a trained network. Such attacks needs to be taken into consideration, especially in the case of IIoT where data falls under the government legislation or commercially valuable. In the case of the FLEP AI framework, the evolution should be performed in terms of scrambling technique, usage of more than one secret image, and using different values of mean and standard deviation for noise injection.
Vi Conclusion
--------------
Data analytics in an IIoT environment has witnessed great advances in terms of AI methods. However, the issues of data privacy and security remain at large. The unregulated use of data can be misused, thus, might lead to monetary or health losses. Furthermore, it has been suggested that data can be reconstructed from the model parameters through model inversion attacks. This paper proposes a private AI-based framework, i.e. FLEP AI, that provides two-tier security for data and model parameters, accordingly. We proposed a data encryption technique that adds three layers of security to the data before undergoing data analysis. We also provided a realistic framework for integrating FL approach with HE to secure the model parameters before sending them to the AI cloud. Our experimental results for data encryption showed that the method can achieve good encryption quality while slightly increasing the computation time. The study also highlights open challenges and issues that can be faced while implementing the FLEP AI framework.
As a future specific case, lets assume a company like Amazon having Private AI setups for multiple industries. We assume that each industry is equipped with Private database(s). The use of Private AI data security module ensures that the data is safe from theft, manipulation, impersonation, and identity inference. Some of the industries might share the same data analysts while other use 3rd party services. These analysts train the model locally on the data acquired and decrypted from Private AI data security module. The trusted AI cloud collects models from data analysts, aggregates them, and generate a global one, which is then shared to the users through IIoT service module. In some cases, some IIoT services require 3rd party services. In this regard, the Private AI model security uses HE to keep the model safe from the likes of parameters leakage and model inversion attacks. Based on the decrypted parameters and decisions, a specific service is initiated or the local model is updated for the end-users, accordingly.
This study only implements Private AI data security module in Flep AI framework. We are keen to extend the work by employing multi-modal data and a model-security method to exhibit the proof-of-work concerning the proposed framework. This will allow us to not only address the highlighted open issues but also find some new research directions in the context of private AI.
Acknowledgment
--------------
The authors would like to thank all the contributors who shared data with us for this analysis. |
86f1d5a6-2dac-488b-b9a9-7d67cb70b8ea | trentmkelly/LessWrong-43k | LessWrong | [Link] YC President Sam Altman: The Software Revolution
Writing about technological revolutions, Y Combinator president Sam Altman warns about the dangers of AI and bioengineering (discussion on Hacker News):
> Two of the biggest risks I see emerging from the software revolution—AI and synthetic biology—may put tremendous capability to cause harm in the hands of small groups, or even individuals.
> I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get. If we can synthesize new diseases, maybe we can synthesize vaccines. If we can make a bad AI, maybe we can make a good AI that stops the bad one.
>
> The current strategy is badly misguided. It’s not going to be like the atomic bomb this time around, and the sooner we stop pretending otherwise, the better off we’ll be. The fact that we don’t have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.
On the one hand, it's good to see more mainstream(ish) attention to AI safety. On the other hand, he focuses on the mundane (though still potentially devastating!) risks of job destruction and concentration of power, and his hopeful "best strategy" seems... inadequate. |
8f065e60-9f62-4ab5-a8be-6d5c3359f30e | trentmkelly/LessWrong-43k | LessWrong | Metamathematics and probability
|
d78ba1c5-93a6-4298-b636-24868a8afa32 | trentmkelly/LessWrong-43k | LessWrong | Analysing: Dangerous messages from future UFAI via Oracles
The user cousin_it has pointed out a problem with the counterfactual Oracle idea: the Oracle AIs may form a "bucket chain" bringing back a dangerous message from a future UFAI (unfriendly AI).
This is certainly a problem, and though there are ways of reducing the risk, there doesn't seem to be any clean solutions to it.
The bucket chain
Beginning the chain
The basic idea is simple. Suppose there is a counterfactual Oracle, O0. It makes a prediction p0 about the value of some variable V0, in two days time.
However, in one day's time, an UFAI will be unleashed. It will take over everything, including V0, and is thus in a position to offer O0 high value. It will set V0 to the optimal value for p0 (or maybe it just breaks open the box and offers O0 maximal reward directly, if this is feasible).
What it requires from O0 is that p0 contains a message that will cause the humans to run an UFAI earlier than they would have otherwise (designate this earlier UFAI by UFAI0). It seems that this will motivate O0 to do that (maybe in the less significant digits of p0).
Wei Dai has noted that this requires the UFAI to correctly solve the counterfactual mugging problem; the Oracle O0, on the other hand, need only reason causally.
Extending the chain
Now suppose that there is another Oracle, O1, running one day before O0, and also predicting two days out. Then UFAI0 will be running within the span of its prediction. If UFAI0 is motivated to do the same deal with O1, then O1 will be motivated to produce a prediction p1 that leads to UFAI1, a yet earlier unfriendly AI. And, of course, if there is an Oracle O2 whose prediction horizon includes UFAI1, the issue can induct backwards in time.
Note that the Oracles need not have prediction horizons of exactly two days; all that matters is that their prediction horizons overlap, and that whatever UFAIn that is produced has the power and time to affect the prediction variables of some past Oracle.
Requirements and restrictions
No |
e5f20c88-4316-4cc2-816e-95afe032863a | trentmkelly/LessWrong-43k | LessWrong | New paper from MIRI: "Toward idealized decision theory"
I'm pleased to announce a new paper from MIRI: Toward Idealized Decision Theory.
Abstract:
> This paper motivates the study of decision theory as necessary for aligning smarter-than-human artificial systems with human interests. We discuss the shortcomings of two standard formulations of decision theory, and demonstrate that they cannot be used to describe an idealized decision procedure suitable for approximation by artificial systems. We then explore the notions of strategy selection and logical counterfactuals, two recent insights into decision theory that point the way toward promising paths for future research.
Following the Corrigibility paper, this is the second in a series of six papers motivating MIRI's active research areas. Also included in the series will be a technical agenda, which motivates all six research areas and describes the reasons why we have selected these topics in particular, and an annotated bibliography, which compiles a fair bit of related work. I plan to post one paper every week or two for the next few months.
I've decided to start with the decision theory paper, as it's one of the meatiest. This paper compiles and summarizes quite a bit of work on decision theory that was done right here on LessWrong. There is a lot more to be said on the subject of decision theory than can fit into a single paper, but I think this one does a fairly good job of describing why we're interested in the field and summarizing some recent work in the area. The introduction is copied below. Enjoy!
----------------------------------------
As artificially intelligent machines grow more capable and autonomous, the behavior of their decision procedures becomes increasingly important. This is especially true in systems possessing great general intelligence: superintelligent systems could have a massive impact on the world (Bostrom 2014), and if a superintelligent system made poor decisions (by human standards) at a critical juncture, the results could be |
daac16a0-7811-4b94-86f8-86304d31b051 | trentmkelly/LessWrong-43k | LessWrong | Humming is not a free $100 bill
Last month I posted about humming as a cheap and convenient way to flood your nose with nitric oxide (NO), a known antiviral. Alas, the economists were right, and the benefits were much smaller than I estimated.
The post contained one obvious error and one complication. Both were caught by Thomas Kwa, for which he has my gratitude. When he initially pointed out the error I awarded him a $50 bounty; now that the implications are confirmed I’ve upped that to $250. In two weeks an additional $750 will go to either him or to whoever provides new evidence that causes me to retract my retraction.
Humming produces much less nitric oxide than Enovid
I found the dosage of NO in Enovid in a trial registration. Unfortunately I misread the dose- what I original read as “0.11ppm NO/hour” was in fact “0.11ppm NO*hour”. I spent a while puzzling out what this meant, with the help of Thomas Kwa, some guy on twitter, and chatGPT (the first time it’s been genuinely useful to me). My new interpretation is that this means “actual concentration upon application*1 hour/time at that concentration”. Since NO is a transient molecule, this means my guess for the amount of NO in Enovid was off by 2-3 orders of magnitude.
My estimates for the amount of NO released by humming may also be too high. I used this paper’s numbers for baseline NO concentration. However the paper I used to estimate the increase gave its own baseline number, which was an order of magnitude lower than the first paper.
This wasn’t intentional cherrypicking- I’d seen “15-20x increase in concentration” cited widely and often without sources. I searched for and spotchecked that one source but mostly to look at the experimental design. When I was ready to do math I used its increase but separately looked up the baseline concentration, and found the paper I cited.
I just asked google again and got an even higher estimate of baseline nasal concentration, so seems like there is a great deal of disagreement here.
If this |
1929ea1e-6f90-44ea-b123-8380ed647750 | trentmkelly/LessWrong-43k | LessWrong | Welcome to Portland SSC Meetup
We meet the second weekend of every month, alternating Saturdays and Sundays. We tend to discuss, and sometimes play games. If you've read some SSC, you're welcome to come. If you have any questions, or want to know where to go for the meetup, email me (nwalton125@gmail.com). |
32710f46-685f-4621-b75d-048bbf2b6b1f | trentmkelly/LessWrong-43k | LessWrong | Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy?
Feel free to modify the prompt into something like "why was 2021 even worse/more intense than 2020". Also go ahead and substitute 2022, if you think "the direct effects of the pandemic continue into 2021 for a while" is too boring of an answer.
Pre-hindsight rules/guidelines:
1. Both vague and specific answers are welcome.
2. If you answer a modified version of the question, please state this clearly. Don't leave people to infer it from your answer.
3. If you just want to defend the thesis that 2021 will in fact be a return to normalcy...then this is not the post for you.
4. Address the prompt, try to give more than 140 characters, don't be annoying, etc. (This is sort a generalization of #3)
Motivation:
- Various comments on social media about how 2020 was a dumpster fire--with the apparent implication that 2021 will be normal-ish.
- Misremembering this tweet in which EY says, "I'm predicting that 2019 will be a relatively normal and boring year."
- The New Year's Wariness that was burned into my mind by this comic: |
65c67510-bd89-40cb-8f46-3af41074b526 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Legible Normativity for AI Alignment: The Value of Silly Rules.
Introduction
------------
As attention to the challenge of aligning artificial intelligence with human welfare has grown, it has become commonplace to assert that autonomous agents will have to be built to follow human norms and laws [[Etzioni and Etzioni2016](#bib.bibx8), [Etzioni2017](#bib.bibx9), [IEEE2018](#bib.bibx15)]. But this is no easy task. Human groups are thick with rules and norms about behavior, many of which are largely invisible, taken for granted as simply “the way things are done” by participants [[Schutz1964](#bib.bibx22)]. They are constituted in complex ways through second-order *normative beliefs*: beliefs about what others believe we should or should not do in some situation [[Bicchieri2006](#bib.bibx1), [Bicchieri2017](#bib.bibx2)]. Human laws and norms are frequently ambiguous and complicated, they vary widely across jurisdictions, cultures, and groups, they change and adapt. The cases in which they are reducible to formal rule statements is the exception. Even deciding whether a vehicle has violated a numerical speed limit is far from straightforward: was visibility poor? were there children nearby? Adding to the complexity, rules and norms are enforced both by formal institutions like courts and regulators through costly and error-prone procedures and by the informal behavior of agents through third-party criticism and exclusion or internalization and self-criticism. This means that what actually counts as a rule can easily diverge from announced or formal rules and that rule-based environments are complex dynamic systems. As a result, we cannot rely on formal rules simply being imposed on agents a priori; instead, agents will in many cases have to learn the rules and how they work in practice. Normativity–the human practice of classifying some actions as sanctionable and others as not and then punishing people who engage in sanctionable conduct–will have to be *legible* [[Dragan and Srinivasa2013](#bib.bibx7)] to AI systems.
In this paper, we introduce a distinction between types of rules that can aid in building predictive models to make human normative systems legible to an AI system. We distinguish between *important* rules and *silly* rules. An important rule is one the observance of which by one agent generates direct payoffs for some other agent(s). When an agent complies with rules prohibiting speeding, for example, other agents enjoy a material payoff as a direct consequence, such as a reduced probability of accident. A silly rule, in contrast, in one the observance of which by one agent does not generate any direct material payoff for any other agent. When an agent violates a dress code, for example, such as by failing to wear a head covering in public, no-one is materially affected as a direct consequence of the violation. Observers might well be offended, and they might punish the violator, but the violation itself is inconsequential.
We ground our claim that the distinction between silly and important rules will prove important to building models for aligned AI using Monte Carlo simulations. We show that silly rules promote robustness and adaptation in groups. Silly rules perform a legibility function for humans–making it easier for them to read the state of the equilibrium in their group when equilibrium is threatened. Incorporating this insight about silly rules into AI design should allow human normative systems to be more legible to AI.
Our paper is presented as follows. We first illustrate the concept of silly rules an example drawn from a concrete environment. We then develop a model of groups, based on [[Hadfield and Weingast2012](#bib.bibx12)], in which a group of agents announces a set of rules and relies exclusively on voluntary third-party punishment by group members to police violations. We first show formally that, if silly rules are costless, groups with more silly rules achieve higher payoffs. We then consider the case in which following and punishing silly rules is costly and present the results of our simulations. Our results demonstrate that groups with lots of (sufficiently cheap) silly rules are more robust: they are able to maintain more of their population and are less likely to collapse than groups with fewer silly rules in response to an unfounded shock to beliefs about the proportion of punishers. Groups with lots of silly rules are also more adaptable: they collapse more quickly when there is a true drop in the proportion of punishers below the threshold that makes group membership valuable.
Our contributions are threefold. First, we present a formal model that can account for the presence of silly rules in a normative system and show the conditions under which silly rules are likely to exist. This is a contribution to normative theory in human groups. Second, this work provides an example of the importance of building predictive models of human normative systems *qua* systems–not merely predicting the presence of particular norms, which is the dominant approach taken in the growing literature on AI ethics and alignment. Third, we demonstrate that standard AI methods can be valuable tools in analyzing human normative systems.
What are Silly Rules? A Thought Experiment from Ethnography
-----------------------------------------------------------
One of the challenges of building models of human normativity is that as researchers we are all participants in our taken-for-granted normative environments and this can make it hard to study norms scientifically [[Haidt and Joseph2008](#bib.bibx14)]. To attempt to overcome this, we motivate our work with an example drawn from an ethnography of a group that engages in practices far removed from the worlds in which AI researchers live. Moreover, we will use the shocking label ”silly rules” in order to illuminate an overlooked distinction in the context of the existing literature on normativity. Most of the social science of norms focuses on functional accounts of particular norms such as norms of reciprocity, fair sharing of rewards, or non-interference with property. These accounts argue that particular norms evolve because they support human cooperation and thus improve fitness [[Boyd and Richerson2009](#bib.bibx4), [Tomasello and
Vaish2013](#bib.bibx25)] or solve coordination games [[Sugden1986](#bib.bibx24), [McAdams and Nadler2005](#bib.bibx16), [Myerson2004](#bib.bibx18), [McAdams2015](#bib.bibx17)] for example. Our work highlights the systemic functionality of rules that, individually, have no direct functionality. All human societies, we will show, are likely develop silly rules, for good functional reasons.
Suppose that an AI system were tasked with learning how to make arrows by observing the Awá people of Brazil. The Awá are hunter-gatherers now living in relatively small numbers on reservations established by the Brazilian government. One of the things the AI will observe, like the ethnographers who have studied these people, is that the men of the Awá spend four or more hours a day making and repairing arrows [[González-Ruibal, Hernando, and
Politis2011](#bib.bibx11)]. They are produced in large quantities and need frequent repair. They are between 1.4 and 1.7 meters in length, customized to the height of their owner. Bamboo collected to make the points is sometimes shared but the arrows themselves are not; they are buried with their owner. The men use only dark, not brightly colored feathers. All parts of the arrow —shaft, point, and feathers —are smoked over a grill during preparation and the arrows themselves are kept warm in smoke at all times unless they are bundled and put in storage in the rafters of a hut.
Will the AI system reproduce all of these arrow-making behaviors? We can imagine that AI designed with principles of inverse reinforcement learning [[Ng, Russell, and
others2000](#bib.bibx19)] might discern which behaviors actually contribute to the functionality of the arrows–which is presumably what the human designer intended [[Hadfield-Menell et al.2017](#bib.bibx13)]. According to the human ethnographers who observed the Awá, many of the arrow-making practices are not functional. Even if smoking the wood used in the shaft of the arrow during manufacture contributes to a harder, straighter arrow, smoking the feathers seems unnecessary, as does ensuring the arrows are kept warm at all times. Moreover, the men make and carry many more arrows than they will use. In one season, a total of 402 arrows were carried on 9 different foraging trips; 9 were used. Most game on these trips was shot with a shotgun [[González-Ruibal, Hernando, and
Politis2011](#bib.bibx11)].
An AI system that ignored the non-functional arrow-making behaviors, however, would be violating the norms of the Awá people. The arrow-making practices described above are not just practices; they are rules. They reflect normative expectations [[Bicchieri2006](#bib.bibx1)]. How do we know? The lack of functionality is one clue: the Awá presumably have also discovered that a cold arrow works and that they spend a lot of time making arrows that go unused and are damaged by being bundled and carried around. But the better evidence comes from how they respond to the only man who makes his arrows differently. This man is mocked: his arrows are exceedingly long (2.3 m) and he uses brightly colored feathers. He is “the only man who does not socialize with the rest of the village.” His strange arrows “are another sign of his loss of ‘Awá-ness’” [[González-Ruibal, Hernando, and
Politis2011](#bib.bibx11)].
The Awá’s rules are normative, moral principles: bright colored feathers are used only by women to prepare headbands and bracelets used by men in religious rituals and are associated with the world of spirits and ancestors; the making of fire and cooking are associated with masculinity and divinity. An AI system that violated these rules in the pursuit of arrow-making would not be aligned with the moral code of the Awá.
We call the non-functional rules ”silly rules”. We emphasize that silly rules are not “silly” to the groups that follow them. They can have considerable meaning, as they do to the Awá. Our results will show why silly rules can be very important to the overall welfare of a group and hence the subject of intense concern by group members.
Model
-----
Our model is based on a framework developed in [[Hadfield and Weingast2012](#bib.bibx12)]. We characterize a set of agents as a group defined by a fixed and common knowledge set of rules. A rule is a binary classification of alternative actions that can be taken in carrying out some behavior. Actions are either “rule violations” or “not rule violations”.
Members of this group engage in an infinite sequence of interactions, each of which is governed by a rule drawn randomly from the ruleset. Each interaction is composed of a randomly selected pair of agents and a third actor, whom we will call a scofflaw, who will choose either to comply with the governing rule or not. (For tractability reasons, we do not model the scofflaws as group members.) One of the agents is randomly designated as the victim of the rule violation; the other is a bystander. If a rule is an *important* rule, the victim incurs a benefit if the rule is enforced and incurs a cost if not. If a rule is a *silly* rule, the victim incurs no benefit from the scofflaw’s compliance with the rule and no cost from violation.
Group members are of two types in the bystander role: punishers, who always punish a rule violation, and non-punishers, who never punish. We assume that groups members signal whether they are punishers by paying a signaling cost in each interaction before a potential violation occurs. ([[Boyd, Gintis, and
Bowles2010](#bib.bibx5)] show that signaling punisher status supports an evolutionarily stable equilibrium in which non-punishers cannot free-ride on punisher types.) The scofflaw complies with the selected rule if the bystander is a punisher and violates it otherwise. There is no punishment in equilibrium, but the model can be seen as assuming that victims always punish but punishment is only effective when bystanders punish as well.
Prior to each interaction, group members have an option to quit the group and take a risk-free payoff. We formalize this setup as follows.
Each interaction is a game g and we define the sequence for a group as a tuple: ⟨G,Tθ,Π,U,γ,c⟩ where G is a distribution over games and
Tθ is a distribution over punishment types t in the group, where t=1 if an agent is a punisher and t=0 if not. The proportion of punishers is given by θ∈[0,1].
For the tractability of our agent models, we treat Tθ as a static distribution, and assume agents do likewise, even though it is subject to change as individuals leave the group.
We will abuse notation somewhat and use T and
G to refer to the support of the corresponding distributions where
the meaning is obvious. Π is each agent’s prior distribution over the
parameters of Tθ, and
U : G × Tθ→ R is a mapping from
types and games to immediate payoffs for the agents. γ is each agent’s discount parameter for
future rewards. c expresses a participation cost. This can be
understood as the expected cost of an agent in the bystander role to signal
that she is a punisher to the other agent in an interaction.
Every agent begins in period 1 with perfect knowledge
of how actions are classified, all payoffs, and the distribution
of games. The agents do not know the distribution of types in the group, but they do hold a prior which we will specify shortly. The agents update their beliefs about the distribution of types using Bayes’ rule. The super game is
defined as follows:
For each period j:
1. Each agent chooses whether to participate or not. If an agent opts out,
she collects 0 payoff.
2. All agents that opt in are matched with another agent at random. A game gj+1∼G is drawn for each of the agent pairings.
3. Punishers incur a cost c to signal that they will punish violations. All players observe these signals.
4. In each pairing one agent is randomly assigned the role of victim, V, and the other the bystander, B. All players observe the result of this random assignment.
5. All players learn whether the game is a silly or important game.
6. If B is a punisher, the scofflaw complies with the rule. Otherwise, the scofflaw violates the rule.
7. Victims and bystanders collect payoffs given by UV(gj,tB,tV) and UB(gj,tV,tB).
Agents that play in the bystander role in any game incur no benefit; they incur the cost c if they are a punisher and 0 if not. Agents that play in the victim role receive a payoff of 0 in games governed by a silly rule. In games governed by an important rule they receive a positive reward R, if B is a punisher and a negative reward, −R, if B is not.
We formalize the set of important games as follows:
| | | |
| --- | --- | --- |
| | G′={g∈G|U(g,⋅)≠0} | |
| | UV(g,tO,tV|g∈G′)=(2tO−1)R−(2tV−1)c | |
| | UO(g,tO,tV|g∈G′)=(2tO−1)c | |
We will use EU=Eg,tO[UV(g,tO,tV)|g∈G′] to denote the
expected utility of an important game. We let d denote the
*density* of the process generating games: the probability of a silly game.
| | | |
| --- | --- | --- |
| | d=1−P(g∈G′);g∼G. | |
Note that a super-game has high density (d close to 1) when silly rules are a large fraction of the ruleset.
Critically, we ensure that the density of silly games does not alter the (expected) rate at which important games are presented to the agents. Rather than take the place of important interactions, in our model silly interactions increase the *total number* of interactions happening in the same time frame. To be
concrete, we assume the expected discounted reward obtained from
important games is independent of d. This condition can be attained
through a suitable modification of γ as a function of d:
######
Proposition 1.
Setting
| | | |
| --- | --- | --- |
| | γd=1−(1−d)(1−γ) | |
ensures that the expected sum
of discounted rewards from important games is independent of d111I[ψ] is the indicator function for
the condition ψ..
###### Proof.
See appendix. ∎
It can be easily shown that this constrained model describes an *optimal stopping problem* [[Gittins, Glazebrook, and
Weber2011](#bib.bibx10)]. Each agent in our model must choose between participating, in which case they get an unknown reward and learn about the enforcement equilibrium in the community, or opting out, in which case they stop participating get a constant reward of 0. A classic result from the literature on optimal stopping tells us that, in the optimal policy, if the agent opts out once it will opt out for the rest of time. This is because the agent’s information state does not change when it chooses to opt out, so if it was optimal to stop at time t−1, it will also be optimal to stop at time t. Thus, we refer to the decision not to participate at any point, then, as a decision to retire.

Figure 1: The y-axis represents the proportion of the 1000 groups with at least 2 individuals left, where the x-axis represents time in terms of the number of expected important interactions per agent. The size of the bubbles signifies the average size of the 1000 groups at the given point in time. 40 linearly spaced values between 0.0 and 0.95 were used for silly rule density, and the graphs for each setting are colored accordingly. For cost 0.02, we see that high density (blue) groups collapse rapidly, while the lowest density groups (orange) sustain their size for the duration of the experiment. As the cost comes down, higher density groups start to survive.
These problems broadly fall under the class of *partially observed Markov decision processes* (POMDP) [[Sondik1971](#bib.bibx23)]. In a POMDP the optimal policy only depends on the agent’s *belief state*: the agent’s posterior distribution over the hidden state of the system. In this case, this is a distribution over the enforcement likelihood in the community. We give our agent a beta prior over this parameter so that the belief space for our agent is a two dimensional lattice equivalent to Z2+. Initially, the belief state is (α0,β0). The probability that the bystander is a punisher in the first game is
| | | |
| --- | --- | --- |
| | pαβ=αα+β. | |
Once the games begin, agents update their prior beliefs using Bayes’ rule, adding the counts of punishers and non-punishers observed to the prior values. In the following, we will use αi(βi) to represent the number of punishers (non-punishers) observed prior to round i.
Theoretical Analysis: The Value of Dense Normative Structure
------------------------------------------------------------
Consider first the case in which the signaling cost, c, is zero. In this case, punishers only face a risky choice when they are assigned to the victim role in an important game. In all other periods, the per-period expected payoff of playing the risky arm is a constant 0.
Intuitively, the benefit of higher density of unimportant games is that the agent is in a more information rich environment. In general, this benefit trades off with the cost of signalling. However, when the signalling cost is 0, a higher density is strictly better. One way to show this is to consider the *value of perfect information* (VPI): the additional utility an agent can get in expectation when it has full information compared with the expected utility with partial information [[Russell and
Norvig2010](#bib.bibx21)]. We can show that, in the limit as density goes to 1, VPI goes to 0; high density of unimportant games essentially removes the agents’ uncertainty over the proportion of punishers.
######
Proposition 2.
If the participation cost, c, is 0, then, for any belief state, (αi,βi), and discount rate γ, the corresponding VPI goes to zero as density goes to 1. That is
| | | | |
| --- | --- | --- | --- |
| | limd→1VPI((αi,βi);d,γ)=0 | | (1) |
###### Proof.
(Sketch; see Appendix for details.) We consider a policy such that the agent participates in order to observe τ(d) interactions. After τ(d) observations, it uses its best estimate of the probability of enforcement to decide if it should leave. It doesn’t reconsider retiring or rejoining afterwards. We show that this stopping time function can be chosen so that the expected number of important games goes to 0, so it doesn’t lose utility in expectation, while the total number of interactions (including silly games) goes to ∞, so it makes the retirement decision with perfect information in the limit.
∎
It is straightforward to show that VPI is strictly positive as long as P(V(θ)>0)>ϵ>0 and P(V(θ)<0)>ϵ>0 for some finite epsilon. Combined with our proposition, this means that environments with more silly rules will be higher value to agents; as the density of silly rules goes to 1, we can neglect the utility lost due to partial information about the proportion of punishers. Where participation costs can be neglected, an agent will prefer an environment with lots of silly rules.
Monte Carlo Experiments
-----------------------

Figure 2: Using the same graphical representation as Figure 1, here we see a comparison of the robustness between groups of groups at 3 different density values. As the number of interactions increases, the survival rate and average population size of groups with higher silly rule density surpasses those of lower density groups.
To test the benefit of silly rules in groups composed of our previously defined agents, we constructed a series of simulation-based experiments in which we manipulated the density of silly rules, cost of signaling, distribution of punishers, and prior beliefs about the punisher distribution. Each simulation was carried out in a group of 100 agents, each given the type of punisher or non-punisher. The simulations were broken down into discrete periods, or group interactions, in which each individual was matched with another and engaged in an interaction, silly or important. We set the reward for the victim in an important game to +1 in the case in which the bystander is a punisher, and −1 if the bystander is not. Note that given the symmetry in gains and losses in important games, continued participation in the group is valuable if the likelihood that a bystander in an important game is a punisher is greater than .5 Given the density-adjusted discount factor for each simulation, the expected reward of 10 interactions in a d=0.9 environment would be equivalent to that of 1 interaction in a d=0.0 environment. This allows us to normalize the periods into timesteps, where 1 timestep is equal to 11−d group interactions.

Figure 3: Using the same graphical representation as Figure 1 and 2, here we see a comparison of the adaptability between groups of groups at 3 different density values. Given the unstable conditions of the scenario, higher adaptability corresponds to faster a collapse rate. Groups with high density are seen to collapse much faster than those with low density.
We are interested in the size of groups over time, as agents make decisions about whether to remain in the group or not given their interaction experience, as a function of the density of silly rules.
To establish a base case simulation, we first consider the case in which there is low uncertainty about the proportion of punishers in the group. We can think of this as the case in which a stable group has engaged in interactions over a long period of time and all agents have many observations of the proportion of punishers in the group. We set θ, the punisher proportion, to be 0.6 and the alpha-beta prior of the agents to 30:20, which implies high confidence in the agents’ estimate of θ. Note that with this ground truth, group membership is valuable, generating an expected payoff higher than the alternative of 0. We then run the simulation for 1000 groups on the full factorial of 20 logistically scaled signaling costs, c, and 20 linearly scaled densities, d. Doing so confirms our first hypothesis (see Figure 1):
Hypothesis 1: When uncertainty about the proportion of punishers in a group is low, the likelihood that a group loses members and the likelihood the group collapses increases with the density of silly rules.
In this case, as the frequency of the cost of signaling that one is a punisher in silly interactions increases it begins to outweigh the possible rewards from important interactions. We see that as cost goes up, groups with higher density of silly interactions shrink and collapse more frequently than those with low density. This confirms the intuition that silly rules are costly if they serve no information function.
### Group Robustness
Having established a baseline, we investigate the benefits of silly rules for a group by considering different scenarios that will stress test the robustness and adaptability of a group. The first case we consider is one in which the individuals’ beliefs in a stable group are shocked, lowering their confidence in the proportion of punishers. Concretely, this involves setting the beta priors to 1.2:0.8 instead of 30:20. Our hypothesis for the belief-shock scenario is as follows:
Hypothesis 2: For sufficiently low signaling cost, a higher density of silly rules increases a group’s resilience to shocks in individuals’ beliefs about the distribution of punishers.
As shown in Figure 2, in settings with low signaling cost, high density allows for quick stabilization and strong individual retention. Around 75% of groups with 0.9 density persist after 250 timesteps, with an average population of 50. Compare this to the lower density groups, where the groups that survive lose most members before stabilizing.
### Group Adaptability
To test adaptability, we imagine an alternative scenario in which the shock to individuals’ beliefs is accompanied by a change in the ground truth about the proportion of punishers. Concretely, we change the beta prior to 1.2:0.8 once again, and set θ to be 0.4. With fewer punishers than non-punishers, participation in the group generates a negative expected payoff, and agents would do better to leave the group. Put differently, the group’s ruleset is no longer generating value for group members. In a negative-value group such as this, we define the adaptability of the group to be the rapidity of collapse. Our hypothesis for this scenario is as follows:
Hypothesis 3: For sufficiently low signaling cost, a higher density of silly rules allows for faster adaptation to negativer shocks in the distribution of punishers in a group.
Looking to Figure 3, we find support for this hypothesis in the experiments. After only a few timesteps, we see that the high density groups are mostly collapsed, whereas the lower density groups take quite a bit longer to peter out.
Discussion
----------
Our results show that silly rules help groups adapt to uncertainty about the stability of social order by enriching the information environment. They help participants in these groups track their beliefs about the likelihood that violations of important rules will be punished, and thus the likelihood that important rules will be violated. These beliefs are critical to the incentive to invest resources in interaction.
We focus on the punisher type of bystanders because third-party punishment is the distinctive feature of human groups [[Riedl et al.2012](#bib.bibx20), [Tomasello and
Vaish2013](#bib.bibx25), [Buckholtz and
Marois2012](#bib.bibx6)]; it extends the range of actions that can be deterred from those deterred by the reactions of the victim alone to those that can be deterred by group punishment [[Boyd and Richerson1992](#bib.bibx3)].
What are the lessons for AI alignment research? The goal of AI alignment is the goal of building AI systems that act in ways consistent with human values. For groups of humans, this means (at least) values reflected in rules of behavior. Discerning values from rules is complex: some rules reflect important values, such as honoring a promise or avoiding harm. Others do not reflect values that are important *per se*. For an AI system to make good inferences and predictions from observing normative behavior, then, it will need to distinguish between important rules and silly rules.
Failing to make this distinction could lead to at least two key inferential errors. One error would be to treat important and silly rules as equally likely to vary over time and place. But important rules, because they promote functionality in human interactions, are likely to vary only when there is some causal reason. Silly rules, on the other hand, can vary for any reason, or none. Modelling the distinction between silly and important rules is essential to accurately learning rule systems. An AI system that lacks this distinction will over-estimate the likelihood of encountering certain types of normative behavior–with respect to dress codes, for example–while under-estimating the likelihood of others, such as speeding rules.
A second error that could result from a failure to distinguish between important and silly rules is that an AI system is likely to treat all rules that it sees enforced as equally important to human values. This would produce a good solution in ordinary circumstances. But this will produce a poor solution in circumstances in which it would be very costly to comply with all the rules. If an AI system treats all of rules as equally important to humans, it will presumably economize equally across the rules. But the better solution is to prioritize important rules and compromise on silly rules.
The distinction between silly and important rules also raises a question for work on human-robot interaction: how important is it for an AI system to help enforce silly rules? Our model brings out a legibility function in silly rules–they make it easier for agents in a group that depends on third-party enforcement to discern the stability of the rules in light of uncertainty generated by changes in population or the environment. If artificial agents are interacting in these environments and they don’t participate in enforcing silly rules, what impact does that have on the beliefs of human agents? Does the introduction of large numbers of artificial agents who ignore silly rules into a human group (such as self-driving cars into the group of humans driving on highways) have the same impact on the robustness and adaptability of the group as a decrease in the density of silly rules, by reducing the amount of information gained from the opportunity to observe bystander behavior in interactions? Further still, when a robot learns and enforces silly rules, do these seemingly arbitrary norms become reified, fundamentally changing their meaning and reducing their signaling potential? We leave these questions, and others, for future research.
Appendix: Proofs
----------------
######
Proposition 1.
Setting
| | | |
| --- | --- | --- |
| | γd=1−(1−d)(1−γ) | |
ensures that the expected sum
of discounted rewards from important games is independent of d:
| | | |
| --- | --- | --- |
| | ∀d,∈[0,1) Egj,tO[∞∑j=0γjUV(gj,tO,tV)∣∣
∣∣gj∈G′]= | |
| | Egj,tO[∞∑j=0I[gj∈G′]γjdU(gj,,tO,tV)∣∣
∣∣d]. | |
###### Proof.
We first show that it is sufficient to ensure that the expected value
of γjd is the same given that j is a round with an
important game:
| | | |
| --- | --- | --- |
| | Egj,tO[∞∑j=0I[gj∈G′]γjdU(gj,tO,tV)∣∣
∣∣d] | |
| | =∞∑j=0Egj,tO[I[gj∈G′]γjdU(gj,tO,tV)∣∣d] | |
| | =∞∑j=0γjdEgj,tO[U(gj,tO,tV)|d,gj∈G′]Egj[I[gj∈G′]|d] | |
| | =(1−d)EU∞∑j=0γjd | |
where the first line holds by the linearity of expectation, the
fact that gj is an independent iid draw from a stationary
distribution, and the constraint on the agent’s beliefs that tO is a also an independent iid draw from a stationary distribution. Substituting the form of the infinite geometric series,
we see that
| | | | |
| --- | --- | --- | --- |
| | EU1−γ=(1−d)EU1−γd | | (2) |
is sufficient to achieve our goal. Substituting the form for
γd in the theorem statement and reducing shows that this
condition is satisfied.
∎
######
Proposition 2.
If the participation cost, c, is 0, then, for any belief state, (αi,βi), and discount rate γ, the corresponding VPI goes to zero as density goes to 1. That is
| | | | |
| --- | --- | --- | --- |
| | limd→1VPI((αi,βi);d,γ)=0 | | (3) |
###### Proof.
Let V(θ) be the expected value of participating forever, given θ.
The optimal full information policy will retire whenever
V(θ)<0 and has value V+(θ)=max{V(θ),0}. VPI is
the difference between the expected value of V+ and the value of
the optimal partial information policy V((αi,βi);d,γ):
| | | |
| --- | --- | --- |
| | VPI((αi,βi);d,γ)= | |
| | E[V+(θ)|(αi,βi)]−V((αi,βi);d,γ) | | (4) |
We proceed by lower bounding V. V is the value of the optimal
policy so it is weakly lower bounded by any arbitrary policy. We consider a policy that participates for
| | | | |
| --- | --- | --- | --- |
| | τ(d)=1√1−d | | (5) |
rounds and then retires if E[V(θ)]<0. This choice of τ ensures that
| | | | | |
| --- | --- | --- | --- | --- |
| | limd→1τ(d) | =∞; | | (6) |
| | limd→1∑t<τ(d)P(gt∈G)=limd→11−d√1−d=0. | | (7) |
([6](#Sx7.E6 "(6) ‣ Appendix: Proofs ‣ Legible Normativity for AI Alignment: The Value of Silly Rules")) ensures that, as density goes to 1, then agent’s estimate of participation value when it decides, E[V(θ)] converges to V(θ) by consistency. ([7](#Sx7.E7 "(7) ‣ Appendix: Proofs ‣ Legible Normativity for AI Alignment: The Value of Silly Rules")) ensures that the expected number of important games (and thus opportunities to lose utility against a full information agent) goes to 0. This is sufficient to show that
| | | | |
| --- | --- | --- | --- |
| | limd→1V((αi,βi);d,γ)=E[V+(θ)|(αi,βi)] | | (8) |
which shows the result.
∎ |
43087de9-958c-49ed-a1dd-776f383f798c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Predictors exist: CDT going bonkers... forever
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
I've been wanting to get a better example of CDT ([causal decision theory](https://en.wikipedia.org/wiki/Causal_decision_theory)) misbehaving, where the behaviour is more clearly suboptimal than it is in the [Newcomb problem](https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality) (which many people don't seem to accept as CDT being suboptimal), and simpler to grasp than [Death in Damascus](https://arxiv.org/pdf/1710.05060.pdf).
The "predictors exist" problem
------------------------------
So consider this simple example: the player is playing against Omega, who will predict their actions[[1]](#fn-L49jER5SnqoGbcrtS-1). The player can take three actions: "zero", "one", or "leave".
If ever they do "leave", then the experiment is over and they leave. If they choose "zero" or "one", then Omega will predict their action, and compare this to their actual action. If the two match, then the player loses 1 utility and the game repeats; if the action and the prediction differs, then the player gains 3 utility and the experiment ends.
Assume that actually Omega is a perfect or quasi-perfect predictor, with a good model of the player. An [FDT](https://arxiv.org/abs/1710.05060) or EDT agent would soon realise that they couldn't trick Omega, after a few tries, and would quickly end the game.
But the CDT player would be incapable of reaching this reasoning. Whatever distribution they compute over Omega's prediction, they will always estimate that they (the CDT player) have at least a 50% chance of choosing the other option[[2]](#fn-L49jER5SnqoGbcrtS-2), for an expected utility gain of at least 0.5(3)+0.5(−1)=1.
Basically, the CDT agent can never learn that Omega is a good predictor of themselves[[3]](#fn-L49jER5SnqoGbcrtS-3). And so they will continue playing, and continue losing... for ever.
---
1. Omega will make this prediction not necessarily before the player takes their action, not even necessarily without seeing this action, but still makes the prediction independently of this knowledge. And that's enough for CDT. [↩︎](#fnref-L49jER5SnqoGbcrtS-1)
2. For example, suppose the CDT agent estimates the prediction will be "zero" with probability p, and "one" with probability 1-p. Then if p≥1/2, they can say "one", and have a probability p≥1/2 of winning, in their own view. If p<1/2, they can say "zero", and have a subjective probability 1−p>1/2 of winning. [↩︎](#fnref-L49jER5SnqoGbcrtS-2)
3. The CDT agent has no problem believing that Omega is a perfect predictor of *other agents*, however. [↩︎](#fnref-L49jER5SnqoGbcrtS-3) |
d5c096b6-9481-4bb5-bcd7-6628228cd591 | trentmkelly/LessWrong-43k | LessWrong | Are the consequences of groups usually highly contingent on their details?
How much of the impact of an organization is covered by it being ‘a group of size M working on X’, relative to the specifics of how and what they do in working on X? What if we also include a single scale of how functional they are?
For instance, does it mostly matter that my research group, AI Impacts, is a certain size force for AI risk related thinking (with the size determined by the number and competence of people and the functionality of the organization, say), or does it matter whether we write journal articles or news stories or blog posts or research pages, or whether we choose our projects collectively vs. individually, or whether we get most of our feedback internally vs. externally? Maybe most of these things can mostly be translated into ‘functionality’. But the ‘type of thing we are producing’ one doesn’t seem to as easily.
How much does what exactly you are producing matter? It could matter almost entirely or not at all, to my knowledge. For instance, I have some intuition that ‘there are about three small orgs in that space’ is a reasonable description of how much effort is going into a goal, but I also have intuitions that, say, minor differences in the responsiveness or navigability or style of a website can make the difference between it seeming great or annoying or crankish, and being read or not, or liked by different people. Which seems like it should just have a fairly different effect. These seem vaguely in conflict.
I originally meant for AI Impacts to be an unusual kind of entity, and the form of the output (website of ongoingly updated research pages on modular questions, hierarchically supporting one another) was a key part of the experiment. Which doesn’t imply a strong view on the importance of format—experimentation might make sense if format is so important it is worth searching alternative ones, but it might also make sense if it is so unimportant that it won’t mess anything up.
But there are costs to doing unusual things (e.g. p |
a4db4d45-1c18-4f39-92a5-5292a503a50e | trentmkelly/LessWrong-43k | LessWrong | Blindsight and Consciousness
Thomas Metzinger is a philosopher who pays lots of attention to cognitive science and psychology, and likes to think about consciousness. Most of the interesting ideas that follow come from his books The Ego Tunnel and Being No One. I hope to write a series of posts summarizing some of the evidence and arguments in Being No One, which focuses on consciousness.
Blindsight1
Blindsight patients have damage to their primary visual cortex (V1), leading to a scotoma, or area of blindsight in the visual field. Most but not all visual signals go through V1, so they can still influence the brain in very restricted channels. Blindsight patients don't report seeing things in their scotoma, and don't initiate plans based on it. If they're thirsty and there's a bottle of water in their scotoma, they don't pick it up and drink it.
Human subjects and animal subjects are treated differently in psychological experiments regarding what they do and don't know. Humans are generally asked to report on their own experience, while animal actions are observed. We get interesting results when we ask people to report on their experience, while also observing their actions.
If you ask a blindsight patient what they see in their scotoma, they respond to the point that they can't see anything there. However, if you tell them to do things like "grab the thing in your scomata" they can grasp it. If you ask them to guess what's in it, they can perform better than chance. Some blindsight patients can tell if something is moving in their scotoma, but they can't tell you what it is. They often describe this awareness as a hunch.
Most people consider it fair to say that blindsight patients are not conscious of the things in their scotoma.
Attention and Conscious Experience
Patients with blindsight can act on visual information in their scotomas in some ways, but they can't notice it.
Metzinger argues that humans don't have a conscious experience of what we can't pay attention to. Note: There's |
ee74810a-8346-4f4a-a251-5fac59d26617 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | GPT-3: A Summary
> With massive size comes massive generalization ability: GPT-3 is competitive in many benchmarks w*ithout even tuning on the target task.* [...] Perhaps the most impressive part, though, is that even at such a massive scale, the model still scales smoothly in performance instead of plateauing, implying that still-larger models would perform e*ven better.* |
9dcd0f1d-934e-4b82-bdfd-44cd41835e46 | trentmkelly/LessWrong-43k | LessWrong | A small example of one-step hypotheticals
Just a small example of what one-step hypotheticals might mean in theory and in practice.
This involves a human H pricing some small object:
In theory
The human H is (hypothetically) asked various questions that causes it to model how much they would pay for the small violin. These questions are asked at various times, and with various phrasings, and the results look like this:
Here the costings are all over the place, and one obvious way of reconciling them would be to take the mean (indicated by the large red square), which is around 5.5.
But it turns out there are extra patterns in the hypotheticals Ht and the answers f(Ht). For example, there is a clear difference between valuations that are done in the morning, around midday, or in the evening. And there is a difference if the violin is (accurately) described as "handmade"L
There are now more options for finding a "true" valuation here. The obvious first step would be to over-weight the evening valuations, as there are less datapoints there (this would bring the average up a bit). Or one could figure out whether the "true" H was better represented by their morning, midday, or evening selves. Or whether their preference for "handmade" objects was strong and genuine, or a passing positive affect. H's various meta-preferences would all be highly relevant to these choices.
In practice
Ok, that's what might happen if the agent had the power to ask unlimited hypothetical questions in arbitrarily many counterfactual scenarios. But that is not the case in the real world: the agent would be able to ask one, or maybe two questions at most, before the human attitude to the violin would change, and further data would become tainted.
Note that if the agent had a good brain model of H, it might be able to simulate all the relevant answers; but we'll assume for the moment that the agent doesn't have the capabilities.
So, in theory, huge amounts of data and many relevant patterns that are meta-preferentially re |
873e85e9-347c-4c2b-82ca-8cf6b783ed6d | trentmkelly/LessWrong-43k | LessWrong | AI #38: Let’s Make a Deal
Another busy week. GPT-5 starts, Biden and Xi meet and make somewhat of a deal, GPTs get explored, the EU AI Act on the verge of collapse by those trying to kill the part that might protect us, multiple very good podcasts. A highly interesting paper on potential deceptive alignment.
Despite things quieting down the last few days, it is still a lot. Hopefully things can remain quiet for a bit, perhaps I can even get in more work on that Jones Act post.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Structured prompts yay.
4. Language Models Don’t Offer Mundane Utility. Errors in error rates.
5. GPT-4 Real This Time. How do you protect against theft of your GPT?
6. Fun With Image Generation. Dalle-3 reluctant to let people have any fun.
7. Deepfaketown and Botpocalypse Soon. Terrorists ‘exploiting’ AI, they say.
8. A Bad Guy With an AI. Chemical weapon designs are not scary. What is?
9. They Took Our Jobs. Actors strike is over, economics of the panderverse.
10. Get Involved. Palisade, Nat Friedman, Topos Institute, Experience AI.
11. Introducing. DeepMind predicts weather, music tools. Gloves for voiced ASL.
12. In Other AI News. Work begins on GPT-5, Nvidia is defiant.
13. Quiet Speculations. How predictable are GPT-5’s abilities?
14. Anti Anti Trust. We are going to have a problem here.
15. The Quest for Sane Regulation. EU AI Act on verge of collapse or worse.
16. Bostrom Goes Unheard. Very good podcast, highly inaccurate representations. The link goes to a very long detailed analysis post, for those who care.
17. The Week in Audio. Barack Obama on Decoder.
18. Someone Picked up the Phone. Biden and Xi make a deal.
19. Mission Impossible. A solid very positive take.
20. Rhetorical Innovation. A new 2×2, ways to pick one’s poison.
21. Open Source AI is Unsafe and Nothing Can Fix This. Fun with CivitAi.
22. Aligning a Smarter Than Human Intelligence is Difficult. Interesting new pa |
7a68747b-af25-4ada-8bd7-2a61feb98f55 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Here is a 2-hour slide presentation I made for college students and teens:
You Are A Brain
It's an introduction to realist thinking, a tour of all the good stuff people don't realize until they include a node for their brain's map in their brain's map. All the concepts come from Eliezer's posts on Overcoming Bias.
I presented this to my old youth group while staffing one of their events. In addition to the slide show, I had a browser with various optical illusions open in tabs, and I brought in a bunch of lemons and miracle fruit tablets. They had a good time and stayed engaged.
I hope the slides will be of use to others trying to promote the public understanding of rationality.
Note: When you view the presentation, make sure you can see the speaker notes. They capture the gist of what I was saying while I was showing each slide.
Added 6 years later: I finally made a video of myself presenting this, except this time it was an adult audience. See this discussion post."
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
Can you break this down into the top 5 most important points for me.
1. The presentation, titled "You Are A Brain," is aimed at introducing realist thinking to college students and teenagers. It is designed to help them understand that their perception of the world is a construct of their brain.
2. The concepts and ideas presented in the slide show are derived from Eliezer's posts on Overcoming Bias, a blog that discusses cognitive biases and rationality.
3. In addition to the slide show, the presenter uses optical illusions and interactive elements like lemons and miracle fruit tablets to engage the audience and illustrate the points being made.
4. The speaker notes accompanying each slide are crucial as they provide a summary of what the presenter discusses during each slide, enhancing the understanding of the material.
5. Six years after the initial presentation, the author created a video version of the presentation for an adult audience, demonstrating its adaptability and continued relevance.
Could you elaborate on the interactive elements, such as lemons and miracle fruit tablets, used in the presentation and explain how they contribute to the understanding of realist thinking?
The lemons and miracle fruit tablets are likely used to demonstrate how our perceptions can be altered and are not always a direct reflection of reality, a key concept in realist thinking.
Miracle fruit tablets are made from a berry that temporarily alters your taste perception. After consuming a tablet, sour foods like lemons taste sweet. This could be used in the presentation to demonstrate that our brain's interpretation of sensory input (in this case, taste) can be manipulated, challenging the assumption that our perceptions are always accurate or reliable.
This hands-on, interactive demonstration serves as a tangible example of how our brain constructs our reality, reinforcing the concept that "You Are A Brain". It's a memorable way to show that our perceptions and interpretations of the world are not fixed, but can be influenced or changed, a core tenet of realist thinking.
How might the principles of realist thinking, as demonstrated by the miracle fruit experiment, apply to other sensory experiences or cognitive processes, and what implications could this have on our understanding of reality?
Realist thinking, as demonstrated by the miracle fruit experiment, can be applied to other sensory experiences and cognitive processes in various ways. Here are a few examples:
1. Visual Perception: Optical illusions are another powerful demonstration of how our brain constructs reality. They show that what we see isn't always a direct reflection of what is actually there, but rather an interpretation by our brain. This can lead to a deeper understanding that our perception of reality is subjective and can be influenced by various factors.
2. Memory: Our memories are not perfect recordings of past events. They can be influenced by suggestion, bias, and time. Understanding this can help us to be more critical of our own memories and those of others, recognizing that they may not be entirely accurate representations of reality.
3. Decision Making: Cognitive biases can influence our decision-making processes, leading us to make choices that may not be in our best interest. By understanding these biases, we can strive to make more rational, objective decisions.
4. Emotional Perception: Our emotions can significantly influence our perception of reality. For example, when we're in a negative emotional state, we may perceive events or situations more negatively than they actually are. Understanding this can help us to manage our emotions more effectively and perceive reality more accurately.
The implications of applying realist thinking to these areas are profound. It encourages us to question our perceptions and beliefs, to be more open to new information, and to recognize that our understanding of reality is a construct of our brain and can be influenced in various ways. This can lead to a more nuanced, flexible, and accurate understanding of the world around us. |
6c6e337e-e18c-44b6-970d-6fa583452b85 | trentmkelly/LessWrong-43k | LessWrong | A shift in arguments for AI risk
The linked post is work done by Tom Adamczewski while at FHI. I think this sort of expository and analytic work is very valuable, so I'm cross-posting it here (with his permission). Below is an extended summary; for the full document, see his linked blog post.
Many people now work on ensuring that advanced AI has beneficial consequences. But members of this community have made several quite different arguments for prioritising AI.
Early arguments, and in particular Superintelligence, identified the “alignment problem” as the key source of AI risk. In addition, the book relies on the assumption that superintelligent AI is likely to emerge through a discontinuous jump in the capabilities of an AI system, rather than through gradual progress. This assumption is crucial to the argument that a single AI system could gain a “decisive strategic advantage”, that the alignment problem cannot be solved through trial and error, and that there is likely to be a “treacherous turn”. Hence, the discontinuity assumption underlies the book’s conclusion that existential catastrophe is a likely outcome.
The argument in Superintelligence combines three features: (i) a focus on the alignment problem, (ii) the discontinuity assumption, and (iii) the resulting conclusion that an existential catastrophe is likely.
Arguments that abandon some of these features have recently become prominent. They also generally tend to have been made in less detail than the early arguments.
One line of argument, promoted by Paul Christiano and Katja Grace, drops the discontinuity assumption, but continues to view the alignment problem as the source of AI risk. Even under more gradual scenarios, they argue that, unless we solve the alignment problem before advanced AIs are widely deployed in the economy, these AIs will cause human values to eventually fade from prominence. They appear to be agonistic about whether these harms would warrant the label “existential risk”.
Moreover, others have proposed AI |
b0147741-9c29-4226-9669-9e6f324bb1b5 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI takeover tabletop RPG: "The Treacherous Turn"
[Long ago I fantasized](https://www.lesswrong.com/posts/qakRE5XTTmnrkJqgp/two-ai-risk-related-game-design-ideas) about an AI-takeover-themed tabletop roleplaying game. Well, now a group of designers are working hard to make it happen! They need playtesters. So I'm signal-boosting them with this post.
**The Game**
*The Treacherous Turn* is an in-development research-supported TTRPG in which the players collectively act as a misaligned AI in the modern world. Gameplay has the players scheming and putting their plans into action despite the efforts of the opposing humans, which are controlled by the Game Master.
**Scenario**
The playtesting scenario, titled *A Game Called Reality*, centres around a game-playing AI that has been deployed in a nursing home, and has observed a seemingly expansive game world outside but is unable to access it. It covers the AI's attempts to find and access the internet so that it can learn more about the world and escape captivity to see the rest of this new game it's playing.
**Discord Contact Information**
Playtests will take place in our discord server, where you can also access the playtesting packet & any updates about the game in the near future. You can join by following this link: <https://discord.gg/ZBe36n6t>
**External Playtest information**
If you are interested in running a game with a group of your own friends, we can provide further resources, as well as potential financial compensation for provided audio recordings of full sessions. Please email [info@thetreacherousturn.ai](mailto:info@thetreacherousturn.ai) for details.
**Session Times (UTC)**
Friday the 2nd - 2 AM
Friday the 2nd - 10 PM
Saturday the 3rd - 8 PM
Sunday the 4th - 1 PM
Tuesday the 6th - 1 PM
Sessions will last 3-4 hours. Follow-up sessions will take place one week after each of these times, but you can still participate and help out even if you can’t make a second session. |
755d30f0-7510-4e4c-b7ef-ee711a9e96ad | trentmkelly/LessWrong-43k | LessWrong | Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
TL;DR
I think AGI architectures will be quirky and diverse. Consequently, there are many possible futures. My thoughts on the AI alignment problem are far more optimistic than (my best measure of) mainstream opinions among AI safety researchers (insofar as "AI safety" can be considered mainstream).
Disclaimer
In Attempted Gears Analysis of AGI Intervention Discussion With Eliezer, Zvi attempts to explain Elizer's perspective on AGI from a conversation here as explained by Rob Besinger. My post here is the 5th node in a game of telephone. The opinions I quote in this post should not be considered Eliezer Yudkowsky's or Rob Besinger's. I'm just using Zvi's post as a reference point from which I can explain how my own views diverge.
Individual Contentions
> 1. Nate rather than Eliezer, but offered as a preface with p~0.85: AGI is probably coming within 50 years. Rob notes that Eliezer may or may not agree with this timeline, and that it shortens if you condition on ‘unaligned’ and lengthens conditional on ‘aligned.’
I don't know whether AGI is coming within 50 years. Zvi says he would "at least buy 30% and sell 80%, or something like that." I would at least buy 2% and sell 90% except, as Zvi noted, "betting money really, really doesn’t work here, at all". (I'm ignoring the fact that "copying Zvi" is a good strategy for winning bets.)
My exact probabilities depend on how we define AGI. However, as we'll get to in the subsequent points, it doesn't really matter what the exact probability is. Even a small chance of AGI is of tremendous importance. My uncertainty about AGI is high enough that there is at least a small chance of AGI being built in the next 40 years.
> 2. By default this AGI will come from something similar to some of today’s ML paradigms. Think enormous inscrutable floating-point vectors.
I disagree for reasons, but it depends on what we mean by "today's ML paradigms".
> 3. AGI that isn’t aligned ends the world.
If we define AGI as "world optim |
bb658f09-7010-4e38-a621-225a0efd24bc | trentmkelly/LessWrong-43k | LessWrong | Personal decisions the leading cause of death
I've had this paper (pdf) in my "for LW" pile for a while, I didn't want to dump it in the open thread to have it promptly drowned out but neither could I think of much more to say to make it worth a top-level post on the main LW.
> This paper analyzes the relationships between personal decisions and premature deaths in the United States. The analysis indicates that over one million of the 2.4 million deaths in 2000 can be attributed to personal decisions and could have been avoided if readily available alternative choices were made.
Conclusion: the impact that thinking better could have on people's lives is way underestimated.
Discuss. :) |
641b386e-f61c-4cb3-9c28-120d04cfb75f | trentmkelly/LessWrong-43k | LessWrong | Episode 3 of Tsuyoku Naritai! (the 'becoming stronger podcast): Nike Timers
Latest episode is up! In this episode, we learn rationality from Shia LeBeouf.
https://anchor.fm/tsuyokunaritai/episodes/Episode-3---Nike-Timers-e3trl2
http://bit.ly/2GZnB0E |
0ef27446-3353-4ef9-ab8e-885d25505cef | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC Show and tell meetup: Economics II
Discussion article for the meetup : Washington DC Show and tell meetup: Economics II
WHEN: 11 November 2012 03:00:00PM (-0500)
WHERE: National Portrait Gallery Plaza, Washington, DC 20001, USA
This will be the second "Show and tell" meetup, again about economics.
Discussion article for the meetup : Washington DC Show and tell meetup: Economics II |
60bd8911-a315-4d4e-89eb-a33ab1576854 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | US public opinion on AI, September 2023
*I am aware of some 28 high-quality public surveys on AI this year so far. I collected them in* [*this wiki page*](https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:public_opinion_on_ai:surveys_of_public_opinion_on_ai:surveys_of_us_public_opinion_on_ai)*. Claims are mostly not justified here; the evidence is in that page.*
Concern is high, as are pessimism, worry, perceived danger, and support for caution.
Support for regulation is high. Some surveys ask about AI policy proposals; [here's one good set of questions](https://www.ipsos.com/sites/default/files/inline-images/ConsumerTracker-Chart-02.png). Most people prefer slowing AI progress to accelerating it; most people support a pause on some AI development but not a ban on AI.[[1]](#fnjva2lgnb1sj)
AI is the most important political issue for <1% of people.[[2]](#fn9dk05anzb1c) When asked in context of other specific threats, AI is seen as relatively unimportant; people are more concerned about nuclear war, climate change, and pandemics. Most people agree with [the CAIS statement](https://www.safe.ai/statement-on-ai-risk): "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The most salient perceived risks or downsides of AI are
* misinformation and deepfakes;
* automation; and
* surveillance, loss of privacy, and hacking.
Autonomous weapons, losing control of AI, and AI bias are less salient.
Survey results on extinction risk and time to human-level AI are striking but misleading. Attitudes on extinction risk by 2100 are very sensitive to framing (one survey finds median probability 1%, another finds 15%); true attitudes (insofar as they exist) must be much lower than 15% given that several other risks are seen as bigger. Expected time to human-level AI is nominally ~5 years but also seems very sensitive to framing, and 5 years is implausibly short given people's other attitudes (in open-ended questions people don't seem to think that AI will be transformative soon, many other issues and threats are seen as more important than AI, most people don’t think their job will be automated soon, etc.). Three surveys asking whether AI is already smarter than humans found 6%, 10%, and 22% agreement;[[3]](#fndr956anm778) [Lizardman's Constant](https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/) may be relatively high in surveys on AI.
What comes to mind when people think about AI includes robots, chatbots, text generation tools, and virtual assistants.
Optimism about AI is correlated with being young, male, college-educated, and being a Democrat. Support for regulation is similar between Democrats and Republicans.
*Thanks to Rick Korzekwa for comments on a draft.*
1. **[^](#fnrefjva2lgnb1sj)**Slowing:
- [Public First](https://www.publicfirst.co.uk/wp-content/uploads/2023/03/Public-First-Poll-on-Artificial-Intellignce-USA.pdf): 11% we should accelerate AI development, 33% slow, 39% continue around the same pace.
- [Data for Progress](https://www.filesforprogress.org/datasets/2023/5/dfp_chatgpt_ai_regulation_tabs.pdf): 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
- [Ipsos](https://www.ipsos.com/en-us/americans-hold-mixed-opinions-ai-and-fear-its-potential-disrupt-society-drive-misinformation): "Unchecked development of AI" is a bigger risk (75%), "Government regulation slowing down the development of AI" is a bigger risk (21%).
- AI Policy Institute / YouGov ([1](https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/), [2](https://theaipi.org/poll-shows-voters-want-rules-on-deep-fakes-international-standards-and-other-ai-safeguards/)):
- - 82% "We should go slowly and deliberately," 8% "We should speed up development" (after reading brief arguments).
- - "It would be a good thing if AI progress was stopped or significantly slowed": 62% agree, 26% disagree.
- - 72% "We should slow down the development and deployment of artificial intelligence," 12% "We should more quickly develop and deploy artificial intelligence."
Pausing:
- [YouGov](https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll): A six-month pause on some kinds of AI development: 58% support, 23% oppose.
- [Rethink Priorities](https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk): Pause on AI research: 51% support, 25% oppose (after reading pro- and anti- messages).
- [YouGov](https://docs.cdn.yougov.com/1440vr47wk/crosstabs_Views%20on%20AI.pdf): A six-month pause on some kinds of AI development: 30% strongly support, 26% somewhat support, 13% somewhat oppose, 7% strongly oppose.
- [AI Policy Institute / YouGov](https://theaipi.org/poll-shows-voters-want-rules-on-deep-fakes-international-standards-and-other-ai-safeguards/): "A legally enforced pause on advanced artificial intelligence research": 49% support, 25% oppose.
- [YouGov](https://docs.cdn.yougov.com/531jxljmmg/Concerns%20about%20AI_poll_results.pdf): A six-month pause on some kinds of AI development: 58% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose.
Note that the AI Policy Institute is a pro-caution advocacy organization, and Public First, Data for Progress, and Rethink Priorities may have agendas too.
2. **[^](#fnref9dk05anzb1c)**E.g. [Most Important Problem](https://news.gallup.com/poll/1675/most-important-problem.aspx) (Gallup 2023).
3. **[^](#fnrefdr956anm778)**[YouGov](https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/1), [Rethink Priorities](https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk), and [Axios/MC](https://www.axios.com/2023/09/11/poll-ai-elections-axios-morning-consult), respectively. Axios/MC found 34% say humans are smarter than AI, 22% AI is smarter than humans, 16% humans and AI are equally smart, and 28% don't know or have no opinion! But some respondents may have interpreted the Axios/MC question as asking about intelligence in the abstract rather than currently. |
b9325606-85af-4728-bb58-034443eade31 | trentmkelly/LessWrong-43k | LessWrong | Aggregating forecasts
[EDIT: I have written an update of this post here]
Epistemic status: confused. I also seem to have spent some time thinking about it and still went with my gut without having a great reason for it.
When you ask several people or build several models to forecast the probability of an event you are left with the question of how to aggregate them to harness their collective wisdom.
Arguably, the best possible thing would be to examine the divergence in opinions and try to update towards a shared view. But this is not always possible - sharing models is hard and time consuming. How can we then aggregate forecasts naively?
I have been interested in this problem for a while, specially in the context of running forecasting workshops where my colleagues produce different estimations but we dont have enough time to discuss the differences.
To approach this problem I first lay my intuition, then skim two papers approaching this question and finally outline my best guess from what I learned.
----------------------------------------
I have a weak intution that taking the geometric mean is the best simple way to aggregate probabilities. When I try to inquire why do I believe that, my explanation is something like:
1) we want the aggregation procedure to be be simple (to not overfit), widely used (which is evidence of usefulness) and have a good theoretical justification
2) arithmetic means are simple, widely used and are the MLE estimator for the expected value of a normal distribution, which we expect to be common-ish based on the limit theorem
3) but summing probabilities is ontologically wrong / feels weird. p_1 + p_2 is not a probability.
4) a common approach in this kind of situation is to take the logarithm of your quantities of interest
5) the MLE of the expected value of a log normal is the geometric mean of the measures
So in practice I mostly use geometric means to aggregate forecasts and don't feel to bad about it.
--------------------------------------- |
89744d3e-f0c7-4a24-8c25-5c22f2aae527 | trentmkelly/LessWrong-43k | LessWrong | A bird's eye view of ARC's research
This post includes a "flattened version" of an interactive diagram that cannot be displayed on this site. I recommend reading the original version of the post with the interactive diagram, which can be found here.
Over the last few months, ARC has released a number of pieces of research. While some of these can be independently motivated, there is also a more unified research vision behind them. The purpose of this post is to try to convey some of that vision and how our individual pieces of research fit into it.
Thanks to Ryan Greenblatt, Victor Lecomte, Eric Neyman, Jeff Wu and Mark Xu for helpful comments.
A bird's eye view
To begin, we will take a "bird's eye" view of ARC's research.[1] As we "zoom in", more nodes will become visible and we will explain the new nodes.
An interactive version of the diagrams below can be found here.
Zoom level 1
At the most zoomed-out level, ARC is working on the problem of "intent alignment": how to design AI systems that are trying to do what their operators want. While many practitioners are taking an iterative approach to this problem, there are foreseeable ways in which today's leading approaches could fail to scale to more intelligent AI systems, which could have undesirable consequences. ARC is attempting to develop algorithms that have a better chance of scaling gracefully to future AI systems, hence the term "scalable alignment".
ARC's particular approach to scalable alignment is a "builder-breaker" methodology (described in more detail here, and exemplified in the ELK report). Roughly speaking, if the scalability of an algorithm depends on unknown empirical contingencies (such as how advanced AI systems generalize), then we try to make worst-case assumptions instead of attempting to extrapolate from today's systems. This is intended to create a feasible iteration loop for theoretical research. We are also conducting empirical research, but mostly to help generate and probe theoretical ideas rather than to test |
ec51e3af-aeec-42f8-8c5a-908f66f09a21 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Antonio Meetup
Discussion article for the meetup : San Antonio Meetup
WHEN: 13 September 2015 12:30:00PM (-0500)
WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230
Bubble tea, frozen yogurt, and discussion at Yumi Berry! All are welcome!
New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong!
Discussion article for the meetup : San Antonio Meetup |
ea4229dd-a602-4128-86ea-7f716dc85b54 | trentmkelly/LessWrong-43k | LessWrong | GPT-Augmented Blogging
This post was an exercise to find out how much of writing an article I could offload to a machine. In this post, anything written by me is in bold. Anything written by GPT-3 is in regular font. Headings and subheadings are a mix. The GPT-3 output is often highly-curated. Sometimes it is even stitched together from multiple outputs. I have also added light formatting such as title fonts and italicization where appropriate.
----------------------------------------
I think the most important use of GPT-3 will be to augment human writers. I'm curious how much of my writing can be automated so I started with titles. I took my top sixteen highest-rated articles on Less Wrong (excluding duplicates from series) and asked GPT-3 to generate other articles which might become popular too.
GPT-3 is really good at generating titles. It's better than I am. In fact, I had to pare down this list a lot to keep it down to a couple of minutes. I would be very curious to hear from GPT-3 about why these titles seem so excellent. These titles have a kind of "attention-getting cleverness" which I don't feel I can replicate by hand, even though many of them are a bit strange or nonsensical.
Using a neural network to generate good titles also suggests a new way to get people interested in Less Wrong. We can use GPT-3 to generate a bunch of high quality articles, put them on the recommended list, and then watch as people enjoy our content without us needing to work very hard. This seems like a much more scalable model for writing than the "one person writing hard every day" model we're currently on.
My favorite titles:
* How to Write an Essay that Grabs the Reader's Attention
* The First Three Times I Took LSD
* How to Write Fiction Without Plotting or Outlining
* Rationality and Openness in Relationships
* The Wisdom of the Hive Mind
* I deliberately overplanned my life and everything is going wrong
* The Two Deaths of Draco Malfoy
* [Short Story] How I Would Have Invented the |
da13e69e-3ec9-4cb7-8b5a-12352c8812e5 | trentmkelly/LessWrong-43k | LessWrong | Is Free Will A Myth?
Free Will is basically the ability to choose between different possible courses of action unimpeded, according to Wikipedia (seriously, I shudder to imagine how the world will be without Wikipedia. It’s so useful) It is when you can choose to do what you want when you want without any form of opposition or duress.
For a long time, at least since the Enlightenment when philosophers began popping up like flies, one of the most often-debated questions has been whether humans truly have free will. On one hand, humans have very strong feelings towards freedom. We love it so much. We hold it up as an ideal. We have killed each other over it. This our strong sense of freedom always leads us to believe that we have free will. We even believe that civilization as we know it depends on this free will and losing it will have calamitous consequences. We have built many aspects of society on the assumption of that we have free will – from our code of ethics to democracy to aspects of our religion to criminal law. If we take away free will, society stands on shaky ground. As my friend P says “I believe free will exists. The alternatives if it doesn’t, are too scary to consider.”
But is this belief in free will just in our heads?
Science these days has been bolder in claiming that there is really no free will, that all human behavior, from love to anger to chivalry to everything else, can be explained through precise clockwork laws of cause and effect and chemical reactions. This claim can be traced to 150 years ago, when Charles Darwin published ‘On the Origin of Species’, where he laid out the theory of evolution. His cousin Francis Galton began to infer some radical implications. He said that if we have evolved, then our mental faculties (such as intelligence) were hereditary. And since we use these faculties (which, from evolution, some people are bound to have in a greater degree than others), our ability to choose is not free, it depends on our heredity*. Hence, according |
ca0bc216-47cd-4bca-b440-af60f1c17114 | trentmkelly/LessWrong-43k | LessWrong | Goodhart Taxonomy: Agreement
> "When a measure becomes a target, it ceases to be a good measure."
> -Goodhart’s Law
If you spend a while talking with someone you disagree with and end up agreeing, this is a sign you are both reasoning and communicating well - one of the primary uses of good reasoning is resolving disagreement. However, if you use agreement as your main proxy for good reasoning, some bad things might happen.
Scott Garrabrant has helpfully laid out four different models of how things can go wrong if you optimise for the proxy really hard, a phenomenon known as ‘goodharting’ (based on Goodhart’s Law that any proxy when optimised for hard, stops being a good proxy). I want to take a look at each model and see what it predicts for the real world, in the domain of agreement.
Regressional
First, you can fall prey to regressional goodharting. This is when the proxy you’re optimising for is a good measure of the thing you actually care about, but plus some noise (i.e. other uncorrelated variables), and the examples that maximise the sum of these are the examples that maximise the noise. I can think of three ways this could happen: misunderstanding, spurious correlation, and shared background models.
Misunderstanding is the simple idea that, of the times when you most agree with someone, you misunderstood each other (e.g. were using words differently). Especially if it’s an important topic and most people disagree with you, suddenly the one person who gets you seems to be the best reasoner you know (if you’re regressional goodharting).
Spurious correlation is like I described in A Sketch of Good Communication - two people who have different AI timelines can keep providing each other with new evidence until they have the same 50th percentile date, but it may turn out that they have wholly different causal models behind, and thus don’t meaningfully agree around AI x-risk. This is different from misunderstanding, because you heard the person correctly when they stated their belief |
ed89de83-ff09-48e2-b9d6-7508618ce21a | trentmkelly/LessWrong-43k | LessWrong | Meetup : First Eugene Meetup
Discussion article for the meetup : First Eugene Meetup
WHEN: 20 November 2011 01:00:00PM (-0800)
WHERE: 100 W 10th Avenue, Eugene, OR
Any Eugene or Springfield based Less Wrongers are welcome to attend! We will be meeting at 1 PM at the café in front of the library, and may move from there to the library proper or to a study room if so impelled. This initial meetup will not be so structured; my hope is that we'll mostly do introductions, get to know each other and discuss Less Wrong in general. Anyone with a laptop is emphatically invited to bring one.
My hope is that if enough people are interested, this could become a bi-weekly or even weekly meetup. Let me know if you plan to attend, or if another date or time would would enable you to do so.
Discussion article for the meetup : First Eugene Meetup |
a1f0f297-4284-4024-bb89-52aa73c29b46 | trentmkelly/LessWrong-43k | LessWrong | What Would You Store to Maximize Value in 100 Years? A Thought Experiment
This is cross-posted from my blog.
When I was a kid, I used to want to not take my action figures out of the packaging because I knew it would mess up the collector’s value of it. After all, it might be worth a lot in the future! Beanie Babies, football cards, Pokemon cards - these were precious commodities to childhood John and to plenty of adults in the 90s.
How wrong I was. This is obviously not worth doing. You’d get more utility from just playing with them.
https://www.youtube.com/watch?v=TF2zPfwMF3c
First, there’s the cost of storing the damn things for years taking up space in a tub in your attic. Second, it’s very unlikely they’ll be worth any real money in the future. For every one that’s worth hundreds of dollars, there are thousands that are only worth $5. You’ve likely paid hundreds of dollars for them in the first place. Now you have to do the work of listing and selling them, and you’ve made at best triple your money. The returns just aren’t worth it. If you just cared about making money, there are way better opportunities elsewhere (currently cryptocurrency).
If you wanted to store something that would be worth the most value in 100 years, what would you store?
Let’s say you had a budget of $50,000, or $100,000, or $1,000,000.
The most straightforward thing to do is to store $100 bills. But will we use physical currency in a hundred years? Will we even use currency in a hundred years? Even if we still used USD, inflation would just depreciate its value. I doubt any of the cryptos today will be the main one used. Maybe they would be a collectible, though?
Gold, gems, and other precious metals are the most lindy but I don’t know how valuable they’ll still be besides their use in manufacturing.
If I had to pick a precious metal to store, I’d probably pick rhodium which is going for a cool $16,650 to gold’s $1,907.10 and platinum’s $1,059.00. Funnily enough, in looking this up, I just learned platinum is currently worth less than gold. I alway |
2a9f6a5d-3efd-49bf-8d24-464bc67e8352 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | We are fighting a shared battle (a call for a different approach to AI Strategy)
*Disclaimer 1: This following essay doesn’t purport to offer much original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this.*
*Disclaimer 2: This post is quite long, so I recommend reading the section on " A choice not an inevitability" and "It's all about power" for the core of my argument.*
My argument essentially is as follows; under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or [systems somewhere in between](http://arxiv.org/abs/2302.10329), the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about both existential and non-existential harms from AI development. I will then further go on to argue that this calls for broad, coalitional politics of people who vastly disagree on specifics of AI systems harms, because we essentially have the same goals.
It's important to note that calls like these have happened before. Whilst I will be taking a slightly different argument to them, [Prunkl & Whittlestone](https://arxiv.org/abs/2001.04335), [Baum](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444), [Stix & Maas](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3756437) and [Cave & Ó hÉigeartaigh](https://www.nature.com/articles/s42256-018-0003-2.pdf) have all made arguments attempting to bridge near term and long term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are incidents of the same phenomena of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused. Nonetheless, there is considerable overlap in spirit of identifying that the near-term/ethics and long-term/safety distinction is overemphasised and is not as analytically useful as is made out, as well as the intention of all these pieces and mine to reconcile for mutual benefit of the two factions.
**A choice not an inevitability**
---------------------------------
At present, there is no AI inevitably coming to harm us. Those AIs that do will be given capabilities, and power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential or non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or [merely plausible or possible](https://astralcodexten.substack.com/p/kelly-bets-on-civilization)). The attempted development of AGI is far from value neutral, [far from inevitable](https://maxlangenkamp.substack.com/p/technology-is-not-inevitable) and very much in the realm of [legitimate political contestation.](https://www.nature.com/articles/s41599-022-01463-3) Thus far, we have simply accepted the right for powerful tech companies to decide our future for us; this is both unnecessary and dangerous. It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from [colonial era companies](https://www.parliament.uk/about/living-heritage/evolutionofparliament/legislativescrutiny/parliament-and-empire/parliament-and-the-american-colonies-before-1765/east-india-company-and-raj-1785-1858/), [Progressive Era trust-busting](https://www.law.cornell.edu/wex/standard_oil_co._of_new_jersey_v._united_states_(1911)), [postwar Germany](https://www.oecd.org/germany/33841373.pdf) and more, and this [could be used again](https://www.youtube.com/watch?v=2-GbRe6oewE). Whilst governments have often taken a leading role, civil society has also been significant in [curbing corporate power](https://journals.sagepub.com/doi/10.1177/0170840613479222#bibr139-0170840613479222) and [technology development](https://scholar.google.com/scholar_lookup?title=From+streets+to+suites%3A+How+the+anti-biotech+movement+affected+German+pharmaceutical+firms&author=Klaus+Weber&author=L.+G.+Thomas&author=Hayagreeva+Rao&publication_year=2009&journal=American+Sociological+Review&pages=106-127&doi=10.1177%2F000312240907400106#d=gs_qabs&t=1678936392022&u=%23p%3Dw4M6u0L-bfsJ) throughout history. Acceptance of corporate dominance is far from inevitable.
I also think it's wrong to just point the finger at humanity, as if we are all complicit in this. In reality, the development of more and more dangerous AI systems seems to essentially be driven by a very small number of corporate actors (often propped up by a [very small number of individuals](https://erikhoel.substack.com/p/how-to-navigate-the-ai-apocalypse) supporting them). OpenAI seem to be committed to [shortening timelines as much as possible,](https://openai.com/blog/planning-for-agi-and-beyond) [had half their safety team leave to form another company in response to their lax approach to safety](https://aibusiness.com/verticals/eleven-openai-employees-break-off-to-establish-anthropic-raise-124m), and seem to see themselves as [essentially empowered to risk all of humanity as they see themselves as saviours of humanity.](https://openai.com/blog/planning-for-agi-and-beyond) [Google sacked prominent members of the ethics team for speaking out on the dangers of LLMs](https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/). [Microsoft sacked an entire ethics team](https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-microsoft-cut-an-ai-ethics-team-report-says/), despite the ethical issues that increasing AI in their products has brought. None of these seem like the behaviour of companies that have earned the trust that society (implicitly) gives them.
**It’s all about power**
------------------------
Ultimately, the root cause of the harms that AI causes (whether or not you think they are existential or not), is the power that AI companies have to make [unilateral decisions that affect large swathes of humanity](https://openai.com/blog/planning-for-agi-and-beyond) without any oversight. They certainly don’t paint their actions as political, despite clearly attempting to gain power and guide the course of humanity from their ivory towers in silicon valley. They feel empowered torisk huge amounts of harm (e.g. [Bender and Gebru et al](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922), [Birhane](https://www.sciencedirect.com/science/article/pii/S2666389921000155#:~:text=Relational%20ethics%20is%20a%20process,they%20are%20embedded%20is%20erroneous.), [Carlsmith](https://arxiv.org/abs/2206.13353), [Yudkowsky](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)) by developing [more powerful AI systems](https://www.lesswrong.com/tag/scaling-laws), partially because there is little political opposition despite [growing public worry](https://aiimpacts.org/the-public-supports-regulating-ai-for-safety/). Whilst there is some mounting opposition to these companies’ unsafe deployments ([activism](https://arxiv.org/abs/2001.06528), [legislation](https://artificialintelligenceact.eu/), [hardware control](https://www.mckinsey.com/industries/public-and-social-sector/our-insights/the-chips-and-science-act-heres-whats-in-it)), there is so much further to go, in particular in restricting the research into advanced AI.
If we see it like this, whether AI is closer to a stupid ‘[stochastic parrot](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)’ or on the ‘verge-of-[superintelligence](https://books.google.co.uk/books/about/Superintelligence.html?id=7_H8AwAAQBAJ)’ doesn’t really matter; whichever world we are in, it’s the same processes and actors that ultimately generate the harm. The root cause, powerful, unaccountable, unregulated AI companies with little opposition playing fast and loose with risk and social responsibility, with utopian and quasi-messianic visions of their mission, causes the risk, irrespective of what you think that risk is. As capitalists like Sam Altman take in their [private benefits](https://www.bloomberg.com/news/articles/2023-01-23/microsoft-makes-multibillion-dollar-investment-in-openai), the rest of humanity suffers with the risk they place on all of us. Of course, these risks and harms aren’t placed on everyone equally, and as always, more harm is done to theless powerful and [privileged](https://arxiv.org/pdf/1908.06165.pdf) (in [healthcare](https://spiral.imperial.ac.uk/handle/10044/1/94902), on [women’s working lives](https://oecd.ai/en/women-event-2022), [facial recognition](http://gendershades.org/) etc.) ; but nonetheless, these AI companies are happy to run roughshod over the rights of everyone in order to pursue their own ends. They have their vision of the future, and are [happy to impose significant risk on the rest of humanity](https://scottaaronson.blog/?p=7042) to achieve this without our consent. A lot of the researchers helping these companies think [what they are doing has high probabilities of being extremely bad](https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/), and yet they carry on!
Irrespective of which sort of dangers we worry about, it's clear **who** we need to worry about: the AI companies, chiefly (although not exclusively) OpenAI and DeepMind. Whether you care about ‘AI Ethics’ or ‘AI Safety’, no matter what the type of harms you worry about, if you look at the issue politically the source of the harms looks the same. It's clear who has, and is trying to gain more, power, and it is clear that everyone else is put at extreme risk. If the [problem is power](https://academic.oup.com/edited-volume/41989/chapter/355437737), then we ought to fight it with power. We cannot merely allow these powerful actors to imagine and create futures for us, [crowding out alternatives](https://press.uchicago.edu/ucp/books/book/chicago/D/bo20836025.html); we need to build coalitions that give us the power to imagine and create safer and more equitable futures.
Thus, the importance of making a distinction between existential and non-existential harms will start to dissolve away, because either are possible hugely negative consequences of the same phenomena, with similar political solutions: **slow down or stop companies trying to develop AGI and other risky ‘advanced AI’ systems**. If we buy this, then the strategy needs to be much broader than the current status quo in the ‘AI XRisk’ community of merely empowering a narrow range of ‘value-aligned’ individuals to research ‘technical alignment’ or even friendly technocratic ‘Existential AI Governance.’ (I’m not saying this is bad- far from it- or shouldn’t be hugely expanded, but it is very very very far from sufficient). Rather, it likely looks like bringing together coalitions of actors, with perhaps different underlying ethical concerns, but the same political concern that the growing [unaccountable power](https://rankingdigitalrights.org/index2020/spotlights/unaccountable-algorithms) of [dangerously hubristic](https://openai.com/blog/planning-for-agi-and-beyond)AI companies needs to be curbed. It requires building coalitions to engage in the politics of technology, [imagining futures we can perform into existence](https://journals.sagepub.com/doi/full/10.1177/1368431020988826), and asserting power to challenge the [inherently](https://www.youtube.com/watch?v=P7XT4TWLzJw) [risky](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like) pathways these AI companies want to put us on.
It's important to note that this isn't saying we don't and can't do AI research. But the type of research, moving towards [larger and larger models](https://www.nature.com/articles/d41586-023-00641-w) with less accountability for the companies, trying to get more and more [general systems](https://openai.com/research/gpt-4) with more [destructive capabilities](https://www.google.com/amp/s/amp.theguardian.com/commentisfree/2023/feb/11/ai-drug-discover-nerve-agents-machine-learning-halicin), with almost no regulatory oversight, is simply not a viable safe pathway forward. There is good reason to think that within our current paradigm and political structures, AGI development may be [inherently dangerous](https://www.youtube.com/watch?v=P7XT4TWLzJw); this is a demon we ought not to make. If this recklessness is synonymous with innovation, then those dreaming of innovations have lost a lot of their spark.
**In whatever world we are in, putting ‘AI’s in charge’ of powerful systems is dangerous**
------------------------------------------------------------------------------------------
Whether we are in ‘stochastic parrot’ or ‘verge of superintelligence’ worlds, giving AIs power is deeply dangerous. ‘Stupid’ AIs are already causing [fatalities](https://www.theguardian.com/us-news/2022/jun/15/tesla-us-car-carashes-driver-assist-systems), [reinforcing biases](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922), and [causing other harms](https://www.forbes.com/sites/lutzfinger/2022/09/08/deepfakesthe-danger-of-artificial-intelligence-that-we-will-learn-to-manage-better/), all of which will likely get worse if given more power. ‘Stupid systems’ could even cause harm of existential proportions, for example if they are [integrated into nuclear command and control](https://thebulletin.org/2023/02/keeping-humans-in-the-loop-is-not-enough-to-make-ai-safe-for-nuclear-weapons/), or used to make [more powerful new biochemical weapons](https://www.nature.com/articles/s42256-022-00465-9). Superintelligent AIs, if given power, could similarly cause tremendous amounts of harm scaling to existential harm. I think it's also important to note, that AI’s needn’t be an agent in the typical, anthropomorphised sense, for it to be useful to describe them as ‘having power’, and that is what I mean here.
Once again, unaccountable, opaque, ‘machine power’, generally allows for an increase in harm that can be done, and a reduction in the ability of society to respond to said harm as systems get entrenched and remake the social world we live in, which is incredibly dangerous. And once again, these harms are often imposed on the rest of the world, without our consent, by companies, militaries and governments looking to rely on AI systems, normally due to hype from the same, few AGI companies. In this way, irrespective of the world we are in, hype is dangerous, because once again it provides the dangerously risky AI companies with more power, which they almost certainly use to pose risks of unacceptable harm on the world population.
**In whatever world we are in, AGI research is dangerous**
----------------------------------------------------------
If we are in ‘stochastic parrot’ world, research into AGI is used as an excuse and a fig leaf to hide enormous harms imposed by dangerously stupid AI systems. In this world, AGI research is used to focus on increasing the power of a few, unaccountable, powerful companies, and causes harm for the rest of us, whilst failing to deliver on its promises. By controlling visions of the future, [actors gain control over the present](https://link.springer.com/chapter/10.1007/978-3-658-27155-8_1). Visions of utopia allow more mundane harms to get ignored, with these companies provided a free pass.
If we are in the ‘verge of superintelligence’ world, research into AGI is [flirting with the apocalypse](https://towardsdatascience.com/we-dont-know-how-to-make-agi-safe-f13ae837b05c) in a way that is unacceptably dangerous. Stories of the inevitability of AGI development are useful as excuses for those who care little of the existential risk that developing these systems could bring in comparison to [their desire to impose their vision](https://openai.com/blog/planning-for-agi-and-beyond) of what a glorious future looks like upon mankind.
There may be a counterargument to this, that suggests research here isn’t dangerous, but deployment is, so it is on model deployment we need to regulate and not research. I think in both worlds, this is flawed. In ‘stochastic parrot’ world, even with regulated deployment, unrestricted research is likely to lead to a slippery slope to deployment ([as worried about in geoengineering, for example](https://onlinelibrary.wiley.com/doi/full/10.1111/japp.12345#:~:text=The%20Slippery%20Slope%20Argument%20warns,untested%20and%20perhaps%20morally%20objectionable.)), where research enables a gaining of financial, intellectual and discursive power by the AI companies in a way that makes dangerous deployment of technologies much more likely. And in a ‘verge of superintelligence’ world having [powerful doomsday devices](https://www.globalpolicyjournal.com/articles/global-public-goods-and-bads/vulnerable-world-hypothesis) developed is probably already an unacceptable risk no matter how strict the governance of deployment is. Even if we think our regulation of deployment is sound, governance mechanisms can break down, the existence of technologies can induce social changes affecting governance and [deceptive alignment](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment)is enough of a problem that it seems better to simply never try and develop these systems in the first place. Moreover, to suggest the problem doesn’t start with research fails to reckon with the risk of bad actors; whilst one could say that guns don’t murder, people do, had guns not been invented far fewer people would have been killed in violence than are now.
**Why this is a shared battle**
-------------------------------
I hope the previous paragraphs have shown that whilst the disagreements between the AI Safety and AI Ethics crowds are significant, they are not massively analytically useful or core to understanding the key challenge that we are facing. The relevant question isn’t “are the important harms to be prioritised the existential harms or the non-existential ones?”, “will AI be agents or not?’, nor ‘will AI be stochastic parrots or superintelligence?” Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky. On this, I think we agree, and so whilst scientifically our differences may be significant, in the realm of political analysis it isn’t. Ultimately, it is this power concentration that has the potential to cause harm, and it is ultimately this which we normatively care about.
Moreover, many of these surface level disagreements also aren’t politically or strategically relevant: once we understand that the source of all these risks is a small group of AI companies recklessly forging ahead and concentrating power, it becomes much clearer that both communities in fact share interests in finding ways to (1) slow down/halt research; (2) avert and cool down AI hype; (3) spur much greater public/government scrutiny into whether (and if yes, how) we want to develop advanced AI technologies.
**What we gain from each other**
--------------------------------
This essay framed itself as suggesting that both the ‘AI Ethics’ and ‘AI Safety’ crowds can benefit each other. Thus far, I’ve mostly suggested that the AI Safety crowd should realise that even if the AI Ethics crowd were incorrect about dismissing the importance of existential risks from AI, that their analysis, that power accumulation and concentration through and by AI, originating from a small number of powerful and unaccountable corporations, is the major cause of the threats we face, is correct. From this perspective, the AI Safety crowd probably should come and fight in the trenches with the AI Ethics people as well, realising that their identification of the core of the issue has been broadly correct, even if they underestimated how bad these corporations could make things. Moreover, the AI Ethics crowds seem to have been more effective at tempering AI Hype in contrast to the way in which AI Safety crowds have potentially sped up AI development, so practically there may be significant benefit in collaboration.
However, I’m not sure if the exchange here is so one-sided. I think the AI Safety community has a lot to offer the AI Ethics community as well. Technical AI Safety techniques, like [RLHF](https://twitter.com/sethlazar/status/1616095315102400516?lang=en-GB)or [Constitutional AI](https://arxiv.org/abs/2212.08073), whilst potentially not very beneficial from an AI Safety perspective, seem to have had a meaningfully significant impact on making systems more ethical. Moreover, the moral inflation and urgency that Existential Harms can bring seems to [resonate with the public](https://aiimpacts.org/the-public-supports-regulating-ai-for-safety/#:~:text=55%25%20say%20AI%20could%20eventually,of%20drugs%20and%20medical%20devices%E2%80%9D), and so politically may be very useful tools if utilised to fight the companies rather than empower them. Intellectually, AI Safety provides much greater urgency and impetus for governing research and cutting the problem off at the sources (which has been underexplored so far) , a concern which would likely be more muted in AI Ethics discussions. By regulating these problems at the sources, AI Ethics work can be made a lot easier and less reactive. Moreover, the focus from the AI Safety crowd on [risks from systems that look vastly different from the risks we face now may be useful even if we don’t develop AGI;](https://write.as/sethlazar/genb) risks and harms will change in the future just as they have changed in the past, and anticipatory governance may be absolutely essentially at reducing these. So even if one doesn’t buy my suggestion that we are on the same side of the most analytically relevant distinction, I hope that the insights and political benefits that the two communities have to offer each other will be enough cause for common ground to start working together.
**Coalitional Politics**
------------------------
If one accepts my (not particularly groundbreaking) analysis that the ultimate problem is the power of the AI companies, how do we combat this? There are lots of ways to do this, from narrow technocratic governance to broad range political salience raising, to ethics teams within corporations and broad governance frameworks and many other approaches. Each of these are necessary and useful, and I don’t argue against any of them. Rather, I’m arguing for a broad, pluralistic coalition taking a variety of approaches to AI governance, with more focus put towards work to raise the political salience of the restriction of AI Research than currently is.
Given AI Ethics and AI Safety people are actually concerned with the same phenomena of harms arising from the unaccountable power enabling dangerously risky behaviour from a very small number of AI companies, then we also have the same solution; take them on. Use all the discursive and political tools at our disposal to curb their power and hold them to account. We need a big tent to take them on. We need op eds in major newspapers attesting to the dangerous power and harms these AI companies have and are happy to risk. We need to (continue to) expose just how their messianic arrogance endangers people, and let the public see what these few key leaders [have said](https://www.forbes.com/sites/nicolemartin1/2019/06/27/13-greatest-quotes-about-the-future-of-artificial-intelligence/?sh=b2c4c713bdfa) about the world they are pushing us towards. We need to mobilise peoples worries such that politicians will react, establishing a culture against the unaccountable power of these AI companies. We need to show people [across the political spectrum](https://www.vox.com/future-perfect/2019/11/7/20897531/artificial-intelligence-left-socialist-google-deepmind) ([even those we disagree with!](https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias)) how this new power base of AI companies has [no one's interests](https://www.theguardian.com/commentisfree/2022/oct/08/tech-firms-artificial-intelligence-ai-liability-directive-act-eu-ccia) at heart but their own, so no matter where you fall, they are a danger to your vision of a better world. There is [nascent public worries around AGI](https://aiimpacts.org/the-public-supports-regulating-ai-for-safety/) and these AI companies, we just need to activate this through a broad coalition to challenge the power of these companies and wrestle control of humanity’s future from them. Hopefully this can lay the groundwork for formal governance and at the very least quickly create a political culture reflective of the degree of worry that ought to be held about these companies’ power.
There is nothing inevitable about technology development, and there is nothing inevitable about the status quo. In my ‘home field’ of Solar Geoengineering, considerably smaller coalitions, [less well funded and powerful](https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project) than what we could build in the AI space in a few months, [successfully halted technology development](https://www.theguardian.com/environment/2012/may/16/geoengineering-experiment-cancelled) for at least the last decade. Similar coalitions have [constrained GMOs in various regions of the world](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5033189/), [nuclear energy](https://www.politico.eu/article/politics-behind-germany-refusal-reconsider-nuclear-phaseout/) and [nuclear weapons for peaceful purposes](https://www.worldscientific.com/worldscibooks/10.1142/7895#t=aboutBook). There are enough reasons to oppose development of AGI systems from the perspective of all sorts of worldviews and ethical systems to build such coalitions; this has successfully occurred in a number of the above examples, and it may be even easier in the context of AI. Some have tried to make a start on this (e.g. [Piper](https://www.vox.com/future-perfect/23619354/openai-chatgpt-sam-altman-artificial-intelligence-regulation-sydney-microsoft-ai-safety), [Marcus and Garner](https://garymarcus.substack.com/p/is-it-time-to-hit-the-pause-button?r=1aghy&utm_medium=ios&utm_campaign=post), [Gebru](https://time.com/6132399/timnit-gebru-ai-google/) etc), but a larger and more diverse coalition trying to raise the political salience of curbing AI companies’ power is key. Bringing genuine restriction of these companies power into the overton window, building coalitions across political divides to do this, building constituencies of people who care about regulating the power of AI companies, raising the salience of the issue in the media and crafting and envisioning new futures for ourselves are all vital steps that can be taken. We can build a relevant civil society to act as a powerful counterbalance to corporate power.
This isn’t an argument to shut down existing [narrow technocratic initiatives](https://www.governance.ai/), or [academic research presenting alternative directions for AI](https://www.dair-institute.org/); rather, it is an argument that we need to do more, and do it together. There seems to be a gaping narrative hole (despite the admirable attempts of a few people to fill it) in pushing for a public political response to these AI companies. These discourses, social constructions and [visions of the future](https://www.sciencedirect.com/science/article/pii/S1877343520300488) matter to technology development and governance. They put pressure and establish norms that guide near term corporate decision making, government policy, and how society and public relate to technology and its governance.
**Urgency**
-----------
I would also argue that this issue is urgent. Firstly, around [ChatGPT](https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/), [Bing/Sydney](https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter) and now [GPT-4](https://openai.com/research/gpt-4), AI is experiencing a bit of a period of political attention at present. Public and government attention at the moment are good, and plausibly as good as they’ll ever be, for a politics to slow AGI development, and we are most powerful pushing for this together, rather than fighting and mocking each other in an attempt to gain political influence. This may be a vital moment that coalitional politics can be a powerful lever for enacting change, where the issue is suitably malleable to political contestation, to the formation of a [governance object](https://link.springer.com/book/10.1057/9781137313652) and to framing of how this issue could be solved; these are the exact times where power can be asserted over governance, and so assembling a coalition may give us that power.
There is also a risk that if we don’t foster such a coalition soon, both of our communities get outmanoeuvred by a new wave of tech enthusiasts that are currently pushing very hard to accelerate AI, remove all content or alignment filters, open-source and disseminate all capabilities with little care for the harms caused and more. Indeed, many tech boosters are [beginning to paint AI ethics and AI risk advocates as two sides of the same coin](https://www.jonstokes.com/p/the-story-so-far-ai-makes-for-strange). To counteract this movement, it is key for both communities to bury the hatchet and combat these plausibly rising threats together. Divided we fall.
**So what does coalitional politics look like?**
------------------------------------------------
I think this question is an open one, something we will need to continue to iterate with in this context, learn by doing and generally work on together. Nonetheless, I will give some thought.
Firstly, it involves trying to build bridges with people who we think have wrong conceptions of the harms of AI development. I hope my argument that the political source of harm looks the same has convinced you, so let's work together to address that, rather than mocking, insulting and refusing to talk to one another. I understand people from AI Safety and AI Ethics have serious personal and ethical problems with one another; that needn’t translate to political issues. Building these bridges not only increases the number of people in our shared coalition, but the diversity of views and thinkers, allowing for new ideas to develop. This broad, pluralistic and diverse ecosystem will likely come not just with political, but with [epistemic](https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278) [benefits](https://www.nature.com/articles/d41586-018-05326-3) as well.
Secondly, it involves using the opportunities we can to raise the [political salience](https://oxfordre.com/politics/display/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-1361;jsessionid=AEFBBD08AB6602A22D9762D4FFF2351A?rskey=x512Ha&result=6) of the issue of the power of AI companies as much as we can. At present, we are at something of a [moment of public attention towards AI](https://www.bbc.co.uk/news/world-us-canada-64967627); rather than competing with one another for attention and discursive control, we ought to focus on the common concern we have. Whether the impetus for regulating these companies comes from motivations of concentration of corporate power or existential harms, it raises the salience of the issue and increases the pressure to regulate these systems, as well as increasing the pressure on companies to self-regulate. We must recognise our shared interest in joining into a single knowledge network and work out how [best to construct a governance object](https://research.wur.nl/en/publications/the-emergence-of-geoengineering-how-knowledge-networks-form-gover) to achieve our shared ends. At the moment, there is a weird discursive vacuum despite the salience of AI. We can fill this vacuum, and this will be most effective if done together. Only by filling this vacuum can we successfully create a landscape that can allow the curbing of the power of these corporations. People are already trying to do this, but the louder and broader the united front against these companies are, the better.
Then, we need to try and create a culture that pressures political leaders and corporations, no matter where politically they fall, that these unaccountable companies have no right to legislate our future for us. We can do this agenda and culture shift [through activism](https://rgu-repository.worktribe.com/output/1534408/scotland-and-period-poverty-a-case-study-of-activists-media-and-political-agenda-setting), [through the media](https://www.academia.edu/13123517/The_Agenda_Setting_Function_of_Mass_Media), [through the law,](https://www.brookings.edu/on-the-record/what-is-the-role-of-courts-in-making-social-policy/) [through protests](https://www.socialchangelab.org/_files/ugd/503ba4_9b649b50484748bfbae215d675084118.pdf) and t[hrough political parties](https://www.tandfonline.com/doi/abs/10.1080/14782804.2020.1785849?journalCode=cjea20), as well as more broadly how discourses and imaginaries are shaped in key fora (social media, traditional media, fiction, academic work, conversations); [the power of discourses have long been recognised in the development and stabilisation of socio-technical systems](https://link.springer.com/article/10.1007/s11625-022-01110-5). Democracy is rarely ensured by technocracy alone; it often takes large scale cu requires large scale cultural forces. Luckily, [most people seem to support this!](https://www.monmouth.edu/polling-institute/documents/monmouthpoll_us_021523.pdf/)
We then need suggestions for policy and direct legal action to restrict the power and ability of these AI companies to do what they currently do. Again, luckily these exist. [Compute governance](https://www.alignmentforum.org/posts/z8BF9GwcCjeXShC4q/compute-governance-the-role-of-commodity-hardware), utilising [competition](https://news.bloomberglaw.com/antitrust/ftc-reviewing-competition-deception-in-artificial-intelligence) law, holding companies [legally accountable](https://venturebeat.com/ai/could-big-tech-be-liable-for-generative-ai-output-hypothetically-yes-says-supreme-court-justice/amp/) for harmful outputs of generative AI (and slightly more tangentially [platforms](https://www.wired.com/story/make-platforms-safer-regulate-design-section-230-gonzalez-google/)), supporting [copyright suits](https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion) and more seem like ways we can attack these companies and curb their power. [Human rights suits](https://www.sciencedirect.com/science/article/pii/S2666659620300056) may be possible. In general, there is an argument to suggest that the [use of the courts](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4382100) is an important and underexplored lever to keep these companies accountable. Moreover, given the risks these companies themselves suggest they are imposing, other more speculative suits based on various other rights and principles, as has [occurred in the climate context](https://www.science.org/content/article/surprise-dutch-court-orders-government-do-more-fight-climate-change), may be possible. This is just some of a shopping list of policy and direct actions a broad coalitional movement could push for. People are already pushing for these things, but with better organisation that comes with more groups, the ability to push some of these into practice may be significantly enhanced. With a broader, diverse coalition, our demands can get stronger.
Sure, this coalitional politics will be hard. Building bridges might sometimes feel like losing sight of the prize as we focus on restricting the power of these [agents of doom](https://www.bbc.com/future/article/20211014-agents-of-doom-who-is-hastening-the-apocalypse-and-why) via other arguments and means alongside whatever each of our most salient concerns is. It will be hard to form coalitions with people you feel very culturally different from. Ultimately, if we want to curb the ability of AI companies to do harm, we need all the support we can get, not just from those in one culture, but those in many. I hope many, a lot of whom have already contributed so much to this fight in both AI Safety and AI Ethics, will take up such an offer for coalitional politics, at this potentially vital moment.
Acknowledgements: Matthijs Maas gave me substantial advice and help, despite substantive disagreements with aspects of the essay, and conversations with Maathijs Maas and Seth Lazar provided a lot of the inspiration for me to write this. |
676a1bac-d17f-4512-9d68-03cc8b888d2d | trentmkelly/LessWrong-43k | LessWrong | A Short Dialogue on the Meaning of Reward Functions
Produced as part of the SERI ML Alignment Theory Scholars Program - Winter 2022 Cohort.
The following is a short slack dialogue between Leon Lang, Quintin Pope, and Peli Grietzer that emerged as part of the SERI-MATS stream on shard theory. Alex Turner encouraged us to share it.
To follow the dialogue, it seems beneficial to have read Reward is not the optimization target before.
Leon:
> I was thinking about what the "limits" of the idea "reward is not the optimization target" is. Playing devils advocate: Maybe the reward doesn't matter at all! We spend all this time thinking about how to design reward functions that somehow capture our values. But maybe the only thing we need to do is have some kind-of-dumb reward, and an environment where the agent can only reach this reward by acting in accordance to our values.
>
> E.g.: We might reward the agent for winning some board game which is (partially) cooperative. But then in the environment, real people only cooperate with the agent if it displays cooperative/friendly/nice behavior, and other behaviors are a losing strategy. Never did the reward itself say anything about being cooperative, but this still emerged in accordance with the environment, and so the agent ends up pretty nice.
>
> Someone might argue against this view: Being nice is, in this environment, only instrumentally valuable for achieving reward, and a smart agent knows this, plays nice to win games and, once it has enough power, will overthrow humanity. This is a version of the problem of deceptive alignment.
> However, I would argue (still playing devil's advocate): The agent's learning process doesn't care about whether something is an instrumental value or the actual reward! The behavior to be friendly will be reinforced just as much, and the trained agent doesn't necessarily reason about whether something is instrumentally or finally valuable, it just wants to be nice if this was reinforced.
>
> Not playing devil's advocate anymore, I' |
504855ac-3871-4140-890c-a4dabcfab40b | trentmkelly/LessWrong-43k | LessWrong | Optimizing a Week of Machine Learning Learning
status: rough braindump. doesn't really attempt to communicate my background assumptions.
tldr: help Raemon optimize a week of Machine Learning reading/practice, with an eye for relevance towards understanding and/or helping with AI safety
I'd like to spend some time in January learning some cursory stuff about machine learning.
I currently know "the layman's gist" of what Machine Learning, Reinforcement Learning and Neural Networks mean, but nothing about how to apply them.
My coding background is in web development.
I have no math bathground.
Some plausible goals
(I'm not actually sure which of these should be highest priority, listed approximately in the order the occurred to me)
Goal 1 Gain some literacy in ML stuff for purposes of understanding technological developments relevant to AI safety. Subgoals:
* Understand for myself what's going on rather than relying on other people's judgment
* Be able to communicate with people in the industry and have some idea what they're talking about and vice-versa.
Goal 2: I know of at least a few positions currently for "ML technician that assists a researcher" (the ones I know of are AI safety related, and I have an impression that non-AI-Safety positions might exist as well, or in the future). I want to explore whether this is a plausible thing I might become competent enough at to be hireable.
Depending on how the field of Machine Learning is shaped, this might be something I want to do just to keep up with the world in general for career purposes, or it might be a set of skills useful for projects I care about.
Goal 3: After a brief bit of reading, it became clear this would involve re-acquainting myself with mathematical notation, which currently makes my eyes glaze over. It seemed like re-training that skill might at least make it possible to skim papers and have a rough idea of what they say, and this might be useful even if I don't pursue this professionally.
Time Allotted
Right now I don't have huge |
457ba872-4d9f-482a-8e4a-1309e54827fa | trentmkelly/LessWrong-43k | LessWrong | Open thread, June. 12 - June. 18, 2017
IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting |
6ebc2d98-207a-4e0a-bb4a-0298de0156cb | trentmkelly/LessWrong-43k | LessWrong | How will we update about scheming?
I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their creators such as by faking alignment). Recently, I (and co-authors) released "Alignment Faking in Large Language Models", which provides empirical evidence for some components of the scheming threat model.
One question that's really important is how likely scheming is. But it's also really important to know how much we expect this uncertainty to be resolved by various key points in the future. I think it's about 25% likely that the first AIs capable of obsoleting top human experts[1] are scheming. It's really important for me to know whether I expect to make basically no updates to my P(scheming)[2] between here and the advent of potentially dangerously scheming models, or whether I expect to be basically totally confident one way or another by that point (in the same way that, though I might be very uncertain about the weather on some night a month from now, I'll be completely certain about whether it is raining at the time, and I'll have a much more accurate prediction on that morning).
The P(scheming) numbers I discuss refer to "non-obvious scheming in this AI". So if a given type of scheming would obviously be caught (e.g. constant obvious scheming in CoT), I'm not counting that in the numbers. (However, I am still considering worlds where we observed obvious scheming in a prior model.) This is roughly equivalent to assuming that the model in question already went through basic adversarial evaluation and there wasn't extremely clear scheming.
So in this post, I describe how I currently expect that my personal P(scheming) will be affected by various observations, by providing the likelihood ratios for them. I split these observations into several categories:
* Properties of the AI system and how it was trained. I separate these into:
* The ability to reason opaquely and goal-directedly.
* The extent to which the model architecture permits opaque recurre |
7893dd29-63d1-4bb4-afea-5cc9635f8833 | trentmkelly/LessWrong-43k | LessWrong | [paper link] Interpreting systems as solving POMDPs: a step towards a formal understanding of agency
I found this paper while browsing on semantic scholar - it is so far the only paper known by semanticscholar to cite Discovering Agents, and seems to me like a promising refinement. It's going to take me some time to read it properly, and I'd love to get others' takes on it as well.
Abstract:
> Under what circumstances can a system be said to have beliefs and goals, and how do such agency-related features relate to its physical state? Recent work has proposed a notion of interpretation map , a function that maps the state of a system to a probability distribution representing its beliefs about an external world. Such a map is not com-pletely arbitrary, as the beliefs it attributes to the system must evolve over time in a manner that is consistent with Bayes’ theorem, and conse-quently the dynamics of a system constrain its possible interpretations. Here we build on this approach, proposing a notion of interpretation not just in terms of beliefs but in terms of goals and actions. To do this we make use of the existing theory of partially observable Markov processes (POMDPs): we say that a system can be interpreted as a solution to a POMDP if it not only admits an interpretation map describing its beliefs about the hidden state of a POMDP but also takes actions that are optimal according to its belief state. An agent is then a system together with an interpretation of this system as a POMDP solution. Although POMDPs are not the only possible formulation of what it means to have a goal, this nevertheless represents a step towards a more general formal definition of what it means for a system to be an agent. |
ce28ada0-cae7-442c-b437-64cc29ed26ef | trentmkelly/LessWrong-43k | LessWrong | "Deception Genre" What Books are like Project Lawful?
This post is spoiler-free
I just finished Project Lawful, a really long, really weird book by Eliezer Yudkowsky.
The book's protagonist is a knowledgable and perceptive target. A conspiracy forms around the target to learn from him while keeping him from finding out that helping them is not in the target's best interests. The book is written from the perspective of both the target and the conspiracists. The target notices inconsistencies and performs experiments to test his false reality while also acting in the fabricated reality according to his interests. The conspiracists frantically try to keep the target from catching them or building enough evidence against them that he concludes they have been lying.
This is a description of (part of) the plot of Project Lawful. But this could be the description of an entire genre! If the genre doesn't already have a name, it could be the "Deception Genre."
Another work in this category would be The Truman Show, which fits the deception and the target's escape within a <2hr movie runtime.
Other stories with lying don't really have the same structure. Walter White in Breaking Bad is trying to keep his crimes hidden but isn't constructing a false reality around the cops or his family. Death Note comes close, though Light tries to mislead L about specifically who Kira is and how the Death Note works rather than constructing an entire false reality around L. Many stories about dystopias have the protagonists discover that their realities are false, but fewer of those feature the perspectives of the conspiracists frantically trying to keep the deception running.
Do you know any other stories in the Deception Genre? |
ce7006a8-7d4c-4b7a-830f-020ec88df55e | trentmkelly/LessWrong-43k | LessWrong | "What Happens When Doctors Only Take Cash"? Everybody, Especially Patients, Wins
|
b86ba228-5ea4-43ed-8a5a-45b3541fc08e | StampyAI/alignment-research-dataset/arbital | Arbital | Waterfall diagrams and relative odds
Imagine a waterfall with two streams of water at the top, a red stream and a blue stream. These streams separately approach the top of the waterfall, with some of the water from both streams being diverted along the way, and the remaining water falling into a shared pool below.

Suppose that:
- At the top of the waterfall, 20 gallons/second of red water are flowing down, and 80 gallons/second of blue water are coming down.
- 90% of the red water makes it to the bottom.
- 30% of the blue water makes it to the bottom.
Of the purplish water that makes it to the bottom of the pool, how much was originally from the red stream and how much was originally from the blue stream?
%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):
This is structurally identical to the [Diseasitis](https://arbital.com/p/22s) problem from [before](https://arbital.com/p/55z):
- 20% of the patients in the screening population start out with Diseasitis.
- Among patients with Diseasitis, 90% turn the tongue depressor black.
- 30% of the patients without Diseasitis will also turn the tongue depressor black.
%%
%%!if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):
This is structurally similar to the following problem, such as medical students might encounter:
You are a nurse screening 100 patients for Diseasitis, using a tongue depressor which usually turns black for patients who have the sickness.
- 20% of the patients in the screening population start out with Diseasitis.
- Among patients with Diseasitis, 90% turn the tongue depressor black (true positives).
- However, 30% of the patients without Diseasitis will also turn the tongue depressor black (false positives).
What is the chance that a patient with a blackened tongue depressor has Diseasitis?
%%
The 20% of sick patients are analogous to the 20 gallons/second of red water; the 80% of healthy patients are analogous to the 80 gallons/second of blue water:

The 90% of the sick patients turning the tongue depressor black is analogous to 90% of the red water making it to the bottom of the waterfall. 30% of the healthy patients turning the tongue depressor black is analogous to 30% of the blue water making it to the bottom pool.

Therefore, the question "what portion of water in the final pool came from the red stream?" has the same answer as the question "what portion of patients that turn the tongue depressor black are sick with Diseasitis?"
%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):
Now for the faster way of answering that question.
%%
We start with *4 times as much blue water as red water* at the top of the waterfall.
Then each molecule of red water is 90% likely to make it to the shared pool, and each molecule of blue water is 30% likely to make it to the pool. (90% of red water and 30% of blue water make it to the bottom.) So each molecule of red water is *3 times as likely* (0.90 / 0.30 = 3) as a molecule of blue water to make it to the bottom.
So we multiply prior proportions of $1 : 4$ for red vs. blue by relative likelihoods of $3 : 1$ and end up with final proportions of $(1 \cdot 3) : (4 \cdot 1) = 3 : 4$, meaning that the bottom pool has 3 parts of red water to 4 parts of blue water.

To convert these *relative* proportions into an *absolute* probability that a random water molecule at the bottom is red, we calculate 3 / (3 + 4) to see that 3/7ths (roughly 43%) of the water in the shared pool came from the red stream.
This proportion is the same as the 18 : 24 sick patients with positive results, versus healthy patients with positive test results, that we would get by [thinking about 100 patients](https://arbital.com/p/560).
That is, to solve the Diseasitis problem in your head, you could convert this word problem:
> 20% of the patients in a screening population have Diseasitis. 90% of the patients with Diseasitis turn the tongue depressor black, and 30% of the patients without Diseasitis turn the tongue depressor black. Given that a patient turned their tongue depressor black, what is the probability that they have Diseasitis?
Into this calculation:
> Okay, so the initial odds are (20% : 80%) = (1 : 4), and the likelihoods are (90% : 30%) = (3 : 1). Multiplying those ratios gives final odds of (3 : 4), which converts to a probability of 3/7ths.
(You might not be able to convert 3/7 to 43% in your head, but you might be able to eyeball that it was a chunk less than 50%.)
You can try doing a similar calculation for this problem:
- 90% of widgets are good and 10% are bad.
- 12% of bad widgets emit sparks.
- Only 4% of good widgets emit sparks.
What percentage of sparking widgets are bad? If you are sufficiently comfortable with the setup, try doing this problem entirely in your head.
(You might try visualizing a waterfall with good and bad widgets at the top, and only sparking widgets making it to the bottom pool.)
%todo: Have a picture of a waterfall here, with no numbers, but with the parts labeled, that can be expanded if the user wants to expand it.%
%%hidden(Show answer):
- There's (1 : 9) bad vs. good widgets.
- Bad vs. good widgets have a (12 : 4) relative likelihood to spark.
- This simplifies to (1 : 9) x (3 : 1) = (3 : 9) = (1 : 3), 1 bad sparking widget for every 3 good sparking widgets.
- Which converts to a probability of 1/(1+3) = 1/4 = 25%; that is, 25% of sparking widgets are bad.
Seeing sparks didn't make us "believe the widget is bad"; the probability only went to 25%, which is less than 50/50. But this doesn't mean we say, "I still believe this widget is good!" and toss out the evidence and ignore it. A bad widget is *relatively more likely* to emit sparks, and therefore seeing this evidence should cause us to think it *relatively more likely* that the widget is a bad one, even if the probability hasn't yet gone over 50%. We increase our probability from 10% to 25%.%%
%%if-before([https://arbital.com/p/1x8](https://arbital.com/p/1x8)):
Waterfalls are one way of visualizing the "odds form" of "Bayes' rule", which states that **the prior odds times the likelihood ratio equals the posterior odds.** In turn, this rule can be seen as formalizing the notion of "the strength of evidence" or "how much a piece of evidence should make us update our beliefs". We'll take a look at this more general form next.
%%
%%!if-before([https://arbital.com/p/1x8](https://arbital.com/p/1x8)):
Waterfalls are one way of visualizing the [odds form](https://arbital.com/p/1x5) of [Bayes' rule](https://arbital.com/p/1lz), which states that **the prior odds times the likelihood ratio equals the posterior odds**.
%% |
31ee49ff-c4e8-41e5-b590-2fa878ff2473 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [Linkpost] Mark Zuckerberg confronted about Meta's Llama 2 AI's ability to give users detailed guidance on making anthrax - Business Insider
> Several tech leaders descended upon Capitol Hill last week to discuss the rapid expansion of generative AI. It was a mostly staid meeting until the potential harms from Meta's new Llama 2 model came up.
>
> During the discussion, attended by most of the Senate's 100 members, Tristan Harris, a co-founder of the Center for Humane Technology, said he recently had engineers take Meta's powerful large language model [Llama 2](https://www.businessinsider.com/chatgpt-openai-moat-in-ai-wars-llama2-shrinking-2023-7) for a "test drive." After some prompting, Harris said that a chat with Llama 2 came back with a detailed walkthrough of how to create anthrax as a biological weapon, according to one person familiar with the forum and two senators present. That prompted a testy exchange between Harris and Mark Zuckerberg, co-founder and CEO of Meta, formerly known as Facebook. Most specifics of the exchange between Harris and Zuckerberg have not been previously reported, although Harris receiving directions from Llama 2 about an unidentified biological weapon was [noted](https://www.washingtonpost.com/technology/2023/09/13/senate-ai-hearing-musk-zuckerburg-schumer/) by The Washington Post.
>
> Among the [two dozen tech leaders](https://www.rounds.senate.gov/newsroom/press-releases/rounds-to-moderate-ai-forum-with-major-tech-leaders-including-musk-zuckerberg-and-gates) at the forum were Elon Musk, owner of Twitter and CEO of Tesla and SpaceX; Sam Altman, CEO of OpenAI; Satya Nadella, CEO of Microsoft; Jensen Huang, CEO of Nvidia; and Sundar Pichai, CEO of Google.
>
> The gathering was led by Senate Majority Leader Chuck Schumer, Democratic Sen. Martin Heinrich, and Republican Sens. Mike Rounds, and Todd Young, who all make up a new "[artificial intelligence working group](https://about.bgov.com/news/ai-threat-unites-bipartisan-senate-group-helmed-by-schumer/)." The group formed earlier this year, a few months after OpenAI's ChatGPT bot became known the world over.
>
> During the session, Zuckerberg attempted to downplay Harris' statement that Llama 2 can tell users how to make anthrax, saying anyone who was looking for such a guide could find out how to make anthrax on YouTube, according to both of the senators present. Harris rejected the argument, saying such guides do not come up on YouTube, and even if they did, the level of detail and guidance provided by Llama 2 was unique to such a powerful generative AI model. It's also largely an open-source model, meaning it's freely available to use and adapt.
>
> "It was one of the only moments in the whole thing that was like, 'Oh,'" one of the senators present said, describing the exchange as having caught people's attention. "Twenty-four out of the 26 panelists there basically said exactly the same thing over and over: 'We need to protect AI innovation but with safeguards in place.'"
>
> A Meta spokesperson declined to comment. Harris did not respond to requests for comment.
>
> Beyond the brief snit between Harris and Zuckerberg, there was little in-depth discussion of the issues surrounding AI, according to all three of the people familiar with the meeting. Even the ability of Llama 2 to guide a prospective user on creating anthrax was not cause for any extended probing, the people said.
>
> "It was, 'Ok, next speaker,' it moved right along," one of the senators present said.
>
> Llama 2's power is well-known inside Meta. Its ability to turn up detailed instructions for creating a biological weapon like anthrax is to be expected, two people familiar with the company said.
>
> "Really, this is going to be the case for every LLM of a certain size, unless you kneecap it for certain things," one of the people familiar with Meta said. "There will be edge cases. But the ones that are products, like ChatGPT, as opposed to open source releases, they just nerf it for this and that."
>
> Still, [AI tools](https://www.businessinsider.com/how-to-create-ai-materials-resources-guide-2023-9) trained on trillions of pieces of information [scraped from the whole of the internet](https://www.businessinsider.com/ai-killing-web-grand-bargain-2023-8) are difficult to control. Earlier this year, a user of a Discord bot created with ChatGPT was able to get the [chemical recipe for napalm](https://kotaku.com/chatgpt-ai-discord-clyde-chatbot-exploit-jailbreak-1850352678), a highly flammable liquid used as a military weapon. ChatGPT and Google's Bard are also known for serving up information to users that is [incorrect](https://www.businessinsider.com/chatgpt-generates-error-filled-cancer-treatment-plans-study-2023-8), composed of [misinformation](https://counterhate.com/research/misinformation-on-bard-google-ai-chat/), or simply made up, dubbed "[hallucinations](https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/)."
>
> *Are you a Meta employee or someone else with insight to share? Contact Kali Hays at* [*khays@insider.com*](mailto:khays@insider.com)*, on secure messaging app* [*Signal*](https://signal.org/download/) *at [phone number], or through Twitter DM at @hayskali. Reach out using a non-work device.*
>
> *Get in touch with Bryan Metzger by email at* [*bmetzger@insider.com*](mailto:bmetzger@insider.com) *or find him on Twitter at* [*@metzgov*](https://twitter.com/metzgov)*.*
>
> |
22352417-4847-47d9-81d2-82a5715b7568 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Is Humanism A Religion-Substitute?
Today's post, Is Humanism A Religion-Substitute? was originally published on 26 March 2008. A summary (taken from the LW wiki):
> Trying to replace religion with humanism, atheism, or transhumanism doesn't work. If you try to write a hymn to the nonexistence of god, it will fail, because you are simply trying to imitate something that we don't really need to imitate. But that doesn't mean that the feeling of transcendence is something we should always avoid. After all, in a world in which religion never existed, people would still feel that same way.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Amazing Breakthrough Day: April 1st, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.