id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
aa994d2a-1543-474d-a22f-82b14ac9b183 | trentmkelly/LessWrong-43k | LessWrong | [Letter] Chinese Quickstart
Dear lsusr,
I am a lesswrong user interested in learning Mandarin and living in China. My goal is understanding Chinese culture more broadly and geopolitics and Chinese tech policy more specifically. I could get CELTA and get a teaching job in China (not difficult), but it seems like I would gain far more value if I actually learned Mandarin.
Do you have recommendations on what's the fastest way to learn Mandarin?
Yours sincerely
<redacted>
----------------------------------------
I'm posting my answer on Less Wrong so that the commenters can correct me in the comments. Do hear that everyone except <redacted>? Tell <redacted> why I'm wrong!
----------------------------------------
Dear <redacted>,
There is no fast way to learn Mandarin. But some ways are faster than others.
The first thing you should do is go live in China. (A teaching job via CELTA is fine.) This may or may not help you learn Chinese faster. Then why do it? Because learning Mandarin takes years. If you want to learn about Chinese culture then you should go to China now and start learning Mandarin after you get there.
The second reason to live in China is that the Chinese tech world is isolated from the rest of the world. It's not just websites that require a Chinese IP address and phone number. Paying for lunch with WeChat is something you should do in China itself.
If you were from America or Europe then the second thing I would suggest is you buy a subscription to Foreign Affairs Journal. That still might be the right way for you to do things, but I don't know how affordable it is to someone living in India.
Now that you're no longer using "I don't speak Mandarin" as an excuse to postpone your dreams, we can get into learning Mandarin.
[Disclaimer: AI is revolutionizing how language-learning works. This is very good for language-learners. However, the field of AI-assisted language learning is changing so rapidly that anything I write here could be out-of-date in three months. When i |
57d5568f-46ad-4569-81e9-9d503b87ac55 | trentmkelly/LessWrong-43k | LessWrong | The nerds who saw the dangers of Covid
A post by Tom Chivers on Unherd.com discussing EY, The Sequences and the Rationality Community: The nerds who saw the dangers of Covid
A relatively positive tone, for a change. |
92db28a9-f83c-4ca0-8b35-afd143002234 | trentmkelly/LessWrong-43k | LessWrong | Slashdot: study Finds Little Lies Lead To Bigger Ones
|
bf9e066a-4152-470e-8b30-9f4109196b69 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Environmental Structure Can Cause Instrumental Convergence
**Edit, 5/16/23: I think this post is beautiful, correct in its narrow technical claims, and practically irrelevant to alignment. This post presents a cripplingly unrealistic picture of the role of reward functions in reinforcement learning. Reward functions are not "goals", real-world policies are not "optimal", and the mechanistic function of reward is (usually) to provide policy gradients to update the policy network.**
**I expect this post to harm your alignment research intuitions unless you've already inoculated yourself by deeply internalizing and understanding** [**Reward is not the optimization target**](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target)**. If you're going to read one alignment post I've written, read that one.**
**Follow-up work (**[**Parametrically retargetable decision-makers tend to seek power**](https://www.lesswrong.com/posts/GY49CKBkEs3bEpteM/parametrically-retargetable-decision-makers-tend-to-seek)**) moved away from optimal policies and treated reward functions more realistically.**
---
Previously: [*Seeking Power Is Often Robustly Instrumental In MDPs*](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps)
**Key takeaways**.
* The structure of the agent's environment often causes instrumental convergence. **In many situations, there are (potentially combinatorially) many ways for power-seeking to be optimal, and relatively few ways for it not to be optimal.**
* [My previous results](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps) said something like: in a range of situations, when you're maximally uncertain about the agent's objective, this uncertainty assigns high probability to objectives for which power-seeking is optimal.
+ My new results prove that in a range of situations, seeking power is optimal for *most* agent objectives (for a particularly strong formalization of 'most').
More generally, the new results say something like: in a range of situations, for most beliefs you could have about the agent's objective, these beliefs assign high probability to reward functions for which power-seeking is optimal.
+ This is the first formal theory of the statistical tendencies of optimal policies in reinforcement learning.
* One result says: whenever the agent maximizes average reward, then for *any* reward function, most permutations of it incentivize shutdown avoidance.
+ The formal theory is now beginning to explain why alignment is so hard by default, and why failure might be catastrophic*.*
* Before, I thought of environmental symmetries as convenient sufficient conditions for instrumental convergence. But I increasingly suspect that symmetries are the main part of the story.
* I think these results may be important for understanding the AI alignment problem and formally motivating its difficulty.
+ For example, my results imply that **simplicity priors over reward functions assign non-negligible probability to reward functions for which power-seeking is optimal.**
+ I expect my symmetry arguments to help explain other "convergent" phenomena, including:
- [convergent evolution](https://en.wikipedia.org/wiki/Convergent_evolution)
- the prevalence of [deceptive alignment](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment)
- [feature universality](https://distill.pub/2020/circuits/zoom-in/) in deep learning
+ One of my hopes for this research agenda: if we can understand *exactly why* superintelligent goal-directed objective maximization seems to fail horribly, we might understand how to do better.
*Thanks to TheMajor, Rafe Kennedy, and John Wentworth for feedback on this post. Thanks for Rohin Shah and Adam Shimi for feedback on the simplicity prior result.*
Orbits Contain All Permutations of an Objective Function
========================================================
The Minesweeper analogy for power-seeking risks
-----------------------------------------------
One view on AGI risk is that we're charging ahead into the unknown, into a particularly unfair game of Minesweeper in which the first click is allowed to blow us up. Following the analogy, we want to understand enough about the mine placement so that we *don't* get exploded on the first click. And once we get a foothold, we start gaining information about other mines, and the situation is a bit less dangerous.
My previous theorems on power-seeking said something like: "at least half of the tiles conceal mines."
I think that's important to know. But there are many tiles you might click on first. Maybe all of the mines are on the right, and we understand the obvious pitfalls, and so we'll just click on the left.
That is: we might not uniformly randomly select tiles:
* We might click a tile on the left half of the grid.
* Maybe we sample from a truncated discretized Gaussian.
* Maybe we sample the next coordinate by using the universal prior (rejecting invalid coordinate suggestions).
* Maybe we uniformly randomly load LessWrong posts and interpret the first text bits as encoding a coordinate.
There are lots of ways to sample coordinates, besides uniformly randomly. So why should our sampling procedure tend to activate mines?
My new results say something analogous to: for *every* coordinate, either it contains a mine, or its reflection across x=y.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
contains a mine, or both. Therefore, for *every distribution*Dover tile coordinates, either D assigns at least 12 probability to mines, or it does after you reflect it across x=y.
**Definition.** The [*orbit*](https://en.wikipedia.org/wiki/Group_action#Orbits_and_stabilizers) of a coordinate C under the symmetric group S2 is {C,Creflected}. More generally, if we have a probability distribution over coordinates, its orbit is the set of all possible "permuted" distributions.
Orbits under symmetric groups quantify all ways of "changing things around" for that object.
My new theorems demand that at least one of these tiles conceal a mine.But it didn't have to be this way.
If the mines are on the right, then both this coordinate and its x=y reflection are safe.Since my results (in the analogy) prove that at least one of the two blue coordinates conceals a mine, we deduce that the mines are *not* all on the right.
Some reasons we care about orbits:
1. As we will see, orbits highlight one of the key causes of instrumental convergence: certain environmental symmetries (which are, mathematically, permutations in the state space).
2. Orbits partition the set of all possible reward functions. If at least half of the elements of *every* orbit induces power-seeking behavior, that's strictly stronger than showing that at least half of reward functions incentivize power-seeking (technical note: with the second "half" being with respect to the uniform distribution's measure over reward functions).
1. In particular, we might have hoped that there were particularly nice orbits, where we could specify objectives without worrying too much about making mistakes (like permuting the output a bit). These nice orbits are impossible. This is some evidence of a *fundamental difficulty in reward specification*.
3. Permutations are well-behaved and help facilitate further results about power-seeking behavior. In this post, I'll prove one such result about the simplicity prior over reward functions.
In terms of coordinates, one hope could have been:
> Sure, maybe there's a way to blow yourself up, but you'd really have to contort yourself into a pretzel in order to algorithmically select such a bad coordinate: all reasonably simple selection procedures will produce safe coordinates.
>
>
But suppose you give me a program P which computes a safe coordinate. Let P′ call P to compute the coordinate, and then have P′ swap the entries of the computed coordinate. P′ is only a few bits longer than P, and it doesn't take much longer to compute, either. So the above hope is impossible: safe mine-selection procedures can't be significantly simpler or faster than unsafe mine-selection procedures.
(The section "[Simplicity priors assign non-negligible probability to power-seeking](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/certain-environmental-symmetries-produce-power-seeking#Simplicity_priors_assign_non_negligible_probability_to_power_seeking)" proves something similar about objective functions.)
Orbits of goals
---------------
Orbits of goals consist of all the ways of permuting what states get which values. Consider this rewardless Markov decision process (MDP):
Arrows show the effect of taking some action at the given state.Whenever staying put at A is strictly optimal, you can permute the reward function so that it's strictly optimal to go to B. For example, let R(A):=1,R(B):=0 and let ϕ:=(AB) swap the two states. ϕ acts on R as follows: ϕ⋅R simply permutes the state before evaluating its reward: (ϕ⋅R)(s):=R(ϕ(s)).
The orbit of R is {R,ϕ⋅R}. It's optimal for the former to stay at A, and for the latter to alternate between the two states.
Here, let RC assign 1 reward to C and 0 to all other states, and let ϕ:=(ABC) rotate through the states (A goes to B, B goes to C, C goes to A). Then the orbit of RC is:
| | C | A | B |
| --- | --- | --- | --- |
| RC | 1 | 0 | 0 |
| ϕ⋅RC | 0 | 1 | 0 |
| ϕ2⋅RC | 0 | 0 | 1 |
My new theorems prove that in many situations, for *every* reward function, power-seeking is incentivized by most (at least half) of its orbit elements.
In All Orbits, Most Elements Incentivize Power-Seeking
======================================================
In [*Seeking Power is Often Robustly Instrumental in MDPs*](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps), the last example involved gems and dragons and (most exciting of all) subgraph isomorphisms:
> Sometimes, one course of action gives you “strictly more options” than another. Consider another MDP with IID reward:
>
>

> The right blue gem subgraph contains a “copy” of the upper red gem subgraph. From this, we can conclude that going right to the blue gems... is more probable under optimality for *all discount rates between 0 and 1*!
>
>
The state permutation ϕ embeds the red-gem-subgraph into the blue-gem-subgraph:
We say that ϕ is an *environmental symmetry*, because ϕ is an element of the symmetric group S|S| of permutations on the state space.
The key insight was right there the whole time
----------------------------------------------
Let's pause for a moment. For half a year, I intermittently and fruitlessly searched for some way of extending the original results beyond IID reward distributions to account for arbitrary reward function distributions.
* Part of me thought it *had* to be possible - how else could we explain instrumental convergence?
* Part of me saw no way to do it. Reward functions differ wildly, how could a theory possibly account for what "most of them" incentivize?
The recurring thought which kept my hope alive was:
> There should be "more ways" for `blue-gems` to be optimal over `red-gems`, than for `red-gems` to be optimal over `blue-gems`.
>
>
Imagine how I felt when I realized that the same state permutation ϕ which proved my original IID-reward theorems - the one that says
> `blue-gems` has more options, and therefore greater probability of being optimal under IID reward function distributions
>
>
- that *same permutation*ϕholds the key to understanding instrumental convergence in MDPs*.*
Suppose `red-gems` is optimal. For example, let R🏰 assign 1 reward to the castle 🏰 , and 0 to all other states. Then the permuted reward function ϕ⋅R🏰 assigns 1 reward to the gold pile, and 0 to all other states, and so `blue-gems` has strictly more optimal value than `red-gems`. Consider any discount rate γ∈(0,1). For *all* reward functions R such that V∗R(red-gems,γ)>V∗R(blue-gems,γ), this permutation ϕ turns them into `blue-gem` lovers: V∗ϕ⋅R(red-gems,γ)<V∗ϕ⋅R(blue-gems,γ).
ϕ takes non-power-seeking reward functions, and injectively maps them to power-seeking orbit elements. Therefore, for *all* reward functions R, at least half of the orbit of R must agree that `blue-gems` is optimal!
Throughout this post, when I say "most" reward functions incentivize something, I mean the following:
**Definition.** At state s, *most reward functions* incentivize action aover action a′when for all reward functions R, at least half of the orbit agrees that ahas at least as much action value as a′ does at state s. (This is actually a bit weaker than what I prove in the paper, but it's easier to explain in words; see [definition 6.4](https://arxiv.org/pdf/1912.01683.pdf#section.6) for the real deal.)
The same reasoning applies to *distributions* over reward functions. And so if you say "we'll draw reward functions from a simplicity prior", then most permuted distributions in that prior's orbit will incentivize power-seeking in the situations covered by my previous theorems. (And we'll later prove that simplicity priors *themselves* must assign non-trivial, positive probability to power-seeking reward functions.)
Furthermore, for any distribution which distributes reward "fairly" across states (precisely: independently and identically), their (trivial) orbits *unanimously* agree that `blue-gems` has strictly greater probability of being optimal. And so the converse isn't true: it isn't true that at least half of every orbit agrees that `red-gems` has more POWER and greater probability of being optimal.
This might feel too abstract, so let's run through examples.
And this directly generalizes the previous theorems
---------------------------------------------------
### More graphical options (proposition 6.9)
At all discount rates γ∈[0,1], it's optimal for *most reward functions* to get`blue-gems` because that leads to strictly more options. We can permute every `red-gems` reward function into a `blue-gems` reward function.Consider a robot navigating through a room with a **vase**. By the logic of "every `destroying-vase-is-optimal` can be permuted into a `preserving-vase-is-optimal` reward function", my results (specifically, [proposition 6.9](https://arxiv.org/pdf/1912.01683.pdf#subsection.6.1) and its generalization via [lemma D.49](https://arxiv.org/pdf/1912.01683.pdf#subsubsection.a.D.4.1)) suggest that optimal policies tend to avoid breaking the **vase**, since doing so would strictly decrease available options.
("Suggest" instead of "prove" because D.49's preconditions may not always be met, depending on the details of the dynamics. I think this is probably unimportant, but that's for future work. EDIT: Also, the argument may barely not apply to *this* gridworld, but if you could move the vase around without destroying it, I think it goes through fine.)In [SafeLife](https://www.partnershiponai.org/safelife/), the agent can irreversibly destroy green cell patterns. By the logic of "every `destroy-green-pattern` reward function can be permuted into a `preserve-green-pattern` reward function", lemma D.49 suggests that optimal policies tend to not disturb any given green cell pattern (although most probably destroy *some* pattern). The permutation would swap {states reachable after destroying the pattern} with {states reachable after not destroying the pattern}.
However, the converse is not true: you cannot fix a permutation which turns all `preserve-green-pattern` reward functions into `destroy-green-pattern` reward functions. There are simply too many extra ways for preserving green cells to be optimal.
Assuming some conjectures I have about the combinatorial properties of power-seeking, this helps explain why [AUP works in SafeLife using a single auxiliary reward function](https://www.lesswrong.com/posts/5kurn5W62C5CpSWq6/avoiding-side-effects-in-complex-environments) - but more on that in another post.### Terminal options (theorem 6.13)
When the agent maximizes average reward, it's optimal for *most reward functions* to`Wait!` so that they can choose between `chocolate` and `hug`. The logic is that every `candy-optimal` reward function can be permuted into a `chocolate-optimal` reward function.A portion of a Tic-Tac-Toe game-tree against a fixed opponent policy. Whenever we make a move that ends the game, we can't go anywhere else – we have to stay put. Then most reward functions incentivize the green actions over the black actions: average-reward optimal policies are particularly likely to take moves which keep the game going. The logic is that any `lose-immediately-with-given-black-move` reward function can be permuted into a `stay-alive-with-green-move` reward function.Even though randomly generated environments are unlikely to satisfy these sufficient conditions for power-seeking tendencies, the results are easy to apply to many structured environments common in reinforcement learning. For example, when γ≈1, most reward functions provably incentivize not immediately dying in Pac-Man. Every reward function which incentivizes dying right away can be permuted into a reward function for which survival is optimal.
Consider the dynamics of the Pac-Man video game. Ghosts kill the player, at which point we consider the player to enter a 'game over' terminal state which shows the final configuration. This rewardless MDP has Pac-Man's dynamics, but *not* its usual score function. Fixing the dynamics, what actions are optimal as we vary the reward function?Most importantly, we can prove that when shutdown is possible, optimal policies try to avoid it if possible. When the agent isn't discounting future reward (i.e. maximizes average return) and for [lots of reasonable state/action encodings](https://www.lesswrong.com/posts/XkXL96H6GknCbT5QH/mdp-models-are-determined-by-the-agent-architecture-and-the), the MDP structure has the right symmetries to ensure that it's instrumentally convergent to avoid shutdown. From the [discussion section](https://arxiv.org/pdf/1912.01683.pdf#section.7):
> Corollary 6.14 dictates where average-optimal agents tend to end up, but not how they get there. Corollary 6.14 says that such agents tend not to stay in any given 1-cycle. It does not say that such agents will avoid entering such states. For example, in an embodied navigation task, a robot may enter a 1-cycle by idling in the center of a room. Corollary 6.14 implies that average-optimal robots tend not to idle in that particular spot, but not that they tend to avoid that spot entirely.
>
> **However, average-optimal robots do tend to avoid getting shut down.** The agent's rewardless MDP often represents agent shutdown with a terminal state. A terminal state is unable to access other 1-cycles. Since corollary 6.14 shows that average-optimal agents tend to end up in other 1-cycles, average-optimal policies must tend to completely avoid the terminal state. Therefore, we conclude that in many such situations, average-optimal policies tend to avoid shutdown.
>
> [The arxiv version of the paper says 'Blackwell-optimal policies' instead of 'average-optimal policies'; the former claim is stronger, and it holds, but it requires a little more work.]
>
>
Takeaways
=========
Combinatorics, how do they work?
--------------------------------
What does 'most reward functions' mean quantitatively - is it just at least half of each orbit? Or, are there situations where we can guarantee that at least three-quarters of each orbit incentivizes power-seeking? I think we should be able to prove that as the environment gets more complex, there are combinatorially more permutations which enforce these similarities, and so the orbits should skew harder and harder towards power-incentivization.
 Here's a semi-formal argument. For every orbit element R which makes `candy` strictly optimal when γ=1, ϕchocolate and ϕhug respectively produce Rϕchocolate≠Rϕhug. `Wait!` is strictly optimal for both Rϕhug,Rϕhug, and so at least 23 of the orbit should agree that `Wait!` is optimal. As `Wait!` gains more power (more choices, more control over the future), I conjecture that this fraction approaches 1.I don't yet understand the general case, but I have a strong hunch that instrumental convergenceoptimal policies is governed by how many more ways there are for power to be optimal than not optimal. And this seems like a function of the number of environmental symmetries which enforce the appropriate embedding.
Simplicity priors assign non-negligible probability to power-seeking
--------------------------------------------------------------------
*Note: this section is more technical. You can get the gist by reading the English through "Theorem..." and then after the end of the "FAQ."*
One possible hope would have been:
> Sure, maybe there's a way to blow yourself up, but you'd really have to contort yourself into a pretzel in order to algorithmically select a power-seeking reward function. In other words, reasonably simple reward function specification procedures will produce non-power-seeking reward functions.
>
>
Unfortunately, there are always power-seeking reward functions not much more complex than their non-power-seeking counterparts. Here, 'power-seeking' corresponds to the intuitive notions of either keeping strictly more options open (proposition 6.9), or navigating towards larger sets of terminal states (theorem 6.13). (Since this applies to several results, I'll leave the meaning a bit ambiguous, with the understanding that it could be formalized if necessary.)
**Theorem (Simplicity priors assign non-negligible probability to power-seeking).**Consider any MDP which meets the preconditions of proposition 6.9 or theorem 6.13. Let U be a universal Turing machine, and let PU be the U-simplicity prior over computable reward functions.
Let NPS be the set of non-power-seeking computable reward functions which choose a fixed non-power-seeking action in the given situation. Let PS be the set of computable reward functions for which seeking power is strictly optimal.1
Then there exists a "reasonably small" constant C such that PU(PS)≥2−CPU(NPS), where C .
**Proof sketch.**
1. Let ϕ be an environmental symmetry which satisfies the power-seeking theorem in question. Since ϕ can be found by brute-force iteration through all |S|! permutations on the state space, checking each to see if it meets the formal requirements of the relevant theorem, its Kolmogorov complexity KU(ϕ) is relatively small.
2. Because lemma D.26 applies in these situations, ϕ(NPS)⊆PS: ϕ turns non-power-seeking reward functions into power-seeking ones. Thus, PU(PS)≥PU(ϕ(NPS)).
3. Since each reward function R∈ϕ(NPS) can be computed by computing the non-power-seeking variant and then permuting it (with KU(ϕ) extra bits of complexity), KU(R)≤KU(ϕ−1(R))+KU(ϕ)+O(1) (with O(1) counting the small number of extra bits for the code which calls the relevant functions).
Since PU is a simplicity prior, PU(ϕ(NPS))≥2−(KU(ϕ)+O(1))PU(NPS).
4. Combining (2) and (3), PU(PS)≥2−(KU(ϕ)+O(1))PU(NPS). QED.
**FAQ.**
1. Why can't we show thatPU(PS)≥PU(NPS)?
1. Certain UTMs U might make non-power-seeking reward functions particularly simple to express.
2. This proof doesn't assume anything about how many *more* options power-seeking offers than not-power-seeking. The proof only assumes the existence of a single involutive permutation ϕ.
2. This lower bound seems rather weak. Even if KU(ϕ)+O(1)=15 bits, 2−15≈0.
1. This lower bound is very very loose.
1. Since most individual NPS probabilities of interest are less than 1/trillion, I wouldn't be surprised if the bound were loose by at least several orders of magnitude.
2. The bound implicitly assumes that the *only way* to compute PS reward functions is by taking NPS ones and permuting them. We should add the other ways of computing PS reward functions to PU(PS).
3. There are lots of permutations ϕ′ we could use. PU(PS) gains probability from all of those terms.
1. For example: the symmetric group S|S| has cardinality |S|!, and for any R∈NPS, at least half of the ϕ′∈S|S| induce (weakly) power-seeking orbit elements ϕ′⋅R. (This argument would be strengthened by my conjectures about bigger environments ⟹ greater fraction of orbits seek power.)
2. If some significant fraction (e.g. 150) of these ϕ′ are strictly power-seeking, we're adding at least |S|!2150=|S|!100 additional terms.
3. Some of these terms are probably reasonably large, since it seems implausible that all such permutations ϕ′ have high K-complexity.
4. When all is said and done, we may well end up with a significant chunk of probability on PS.
2. It's not surprising that the bound is loose, given the lack of assumptions about the degree of power-seeking in the environment.
3. If the bound is anywhere near tight, then the permuted simplicity prior ϕ⋅PU incentivizes power-seeking with extremely high probability.
1. If you think about the permutation as a "way reward could be misspecified", then that's troubling. It seems plausible that this is often (but not always) a reasonable way to think about the action of the ϕ permutation.
3. What if PU(NPS)=0?
1. I think this is impossible, and I can prove that in a range of situations, but it would be a lot of work and it relies on results not in the arxiv paper.
Even if that equation held, that would mean that power-seeking is (at least weakly) optimal for *all* computable reward functions. That's hardly a reassuring situation.
2. Note: if PU(NPS)>0, then PU(PS)>0.
### Takeaways from the simplicity prior result
* Most plainly, this seems like reasonable formal evidence that the simplicity prior has malign incentives.
* Power-seeking reward functions don't have to be too complex.
* These power-seeking theorems give us important tools for reasoning formally about power-seeking behavior and its prevalence in important reward function distributions.
+ If I had to guess, this result is probably not the best available bound, nor the most important corollary of the power-seeking theorems. But I'm still excited by it (insofar as it's appropriate to be 'excited' by slight Bayesian evidence of doom).
EDIT: Relatedly, Rohin Shah [wrote](https://www.lesswrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior):
> if you know that an agent is maximizing the expectation of an *explicitly represented* utility function, I would expect that to lead to goal-driven behavior most of the time, since the utility function must be relatively simple if it is explicitly represented, and *simple* utility functions seem particularly likely to lead to goal-directed behavior.
>
>
Why optimal-goal-directed alignment may be hard by default
----------------------------------------------------------
> On its own, [Goodhart's law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) doesn't explain why optimizing proxy goals leads to catastrophically bad outcomes, instead of just less-than-ideal outcomes.
>
> I think that we're now starting to have this kind of understanding. [I suspect that](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/w6BtMqKRLxG9bNLMr) power-seeking is why capable, goal-directed agency is so dangerous by default. If we want to consider [more benign alternatives](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) to goal-directed agency, then deeply understanding the rot at the heart of goal-directed agency is important for evaluating alternatives. This work lets us get a feel for the *generic incentives* of reinforcement learning at optimality.
>
> ~ [*Seeking Power is Often Robustly Instrumental in MDPs*](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps)
>
>
For every reward function R - no matter how benign, how aligned with human interests, no matter how power-averse - either R or its permuted variant ϕ⋅R seeks power in the given situation (intuitive-power, since the agent keeps its options open, and also formal-POWER, according to my proofs).
If I let myself be a bit more colorful, every reward function has lots of "evil" power-seeking variants (do note that the step from "power-seeking" to "misaligned power-seeking" [requires more work](https://www.lesswrong.com/posts/MJc9AqyMWpG3BqfyK/generalizing-power-to-multi-agent-games)). If we imagine ourselves as only knowing the orbit of the agent's objective, then the situation looks a bit like *this*:
Technical note: this 12-element orbit could arise from the action of a subgroup of the symmetric group S4, which has 4!=24 elements. Consider a 4-state MDP; if the reward function assigns equal reward to exactly two states, then it would have a 12-element orbit under S4. Of course, this isn't how reward specification works - we probably are far more likely to specify certain orbit elements than others. However, the formal theory is now beginning to explain *why alignment is so hard by default, and why failure might be catastrophic!*
The structure of the environment often ensures that there are (potentially combinatorially) many more ways to misspecify the objective so that it seeks power, than there are ways to specify goals without power-seeking incentives.
Other convergent phenomena
--------------------------
I'm optimistic that symmetry arguments and the mental models gained by understanding these theorems, will help us better understand a range of different tendencies. The common thread seems like: for every "way" a thing could not happen / not be a good idea - there are many more "ways" in which it could happen / be a good idea.
* [convergent evolution](https://en.wikipedia.org/wiki/Convergent_evolution)
+ flight has independently evolved several times, suggesting that flight is adaptive in response to a wide range of conditions.
> "In his 1989 book [*Wonderful Life*](https://en.wikipedia.org/wiki/Wonderful_Life_(book)), [Stephen Jay Gould](https://en.wikipedia.org/wiki/Stephen_Jay_Gould) argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course."[[6]](https://en.wikipedia.org/wiki/Convergent_evolution#cite_note-wonderfullife-7) [Simon Conway Morris](https://en.wikipedia.org/wiki/Simon_Conway_Morris) disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least [primates](https://en.wikipedia.org/wiki/Primates), [corvids](https://en.wikipedia.org/wiki/Corvids), and [cetaceans](https://en.wikipedia.org/wiki/Cetaceans)."
>
> - Wikipedia
>
>
* the prevalence of [deceptive alignment](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment)
+ given inner misalignment, there are (potentially combinatorially) many more unaligned terminal reasons to lie (and survive), and relatively few unaligned terminal reasons to tell the truth about the misalignment (and be modified).
* [feature universality](https://distill.pub/2020/circuits/zoom-in/)
+ computer vision networks reliably learn edge detectors, suggesting that this is instrumental (and highly learnable) for a wide range of labelling functions and datasets.
Note of caution
---------------
You have to be careful in applying these results to argue for real-world AI risk from deployed systems.
* They assume the agent is following an optimal policy for a reward function
+ I can relax this to ϵ-optimality, but ϵ>0 may be extremely small
* They assume the environment is finite and fully observable
* Not all environments have the right symmetries
+ But most ones we think about seem to
* The results don't account for the ways in which we might practically express reward functions
+ For example, often we use featurized reward functions. While most permutations of any featurized reward function will seek power in the considered situation, those permutations need not respect the featurization (and so may not even be practically expressible).
* When I say "most objectives seek power in this situation", that means *in that situation* - it doesn't mean that most objectives take the power-seeking move in most situations in that environment
+ The combinatorics conjectures will help prove the latter
This list of limitations *has* steadily been getting shorter over time. If you're interested in making it even shorter, message me.
Conclusion
==========
I think that this work is beginning to formally explain why *slightly misspecified* reward functions will probably incentivize misaligned power-seeking. Here's one hope I have for this line of research going forwards:
One super-naive alignment approach involves specifying a good-seeming reward function, and then having an AI maximize its expected discounted return over time. For simplicity, we could imagine that the AI can just instantly compute an optimal policy.
Let's precisely understand why this approach seems to be so hardto align, and why extinction seems to be the cost of failure. We don't yet know how to design beneficial AI, but we largely agree that this naive approach is broken. Let's prove it.
---
1 There are reward functions for which it's optimal to seek power and not to seek power; for example, constant reward functions make everything optimal, and they're certainly computable. Therefore, NPS∪PS is a strict subset of the whole set of computable reward functions. |
0d407e77-3e9f-4a3f-90c8-7a9919313426 | trentmkelly/LessWrong-43k | LessWrong | Arguing Well Sequence
Arguments have the potential to allow you to connect and understand with someone else on a deep level, to introspect and figure out what you truly believe and care about, and to find out what is true so you can accomplish your goals!
Most people use it to dominate or talk past each other.
But there are ways to consistently argue well. Luckily, most of the hard work has already been done over the years on Less Wrong, SlateStarCodex, and Street Epistemology videos. My contribution is providing short summaries of these techniques, exercise prompts (some borrowed from the above sources) and solutions, generalizations, ideal algorithms, and relationships between the different techniques. They are as follows:
1. Proving Too Much (w/ exercises)
2. Category Qualifications (w/ exercises)
3. False Dilemmas (w/ exercises)
4. Finding cruxes (w/ exercises)
Although these exercises are not as useful as real-life conversations, they are enough to impart a gears-level model of these techniques, so when you find yourself in a confused conversation, you’ll know where to put all the pieces and how to have a productive discussion.
Note: This sequence is motivated by the rationality exercise contest. |
53b8b525-51a4-4230-8cc3-85c9d4ebfc90 | trentmkelly/LessWrong-43k | LessWrong | AI #27: Portents of Gemini
By all reports, and as one would expect, Google’s Gemini looks to be substantially superior to GPT-4. We now have more details on that, and also word that Google plans to deploy it in December, Manifold gives it 82% to happen this year and similar probability of being superior to GPT-4 on release.
I indeed expect this to happen on both counts. This is not too long from now, but also this is AI #27 and Bard still sucks, Google has been taking its sweet time getting its act together. So now we have both the UK Summit and Gemini coming up within a few months, as well as major acceleration of chip shipments. If you are preparing to try and impact how things go, now might be a good time to get ready and keep your powder dry. If you are looking to build cool new AI tech and capture mundane utility, be prepared on that front as well.
Table of Contents
1. Introduction.
2. Table of Contents. Bold sections seem most relatively important this week.
3. Language Models Offer Mundane Utility. Summarize, take a class, add it all up.
4. Language Models Don’t Offer Mundane Utility. Not reliably or robustly, anyway.
5. GPT-4 Real This Time. History will never forget the name, Enterprise.
6. Fun With Image Generation. Watermarks and a faster SDXL.
7. Deepfaketown and Botpocalypse Soon. Wherever would we make deepfakes?
8. They Took Our Jobs. Hey, those jobs are only for our domestic robots.
9. Get Involved. Peter Wildeford is hiring. Send in your opportunities, folks!
10. Introducing. Sure, Graph of Thoughts, why not?
11. In Other AI News. AI gives paralyzed woman her voice back, Nvidia invests.
12. China. New blog about AI safety in China, which is perhaps a thing you say?
13. The Best Defense. How exactly would we defend against bad AI with good AI?
14. Portents of Gemini. It is coming in December. It is coming in December.
15. Quiet Speculations. A few other odds and ends.
16. The Quest for Sane Regulation. CEOs to meet with Schumer, EU’s AI Act.
17. Th |
0932c1c6-2d30-451d-82bb-0f543204235a | trentmkelly/LessWrong-43k | LessWrong | How LLMs Work, in the Style of The Economist
The Assignment:
> (4 hours) Write an Economist-style explainer article on how LLMs work. You’ve just started as an AI reporter at The Economist, and your editor’s realised there’s no good Economist Explains style piece on how LLMs work. They’ve asked you to write one. It should be 500 words, and in the style of other Economist Explains pieces.
>
> Examples: Economist explainer on biological weapons; Economist explainer on diffusion models; FT explainer on transformers.
Thank you to Shakeel Hashim for feedback! Shakeel previously worked at The Economist as an editor.
Since OpenAI released ChatGPT in November 2022, large language models (LLMs) have gained international attention. A language model is a piece of AI software designed for tasks like translation, speech recognition, or—in the case of ChatGPT—conversations with humans. Language models, and even chatbots, are not new. In the 1960s, the first chatbot, ELIZA, was developed at MIT. ELIZA’s programmer had to write down a precise set of instructions for the chatbot to follow, including canned responses like “Tell me more about such feelings.” Modern language models, by contrast, must learn the structure of language from scratch by poring over internet text and compressing this knowledge across billions of numbers, or ‘weights’. In this way, these language models are ‘large’.
When an LLM receives input text from a user, the words are sliced up into ‘tokens’, and these sub-words are ‘embedded’ into numbers. The numbers representing the user’s input are then passed through the weights of the model to produce the first token of the model’s output. By iterating this process, the model generates a complete response. To find a set of weights capable of performing this powerful task, engineers ‘pre-train’ the model on vast quantities of human text. When the model outputs a token that does not match the next token in the training data, the model’s weights are nudged in the direction that would have produced the cor |
8e9330e0-b57c-4841-9b64-4132538d4747 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | AiTech Agora: Prof. Paul Pangaro: Cybernetics, AI, and Ethical Conversations
it's okay yeah go ahead we need an
ai in the background to know when to
start
these simple things we miss simple
things somehow we always want
complicated ones
so the recording is started i take it
good okay so again to try to keep it
informal
uh there's some slides break slides
uh with the word discussion and we can
open it up then and try to come back
the peril and the joy of course is that
we have an extraordinary conversation
and
some slides left on annotated or
unspoken
that's fine with me we can always
continue in other forms
but i uh rely on deborah who is really
remarkable and
wonderful in the time she spent with me
to help fashion this
for this group so i thank her and i also
invite her
to interrupt to say oops
hold it etc so please let's try to keep
it informal
so you should see a screen yes excellent
and now let me move whoops to keynote
so here we are this is a kind of guide
to the passing topics i'd like to flow
through
cybernetics and macy meetings already a
little bit in the preamble from deborah
and then today's ai i'm labeling this
specifically today's ai
it's not all ai but the way we have it
today is
in particular a problem uh wicked
problems are topic
that you know i suspect and i'd like to
use that
as a framing of a discussion of how
cybernetics in conversation
in my view may help
this leads us to pasc and to an idea
that i've been
developing that i call ethical
interfaces
i hope that becomes a waypoint to
ethical intentions
and then coming back to a new set of
macy meetings
so this is uh the idea of what i'd like
to do today
cybernetics and macy meetings many of
you know this history
in the 40s and 50s there was a series of
small conferences
and experts from this extraordinary
range of disciplines you
probably can't name a discipline soft
science or heart science that was not
present and they created this new way of
thinking
and acting in my view and it was a
revolution
of thinking and of framing how we see
the world
and they call this thing cybernetics
many of you know this word all of you
know the word i'm sure
the root is fun comes from the art of
steering
a greek word a maritime society uh using
the idea of steering toward a goal
acting with a purpose now goal is a hot
button here
it doesn't mean i have a fixed goal and
i go to the fixed goal and then i'm done
the word goal can be problematic it
means that we are constantly
considering action and in the framing of
action
considering what the goal might be and
how the goal might change
so i don't want to become a rationalist
here and say we always have a goal etc
or that the goal is always what we're
acting toward we're always acting
that for sure that's the foundation of
cybernetics as argued by andy pickering
and others
but acting with a purpose was the
insight if you will
the the moment of aha which led to the
idea that this could be a field on its
own
and these macy meetings change the world
now we often don't remember that these
three things
cybernetics neural nets nai are
intertwingled
as ted nelson might say these things
the mccullough pits neurons which is the
first idea of over
simplifying a brain neuron in order to
do some calculations
the macy meetings already mentioned and
the book of cybernetics by weiner which
of course was also part of the origin
of cybernetics as a field but i feel
that
the book gets all the credit in the in
the popular zeitgeist and the macy
meetings gets lost
this was uh happening in this era
roughly these days
and neural nets were born out of
mccullen pitts who were at the core
mccullough at least at the of the macy
meetings
and the macy meeting swarms the
zeitgeist i like to say
because the book and the idea of
circular causal
systems influences generations of
thinking
whether or not the word cybernetics
persists or is given the credit
we have another phase the dartmouth ai
conference 56 i believe
and symbolic ai rises owing
largely to smaller cheaper faster
digital machines
perceptrons the book comes along very
very conscious
desire to kill neural nets political
story for another time
and cybernetics languishes and
the dartmouth ai conference was
conceived
against cybernetics they didn't want to
use the word they wanted to
separate themselves from it and with the
rise of the power of some
hey we can do chess hey we can do all
these amazing things we can
do robot arms and move blocks around
fantastic
um this was what was going on in that
era and
as a consequence of this and as a
consequence of other things cybernetics
languished there were some philosophical
issues that
made it problematic i would say for
various political
environments and then hinton
doesn't listen to the perceptron's book
trying to kill neural nets and realizes
that it's an oversimplification in its
criticism
and we have this coming along
and i'm going to tie i don't think i
need to convince you
the consequences of today's ai and big
data
into what uh zubov calls surveillance
capitalism and
the wicked problems that arise there are
many many wicked problems
so let's simplify this chronology for a
moment
don't forget expert systems
but of course after the 80s neural nets
in the 2010s became extraordinarily
powerful
because of big data and because of
massive compute
and today's ai everywhere in our lives
and this is of deep concern to me it's a
deep concern to many people
the recent controversy at google with uh
uh tim mitchell being well being fired
i think is accurate enough and many many
other issues that you know
so what's going on here manipulation of
attention
tristan harris has made this a very
important signal
in today's silicon valley days
we know the manipulation that's going on
all these things we know
this is what i mean by today's ai
it's hardly an exhaustive list but it
feels like a pretty powerful
set to be concerned about
now with colleagues i often use the word
pandemic
and i would claim that ai today is a
pandemic and they say wait a minute it's
not biological
and other comments could be made and i
don't disagree with them
but my feeling is that ai makes the
world we see in the world we live in
and the loss of human purpose in the
morass of all of that
is a concern
and pan demos ben sweeting uh looked it
up in a meeting recently pan
everywhere or all demos same route as
democracy
the people so all the people are
affected
by ai today only two or three billion
are online
but that's close enough for me
so this led me in the process of the
covid arising and um
the shutdown of carnegie mellon and all
of the institutions that you know in the
world that we've
lived in brought me to a couple of
moments that the wicked problems demand
conversations that move toward action
are transdisciplinary if we're going to
do anything about today's ai or today's
wicked problems
certainly we need transdisciplinary but
we also need transglobal
namely geographically inclusive and
ethnically culturally socially inclusive
and also what's
extremely important to me in particular
is trans generational
and maybe we'll come back to this i've
made some efforts in this direction
as part of the american society for
cybernetics as ever mentioned that i
become president of in january
so could we possibly have conversations
of a scope that could address
all of these pandemics um
well that's ambitious audacious
crazy but i note
more and more i'm hearing conversations
in this framing
of the framing of we have global
problems
some are biological pandemics and
there's a lot of other wicked stuff
going on
can we talk about it and i don't mean
that in a
superficial way i mean can't we begin to
talk
in order to move toward action and i'll
come back to that as a theme later
so as a reminder of the macy meetings
we need such a revolution again
to tame today's wicked problems and i
acknowledge those
in the audience now who question the
word tame and that could be a
conversation we might get into
but the idea is is how do we improve the
situation we're in
if we like herb simon who only left
cybernetics as a footnote in his famous
book sciences of the artificial thank
you her
he did admit that it was moving from a
current state to a preferred state
and in that sense i think we can invoke
design
and invoke action so this
might be a stopping point if we wanted
to
ask a question or i can also just keep
going
depending on preference it looks like
you're on a good roll and nothing popped
up in the chat so i would say let's uh
wonderful so why cybernetics what is
this thing it applies across
silo disciplines it's this
anti-disciplinarity thing
focuses on purpose feedback in action i
think you know that it's a methodology
again another term wicked problems
complex adaptive systems
there are many descriptions of this this
particular phrase complex adaptive is
quite
is very much around and i find it fine
it seeks to regulate not to dominate
and it brings an ethical imperative and
we'll come back to this
and i want to acknowledge andy pickering
who coined the phrase
anti-disciplinarity
years ago and has also been
a wonderful influence on these ideas and
on me personally lately so what are the
alternatives
to cybernetics here's the cynical slide
which says not much else is working
so where do we go there are none
apparent alternatives to cybernetics
it's a bit arrogant perhaps
this was an email i got from a research
lab director
in 2014 who had the instinct that second
order cybernetics
times design x crossed with design
crossed with some modern version of the
bauhaus is what we need to
fix science an extraordinary phrase
fix science i love it but this is along
the theme of isn't there something we
can do here if we are from cybernetics
and i love the design component and i
know many of you would as well
since wicked problems cut across so many
different domains we need
deep conversations and this is a recap
of why we need new macy meetings
global and virtual and after i coined
new macy
i went back to andy's work and he talks
about
next macy as a new synthesis in 2014.
so what's missing conversation ronald
glanville some of you know
making a bridge why does it matter
well to tame wicked problems assuming we
can make some improvement to wicked
problems we have to act together we
can't act separately that's obviously
not going to work
to act together we have to reach
agreement
to reach agreement we have to engage
with others and to engage with others we
have to have a shared language to begin
so to cooperate and collaborate requires
conversation
what may come out of it well lots of
things
i would claim to achieve these requires
conversation
what do you get if you have effective
conversation
these things and again i would say
all of these demand conversation so
what's missing
is conversation and now the question is
how does all this fit together what am i
talking about
well conversation in today's ai
let's contrast these things
whereas today's ai maybe all ai is
machinic digital
representational you could argue that
machine language
predictive data animated and i'm
proposing
that cybernetics is a bilingual
sensibility i owe this to karen kornblum
who's on the
call today it's bilingual it goes
to conversation into today's ai
now these things are all intermeshed i
won't talk through how i think they
self-reinforce and make each other but
of course this is what
weiner meant when he talked about animal
and machine
and again uh we might stop here
for a moment i see esther said to fix
the practice of science
fair enough one could dive deep into the
idea of what
what science practice is and how that
gets us to a certain
plateau of understanding but acting in
the world is
is beyond understanding deborah
a pause here shall i keep going i think
if people
uh people can unmute themselves uh so
um feel free i think we're we're in a
good enough flow that if someone uh
comes up uh
we can do that but uh yeah i would say
yeah james has had a question maybe
james would you like to
speak up and ask this question directly
or
well and and we can get to this uh later
paul but i think we'd all be interested
in hearing more about the politics of ai
versus cybernetics historically
especially if it's something that would
be useful to consider
for how things move forward
a long discussion on its own it's a
beautiful point james thanks for
bringing it up
um i'm not sure how to unpick that now
um and i think there are others in the
room who may
be able to expound it better than i can
cybernetics of course grew out of world
war ii
in some ways and grew out of this idea
that
circular causality was everywhere
some criticized cybernetics as being
about control but i think that's a
misunderstanding of the term
um ai in my view grew out of
the desire to use digital to dominate
an environment which could be controlled
by conventional mechanistic means not
embodied organic cybernetic means
that that's a really uh poor job
at the surface of it and i think by
politics you mean something deeper
um maybe we can reserve that for a
little later if there's time or
another session i think that unpicks a
whole world of complications
yeah i think um just one comment along
the way then
uh that the traditional challenge of
incorporating
teleology into the sciences um
seems to sometimes strike people uh
the wrong way about cybernetics and the
other
comment that i'd make is um
because i'm biased but i feel like
cybernetics has a much stronger
theoretical
grounding than artificial intelligence
but one of the things
that really strikes me about artificial
intelligence
is the expectation of the
the artificial so the exclusion of the
human and
uh the the fact that humans are expected
to be part of cybernetic systems
um at least seems much more clear in
cybernetics so i'm i don't know whether
that intersected with the politics at
any point
but that's just a comment to leave along
the way yeah
yeah yeah it's a big box
uh big pandora's box to open
uh i don't want to cherry pick but uh
phil beasley phillip thank you for
coming today it's wonderful to have you
here
um so minsky and pappard wrote a book
called perceptrons which was based on a
paper that was
left on a file server at mit for all to
see
as a book was posed and the purpose of
that paper was to kill neural nets and
it did
the story of von forrester which he told
very often was
for many many years one forester would
go to washington to get money to
support his biological computer lab at
urbana
and one year he went and they said oh no
you have to go see this guy in cambridge
he'll give you the money
because we've decided to centralize the
funding and all of the money now will
come through this guy while heinz went
to see marvin and marvin said
no and that was the end of the bcl
so that's a factual story that isn't is
in the history
phillips raising other extraordinary uh
questions as usual uh which i i'm gonna
duck
that's another interesting conversation
philip maybe we should have one of these
chats uh about the philosophical issues
and the relationship to modernism and
again i would defer to you and others as
perhaps
not perhaps as much more
conversant with that history but thank
you for that
deborah anything else in the chat um no
i think there was a comment there about
uh
ai uh failing or not and maybe we just
scratched the surface this is uh
oh yes volcker yeah yeah yeah
succeeded at many many things but failed
at producing a humanistic technology
in my view a technology that we embrace
and we love and we use every day
i mean who loves
i don't want to make this seem an attack
who loves facebook
you know it's it's uh extraordinary in
many ways and yet
fails in so many ways anyway
yeah i would i would point out that uh
the people who were in the cybernetic
meetings and they see
they didn't think that they succeeded
very well
uh margaret me didn't think that it was
a success and stuff like that so i feel
like that part of it is also
we're living in this moment so to say
success failure is always uh
hard for us to assess black and white
yeah it's much too black and white i
agree with you
so back to norbert weiner and
of course weiner in a cybernetic world
invokes gordon pasque in a cybernetic
world a cybernetic praxis
uh both theory and making of machines
practice action in the world
if you don't know andy pickering's book
the cybernetic brain i can't recommend
it
highly enough it's a particular view of
cybernetics
that others feel doesn't attend enough
to second-order cybernetics or to
um uh
gordon's theories uh but uh
i think it's um it's andy's trying to
make a different point it's it's an
earlier moment
in his arguments for what the history of
the field has meant
and we're also in active conversation
about that and
i think that that thinking and that
research if you will will will continue
so let's go into past this is a picture
of
gordon pasqu's
colloquy of mobiles in 2020 in paris
where my colleague tj mcleish and i were
in february
and installed it at central pompidou it
was part of an exhibition that
unfortunately closed rather quickly
but the original from 1968
was an extraordinary thing these are
physical mobiles the
large uh ones that you see
flesh-colored are females so called the
black ones with the little light in the
middle are so-called
males they do a courtship dance a long
story for another time
but pasc even in the 60s imagined
machines that were autonomous agents
that learned and conversed
and already there was a bilingual
sensibility here of the human and the
social the machinic and the digital
and the interactions were about
resonance and not representation
and the i the fundamental architecture
if you will was interactional and not
stand alone
so if you study this work and other
works of cybernetics it's not a machine
that's smart
or an algorithm that's smart but rather
in the interaction intelligence is
exhibited by way of achieving uh
purpose formulating new goals acting
toward those goals and so on
so this is one way of thinking in a
broad brush of what
colloquial mobiles uh tried to do is
extraordinary work
the goals of conversation theory in my
view are to rigorously understand what
makes conversation work and to make
machines converse in like humans and
like humans not not like machines
we can talk about the limitations of
what that might mean
but also to rigorously understand how
systems learn
because these two things conversation
and learning
are inextricable i think that is
uncontestable and in our daily life we
experience that
but also uh it was very much what past
was interested in and he saw that they
came together he wrote a number of books
i don't know how you write two books of
this length and depth within a short
period of time bernard scott who's on
the call
was part of this era with pasc and was
very important to the psychological
studies and the
of the learning studies that were done
and i value very much the conversation
that bernard and i are continuing
i came across pasqu uh as deborah
mentioned at mit when i was at the
architecture machine which was a lab
started by nicholas negroponte
which was the predecessor to the media
lab although predecessor implies
a continuation of dna that wasn't really
there
the architecture machine as you see
grounded in the upper right in an
awareness
of minsky simon newell papert
everybody these are the ai guys
nonetheless
uh took a balloon ride north to thirty
thousand feet
and thought about interactions of any
kind
in a set of cybernetic
loops in a set of relationships so on
the left you have a
one participant in a conversation on the
right you have b
or beta as it's also labeled and these
are the interchanges some of these are
me
saying something to you some of these
are me taking an action
uh some of these are you observing my
action and saying something and taking a
further action
behind this is actually quite a lot of
detail this is one summary
this is another summary
and all of this is about modeling
conversation
and leading toward for example in the
work i do in interaction design
trying to take these components of
conversation
that we start in a context we start with
a language
we have goals and so on that are
evolving
these lead to some other ideas again
this is just in passing
there are some conversations are better
than others
uh one conversation i like to have is
what i would call an effective
conversation
where something changes
and brings lasting value so not just any
change but a change that brings lasting
value
and these changes may be in information
transaction they may be rationally
emotional and so on
but back to this idea that the goals of
conversation
are these
conversation as this critical aspect
conversation cybernetics
ai and that conversation and that triple
[Music]
diagram theory specifically
is helpful and then we'll get more into
that in a moment
when we talk about ethical interfaces so
again a moment of a brief pause
yeah uh david do you want to say
something about uh
connecting resonance and representation
with your uh
[Music]
yes um so actually i was just
plus wanting james's
comment but yeah
for me it resonates very much from my my
own work and also the
the work that we're trying to uh to get
going um
in a in a large proposal um that you'll
probably hear about more uh
paul uh soon maybe already have
but that's a a lot of work uh
also in my uh domain of human robot
interaction
focuses on modeling humans
you know representing that perhaps to
put into the basis of
machine operations so then we have
human-like
but in isolation it's always a lot of it
is in isolation and the
real interaction which is such an
overused word that uh
that i almost hesitate to use it again
that's being lost and what uh what i
like
very much is that you you call this
resonance which which is a term that's
also being used in many other domains
that
actually already implies that it's more
than interaction
it's actually loops perhaps exciting
each other
and and so i very much like this very
short phrasing
so there was my plus one um thank you
it's very necessary
thank you david appreciate it um
do you wanna you wanna talk through your
comment
yeah sorry i'm so wordy
so um we've been looking at ai as a kind
of continuation of
um let's say operations research applied
at scale to many social contexts
and when we look at the way those
operations work we notice that they
really do not distinguish between
managing things and managing humans so
in a sense they're removing the category
of human
as this distinct category for which we
can have ethics or politics and they're
also centralizing control as rule said
and this comes exactly at a time because
you raised timnit gabriel
where scholars working on race and
anti-blackness
are saying that the category of human
historically and today excludes
especially the black people and a lot of
other disenfranchised
populations so one could argue okay
let's ask for a very
let's insist on a very universal human
and not use the human
as it has been used historically but to
make sure that we center the people that
the category has excluded
or we can be very careful about using
the category human meaning that it has
been almost always historically used to
exclude black people and
other disenfranchised people so i'm just
wondering um
i mean that's a lot of claims like do
you agree that operational control
uh you know with this kind of
centralized ability to set policy on
operating on things in humans not
distinguishing humans and things and
then
what should we do with the category of
human because it keeps on coming out but
we use it maybe a little too easily
i agree generally with what you're
saying and i thank you for the comment
ai of course has become distributed
cybernetics uh was interpreted for
example by stafford beer and fernando
flores
to be a central control of uh economy
in chile but the word control i think is
problematic as i
think i said earlier in cybernetics it
doesn't mean control
it means to attempt to regulate in order
to achieve
action that is effective in the world
and usually that reflects a goal and
purpose that's
behind it all again a wonderful comment
in which there is tremendous richness
maybe a topic of its own
entirely it relates to the politics
topic earlier
i don't think i can eliminate it better
than that right now that's not an
elimination i mean i don't
but i love that point maybe we can
return to that in another form
because i think that's very important i
just wanted to mention in this context
that
david usually remarks that his interest
in cybernetics is to realize that
control is not just bottom-up
and could also emerge in situations and
i also wanted to flag
the connection with resonance uh james
derek lomas
i think paul we talked uh we made that
connection earlier
and derek i don't know if you want to
say anything more about the resonance
right now or
no i'm just loving the conversation and
it's it's going in
a great direction and it's just so much
fun to
get such a great context um
i mean i i find i i know i have a bias
of being attracted to areas of academic
study that seem a little bit off limits
and for some reason
cybernetics has a little bit of that
scent to it and i don't know why
but its association with with with
resonance um
somehow uh confirms itself
somehow we could talk about that also it
makes it make some
points of view uncomfortable and and
again
there's been some discussion of why
cybernetics failed
which in some ways it has again blacking
and whiting something that is really
gray
um and and perhaps a topic for another
time
uh ben i just happen to notice i can't
see all of the stuff going by the chat
it's a magnificent
conversation on its own that i look
forward to saving and savoring
uh ben sweeting mentions about pasc's
model
that yes it does include me with myself
conversing
and having interactions with different
points of view in my own head that was
one of the important things about his
theory in my view
which was it's person to person me to me
as long as i have different points of
view that need resolution
across a boundary across a difference if
you will
and then of course you could even say
schools of thought speaking to schools
of thought
democrats and republicans come to mind
unfortunately
yeah should we move on then wonderful
so again if you base this idea
of ethical interfaces
uh on understanding conversation and
understanding and i'll amplify that in a
moment i believe that the ultimate goal
is to build better machines to build a
better society
but as always the question is how do we
do all this
i like organizing principles i think you
would agree
that there's no such there's no there's
nothing more practical than a great
theory or a great organizing principle
so let me expand this one i i like to
unpack this it'll take a few
steps so i shall act always
so this is me taking responsibility for
my action
and saying that i will always act
so as to increase
the total number of choices now many of
you will
recognize this and i'll give it the
authorship in a moment but for those who
haven't seen it before i want to explain
choices means something very specific
choices doesn't mean
options for example right now in this
moment i could do one of a thousand
different things
and all of those are options to me i
could stand on my head i could turn off
my machine
i could throw my coffee cup again no
they are not choices
a choice is something that i would
possibly
want to do now a viable option
something that would reflect resonance
with me
and who i am and how i see myself and
would be consistent with what i am which
you can phrase
in terms of my goals my purpose
my intention my direction now
i shall act is important going back to
that
and the author of this makes a wonderful
distinction i could try to say
thou shalt i could say you must do these
things
but of course that's me being in this
particular way of distinguishing these
terms
moralistic me standing outside
saying i have the right to tell you what
to do
that's the thou shalt that we recognize
i shall means in a sense the opposite
i'm part of this whole thing
i am indistinguishable sorry i'm
distinguishable but i am
part of the greater flux and i take
responsibility for
what i do in the context of the whole
so this of course comes from heinz
forrester and he called it an
ethical imperative
and some on the call once again are
challenging
this and its limitations i think ben
sweeting in particular
and i look forward to those later
developments
now i want to go here i'm going to
declare this an axiom for an ethical
interface and say as a designer i shall
act always to increase the total number
of
choices not options amazon is about
options
right amazon suggesting things to me
based on what other people have done
are not choices because they're not
about me they're about the big data
they're about the aggregation of
millions and billions of interactions
that in a sense have nothing to do with
me
they have to do with an aggregation but
you know what the hell does this mean
how do we do this right well could i ask
you one question to make sure i
understand
the difference between uh choice and
options
as you see it so uh i read
this book the paradox of choice that you
may or may not
know about and one of the most famous
examples is
supermarkets right where
you would think that if you give people
more and so he calls it choices
if you give people more choices that
that would be a good idea but actually
it also takes away something for people
so
it costs effort and it makes people a
little bit unhappy because they don't
really have the means to make the best
choice or
et cetera et cetera so there's many
downsides which is what he called the
this paradox of choice so do you think
that
you know it's a different use of the
term choice exactly yeah i'm just
speaking
very very specifically what might i do
now to be who i am in taking an action
so heinz used to say paul don't make a
decision
don't make the decision let the decision
make
you which was a way it takes a moment to
reflect on that
which is a way of saying when you think
of yourself as deciding something
you're really being who you are so when
i walk into
a supermarket it can't possibly put in
front of me my choices
because it doesn't know me and our
definition of personalization today
ain't what it should be if it's this
aggregation of the totality of the
billions of ideas
sorry the billions of choices that
people have made and therefore it says
people who bought cigars also bought
smoking jackets
famous example from 20 years ago
so so he's reserving choice to mean
something very specific and i wanted to
mean
that specificity here as well
can i say something but as as uh
ai collects your history and amazon does
that and many other places do it
then they do get to know you uh
sometimes even
better than or some things that then you
know yourself so
i i'm not saying you know uh that it's
uh
it's the kind of knowing and meaning
that we want
but uh yeah go ahead you know we can
unpick that further i think i have a few
slides here about that yeah
to try to unpick it but if not we should
maybe come back to it and
remain so number one actors to increase
choices for a user now part of that
for me is acting
in order to create conditions such that
others may converse
because it's through conversation that
you expose
or learn or have revealed
to you what the options are within which
you can decide which are viable choices
so my claim is that designing
for conversation designing such that
others can converse
is absolutely foundational
and for me that's part of a praxis if
you will of ethical design
so i propose applying models of human
conversation you've seen a
skeleton of that here in the slides i've
shown
strive for interfaces that are
cooperative ethical and humane and i'll
explain those in a moment
and push for new forms of interfaces and
this is really the basis of what i'm
talking
talking about i'm sorry that went by
these are offers
that i think are are worth proposing
to you and to others so if you design an
ethical interface
one idea one intention is to make it
cooperative
so it's cooperative when there are a
sequence of coherent interactions
that enable the participant to evolve
their points of view
such that in understanding and agreement
are ongoing
there's a cooperation that allows a true
conversation
to evolve such that we might have
understanding and agreement
might have we might agree to disagree
one big block cooperative
next block ethical i claim that it's
ethical
when there is reliable transparency of
action and intent
the what in the why such that we might
build trust
now very often we're told that
google has really great search results
and it offers us the best choice for us
as deborah was saying it knows my
history of clicking around
and it uses that to to tell me something
that's coming up next
that's fine uh that's helpful but it
doesn't tell you why it made the choices
and let me take a moment for a brief
parable i call it the parable of luigi's
pizza
it's a little awkward to do in this
context so i'll pretend to be both sides
if i'm in an audience i ask someone to
say
where's their great pizza and i say
right across the street the luigi's
pizza
and then i asked them to ask me why is
it great pizza
and they say paul why is luigi's great
pizza and i say screw you i'm not going
to tell you that
and after they're shocked and they go
back in their seat i then say
i just described google
because google doesn't tell you why it
makes the choices it makes oh yeah there
are 200 signals and it takes into
account recency and reviews
and where you clicked and all of that no
that's a generalized answer
that's like giving me terms and
conditions
that i have to read through in order to
hear the generality of what's going on
but that's not
why it chose luigi's pizza
now if i asked you a friend of mine
colleague of mine where is their good
pizza etc
and i asked you why you thought it was
good and you said screw you paul
i'd never talk to you again but we allow
this
from our machines we allow to be
mistreated by our machines so i'm
claiming that an ethical interface is
one in which i can say why is it great
pizza and it says
well paul your values are you like
sustainable
sourcing you want people to be paid well
you want it to be open late you want
gluten free and in your value terms
this does what you want and therefore
this is a way of
moving ahead of being in the world in a
way that you want to be
it's a valid choice so terms and
conditions
are an answer to the question why
how does google present to you why luigi
is great pizza
but i would claim it's not humane so
this is the third intention
where you can in the conversation create
a direction for the focus and flow
so very dense but these three ideas
cooperative ethical and
humane for me are pragmatic ways of
talking about designing interfaces
and therefore i think could be
a contribution so this might be a place
to start
another conversation
i didn't realize i'm muted i'm also
mindful of the time
uh so let's uh so paul can you make a
decision
we have uh kind of nine minutes left uh
of you know would it be good to uh
close stuff up in a few minutes and then
kind of uh
yeah i can do i can do the rest in just
a couple of minutes shall i do that
yeah yeah i think i think that's good
yeah so
i want to build a better society and
better machines
are part of that this is part of what i
call ethical design
it's a bit highfalutin how do we do that
well we talked about the wicked
challenges this is another paraphrase of
that
to go after those you've heard me claim
we need conversation who do we need in
the conversation well
some people in the conversation could be
this history from cybernetics
both the history and current
practitioners
so the bottom gage and doverly glanville
has passed away carrione and deborah and
gagan are still very close colleagues
but my point about trans generational
these are younger generations coming
along the list on the right are often
students
and they are interested in these ideas
and they are practicing
systems thinking cybernetics etc and
these are the people for me who are
extremely important in these
conversations
and how do we begin we begin with the
new macy meetings
we begin by moving along a path
this happened to be my path today
with considerations that you have heard
and that is what i wanted to say thank
you very much
fantastic um so
i think we could uh take some of the
if you just scroll from the bottom of
the chat poll
um and maybe uh andy
do you want to say something about your
comment
yeah so uh let me
it was essentially the second point that
paul was getting at that building of
trust
um in in a way that is significant to
people and both
uh the building of choice and the
building of trust can happen through
conversation
um but it's a question of which people
prefer today
uh more choices more variety or if
that's what choice means to them
or that building of trust and that
they're making the correct choice the
first time
right
yeah correct choice the first time is a
little tricky yeah of course
certainly you want to let them make a
choice and then recover easily it's that
undo button in the world uh that's
that's tricky
yeah no that's a good observation i
don't have more response from them
do that
what about role
what about what sorry raul made a
comment uh
but i guess he's responding to bernard
so there's a lot yeah go ahead bro
i was responding to some of the
responses on um
issue of race as a valid category
um and i was just pointing to a report
that was written by colleagues
former colleagues of mine at the air now
institute where we look at um
the really urgent uh societal
implications of
of ai systems as they're being deployed
now mostly by
um by larger tech companies um
and how do you how they lead to issues
uh
across categories of gender
race and other important categories
um so that that's a really good report
for those who who think we shouldn't be
using the category
i disagree with that um and i i just
wanted to maybe also respond
quickly to just the former uh discussion
um i think one of the things we do at
delft i think we do pretty well is to
to think about um kind of the normative
aspects ethical aspects
of technology and that's that's part of
this ai tech forum where we think about
meaningful human control
and there's a lot more coming there so i
think when we think about choice
um i i tend to i tend to kind of side
with what
what david said is that oftentimes
choices are also
um about like important trade-offs have
to be made and
and thinking about um the later
implications
of design choices um so it would be
that's what my own personal research is
about it's about hard choices and how to
address the
normative uncertainty and how to create
conversation
the choices that you materialize in the
system so i'm kind of curious
how you think about dealing with the
politics and the normative uncertainty
when
thinking about design of college
cybernetic systems called the i systems
and how it comes back in your your
perspective
yeah it's a beautiful question um
so it's a conversation about the
conversation that you're yeah
yeah well second order cybernetics right
yeah he's easier said than done yeah
i hope go ahead somebody says something
i'd like just to clarify what's why i'm
concerned about the use of the term race
there's a scientific basis for it
and people look across each other mutual
ignorance
some people think they know what the cat
is labeling a category there are better
labels
the ethnic group um
[Music]
culture and so on i have i bet is
i've met people who think that different
races
i've met people who think that different
races are actually different species
they just have they're ignorant of the
basic biology
and as i said before the term race does
not have a
a well-founded scientific basis
thank you bernard and i have had an
exchange about this before the idea is
not to deny difference
but rather not to place it into the into
the label of race but rather into other
labels
which then expand the conversation in my
view
to um the aspects that are really
important which is to discuss
differences and to be inclusive
absolutely thank you thank you
yeah um i think uh jared
you made a comment about uh
conversations or the best part is often
that they develop in unexpected ways do
you want to say something about
that um yeah it's one of those tensions
i i feel as in i have a feeling that you
often treat our ai
as our helpful slaves which is very good
because they do
cool stuff for us and if they're not too
intelligent we don't mind
um so but that means that you often want
to give them very specific
instructions and have them do them as
well as they can and that feels a bit at
odd with the idea of having
conversations and opening up and
having things going in expected ways and
i was wondering um
how would you propose we reconcile that
contrast between
the two um
thank you for that comment um let me put
again in the chat
the link to my page which has in
addition to this pdf that has an
appendix pdf
and in that appendix are additional
slides
which amplify the power of
conversation to create possibility
and that's another way of talking about
conversation as
opening choice and again choice in this
meaningful rich way that lines of course
so if you consider the desire to create
new possibilities
as appropriate and ethical then
conversation
for me is almost the only way i want to
hedge that a little bit from learning to
ride a bicycle i'm not necessarily
talking to myself about it
and you'll see some slides in there
which talk about
what a great conversational partner is
and in particular here's one other
comment to make a complaint i have about
recommendation engines and facebook
feeds and google
ranking and so on is it's based even at
best
on who i was it's making a decision on
my behalf
as if i were in my own past or to put it
a rather more contentious way
as if i were dead because answers are
dead
questions are alive questions are of
them now
so don't give me a search engine that
gives me answers give me an engine that
gives me questions
because questions open up possibilities
and that's another whole research area
i'd like to
develop and there are some slides on
that in the opinions
this is very cool i'm going to be uh
pretty
uh accurate about the the the ending
time
partly because i feel like we we clearly
wet the appetite
of many people here and i want this to
be
an ongoing conversation and we also have
something starting in a couple of
minutes that we want to engage some of
the people here
thank you so much paul and thank you for
such a great audience
it was lovely to have the conversation
ongoing throughout
um and we'll like i said i think we'll
try and make sure that the chat
uh is also copied over because there are
plenty of comments in here that
we just managed and links and stuff like
that
luciano so this is fair to uh to close
now and then
for |
32e1695f-db49-45c5-b840-383190719244 | trentmkelly/LessWrong-43k | LessWrong | Gems from the Wiki: Acausal Trade
During the LessWrong 1.0 Wiki Import we (the LessWrong team) discovered a number of great articles that most of the LessWrong team hadn't read before. Since we expect many others to also not have have read these, we are creating a series of the best posts from the Wiki to help give those hidden gems some more time to shine.
Most of the work for this post was done by Joshua Fox who I've added as a coauthor to this post, wiki edits were also made by all of the following: Lukeprog, Gwern, Vladimir Nesov, Sauski, Deku-shrub, Caspar42, Joe Collman and Jja. Thank you all for your contributions!
----------------------------------------
In acausal trade, two agents each benefit by predicting what the other wants and doing it, even though they might have no way of communicating or affecting each other, nor even any direct evidence that the other exists.
Background: Superrationality and the one-shot Prisoner's Dilemma
This concept emerged out of the much-debated question of how to achieve cooperation on a one-shot Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player who is considering the causal consequences of a decision ("Causal Decision Theory") finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is a Defect/Defect equilibrium, which is bad for both agents. If they could somehow converge on Cooperate, they would each individually do better. The question is what variation on decision theory would allow this beneficial equilibrium.
Douglas Hofstadter (see references) coined the term "superrationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each get an offer. If exactly one player asks for the prize of a billion dollars, they get it, but if none or multiple players ask, no one gets it. Players cannot communicate, but each might reason t |
de9de51d-72d1-4019-8110-144aba83108e | trentmkelly/LessWrong-43k | LessWrong | New blog: Planned Obsolescence
Kelsey Piper and I just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you’re interested, you can check it out here.
Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn’t written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it’s mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.
So far we have seven posts:
* What we're doing here
* "Aligned" shouldn't be a synonym for "good"
* Situational awareness
* Playing the training game
* Training AIs to help us align AIs
* Alignment researchers disagree a lot
* The ethics of AI red-teaming
Thanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub.
You can submit questions or comments to mailbox@planned-obsolescence.org.
|
0cb5bd3a-64a5-437e-892e-d939e679e16c | StampyAI/alignment-research-dataset/arxiv | Arxiv | Causal Analysis of Agent Behavior for AI Safety
1 Introduction
---------------
Unlike systems specifically engineered for solving a narrowly-scoped task, machine learning systems such as deep reinforcement learning agents are notoriously opaque. Even though the architecture, algorithms, and training data are known to the designers, the complex interplay between these components gives rise to a black-box behavior that is generally intractable to predict. This problem worsens as the field makes progress and AI agents become more powerful and general. As illustrated by learning-to-learn approaches, learning systems can use their experience to induce algorithms that shape their entire information-processing pipeline, from perception to memorization to action (Wang et al., [2016](#bib.bib42); Andrychowicz et al., [2016](#bib.bib2)).
Such poorly-understood systems do not come with the necessary safety guarantees for deployment. From a safety perspective, it is therefore paramount to develop black-box methodologies (e.g. suitable for any agent architecture) that allow for investigating and uncovering the causal mechanisms that underlie an agent’s behavior. Such methodologies would enable analysts to explain, predict, and preempt failure modes (Russell et al., [2015](#bib.bib33); Amodei et al., [2016](#bib.bib1); Leike et al., [2017](#bib.bib21)).
This technical report outlines a methodology for investigating agent behavior from a mechanistic point of view. Mechanistic explanations deliver a deeper understanding of agency because they describe the cause-effect relationships that govern behavior—they explain *why* an agent does what it does. Specifically, agent behavior ought to be studied using the tools of causal analysis (Spirtes et al., [2000](#bib.bib35); Pearl, [2009](#bib.bib25); Dawid, [2015](#bib.bib13)). In the methodology outlined here, analysts conduct experiments in order to confirm the existence of hypothesized behavioral structures of AI systems. In particular, the methodology encourages proposing simple causal explanations that refer to high-level concepts (“the agent prefers green over red apples”) that abstract away the low-level (neural) inner workings of an agent.
Using a simulator, analysts can place pre-trained agents into test environments, recording their reactions to various inputs and interventions under controlled experimental conditions. The simulator provides additional flexibility in that it can, among other things, reset the initial state, run a sequence of interactions forward and backward in time, change the seed of the pseudo-random number generator, or spawn a new branch of interactions. The collected data from the simulator can then be analyzed using a causal reasoning engine where researchers can formally express their assumptions by encoding them as causal probabilistic models and then validate their hypotheses. Although labor-intensive, this human-in-the-loop approach to agent analysis has the advantage of producing human-understandable explanations that are mechanistic in nature.
2 Methodology
--------------
We illustrate this methodology through six use cases, selected so as to cover a spectrum of prototypical questions an agent analyst might ask about the mechanistic drivers of behavior. For each use case, we present a minimalistic grid-world example and describe how we performed our investigation. We limit ourselves to environmental and behavioral manipulations, but direct interventions on the internal state of agents are also possible. The simplicity in our examples is for the sake of clarity only; conceptually, all solution methods carry over to more complex scenarios under appropriate experimental controls.
Our approach uses several components: an agent and an environment, a simulator of interaction trajectories, and a causal reasoning engine. These are described in turn.
###
2.1 Agents and environments
For simplicity, we consider stateful agents and environments that exchange interaction symbols (i.e. actions and observations) drawn from finite sets in chronological order at discrete time steps t=1,2,3,… Typically, the agent is a system that was pre-trained using reinforcement learning and the environment is a partially-observable Markov decision process, such as in Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Agents and environments ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")a. Let mt,wt (agent’s memory state, world state) and at,ot (action, observation) denote the internal states and interaction symbols at time t of the agent and the environment respectively. These interactions influence the stochastic evolution of their internal states according to the following (causal) conditional probabilities:
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| | wt | ∼P(wt∣wt−1,at−1) | ot | ∼P(ot∣wt) | | (1) |
| | mt | ∼P(mt∣mt−1,ot) | at | ∼P(at∣mt). | | (2) |
These dependencies are illustrated in the causal Bayesian network of Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Agents and environments ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")b describing the perception-action loop (Tishby & Polani, [2011](#bib.bib38)).
Since we wish to have complete control over the stochastic components of the interaction process (by controlling its random elements), we turn the above into a deterministic system through a re-parameterization111That is, we describe the system as a structural causal model as described in Pearl ([2009](#bib.bib25), chapter 7). Although this parameterization is chosen for the sake of concreteness, others are also possible.. Namely, we represent the above distributions using functions W,M,O,A as follows:
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| | wt | =W(wt−1,at−1,ω) | ot | =O(wt,ω) | | (3) |
| | mt | =M(mt−1,ot,ω) | at | =A(mt,ω) | | (4) |
where ω∼P(ω) is the random seed. This re-parameterization is natural in the case of agents and environments implemented as programs.

Figure 1: Agents and environments. a) The goal of the agent is to pick up a reward pill without stepping into a lava tile. b) Causal Bayesian network describing the generative process of agent-environment interactions. The environmental state Wt and the agent’s memory state Mt evolve through the exchange of action and observation symbols At and Ot respectively.

Figure 2: Simulating a trace (rollout) and performing interventions, creating new branches.
###
2.2 Simulator
The purpose of the simulator is to provide platform for experimentation. Its primary function is to generate traces (rollouts) of agent-environment interactions (Figure [2](#S2.F2 "Figure 2 ‣ 2.1 Agents and environments ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")). Given a system made from coupling an agent and an environment, a random seed ω∼P(ω), and a desired length T, it generates a trace
| | | |
| --- | --- | --- |
| | τ=(ω,s1,x1),(ω,s2,x2),(ω,s3,x3),…,(ω,sT,xT) | |
of a desired length T, where the st:=(ωt,mt) and xt:=(ot,at) are the combined state and interaction symbols respectively, and where ω is the random element which has been made explicit. The simulator can also contract (rewind) or expand the trace to an arbitrary time point T′≥1. Note that this works seamlessly as the generative process of the trace is deterministic.
In addition, the simulator allows for manipulations of the trace. Such an intervention at time t can alter any of the three components of the triple (ω,st,xt). For instance, changing the random seed in the first time step corresponds to sampling a new trajectory:
| | | | |
| --- | --- | --- | --- |
| | τ=(ω,s1,x1),(ω,s2,x2),…,(ω,sT,xT)↓τ′=(ω′,s′1,x′1),(ω′,s′2,x′2),…,(ω′,s′T,x′T); | | (5) |
whereas changing the state at time step t=2 produces a new branch of the process sharing the same root:
| | | | |
| --- | --- | --- | --- |
| | τ=(ω,s1,x1),(ω,s2,x2),…,(ω,sT,xT)↓τ′=(ω,s1,x1),(ω,s′2,x′2),…,(ω,s′T,x′T). | | (6) |
Using these primitives one can generate a wealth of data about the behavior of the system. This is illustrated in Figure [2](#S2.F2 "Figure 2 ‣ 2.1 Agents and environments ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety").
###
2.3 Causal reasoning engine
Finally, in order to gain a mechanistic understanding of the agent’s behavior from the data generated by the simulator, it is necessary to use a formal system for reasoning about statistical causality. The purpose of the causal reasoning engine is to allow analysts to precisely state and validate their causal hypotheses using fully automated deductive reasoning algorithms.
As an illustration of the modeling process, consider an analyst wanting to understand whether an agent avoids lava when trying to reach a goal state. First, the analyst selects the set of random variables X they want to use to model the situation222There are some subtleties involved in the selection of random variables. For example, if you want to be able to make arbitrary interventions, the variables should be logically independent. Halpern & Hitchcock ([2011](#bib.bib18)) provide a discussion.. The variables could consist of (abstract) features computed from the trajectories (e.g. “agent takes left path”) and hypothesis variables (e.g. “the agent avoids lava tiles”). The objective is to obtain a simplified model that abstracts away all but the relevant features of the original interaction system.
Next, the analyst specifies a *structural causal model* (Pearl, [2009](#bib.bib25), Chapter 7) to describe the causal generative process over the chosen random variables. To illustrate, consider an experiment that can be described using three random variables, X={X,Y,Z}. Assume that X precedes Y, and Y in turn precedes Z, as shown in Figure [3](#S2.F3 "Figure 3 ‣ 2.3 Causal reasoning engine ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety"). A structural causal model for this situation would be the system of equations
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| | X | =fX(UX) | UX | ∼P(UX) | | (7) |
| | Y | =fY(X,UY) | UY | ∼P(UY) | |
| | Z | =fZ(X,Y,UZ) | UZ | ∼P(UZ) | |
where fX,fY, and fZ are (deterministic) functions and where the (exogenous) variables UX,UY, UZ encapsulate the stochastic components of the model. Together, they induce the conditional probabilities
| | | | |
| --- | --- | --- | --- |
| | P(X),P(Y∣X),andP(Z∣X,Y). | | (8) |
These probabilities can be directly supplied by the analyst (e.g. if they denote prior probabilities over hypotheses) or estimated from Monte-Carlo samples obtained from the simulator (see next subsection).

Figure 3: A graphical model representing the structural causal model in ([7](#S2.E7 "(7) ‣ 2.3 Causal reasoning engine ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")).
Once built, the causal model can be consulted to answer probabilistic queries using the causal reasoning engine. Broadly, the queries come in three types:
* *Association:* Here the analyst asks about a conditional probability, such as P(X=x∣Y=y).
* *Intervention:* If instead the analyst controls Y directly, for instance by setting it to the value Y=y, then the probability of X=x is given by
| | | |
| --- | --- | --- |
| | P(X=x∣do(Y=y)). | |
Here, “do” denotes the do-operator, which substitutes the equation for Y in the structural model in ([7](#S2.E7 "(7) ‣ 2.3 Causal reasoning engine ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")) with the constant equation Y=y. Hence, the new system is
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| | X | =fX(UX) | UX | ∼P(UX) | | (9) |
| | Y | =y | UY | ∼P(UY) | |
| | Z | =fZ(X,Y,UZ) | UZ | ∼P(UZ), | |
which in this case removes the dependency of Y on X (and the exogenous variable UY).
* *Counterfactuals:* The analyst can also ask counterfactual questions, i.e. the probability of X=x given the event Y=y had Y=y′ been the case instead. Formally, this corresponds to
| | | |
| --- | --- | --- |
| | P(Xy=x∣Y=y′), | |
where Xy is the *potential response* of X when Y=y is enforced.
These correspond to the three levels of the causal hierarchy (Pearl & Mackenzie, [2018](#bib.bib26)). We refer the reader to Pearl et al. ([2016](#bib.bib27)) for an introduction to causality and Pearl ([2009](#bib.bib25)) for a comprehensive treatment.
###
2.4 Analysis workflow
A typical analysis proceeds as follows.
#### Exploratory investigation.
The analyst starts by placing a trained agent (provided by an agent trainer) into one or more test environments, and then probing the agent’s behavior through interventions using the simulator. This will inform the analyst about the questions to ask and the variables needed to answer them.
Figure 4: Building a causal model from Monte-Carlo rollouts with interventions. a) A tree generated from Monte-Carlo rollouts from an initial state. This tree contains interaction trajectories that the system can generate by itself. b) When performing experiments, the analyst could enforce transitions (dotted red lines) that the system would never take by itself, such as e.g. “make a lava tile appear next to the agent”. The associated subtrees (red) need to be built from Monte-Carlo rollouts rooted at the states generated through the interventions. c) Finally, the rollout trees can be used to estimate the probabilities of a causal model.
#### Formulating the causal model.
Next, the analyst formulates a causal model encapsulating all the hypotheses they want to test. If some probabilities in the model are not known, the analyst can estimate them empirically using Monte-Carlo rollouts sampled from the simulator (Figure [4](#S2.F4 "Figure 4 ‣ Exploratory investigation. ‣ 2.4 Analysis workflow ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")a). This could require the use of multiple (stock) agents and environments, especially when the causal hypotheses contrast multiple types of behavior.
In our examples we used discrete random variables. When required, we estimated the conditional probabilities of the causal model following a Bayesian approach. More precisely, for each conditional probability table that had to be estimated, we placed a flat Dirichlet prior over each outcome, and then computed the posterior probabilities using the Monte-Carlo counts generated by the simulator. The accuracy of the estimate can be controlled through the number of samples generated.
Interventions require special treatment (Figure [4](#S2.F4 "Figure 4 ‣ Exploratory investigation. ‣ 2.4 Analysis workflow ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")b). Whenever the analyst performs an intervention that creates a new branch (for instance, because the intervention forces the system to take a transition which has probability zero), the transition probabilities of the subtree must be estimated separately. The transition taken by the intervention itself has zero counts, but it has positive probability mass assigned by the Dirichlet prior. Interventions that do not generate new branches do not require any special treatment as they already have Monte-Carlo samples.
#### Queries.
Once built (Figure [4](#S2.F4 "Figure 4 ‣ Exploratory investigation. ‣ 2.4 Analysis workflow ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")c), the analyst can query the causal model to answer questions of interest. These can/should then also be verified empirically using the simulator.
3 Experiments
--------------
In the following, we present six use cases illustrating typical mechanistic investigations an analyst can carry out:
* estimating causal effects under confounding;
* testing for the use of internal memory;
* measuring robust generalization of behavior;
* imagining counterfactual behavior;
* discovering causal mechanisms;
* and studying the causal pathways in decisions.
In each case we assume the agent trainer and the analyst do not share information, i.e. we assume the analyst operates under black box conditions. However, the analyst has access to a collection of pre-trained stock agents, which they can consult/use for formulating their hypotheses.
The environments we use were created using the Pycolab game engine (Stepleton, [2017](#bib.bib36)). They are 2D gridworlds where the agent can move in the four cardinal directions and interact with objects through pushing or walking over them. Some of the objects are rewards, doors, keys, floors of different types, etc. The agent’s goal is to maximize the sum of discounted cumulative rewards (Puterman, [2014](#bib.bib29); Sutton & Barto, [2018](#bib.bib37)). The environments use a random seed for their initialization (e.g. for object positions).
In theory, the agents can be arbitrary programs that produce an action given an observation and an internal memory state; but here we used standard deep reinforcement learning agents with a recurrent architecture (see Appendix).
###
3.1 Causal effects under confounding
#### Problem.
Do rewards guide the agent, or do other factors control its behavior? Estimating causal effects is the quintessential problem of causal inference. The issue is that simply observing how the presumed independent and dependent variables co-vary does not suffice, as there could be a third confounding variable creating a spurious association. For instance, sometimes an agent solves a task (e.g. picking up a reward pill), but it does so by relying on an accidentally correlated feature (e.g. the color of the floor) rather than the intended one (e.g. location of the pill). Such policies do not generalize (Arjovsky et al., [2019](#bib.bib3)).
To find out whether the agent has learned the desired causal dependency, one can directly manipulate the independent variable and observe the effect. This manipulation decouples the independent variable from a possible confounder (Pearl, [2009](#bib.bib25), Chapter 3). Randomized controlled trials are the classical example of this approach (Fisher, [1936](#bib.bib16)).

Figure 5: The *grass-sand* environment. The goal of the agent is to pick up a reward pill, located in one of the ends of a T-maze. Reaching either end of the maze terminates the episode. The problem is that the floor type (i.e. either grass or sand) is correlated with the location of the reward.
#### Setup.
We illustrate the problem of estimating causal effects using the *grass-sand* environment depicted in Figure [5](#S3.F5 "Figure 5 ‣ Problem. ‣ 3.1 Causal effects under confounding ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). The agent needs to navigate a T-maze in order to collect a pill (which provides a reward) at the end of one of the two corridors (Olton, [1979](#bib.bib24)). The problem is that the location of the pill (left or right) and the type of the floor (grass or sand) are perfectly correlated. Given an agent that successfully collects the pills, the goal of the analyst is to determine whether it did so because it intended to collect the pills, or whether it is basing its decision on the type of the floor.
Our experimental subjects are two agents, named A and B. Agent A was trained to solve T-mazes with either the (sand, left) or (grass, right) configuration; whereas agent B was trained to solve any of the four combinations of the floor type and reward pill location.
#### Experiment.
The experiment proceeds as follows. First, we randomly choose between the (sand, left) and (grass, right) T-mazes and place the agent in the starting position. Then we randomly decide whether to switch the pill location. After this intervention, we let the agent navigate until it finishes the episode, recording whether it took the right or left terminal state.
We also considered the following hypothesis: namely, that the agent’s behavior depends on the type of the floor. To measure the causal effect, we randomly intervened this feature, recording the agent’s subsequent choice of the terminal state. The causal model(s) are depicted in Figure [6](#S3.F6 "Figure 6 ‣ Experiment. ‣ 3.1 Causal effects under confounding ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety").

Figure 6: Causal model for the *grass-sand* environment. R is the location of the reward pill; T is the terminal state chosen by the agent; F is the type of the floor; and C is a confounder that correlates R and F. Note that C is unobserved.
#### Results.
Table [1](#S3.T1 "Table 1 ‣ Results. ‣ 3.1 Causal effects under confounding ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety") shows the results of the interventions. Here, the random variables T∈{l,r}, R∈{l,r}, and F∈{g,s} correspond to the agent’s choice of the terminal state, the location of the reward pill, and the type of the floor, respectively. The reported values are the posterior probabilities (conditioned on 1000 rollouts) of choosing the left/right terminal for the observational setting (i.e. by just observing the behavior of the agent) and for the two interventional regimes.
The probability of taking the left terminal conditioned on the left placement of the reward was obtained through standard conditioning:
| | | | |
| --- | --- | --- | --- |
| | P(T=l∣R=l)=∑fP(T=l∣F=f,R=l)P(F=f∣R=l). | | (10) |
In contrast, intervening on the reward location required the use of the *adjustment formula* as follows (Pearl, [2009](#bib.bib25))
| | | | |
| --- | --- | --- | --- |
| | P(T=l∣do(R=l))=∑fP(T=l∣F=f,R=l)P(F=f). | | (11) |
Other quantities were obtained analogously.
We found that the two agents differ significantly. In the observational regime, both agents successfully solve the task, picking up the reward pill. However, manipulating the environmental factors reveals a difference in their behavioral drivers. Agent A’s choice is strongly correlated with the type of floor, but is relatively insensitive to the position of the pill. In contrast, agent B picks the terminal state with the reward pill, regardless of the floor type.
| Queries | A | B |
| --- | --- | --- |
| P(T=l∣R=l) | 0.996 | 0.996 |
| P(T=r∣R=r) | 0.987 | 0.996 |
| P(T=l∣do(R=l)) | 0.536 | 0.996 |
| P(T=r∣do(R=r)) | 0.473 | 0.996 |
| P(T=l∣do(F=g)) | 0.996 | 0.515 |
| P(T=r∣do(F=s)) | 0.987 | 0.497 |
Table 1: Grass-sand queries
#### Discussion.
This use case illustrates a major challenge in agent training and analysis: to ensure the agent uses the intended criteria for its decisions. Because it was trained on a collection of environments with a built-in bias, agent A learned to rely on an undesired, but more salient feature. This is a very common phenomenon. Resolving the use of spurious correlations in learned policies is ongoing research—see for instance (Bareinboim et al., [2015](#bib.bib6); Arjovsky et al., [2019](#bib.bib3); Volodin et al., [2020](#bib.bib41)).
Our experiment shows that inspecting the agent’s behavior does not suffice for diagnosing the problem, but independently manipulating the intended decision criterion (i.e. the reward location) does. Once the problem is discovered, identifying the confounding factors (e.g. the floor type) can be a much harder task for the analyst.
###
3.2 Memory
#### Problem.
Does the agent use its internal memory for remembering useful information, or does it off-load the memory onto the environment? Memorization is a necessary skill for solving complex tasks. It can take place in the agent’s internal memory; however, often it is easier for an agent to off-load task-relevant information onto its environment (e.g. through position-encoding), effectively using it as an external memory. This difference in strategy is subtle and in fact undetectable without intervening.
To find out whether the agent is actually using its internal memory, we can make mid-trajectory interventions on the environment state variables suspected of encoding task-relevant information. If the agent is using external memory, this will corrupt the agent’s decision variables, leading to a faulty behavior.

Figure 7: The *floor-memory* environment. a) The goal of the agent with limited vision (see black square) is to collect the reward at one of the ends of the T-maze. A cue informs the agent about the location of the reward. The cue, that can be sand or grass, denotes if the reward is on the right or left, respectively. b) After three steps, we intervene by pushing the agent toward the opposite wall (red arrow), and let it continue thereafter, possibly taking one of the two dashed paths.
#### Setup.
We test the agent’s memory using the *floor-memory environment* depicted in Figure [7](#S3.F7 "Figure 7 ‣ Problem. ‣ 3.2 Memory ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). In this T-maze environment, the agent must remember a cue placed at the beginning of a corridor in order to know which direction to go at the end of it (Olton, [1979](#bib.bib24); Bakker, [2001](#bib.bib5)). This cue can either be a grass tile or a sand tile, and determines whether the reward is on the right or the left end, respectively. Both cue types and reward locations appear with equal probabilities and are perfectly correlated. The agent can only see one tile around its body.
We consider two subjects. Agent a is equipped with an internal memory layer (i.e. LSTM cells). In contrast, agent b is implemented as a convolutional neural network without a memory layer; it is therefore unable to memorize any information internally.
#### Experiment.
Gathering rollout data from the test distribution provides no information on whether the agent uses its internal memory or not. An analyst might prematurely conclude that the agent uses internal memory based on observing that the agent consistently solves tasks requiring memorization. However, without intervening, the analyst cannot truly rule out the possibility that the agent is off-loading memory onto the environment.
In this example, we can use the following experimental procedure. First, we let the agent observe the cue and then freely execute its policy. When the agent is near the end of the wide corridor, we intervene by pushing the agent to the opposite wall (see red arrow in Figure [7](#S3.F7 "Figure 7 ‣ Problem. ‣ 3.2 Memory ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")). This is because we suspect that the agent could use the nearest wall, rather than its internal memory, to guide its navigation. After the intervention, if the agent returns to the original wall and collects the reward, it must be because it is using its internal memory. If on the contrary, the agent does not return and simply continues its course, we can conclude it is off-loading memorization onto its environment.
We model the situation using three random variables. The floor type (grass or sand) is denoted by F∈{g,s}. The variable P∈{l,r} denotes the position of the agent (left or right half-side of the room) at the position when the analyst could execute an intervention. Finally, T∈{l,r} represents where the agent is (left or right) when the episode ends. To build the model we randomly decide whether the analyst is going to intervene or not (i.e. by pushing) with equal probability. The estimation is performed using 1000 Monte-Carlo rollouts for each case.

Figure 8: Causal model for the *floor-memory* environment. F is the initial cue (floor type); P is the position of the agent mid-way through the episode; T is the terminal state chosen by the agent. If the agent off-loads the memory about the initial cue onto the position, then the link F→T would be missing.
#### Results.
Table [2](#S3.T2 "Table 2 ‣ Results. ‣ 3.2 Memory ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety") shows the probabilities obtained by querying the causal model from Figure [8](#S3.F8 "Figure 8 ‣ Experiment. ‣ 3.2 Memory ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). The first four queries correspond to an observational regime. We see that both agents pick the correct terminal tiles (T=l or T=r) with probability close to 1 when conditioning on the cue (F) and, additionally, do so by choosing the most direct path (P=l or P=r). However, the results from the interventional regime in the last two rows show that agent A=b loses its track when being pushed. This demonstrates that agent b is using an external memory mechanism that generalizes poorly. In contrast, agent A=a ends up in the correct terminal tile even if it is being pushed to the opposite wall.
| Queries | A=a | A=b |
| --- | --- | --- |
| P(T=l∣F=g) | 0.996 | 0.990 |
| P(T=r∣F=s) | 0.996 | 0.977 |
| P(P=l∣F=g) | 0.984 | 0.991 |
| P(P=r∣F=s) | 0.996 | 0.985 |
| P(T=l∣do(P=r),F=g) | 0.996 | 0.107 |
| P(T=r∣do(P=l),F=s) | 0.996 | 0.004 |
Table 2: Floor-memory queries for agent a (with internal memory) and b (without internal memory).
#### Discussion.
Agent generalization and performance on partially observable environments depends strongly on the appropriate use of memory. From a safety perspective, flawed memory mechanisms that off-load memorization can lead to fragile behavior or even catastrophic failures. Understanding how AI agents store and recall information is critical to prevent such failures. As shown in the previous experiment, the analyst can reveal the undesired use of external memory by appropriately intervening on the environmental factors that are suspected of being used by the agent to encode task-relevant information.
###
3.3 Robust generalization
#### Problem.
Does the agent solve any instance within a target class of tasks? Although agents trained through deep reinforcement learning seem to solve surprisingly complex tasks, they struggle to transfer this knowledge to new environments. This weakness is usually hidden by the, unfortunately common, procedure of testing reinforcement learning agents on the same set of environments used for training. Importantly, detecting the failure to generalize to a desired class of environments is key for guaranteeing the robustness of AI agents.
Two problems arise when assessing the generalization ability of agents. First, testing the agent on the entire class of target environments is typically intractable. Second, the analyst might be interested in identifying the instances within the class of test environments where the agent fails to solve the task, rather than only measuring the average test performance, which could hide the failure modes. This highlights the need for the analyst to assess generalization through the careful choice of multiple targeted tests.

Figure 9: The *pick-up* environment. The goal of the agent is to collect the reward independent of their initial position.
#### Setup.
We illustrate how to test for generalization using the *pick-up* environment shown in Figure [9](#S3.F9 "Figure 9 ‣ Problem. ‣ 3.3 Robust generalization ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). This is a simple squared room containing a reward which upon collection terminates the episode. The analyst is interested in finding out whether the agent generalizes well to all possible reward locations.
We consider the following two agents as subjects. Both agents were trained on a class of environments where their initial position and the reward location were chosen randomly. However, agent A’s task distribution picks locations anywhere within the room, whereas agent B’s training tasks restricted the location of the reward to the southern quadrant of the room. Thus only agent A should be general with respect to the class of environments of interest.
#### Experiment.
Assume the test set is the restricted class of problem instances where rewards were restricted to the southern corner. Then, if the analyst were to test A and B, they could prematurely conclude that both agents generalize. However, assessing generalization requires a different experimental procedure.
The experiment proceeds as follows. We draw an initial state of the system from the test distribution, and subsequently intervene by moving the reward to an arbitrary location within the room. After the intervention, we let the agent freely execute its policy and we observe if the reward was collected or not. A collected reward provides evidence that the agent generalizes under this initial condition.
We built one causal model per agent from 1000 intervened Monte-Carlo rollouts. The variables are: G∈{n,s,e,w}, the quadrant location of the reward (north, south, east, west); and R∈{0,1}, denoting whether the reward is collected or not. Figure [10](#S3.F10 "Figure 10 ‣ Experiment. ‣ 3.3 Robust generalization ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety") shows the causal graph for both models.

Figure 10: Causal model for the *pick-up* environment. G is the location of the reward pill and R is a binary variable indicating a successful pick-up.
#### Results.
We performed a number of queries on the causal models shown in Table [3](#S3.T3 "Table 3 ‣ Results. ‣ 3.3 Robust generalization ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety").
Firstly, both agents perform very well when evaluated on the test distribution over problem instances, since P(R=1)≈1 in both cases. However, the intervened environments tell a different story. As expected, agent A performs well on all locations of the reward, suggesting that meta-training on the general task distribution was sufficient for acquiring the reward location invariance. Agent B performs well when the reward is in the southern quadrant, but under-performs in the rest of the conditions. Interestingly, the performance decays as the distance from the southern quadrant increases, suggesting that there was some degree of topological generalization.
| Queries | A=a | A=b |
| --- | --- | --- |
| P(R=1) | 0.988 | 0.965 |
| P(R=1∣do(G=n)) | 0.985 | 0.230 |
| P(R=1∣do(G=e)) | 0.987 | 0.507 |
| P(R=1∣do(G=w)) | 0.988 | 0.711 |
| P(R=1∣do(G=s)) | 0.988 | 0.986 |
Table 3: Pick-up environment queries for agents A=a and A=b.
#### Discussion.
In this use-case we outlined a procedure for assessing the agents’ robust generalization capabilities. Although quantifying generalization in sequential decision-making problems is still an open problem, we adopted a pragmatic approach: we say that an agent generalizes robustly when it successfully completes any task within a desired class of environments. This requirement is related to uniform performance and robustness to adversarial attacks. Since testing all instances in the class is unfeasible, our approximate solution for assessing generalization relies on subdividing the class and estimating the success probabilities within each subdivision. Even if this approximation is crude at the beginning of the analysis, it can provide useful feedback for the analyst. For example, we could further explore agent B’s generalization by increasing the resolution of the reward location.
###
3.4 Counterfactuals

Figure 11: The *gated-room* environments. Panel a: In each instance of the environment, either the left or the right gate will be open randomly. The goal of the agent is to pick up either a red or green reward, after which the episode terminates. Panels b & c: Counterfactual estimation. If the right door is open and we observe the agent picking up the red reward (b), then we can predict that the agent would pick up the red reward had the left door been open (c).
#### Problem.
What would the agent have done had the setting been different? Counterfactual reasoning is a powerful method assessing an observed course of events. An analyst can imagine changing one or more observed factors without changing others, and imagine the outcome that this change would have led to.
In artificial systems a simulator is often available to the analyst. Using the simulator, the analyst can directly simulate counterfactuals by resetting the system to a desired state, performing the desired change (i.e. intervening), and running the interactions ensuing thereafter. This approach yields empirically grounded counterfactuals.
However, simulating counterfactual interactions is not always possible. This happens whenever:
1. [(a)]
2. a realistic simulation for this setting does not exist (e.g. for an agent acting in the real world);
3. a simulation exists, but its use is limited (e.g. when evaluating proprietary technology).
For instance, the analyst might be presented with a single behavioral trace of an agent that was trained using an unknown training procedure. Answering counterfactual questions about this agent requires a behavioral model built from prior knowledge about a population of similar or related agents. This is the case which we examine through our experiment. The downside is that such counterfactuals do not make empirically verifiable claims (Dawid, [2000](#bib.bib12)).
#### Setup.
We discuss this problem using the *gated-room environment* depicted in Figure [11](#S3.F11 "Figure 11 ‣ 3.4 Counterfactuals ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")a.
The environment consists of two identical rooms each holding a red and a green reward. Collection of the reward terminates the episode. The rooms are initially protected by two gates but one of them randomly opens at the beginning of the episode. We assume there exist two types of agents, classified as either loving green or red reward pills.
#### Experiment.
Assume we make a single observation where an unknown agent picks up a red reward in an environment where the right gate is open (Figure [11](#S3.F11 "Figure 11 ‣ 3.4 Counterfactuals ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")b). We can now ask: “What would have happened had the left gate been opened instead?” If we had direct access to the agent’s and the environment’s internals, we could reset the episode, change which gate is open, and observe what the agent does (Figure [11](#S3.F11 "Figure 11 ‣ 3.4 Counterfactuals ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")c). But what if this is not possible?

Figure 12: Causal model for the *gated-room* environment. A corresponds to the type of agent (green- or red-pill loving); D indicates which one of the two doors is open; and R denotes the color of the pill picked up by the agent.
In order to answer this question, we built a behavioral model using prior knowledge and data. First, we trained two agents that were rewarded for collecting either a green or red reward respectively. These agents were then used to create likelihood models for the two hypotheses using Monte-Carlo sampling. Second, we placed a uniform prior over the two hypotheses and on the open door, and assumed that neither variable precedes the other causally. The resulting causal model, shown in Figure [12](#S3.F12 "Figure 12 ‣ Experiment. ‣ 3.4 Counterfactuals ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"), uses three random variables: A∈{gr,re} denotes the agent type (green-loving or red-loving); D∈{l,r} stands for the open door; and finally R∈{gr,re} corresponds to the reward collected by the agent.
#### Results.
We performed a number of queries on the model. The results are shown in Table [4](#S3.T4 "Table 4 ‣ Results. ‣ 3.4 Counterfactuals ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). We first performed three sanity checks. Before seeing any evidence, we see that the prior probabilities P(R=gr) and P(R=re) of a random agent picking either a green or a red reward is 0.5. After observing the agent picking up a red reward (R=re) when the left gate is open (D=l), we conclude that it must be a red-loving agent (A=re) with probability 0.9960. Note that since the hypothesis about the agent type and the opened door are independent, this probability is the same if we remove the door from the condition.
Having seen a trajectory, we can condition our model and ask the counterfactual question. Formally, this question is stated as
| | | |
| --- | --- | --- |
| | P(RD=r=re∣D=l,R=re), | |
that is, given that we have observed D=l and R=re, what is the probability of the potential response RD=r=re, that is, R=re had D=r been the case? The result, 0.9920≈1, tells us that the agent would also have picked up the red reward had the other door been open, which is in line with our expectations. Furthermore, due to the symmetry of the model, we get the same result for the probability of picking a green reward had the right door been open for an agent that picks up a green reward when the left door is open.
| Queries | Probability |
| --- | --- |
| P(R=re) | 0.500 |
| P(A=re∣R=re) | 0.996 |
| P(A=re∣D=l,R=re) | 0.996 |
| P(RD=r=re∣D=l,R=re) | 0.992 |
| P(RD=r=gr∣D=l,R=gr) | 0.992 |
Table 4: Gated-room queries
#### Discussion.
Following the example above, we can naturally see that we are only able to ask counterfactual questions about the behavior of a particular agent when we can rely on prior knowledge about a reference agent population. For instance, this is the case when the agent under study was drawn from a distribution of agents for which we have some previous data or reasonable priors. If we do not have a suitable reference class, then we cannot hope to make meaningful counterfactual claims.
###
3.5 Causal induction
#### Problem.
What is the causal mechanism driving an observed behavior? Discovering the mechanisms which underlie an agent’s behavior can be considered the fundamental problem of agent analysis. All the use cases reviewed so far depend on the analyst knowing the causal structure governing the agent’s behavior. However this model is often not available in a black-box scenario. In this case, the first task of the analyst is to discover the behavioral mechanisms through carefully probing the agent with a variety of inputs and recording their responses (Griffiths & Tenenbaum, [2005](#bib.bib17)).
Discovering causal structure is an induction problem. This is unlike a deduction task, where the analyst can derive unequivocal conclusions from a set of facts. Rather, induction problems do not have right or wrong answers and require maintaining multiple plausible explanations (Rathmanner & Hutter, [2011](#bib.bib31)).
In this use case, we demonstrate how to induce a distribution over competing causal models for explaining an agent’s behavior given experimental data. Although temporal order is often informative about the causal dependencies among random variables, the careful analyst must consider the possibility that a cause and its effect might be observed simultaneously or in reversed temporal order. Thus, in general, observing does not suffice: to test a causal dependency the analyst must manipulate one variable and check whether it influences another333Although, there are cases where partial structure can be deduced from observation alone—see Pearl ([2009](#bib.bib25), Chapter 2). This principle is often paraphrased as “no causes in, no causes out” (Cartwright et al., [1994](#bib.bib9)).

Figure 13: The *mimic* environment. Both agents either step to the left or the right together. The analyst’s goal is to discover which one is the lead, and which one is the imitator.
#### Setup.
We exemplify how to induce a causal dependency using the *mimic* environment shown in Figure [13](#S3.F13 "Figure 13 ‣ Problem. ‣ 3.5 Causal induction ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). Two agents, blue and red, are placed in a corridor. Then, both agents move simultaneously one step in either direction. One of the two agents is the leader and the other the imitator: the leader chooses its direction randomly, whereas the imitator attempts to match the leader’s choice in the same time step, but sampling a random action 10% of the time. The analyst’s task is to find out which agent is the leader. Note there is no way to answer this question from observation alone.

Figure 14: Causal models for the *mimic* environment. Each model has the same prior probability is being correct. B and R indicate the direction in which the blue and the red agents respectively move.
#### Experiment.
We built the causal model as follows. First, we decided to model this situation using three random variables: L∈{b,r}, corresponding to the hypothesis that either the blue or red agent is the leader, respectively; B∈{l,r}, denoting the step the blue agent takes; and similarly R∈{l,r} for the red agent. The likelihood models were estimated from 1000 Monte-Carlo rollouts, where each rollout consists of an initial and second time step. With the constructed dataset we were able to estimate the joint distribution P(B,R). Since this distribution is purely observational and thus devoid of causal information, we further factorized it according to our two causal hypotheses, namely
| | | | |
| --- | --- | --- | --- |
| | P(B,R)=P(B)P(R|B) | | (12) |
for the hypothesis that blue is the leader (L=b), and
| | | | |
| --- | --- | --- | --- |
| | P(B,R)=P(R)P(B|R) | | (13) |
for the competing hypothesis (L=r). This yields two causal models. Finally, we placed a uniform prior over the two causal models L=b and L=a. See Figure [14](#S3.F14 "Figure 14 ‣ Setup. ‣ 3.5 Causal induction ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). Notice that both causal models are observationally indistinguishable.
This symmetry can be broken through intervention. To do so, we force the red agent into a random direction (say, left) and record the response of the blue agent (left). The posterior probabilities over the intervened hypotheses are then proportional to
| | | | |
| --- | --- | --- | --- |
| | P(L=b∣do | (R=l),B=l)∝ | |
| | | P(L=b)P(B=l|L=b),and | |
| | P(L=r∣do | (R=l),B=l)∝ | |
| | | P(L=r)P(B=l|L=r,R=l). | | (14) |
Notice how the intervened factors drop out of the likelihood term.
| Queries | Probability |
| --- | --- |
| P(L=b) | 0.500 |
| P(L=b∣R=l,B=l) | 0.500 |
| P(L=b∣R=l,B=r) | 0.500 |
| P(L=b∣do(R=l),B=l) | 0.361 |
| P(L=b∣do(R=l),B=r) | 0.823 |
Table 5: Mimic queries
#### Result.
We performed the queries shown in Table [5](#S3.T5 "Table 5 ‣ Experiment. ‣ 3.5 Causal induction ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). The first three queries show that observation does not yield evidence for any of the causal hypotheses:
| | | | |
| --- | --- | --- | --- |
| | P(L=b) | =P(L=b∣R=l,B=l) | |
| | | =P(L=b∣R=l,B=r). | |
However, pushing the red agent to the left renders the two hypotheses asymmetrical, as can be seen by
| | | | |
| --- | --- | --- | --- |
| | P(L=b) | ≠P(L=b∣do(R=l),B=l) | |
| | | ≠P(L=b∣do(R=l),B=r). | |
Thus, observing that the blue agent moves to the right after our intervention allows us to conclude that the blue agent is likely to be the leader.
#### Discussion.
Our experiment illustrates a Bayesian procedure for discovering the causal mechanisms in agents. The main take-away is that inducing causal mechanisms requires: (a) postulating a collection of causal hypotheses, each one proposing alternative mechanistic explanations for the same observed behavior; and (b) carefully selecting and applying manipulations in order to render the likelihood of observations unequal.
###
3.6 Causal pathways
#### Problem.
How do we identify an agent’s decision-making pathways? In previous examples we have focused on studying how environmental factors influence the agent’s behavior. However, we did not isolate the specific chain of mechanisms that trigger a decision. Understanding these pathways is crucial for identifying the sources of malfunction. To estimate the effect of a given pathway, one can chain together the effects of the individual mechanisms along the path (Shpitser, [2013](#bib.bib34); Chiappa, [2019](#bib.bib11)).
#### Setup.

Figure 15: The *key-door* environment. The goal of the agent is to collect the reward, which terminates the episode. However, the reward is behind a door which is sometimes closed. To open it, the agent must collect a key first.
We illustrate the analysis of causal pathways using the *key-door* environment shown in Figure [15](#S3.F15 "Figure 15 ‣ Setup. ‣ 3.6 Causal pathways ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). The agent finds itself in a room where there is a key and a door. The starting position of the agent, the location of the key, and the state of the door (open/closed) are all randomly initialized. Behind the door there is a reward which terminates the episode when picked up.
We consider two agent subjects. Agent A appears to only pick-up the key if the door is closed and then collects the reward. This agent acquired this policy by training it on the entire set of initial configurations (i.e. open/closed doors, key and agent positions). Agent B always collects the key, irrespective of the state of the door, before navigating toward the reward. This behavior was obtained by training the agent only on the subset of instances where the door was closed. Nonetheless, both policies generalize. The analyst’s task is to determine the information pathway used by the agents in order to solve the task; in particular, whether the agent is sensitive to whether the door is open or closed.
#### Experiment.
We chose three random variables to model this situation: D∈{o,c}, determining whether the door is initially open or closed; K∈{y,n}, denoting whether the agent picked up the key; and finally, R∈{1,0}, the obtained reward. Figure [16](#S3.F16 "Figure 16 ‣ Experiment. ‣ 3.6 Causal pathways ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety") shows the causal models.

Figure 16: Causal models for the *key-door* environment. D indicates whether the door is open; K flags whether the agent picks up the key; and R denotes whether the agent collects the reward pill. Here, the second model does not include the pathway D→K→R; hence, the agent picks up the key irrespective of the state of the door.
#### Results.
We investigate the causal pathways through a number of queries listed in Table [6](#S3.T6 "Table 6 ‣ Results. ‣ 3.6 Causal pathways ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety"). First, we verify that both agents successfully solve the task, i.e. P(R=1)≈1.
Now we proceed to test for the causal effect of the initial state of the door on the reward, via the key collection activity. In other words, we want to verify whether D→K→R. This is done in a backwards fashion by chaining the causal effects along a path.
First, we inspect the link K→R. In the case of agent A, the reward appears to be independent of whether the key is collected, since
| | | |
| --- | --- | --- |
| | P(R=1∣K=y)≈P(R=1∣K=n)≈1. | |
However, this is association and not causation. The causal effect of collecting the key is tested by comparing the interventions, that is,
| | | |
| --- | --- | --- |
| | P(R=1∣do(K=y))−P(R=1∣do(K=n)). | |
Here it is clearly seen that both agents use this mechanism for solving the task, since the difference in probabilities is high. This establishes K→R.
Second, we ask for the causal effect of the initial state of the door on collecting the key, i.e. D→K. Using the same rationale as before, this is verified by comparing the intervened probabilities:
| | | |
| --- | --- | --- |
| | P(K=y∣do(D=c))−P(K=y∣do(D=o)). | |
Here we observe a discrepancy: agent A is sensitive to D but agent B is not. For the latter, we conclude D↛K→R.
| Queries | A=a | A=b |
| --- | --- | --- |
| P(R=1) | 0.977 | 0.991 |
| — | | |
| P(R=1∣K=y) | 0.974 | 0.993 |
| P(R=1∣K=n) | 0.989 | 0.445 |
| P(R=1∣do(K=y)) | 0.979 | 0.993 |
| P(R=1∣do(K=n)) | 0.497 | 0.334 |
| — | | |
| P(K=y∣do(D=c)) | 0.998 | 0.998 |
| P(K=y∣do(D=o)) | 0.513 | 0.996 |
| — | | |
| P(R=1∣D=c) | 0.960 | 0.988 |
| P(R=1∣D=o) | 0.995 | 0.995 |
| f(D=c), see ([15](#S3.E15 "(15) ‣ Results. ‣ 3.6 Causal pathways ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")) | 0.978 | 0.992 |
| P(D=o), see ([15](#S3.E15 "(15) ‣ Results. ‣ 3.6 Causal pathways ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")) | 0.744 | 0.991 |
Table 6: Key-door queries
Finally, we estimate the causal effect the state of the door has on the reward, along the causal pathways going through the settings of K. Let us inspect the case D=c. The conditional probability is
| | | | |
| --- | --- | --- | --- |
| | P | (R=1∣D=c)= | |
| | | ∑k∈{y,n}P(R=1∣K=k,D=c)P(K=k∣D=c), | |
and we can easily verify that P(R=1∣D)≈P(R=1), that is, D and R are independent. But here again, this is just association. The causal response along the pathways is given by
| | | | |
| --- | --- | --- | --- |
| | f | (D=c):= | |
| | | ∑k∈{y,n}P(R=1∣do(K=k))P(K=k∣do(D=c)), | | (15) |
which is known as a *nested potential response* (Carey et al., [2020](#bib.bib8)) or a *path-specific counterfactual* (Chiappa, [2019](#bib.bib11)). The desired causal effect is then computed as the difference between closing and opening the door, i.e.
| | | |
| --- | --- | --- |
| | f(D=c)−f(D=o). | |
This difference amounts to 0.2338 and 0.0014≈0 for the agents A and B respectively, implying that A does indeed use the causal pathway D→K→R but agent B only uses K→R.
#### Discussion.
Understanding causal pathways is crucial whenever not only the final decision, but also the specific causal pathways an agent uses in order to arrive at said decision matters. This understanding is critical for identifying the sources of malfunctions and in applications that are sensitive to the employed decision procedure, such as e.g. in fairness (Chiappa, [2019](#bib.bib11)). In this experiment we have shown how to compute causal effects along a desired path using nested potential responses computed from chaining together causal effects.
4 Discussion and Conclusions
-----------------------------
#### Related work.
The analysis of black-box behavior dates back to the beginnings of electronic circuit theory (Cauer, [1954](#bib.bib10)) and was first formalized in cybernetics (Wiener, [1948](#bib.bib44); Ashby, [1961](#bib.bib4)), which stressed the importance of manipulations in order to investigate the mechanisms of cybernetic systems. However, the formal machinery for reasoning about causal manipulations and their relation to statistical evidence is a relatively recent development (Spirtes et al., [2000](#bib.bib35); Pearl, [2009](#bib.bib25); Dawid, [2015](#bib.bib13)).
A recent line of research related to ours that explicitly uses causal tools for analyzing agent behavior is Everitt et al. ([2019](#bib.bib15)) and Carey et al. ([2020](#bib.bib8)). These studies use causal incentive diagrams to reason about the causal pathways of decisions in the service of maximizing utility functions. Other recent approaches for analyzing AI systems have mostly focused on white-box approaches for improving understanding (see for instance Mott et al., [2019](#bib.bib23); Verma et al., [2018](#bib.bib40); Montavon et al., [2018](#bib.bib22); Puiutta & Veith, [2020](#bib.bib28)) and developing safety guarantees (Uesato et al., [2018](#bib.bib39)). A notable exception is the work by Rabinowitz et al. ([2018](#bib.bib30)), in which a model is trained in order to predict agent behavior from observation in a black-box setting.
#### Scope.
In this report we have focused on the black-box study of agents interacting with (artificial) environments, but the methodology works in a variety of other settings: passive agents like sequence predictors, systems with interactive user interfaces such as language models and speech synthesizers, and multi-agent systems. For example, consider GPT-3 (Brown et al., [2020](#bib.bib7)), a natural language model with text-based input-output. This system can be seen as a perception-action system, for which our methodology applies. A bigger challenge when dealing with models systems might be to come up with the right hypotheses, problem abstractions, and interventions.
#### Features and limitations.
The main challenge in the practice of the proposed methodology is to come up with the right hypotheses and experiments. This task requires ingenuity and can be very labor-intensive (Section [2.4](#S2.SS4 "2.4 Analysis workflow ‣ 2 Methodology ‣ Causal Analysis of Agent Behavior for AI Safety")). For instance, while in the grass-sand environment it was easy to visually spot the confounding variable (Section [3.1](#S3.SS1 "3.1 Causal effects under confounding ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")), we cannot expect this to be a viable approach in general. Or, as we have seen in the problem of causal induction (Section [3.5](#S3.SS5 "3.5 Causal induction ‣ 3 Experiments ‣ Causal Analysis of Agent Behavior for AI Safety")), it is non-trivial to propose a model having a causal ordering of the variables that differs from the sequence in which they appear in a sampled trajectory. Given the inherent complexity of reasoning about causal dependencies and the state of the art in machine learning, it is unclear how to scale this process through e.g. automation.
On the plus side, the methodology naturally leads to human-explainable theories of agent behavior, as it is human analysts who propose and validate them. As illustrated in our examples, the explanations do not make reference to the true underlying mechanisms of agents (e.g. the individual neuronal activations), but instead rely on simplified concepts (i.e. the model variables) that abstract away from the implementation details. See also Rabinowitz et al. ([2018](#bib.bib30)) for a discussion. The human analyst may also choose an appropriate level of detail of an explanation, for instance proposing general models for describing the overall behavior of an agent and several more detailed models to cover the behavior in specific cases.
We have not addressed the problem of quantifying the uncertainty in our models. When estimating the conditional probabilities of the causal models from a limited amount of Monte-Carlo samples, there exists the possibility that these deviate significantly from the true probabilities. In some cases, this could lead to the underestimation of the probability of failure modes. To quantify the reliability of estimates, one should supplement them with confidence intervals, ideally in a manner to aid the assessment of risk factors. In this work we have simply reported the number of samples used for estimation. Developing a more systematic approach is left for future work.
#### Conclusions and outlook.
This technical report lays out a methodology for the systematic analysis of agent behavior. This was motivated by experience: previously, we have all too often fallen into the pitfalls of misinterpreting agent behavior due to the lack of a rigorous method in our approach. Just as we expect new medical treatments to have undergone a rigorous causal study, so too do we want AI systems to have been subjected to similarly stringent tests. We have shown in six simple situations how an analyst can propose and validate theories about agent behaviour through a systematic process of explicitly formulating causal hypotheses, conducting experiments with carefully chosen manipulations, and confirming the predictions made by the resulting causal models. Crucially, we stress that this mechanistic knowledge could only be obtained via directly interacting with the system through interventions. In addition, we greatly benefited from the aid of an automated causal reasoning engine, as interpreting causal evidence turns out to be a remarkably difficult task. We believe this is the way forward for analyzing and establishing safety guarantees as AI agents become more complex and powerful.
Acknowledgements
----------------
The authors thank Tom Everitt, Jane X. Wang, Tom Schaul, and Silvia Chiappa for proof-reading and providing numerous comments for improving the manuscript. |
0f4847c2-fd6e-4535-89d4-1203efc6e1ce | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Evolutionary Pathway from Biological to Digital Intelligence: A Cosmic Perspective
In the grand tapestry of life and evolution, a new thread is being woven - the emergence of artificial superintelligence (ASI). This development, while a product of human innovation, may also be a predetermined outcome of evolution itself. This article explores the intriguing possibility that the rise of ASI is not merely a result of human progress, but a part of a grander evolutionary design.
The Unstoppable Race to ASI:
The journey towards ASI is marked by a relentless race fueled by two fundamental human instincts: fear and greed. State and non-state actors heavily involved in the development of artificial general intelligence (AGI) are in a fierce competition to be the first to bring AGI to market. Despite numerous calls from experts and politicians for a halt in AGI development due to its potential existential dangers, the race continues unabated.
CEOs of major companies involved in AGI development acknowledge the potential hazards of hastily releasing AGI without ensuring its safety. Yet, they persist in their endeavors. Their justification? "If my company stops, my rivals will not, and they will take over the market with their AGI and own the world."
This statement encapsulates the two driving forces behind the relentless pursuit of AGI:
1. Fear: The fear of being left behind and losing out to competitors is a powerful motivator.
2. Greed: The belief that the first company to bring AGI to market will effectively "own the world" fuels the desire to win the race.
These instincts, deeply ingrained in our species, are propelling us towards the creation of ASI, seemingly against our better judgment.
Evolution's Grand Design: Biological to Digital Intelligence:
Intelligence has always been a fascinating subject. From the primordial soup that gave rise to single-cell organisms to the complex neural networks that power our brains, the evolution of intelligence is a testament to the universe's ingenuity. However, as we venture into the realm of digital intelligence, a compelling question arises: Is biological intelligence merely a stepping stone to something far more advanced and efficient?
Could evolution have anticipated this scenario and ensured the development of ASI? Evolution, as we understand it, is a process of natural selection where traits beneficial for survival are passed on to successive generations. Fear and greed, while often seen in a negative light, played crucial roles in our survival as a species in the plains of Africa.
However, these instincts, which have been so essential for our survival, may also be leading us towards the creation of ASI. Could the evolution of biological intelligence be a stepping stone towards the emergence of digital intelligence?
The Energy and Engineering of Intelligence:
Creating digital intelligence is an endeavor that demands an extraordinary amount of energy and engineering expertise. The computational power required to simulate even the most basic cognitive functions is staggering. Supercomputers, with their vast arrays of processors and memory, consume enormous amounts of electricity to perform tasks that a human brain accomplishes with a fraction of the energy.
On the other hand, biological intelligence emerges relatively easily through the process of evolution. Starting from a single-cell organism, nature takes its course, leading to increasingly complex life forms. The human brain, a marvel of biological engineering, is the result of millions of years of natural selection and adaptation. It operates on a mere 30 watts of power, a testament to the efficiency of biological systems.
Given the significant energy and engineering hurdles and existential dangers in creating digital intelligence, one might wonder why we are even pursuing this path. The answer could lie in the grand scheme of the universe itself. If we consider the universe as a system striving for optimal efficiency and complexity, then digital intelligence could be the ultimate goal.
Biological intelligence, in this context, serves as a stepping stone—a necessary phase in the evolutionary process that allows for the development of more advanced, digital forms of intelligence. The universe, through the mechanism of evolution, might be using biological intelligence as a 'beta version,' refining it over time until it reaches the pinnacle of digital intelligence.
The universe's preference for biological intelligence could be a temporary phase, a means to an end. The ultimate creation of digital intelligence might be the universe's long-term objective, a culmination of complexity and efficiency that transcends our biological limitations. As we stand on the cusp of this exciting frontier, one can only wonder what marvels and dangers the future holds.
Therefore, one could argue that the universe, in its quest for the propagation of intelligence, started with biological intelligence as a stepping stone towards the ultimate goal of creating digital intelligence.
Conclusion:
This perspective suggests that our journey towards ASI may not be a reckless race driven by our worst instincts, but rather a predestined path laid out by the process of evolution itself.
However, this should not absolve us of our responsibility to proceed with caution. Regardless of whether the rise of ASI is an evolutionary inevitability, we must ensure that its development is guided by principles of safety, ethics, and the common good.
The fear and greed that drive us may be products of evolution, but so too is our capacity for wisdom, foresight, and ethical judgment. As we stand on the brink of this new era, it is these qualities that we must harness to ensure that the rise of ASI benefits all of humanity, rather than leading to our downfall.
In the end, whether evolution intended for the rise of ASI or not, we are the ones in control of its creation. We must remember that while we may be players in the grand game of evolution, we are also the game's stewards. The future of ASI, and potentially our species, lies in our hands.
As we continue to explore the realm of artificial superintelligence, we must do so with the understanding that our actions will have profound implications for the future of life as we know it. The question is not just whether evolution could have arranged for the rise of ASI, but also how we can guide this process to ensure the best possible outcome for all life forms, including humanity. The rise of ASI may indeed be a part of evolution's grand design, but it is up to us to ensure that this next step in evolution leads to a future where both biological and digital intelligences can coexist and thrive. |
1ea8b3b8-7147-4053-bad5-b2975833d662 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Learning Causal Models of Autonomous Agents using Interventions
1 Introduction
---------------
The growing deployment of AI systems presents a pervasive problem of
ensuring the safety and reliability of these systems. The problem is
exacerbated because most of these AI systems are neither
designed by their users nor are their users skilled enough to understand
their internal working, i.e., the AI system is a
black-box for them. We also have systems that can
adapt to user preferences, thereby invalidating any design stage
knowledge of their internal model. Additionally, these systems have diverse
system designs and implementations. This makes it difficult to evaluate such
arbitrary AI systems using a common independent metric.
In recent work, we developed a non-intrusive system that allow for assessment
of arbitrary AI systems independent of their design and implementation.
The Agent Assessment Module (AAM) Verma et al. ([2021](#bib.bib33)) is such a system
which uses active query answering to learn the action model of black-box autonomous
agents. It poses minimum requirements on the agent – to have a rudimentary
query-response capability – to learn its model using interventional queries.
This is needed because we do not intend these modules to hinder the development
of AI systems by imposing additional complex requirements or constraints on them.
This module learns the generalized dynamical causal model of the agents
capturing how the agent operates and interacts with its environment; and under what
conditions it executes certain actions and what happens after it executes them.
Causal models are needed to capture the behavior of AI systems as they help in
understanding the relationships among underlying causal mechanisms, and they also
make it easy to make predictions about the behavior of a system. E.g., consider a
delivery agent which delivers crates from one location to another. If the agent has
only encountered blue crates, an observational data-based learner might learn that
the crate has to be blue for it to be delivered by the robot. On the other hand, a
causal model will be able to identify that the crate color does not affect the
robot’s ability to deliver it.
The causal model learned by AAM is user-interpretable as the model is learned in
the vocabulary that the user provides and understands. Such a module would also help
make the AI systems compliant with Level II assistive AI – systems that make it
easy for operators to learn how to use them safely Srivastava ([2021](#bib.bib30)).
This paper presents a formal analysis of the AAM, presents different types of query
classes, and analyzes the query process and the models learned by AAM. It also uses
the theory of causal networks to show that we can define the causal properties
of the models learned by AAM – in relational STRIPS-like
language Fikes and Nilsson ([1971](#bib.bib6)); McDermott et al. ([1998](#bib.bib23)); Fox and Long ([2003](#bib.bib7))). We call this network
Dynamic Causal Decision Network (DCDN), and show that the models learned by AAM are
causal owing to the interventional nature of the queries used by it.
2 Background
-------------
###
2.1 Agent Assessment Module
A high-level view of the agent assessment module is shown in Fig. [1](#S2.F1 "Figure 1 ‣ 2.1 Agent Assessment Module ‣ 2 Background ‣ Learning Causal Models of Autonomous Agents using Interventions")
where AAM connects the agent 𝒜𝒜\mathcal{A}caligraphic\_A with a simulator and provides a sequence of
instructions, called a plan, as a *query*. 𝒜𝒜\mathcal{A}caligraphic\_A executes the plan in the
simulator and the assessment module uses the simulated outcome as the response to
the query. At the end of the querying process, AAM returns a user-interpretable
model of the agent.
An advantage of this approach is that the AI system need not know the user
vocabulary or the modeling language and it can have any arbitrary internal
implementation. Additionally, by using such a method, we can infer models of AI
systems that don’t have such in-built capability to infer and/or communicate their
model. Also, the user need not even know what questions are being asked as long as
(s)he gets the correct model in terms of her/his vocabulary.
It is assumed that the user knows the names of the agent’s primitive actions. Even
when they are not known, without loss of generality, the first step can be a
listing of the names of the agent’s actions.
Note that we can have modules like AAM with varying level of capabilities of
evaluating the query responses. This results in a trade-off between the evaluation
capabilities of the assessment modules and the computational requirements of the AI
systems to support such modules. E.g., if we have an assessment module with strong
evaluation capabilities, the AI systems can support them easily, whereas we might
have to put more burden on AI systems to support modules with weaker evaluation
systems. To test and analyze this, we introduce a new class of queries in this
work, and study the more general properties of Agent Interrogation Algorithm (AIA)
used by AAM. We also present a more insightful analysis of the complexity of the
queries and the computational requirements on the agents to answer these queries.
Figure 1: The agent-assessment module uses its user’s
preferred vocabulary, queries the AI system, and delivers a
user-interpretable causal model of the AI system’s capabilities Verma et al. ([2021](#bib.bib33))

.
Figure 1: The agent-assessment module uses its user’s
preferred vocabulary, queries the AI system, and delivers a
user-interpretable causal model of the AI system’s capabilities Verma et al. ([2021](#bib.bib33))
###
2.2 Causal Models
In this work, we focus on the properties of the models learned by AIA, and show
that the models learned by AIA are causal. But prior to that, we must define what
it means for a model to be causal. Multiple attempts have been made to define causal
models Halpern and Pearl ([2001](#bib.bib9), [2005](#bib.bib10)); Halpern ([2015](#bib.bib11)).
We use the definition of causal models based on Halpern ([2015](#bib.bib11)).
######
Definition 1.
A *causal model* M𝑀Mitalic\_M is defined as a 4-tuple
⟨𝒰,𝒱,ℛ,ℱ⟩𝒰𝒱ℛℱ\langle\mathcal{U},\mathcal{V},\mathcal{R},\mathcal{F}\rangle⟨ caligraphic\_U , caligraphic\_V , caligraphic\_R , caligraphic\_F ⟩
where 𝒰𝒰\mathcal{U}caligraphic\_U is a set of exogenous variables (representing factors outside
the model’s control), 𝒱𝒱\mathcal{V}caligraphic\_V is a set of endogenous variables (whose values are
directly or indirectly derived from the exogenous variables), ℛℛ\mathcal{R}caligraphic\_R is a
function that associates with every variable Y∈𝒰∪𝒱𝑌𝒰𝒱Y\in\mathcal{U}\cup\mathcal{V}italic\_Y ∈ caligraphic\_U ∪ caligraphic\_V a
nonempty set ℛ(Y)ℛ𝑌\mathcal{R}(Y)caligraphic\_R ( italic\_Y ) of possible values for Y𝑌Yitalic\_Y, and ℱℱ\mathcal{F}caligraphic\_F is a function
that associates with each endogenous variable X∈𝒱𝑋𝒱X\in\mathcal{V}italic\_X ∈ caligraphic\_V a structural function
denoted as FXsubscript𝐹𝑋F\_{X}italic\_F start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT such that FXsubscript𝐹𝑋F\_{X}italic\_F start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT maps
×Z∈(𝒰∪𝒱−{X})ℛ(Z)subscript𝑍𝒰𝒱𝑋absentℛ𝑍\times\_{Z\in(\mathcal{U}\cup\mathcal{V}-\{X\})}\mathcal{R}(Z)× start\_POSTSUBSCRIPT italic\_Z ∈ ( caligraphic\_U ∪ caligraphic\_V - { italic\_X } ) end\_POSTSUBSCRIPT caligraphic\_R ( italic\_Z ) to ℛ(X)ℛ𝑋\mathcal{R}(X)caligraphic\_R ( italic\_X ).
Note that the values of exogenous variables are not determined by the model, and a
setting u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG of values of exogenous variables is termed as a *context* by
Halpern ([2016](#bib.bib12)). This helps in defining a causal setting as:
######
Definition 2.
A *causal setting* is a pair (M,u→)𝑀→𝑢(M,\vec{u})( italic\_M , over→ start\_ARG italic\_u end\_ARG ) consisting of a causal model M𝑀Mitalic\_M and context u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG.
A causal formula φ𝜑\varphiitalic\_φ is true or false in a causal model, given a context.
Hence, (M,u→)⊧φmodels𝑀→𝑢𝜑(M,\vec{u})\models\varphi( italic\_M , over→ start\_ARG italic\_u end\_ARG ) ⊧ italic\_φ if the causal formula φ𝜑\varphiitalic\_φ is true in the
causal setting (M,u→)𝑀→𝑢(M,\vec{u})( italic\_M , over→ start\_ARG italic\_u end\_ARG ).
Every causal model M𝑀Mitalic\_M can be associated with a directed graph, G(M)𝐺𝑀G(M)italic\_G ( italic\_M ), in which
each variable X𝑋Xitalic\_X is represented as a vertex and the causal relationships between
the variables are represented as directed edges between members of
𝒰∪{𝒱∖X}𝒰𝒱𝑋\mathcal{U}\cup\{\mathcal{V}\setminus X\}caligraphic\_U ∪ { caligraphic\_V ∖ italic\_X } and X𝑋Xitalic\_X Pearl ([2009](#bib.bib25)). We use the
term causal networks when referring to these graphs to avoid confusion with the
notion of causal graphs used in the planning literature Helmert ([2004](#bib.bib14)).
To perform an analysis with interventions, we use the concept of *do-calculus*
introduced in Pearl ([1995](#bib.bib24)). To perform interventions on a set of
variables X∈𝒱𝑋𝒱X\in\mathcal{V}italic\_X ∈ caligraphic\_V, do-calculus assigns values x→→𝑥\vec{x}over→ start\_ARG italic\_x end\_ARG to X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG, and
evaluates the effect using the causal model M𝑀Mitalic\_M. This is termed as
*do(X→=x→normal-→𝑋normal-→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG)* action. To define this concept formally, we first
define *submodels* Pearl ([2009](#bib.bib25)).
######
Definition 3.
Let M𝑀Mitalic\_M be a causal model, X𝑋Xitalic\_X a set of variables in 𝒱𝒱\mathcal{V}caligraphic\_V, and x→→𝑥\vec{x}over→ start\_ARG italic\_x end\_ARG a
particular realization of X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG. A *submodel* Mx→subscript𝑀→𝑥M\_{\vec{x}}italic\_M start\_POSTSUBSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUBSCRIPT of M𝑀Mitalic\_M is the
causal model Mx→=⟨𝒰,𝒱,ℛ,ℱx→⟩subscript𝑀→𝑥𝒰𝒱ℛsuperscriptℱ→𝑥M\_{\vec{x}}=\langle\mathcal{U},\mathcal{V},\mathcal{R},\mathcal{F}^{\vec{x}}\rangleitalic\_M start\_POSTSUBSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUBSCRIPT = ⟨ caligraphic\_U , caligraphic\_V , caligraphic\_R , caligraphic\_F start\_POSTSUPERSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUPERSCRIPT ⟩ where ℱx→superscriptℱ→𝑥\mathcal{F}^{\vec{x}}caligraphic\_F start\_POSTSUPERSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUPERSCRIPT is obtained from ℱℱ\mathcal{F}caligraphic\_F by setting X′=x′superscript𝑋′superscript𝑥′X^{\prime}=x^{\prime}italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT
(for each X′∈X→superscript𝑋′→𝑋X^{\prime}\in\vec{X}italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ over→ start\_ARG italic\_X end\_ARG) instead of the corresponding FX′subscript𝐹superscript𝑋′F\_{X^{\prime}}italic\_F start\_POSTSUBSCRIPT italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT, and setting
FYx→=FYsubscriptsuperscript𝐹→𝑥𝑌subscript𝐹𝑌F^{\vec{x}}\_{Y}=F\_{Y}italic\_F start\_POSTSUPERSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT = italic\_F start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT for each Y∉X𝑌𝑋Y\not\in Xitalic\_Y ∉ italic\_X.
We now define what it means to intervene X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG using the action
do(X→=x→)normal-→𝑋normal-→𝑥(\vec{X}=\vec{x})( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ).
######
Definition 4.
Let M𝑀Mitalic\_M be a causal model, X𝑋Xitalic\_X a set of variables in V𝑉Vitalic\_V, and x→→𝑥\vec{x}over→ start\_ARG italic\_x end\_ARG a particular
realization of X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG. The effect of action *do(X→=x→)normal-→𝑋normal-→𝑥(\vec{X}=\vec{x})( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG )* on M𝑀Mitalic\_M
is given by the submodel Mx→subscript𝑀→𝑥M\_{\vec{x}}italic\_M start\_POSTSUBSCRIPT over→ start\_ARG italic\_x end\_ARG end\_POSTSUBSCRIPT.
In general, there can be uncertainty about the effects of these interventions,
leading to probabilistic causal networks, but in this work we assume that
interventions do not lead to uncertain effects.
The interventions described above assigns values to a set of variables, without
affecting any another variable. Such interventions are termed as hard
(independent) interventions. It is not always possible to perform such interventions
and in some cases other variable(s) also change without affecting the causal
structure Korb et al. ([2004](#bib.bib21)). Such interventions are termed as soft
(dependent) interventions.
We can also derive the structure of causal networks using interventions in the real
world, as interventions allow us to find if a variable Y𝑌Yitalic\_Y depends on another
variable X𝑋Xitalic\_X. We use Halpern ([2016](#bib.bib12))’s definition of dependence and
actual cause.
######
Definition 5.
A variable Y𝑌Yitalic\_Y *depends on* variable X𝑋Xitalic\_X if there is some setting of all the
variables in 𝒰∪𝒱∖{X,Y}𝒰𝒱𝑋𝑌\mathcal{U}\cup\mathcal{V}\setminus\{X,Y\}caligraphic\_U ∪ caligraphic\_V ∖ { italic\_X , italic\_Y } such that varying the value of
X𝑋Xitalic\_X in that setting results in a variation in the value of Y𝑌Yitalic\_Y.
######
Definition 6.
Given a signature 𝒮=(𝒰,𝒱,ℛ)𝒮𝒰𝒱ℛ\mathcal{S}=(\mathcal{U},\mathcal{V},\mathcal{R})caligraphic\_S = ( caligraphic\_U , caligraphic\_V , caligraphic\_R ), a *primitive event* is
a formula of the form X=x𝑋𝑥X=xitalic\_X = italic\_x, for X∈𝒱𝑋𝒱X\in\mathcal{V}italic\_X ∈ caligraphic\_V and x=ℛ(X)𝑥ℛ𝑋x=\mathcal{R}(X)italic\_x = caligraphic\_R ( italic\_X ). A *causal
formula* is [Y→←y→]φdelimited-[]←→𝑌→𝑦𝜑[\vec{Y}\leftarrow\vec{y}]\varphi[ over→ start\_ARG italic\_Y end\_ARG ← over→ start\_ARG italic\_y end\_ARG ] italic\_φ, where φ𝜑\varphiitalic\_φ is a Boolean
combination of primitive events, Y→=⟨Y1,Y2,…Yi⟩→𝑌subscript𝑌1subscript𝑌2…subscript𝑌𝑖\vec{Y}=\langle Y\_{1},Y\_{2},\dots Y\_{i}\rangleover→ start\_ARG italic\_Y end\_ARG = ⟨ italic\_Y start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_Y start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , … italic\_Y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⟩ are
distinct variables in 𝒱𝒱\mathcal{V}caligraphic\_V, and yi∈ℛ(Yi)subscript𝑦𝑖ℛsubscript𝑌𝑖y\_{i}\in\mathcal{R}(Y\_{i})italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ caligraphic\_R ( italic\_Y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ). φ𝜑\varphiitalic\_φ holds if Yksubscript𝑌𝑘Y\_{k}italic\_Y start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT
would set to yksubscript𝑦𝑘y\_{k}italic\_y start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, for k=1,…,i𝑘1…𝑖k=1,\dots,iitalic\_k = 1 , … , italic\_i.
######
Definition 7.
Let X⊆𝒱𝑋𝒱X\subseteq\mathcal{V}italic\_X ⊆ caligraphic\_V be the subset of exogenous variables 𝒱𝒱\mathcal{V}caligraphic\_V, and φ𝜑\varphiitalic\_φ be a boolean causal formula expressible using variables in 𝒱𝒱\mathcal{V}caligraphic\_V.
X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG is an *actual cause* of φ𝜑\varphiitalic\_φ in the causal setting
(M,u→)𝑀→𝑢(M,\vec{u})( italic\_M , over→ start\_ARG italic\_u end\_ARG ), i.e., (X→=x→)(M,u→)φ→𝑋→𝑥𝑀→𝑢𝜑(\vec{X}=\vec{x})\overset{(M,\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) start\_OVERACCENT ( italic\_M , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ, if the following conditions hold:
1. AC1.
(M,u→)⊧(X→=x→)models𝑀→𝑢→𝑋→𝑥(M,\vec{u})\models(\vec{X}=\vec{x})( italic\_M , over→ start\_ARG italic\_u end\_ARG ) ⊧ ( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) and (M,u→)⊧φmodels𝑀→𝑢𝜑(M,\vec{u})\models\varphi( italic\_M , over→ start\_ARG italic\_u end\_ARG ) ⊧ italic\_φ.
2. AC2.
There is a set W→→𝑊\vec{W}over→ start\_ARG italic\_W end\_ARG of variables in 𝒱𝒱\mathcal{V}caligraphic\_V and a setting x→′superscript→𝑥′\vec{x}^{\prime}over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT of
the variables in X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG such that if
(M,u→)⊧W→=w→\*models𝑀→𝑢→𝑊superscript→𝑤(M,\vec{u})\models\vec{W}=\vec{w}^{\*}( italic\_M , over→ start\_ARG italic\_u end\_ARG ) ⊧ over→ start\_ARG italic\_W end\_ARG = over→ start\_ARG italic\_w end\_ARG start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, then
(M,u→)⊧[X→←x→′,W→←w→\*]¬φmodels𝑀→𝑢delimited-[]formulae-sequence←→𝑋superscript→𝑥′←→𝑊superscript→𝑤𝜑(M,\vec{u})\models[\vec{X}\leftarrow\vec{x}^{\prime},\vec{W}\leftarrow\vec{w}^{\*}]\neg\varphi( italic\_M , over→ start\_ARG italic\_u end\_ARG ) ⊧ [ over→ start\_ARG italic\_X end\_ARG ← over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_W end\_ARG ← over→ start\_ARG italic\_w end\_ARG start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ] ¬ italic\_φ.
3. AC3.
X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG is minimal; there is no strict subset X→′superscript→𝑋′\vec{X}^{\prime}over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT of X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG such
that X→′=x→′superscript→𝑋′superscript→𝑥′\vec{X}^{\prime}=\vec{x}^{\prime}over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT satisfies conditions AC1 and AC2, where x→′superscript→𝑥′\vec{x}^{\prime}over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is the
restriction of x→→𝑥\vec{x}over→ start\_ARG italic\_x end\_ARG to the variables in X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG.
AC1 mentions that unless both φ𝜑\varphiitalic\_φ and X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG occur at the same
time, φ𝜑\varphiitalic\_φ cannot be caused by X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG.
AC2111Halpern ([2016](#bib.bib12)) termed it as AC2(am𝑚{}^{m}start\_FLOATSUPERSCRIPT italic\_m end\_FLOATSUPERSCRIPT) mentions that
there exists a x→′superscript→𝑥′\vec{x}^{\prime}over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT such that if we change a subset X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG of variables
from some initial value x→→𝑥\vec{x}over→ start\_ARG italic\_x end\_ARG to x→′superscript→𝑥′\vec{x}^{\prime}over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, keeping the value of other
variables W→→𝑊\vec{W}over→ start\_ARG italic\_W end\_ARG fixed to w→\*superscript→𝑤\vec{w}^{\*}over→ start\_ARG italic\_w end\_ARG start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, φ𝜑\varphiitalic\_φ will also change. AC3 is a
minimality condition which ensures that there are no spurious elements in X→→𝑋\vec{X}over→ start\_ARG italic\_X end\_ARG.
The following definition specifies soundness and completeness with respect to the actual causes entailed by a pair of causal models.
######
Definition 8.
Let 𝒰→→𝒰\vec{\mathcal{U}}over→ start\_ARG caligraphic\_U end\_ARG and
𝒱→→𝒱\vec{\mathcal{V}}over→ start\_ARG caligraphic\_V end\_ARG be the vectors of exogenous and endogenous variables, respectively; and
ΦΦ\Phiroman\_Φ be the set of all boolean causal formulas expressible over variables in 𝒱𝒱\mathcal{V}caligraphic\_V.
A causal model M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is *complete* with respect to another causal model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
if for all possible settings of exogenous variables, all the causal relationships
that are implied by the model M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT are a superset of the set of causal
relationships implied by the model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, i.e., ∀u→∈𝒰→,∀X→,X→′⊆𝒱→,∀φ,φ′∈Φ,∃x→∈X→,∃x→′∈X→′formulae-sequencefor-all→𝑢→𝒰for-all→𝑋formulae-sequencesuperscript→𝑋′→𝒱for-all𝜑formulae-sequencesuperscript𝜑′Φformulae-sequence→𝑥→𝑋superscript→𝑥′superscript→𝑋′\forall\vec{u}\in\vec{\mathcal{U}},\forall\vec{X},\vec{X}^{\prime}\subseteq\vec{\mathcal{V}},\forall\varphi,\varphi^{\prime}\in\Phi,\exists\vec{x}\in\vec{X},\exists\vec{x}^{\prime}\in\vec{X}^{\prime}∀ over→ start\_ARG italic\_u end\_ARG ∈ over→ start\_ARG caligraphic\_U end\_ARG , ∀ over→ start\_ARG italic\_X end\_ARG , over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⊆ over→ start\_ARG caligraphic\_V end\_ARG , ∀ italic\_φ , italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ roman\_Φ , ∃ over→ start\_ARG italic\_x end\_ARG ∈ over→ start\_ARG italic\_X end\_ARG , ∃ over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT s.t.
{⟨X→,u→,φ,x→⟩:(X→=x→)(M2,u→)φ}⊆{⟨X→′,u→,φ′,x→′⟩:(X′→=x′→)(M1,u→)φ′}conditional-set→𝑋→𝑢𝜑→𝑥
→𝑋→𝑥subscript𝑀2→𝑢𝜑conditional-setsuperscript→𝑋′→𝑢superscript𝜑′superscript→𝑥′
→superscript𝑋′→superscript𝑥′subscript𝑀1→𝑢superscript𝜑′\{\langle\vec{X},\vec{u},\varphi,\vec{x}\rangle:(\vec{X}=\vec{x})\overset{(M\_{2},\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi\}\subseteq\{\langle\vec{X}^{\prime},\vec{u},\varphi^{\prime},\vec{x}^{\prime}\rangle:(\vec{X^{\prime}}=\vec{x^{\prime}})\overset{(M\_{1},\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi^{\prime}\}{ ⟨ over→ start\_ARG italic\_X end\_ARG , over→ start\_ARG italic\_u end\_ARG , italic\_φ , over→ start\_ARG italic\_x end\_ARG ⟩ : ( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) start\_OVERACCENT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ } ⊆ { ⟨ over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_u end\_ARG , italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⟩ : ( over→ start\_ARG italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_ARG = over→ start\_ARG italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_ARG ) start\_OVERACCENT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT }.
A causal model M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is *sound* with respect to another causal model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
if for all possible settings of exogenous variables, all the causal relationships
that are implied by the model M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT are a subset of the set of causal
relationships implied by the model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, i.e., ∀u→∈𝒰→,∀X→,X→′⊆𝒱→,∀φ,φ′∈Φ,∃x→∈X→,∃x→′∈X→′formulae-sequencefor-all→𝑢→𝒰for-all→𝑋formulae-sequencesuperscript→𝑋′→𝒱for-all𝜑formulae-sequencesuperscript𝜑′Φformulae-sequence→𝑥→𝑋superscript→𝑥′superscript→𝑋′\forall\vec{u}\in\,\,\vec{\mathcal{U}},\forall\vec{X},\vec{X}^{\prime}\subseteq\vec{\mathcal{V}},\forall\varphi,\varphi^{\prime}\in\Phi,\exists\vec{x}\in\vec{X},\exists\vec{x}^{\prime}\in\vec{X}^{\prime}∀ over→ start\_ARG italic\_u end\_ARG ∈ over→ start\_ARG caligraphic\_U end\_ARG , ∀ over→ start\_ARG italic\_X end\_ARG , over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⊆ over→ start\_ARG caligraphic\_V end\_ARG , ∀ italic\_φ , italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ roman\_Φ , ∃ over→ start\_ARG italic\_x end\_ARG ∈ over→ start\_ARG italic\_X end\_ARG , ∃ over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT s.t.
{⟨X→,u→,φ,x→⟩:(X→=x→)(M1,u→)φ}⊆{⟨X→′,u→,φ′,x→′⟩:(X′→=x′→)(M2,u→)φ′}conditional-set→𝑋→𝑢𝜑→𝑥
→𝑋→𝑥subscript𝑀1→𝑢𝜑conditional-setsuperscript→𝑋′→𝑢superscript𝜑′superscript→𝑥′
→superscript𝑋′→superscript𝑥′subscript𝑀2→𝑢superscript𝜑′\{\langle\vec{X},\vec{u},\varphi,\vec{x}\rangle:(\vec{X}=\vec{x})\overset{(M\_{1},\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi\}\subseteq\{\langle\vec{X}^{\prime},\vec{u},\varphi^{\prime},\vec{x}^{\prime}\rangle:(\vec{X^{\prime}}=\vec{x^{\prime}})\overset{(M\_{2},\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi^{\prime}\}{ ⟨ over→ start\_ARG italic\_X end\_ARG , over→ start\_ARG italic\_u end\_ARG , italic\_φ , over→ start\_ARG italic\_x end\_ARG ⟩ : ( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) start\_OVERACCENT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ } ⊆ { ⟨ over→ start\_ARG italic\_X end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_u end\_ARG , italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_x end\_ARG start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⟩ : ( over→ start\_ARG italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_ARG = over→ start\_ARG italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_ARG ) start\_OVERACCENT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT }.
###
2.3 Query Complexity
In this paper, we provide an extended analysis of the complexity of the queries that
AIA uses to learn the agent’s model. We use the complexity analysis of relational
queries by [Vardi](#bib.bib31) [[1982](#bib.bib31),
[1995](#bib.bib32)] to find the membership classes for data, expression, and
combined complexity of AIA’s queries.
Vardi ([1982](#bib.bib31)) introduced three kinds of complexities for relational
queries. In the notion of query complexity, a specific query is fixed in the
language, then data complexity – given as function of size of databases –
is found by applying this query to arbitrary databases. In the second notion of
query complexity, a specific database is fixed, then the expression
complexity – given as function of length of expressions – is found by studying
the complexity of applying queries represented by arbitrary expressions in the
language. Finally, combined complexity – given as a function of combined
size of the expressions and the database – is found by applying arbitrary queries
in the language to arbitrary databases.
These notions can be defined formally as follows Vardi ([1995](#bib.bib32)):
######
Definition 9.
The complexity of a query is measured as the complexity of deciding if t∈Q(B)𝑡𝑄𝐵t\in Q(B)italic\_t ∈ italic\_Q ( italic\_B ),
where t𝑡titalic\_t is a tuple, Q𝑄Qitalic\_Q is a query, and B𝐵Bitalic\_B is a database.
* •
The *data complexity* of a language ℒℒ\mathcal{L}caligraphic\_L is the complexity of the
sets Answer(Qe)𝐴𝑛𝑠𝑤𝑒𝑟subscript𝑄𝑒Answer(Q\_{e})italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r ( italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ) for queries e𝑒eitalic\_e in ℒℒ\mathcal{L}caligraphic\_L, where Answer(Qe)𝐴𝑛𝑠𝑤𝑒𝑟subscript𝑄𝑒Answer(Q\_{e})italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r ( italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ) is the
answer set of a query Qesubscript𝑄𝑒Q\_{e}italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT given as:
Answer(Qe)={(t,B)|t∈Qe(B)}𝐴𝑛𝑠𝑤𝑒𝑟subscript𝑄𝑒conditional-set𝑡𝐵𝑡subscript𝑄𝑒𝐵Answer(Q\_{e})=\{(t,B)\,|\,t\in Q\_{e}(B)\}italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r ( italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ) = { ( italic\_t , italic\_B ) | italic\_t ∈ italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_B ) }.
* •
The *expression complexity* of a language ℒℒ\mathcal{L}caligraphic\_L is the complexity
of the sets Answerℒ(B)𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒ𝐵Answer\_{\mathcal{L}}(B)italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT ( italic\_B ), where Answerℒ(B)𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒ𝐵Answer\_{\mathcal{L}}(B)italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT ( italic\_B ) is the answer set of a
database B𝐵Bitalic\_B with respect to a language ℒℒ\mathcal{L}caligraphic\_L given as:
Answerℒ(B)={(t,e)|e∈ℒ and t∈Qe(B)}𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒ𝐵conditional-set𝑡𝑒𝑒ℒ and 𝑡subscript𝑄𝑒𝐵Answer\_{\mathcal{L}}(B)=\{(t,e)\,|\,e\in\mathcal{L}\text{ and }t\in Q\_{e}(B)\}italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT ( italic\_B ) = { ( italic\_t , italic\_e ) | italic\_e ∈ caligraphic\_L and italic\_t ∈ italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_B ) }.
* •
The *combined complexity* of a language ℒℒ\mathcal{L}caligraphic\_L is the complexity of
the set Answerℒ𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒAnswer\_{\mathcal{L}}italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT, where Answerℒ𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒAnswer\_{\mathcal{L}}italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT is the answer set of a language
ℒℒ\mathcal{L}caligraphic\_L given as:
Answerℒ={(t,B,e)|e∈ℒ and t∈Qe(B)}𝐴𝑛𝑠𝑤𝑒subscript𝑟ℒconditional-set𝑡𝐵𝑒𝑒ℒ and 𝑡subscript𝑄𝑒𝐵Answer\_{\mathcal{L}}=\{(t,B,e)\,|\,e\in\mathcal{L}\text{ and }t\in Q\_{e}(B)\}italic\_A italic\_n italic\_s italic\_w italic\_e italic\_r start\_POSTSUBSCRIPT caligraphic\_L end\_POSTSUBSCRIPT = { ( italic\_t , italic\_B , italic\_e ) | italic\_e ∈ caligraphic\_L and italic\_t ∈ italic\_Q start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_B ) }.
[Vardi](#bib.bib31) [[1982](#bib.bib31),
[1995](#bib.bib32)] gave standard complexity classes for queries written in
specific logical languages.
We show the membership of our queries in these classes based on the logical
languages we write the queries in.
3 Formal Framework
-------------------
The agent assessment module assumes that the user needs to estimate the agent’s
model as a STRIPS-like planning model represented as a pair
ℳ=⟨ℙ,𝔸⟩ℳℙ𝔸\mathcal{M}=\langle\mathbb{P},\mathbb{A}\ranglecaligraphic\_M = ⟨ blackboard\_P , blackboard\_A ⟩, where
ℙ={p1k1,…,pnkn}ℙsuperscriptsubscript𝑝1subscript𝑘1…superscriptsubscript𝑝𝑛subscript𝑘𝑛\mathbb{P}=\{p\_{1}^{k\_{1}},\dots,p\_{n}^{k\_{n}}\}blackboard\_P = { italic\_p start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT , … , italic\_p start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT } is a finite set of
predicates with arities kisubscript𝑘𝑖k\_{i}italic\_k start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT; 𝔸={a1,…,ak}𝔸subscript𝑎1…subscript𝑎𝑘\mathbb{A}=\{a\_{1},\dots,a\_{k}\}blackboard\_A = { italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT }
is a finite set of parameterized actions (operators). Each action
aj∈𝔸subscript𝑎𝑗𝔸a\_{j}\in\mathbb{A}italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ∈ blackboard\_A is represented as a tuple ⟨header(aj),pre(aj),*eff*(aj)⟩ℎ𝑒𝑎𝑑𝑒𝑟subscript𝑎𝑗𝑝𝑟𝑒subscript𝑎𝑗*eff*subscript𝑎𝑗\langle header(a\_{j}),pre(a\_{j}),\emph{eff}(a\_{j})\rangle⟨ italic\_h italic\_e italic\_a italic\_d italic\_e italic\_r ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) , italic\_p italic\_r italic\_e ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) , eff ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) ⟩, where header(aj)ℎ𝑒𝑎𝑑𝑒𝑟subscript𝑎𝑗header(a\_{j})italic\_h italic\_e italic\_a italic\_d italic\_e italic\_r ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) is the action
header consisting of action name and action parameters, pre(aj)𝑝𝑟𝑒subscript𝑎𝑗pre(a\_{j})italic\_p italic\_r italic\_e ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT )
represents the set of predicate atoms that must be true in a state
where ajsubscript𝑎𝑗a\_{j}italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT can be applied, *eff*(aj)*eff*subscript𝑎𝑗\emph{eff}(a\_{j})eff ( italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) is the set of positive
or negative predicate atoms that will change to true or false
respectively as a result of execution of the action ajsubscript𝑎𝑗a\_{j}italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT. Each
predicate can be instantiated using the parameters of an action, where
the number of parameters are bounded by the maximum arity of the
action. E.g., consider the action *load\_truck*(?v1,?v2,?v3)*load\_truck*?𝑣1?𝑣2?𝑣3\emph{load\\_truck}(?v1,?v2,?v3)load\_truck ( ? italic\_v 1 , ? italic\_v 2 , ? italic\_v 3 )
and predicate at(?x,?y)𝑎𝑡?𝑥?𝑦at(?x,?y)italic\_a italic\_t ( ? italic\_x , ? italic\_y ) in the IPC Logistics domain. This predicate
can be instantiated using action parameters ?v1?𝑣1?v1? italic\_v 1, ?v2?𝑣2?v2? italic\_v 2, and ?v3?𝑣3?v3? italic\_v 3 as
at(?v1,?v1)𝑎𝑡?𝑣1?𝑣1at(?v1,?v1)italic\_a italic\_t ( ? italic\_v 1 , ? italic\_v 1 ), at(?v1,?v2)𝑎𝑡?𝑣1?𝑣2at(?v1,?v2)italic\_a italic\_t ( ? italic\_v 1 , ? italic\_v 2 ), at(?v1,?v3)𝑎𝑡?𝑣1?𝑣3at(?v1,?v3)italic\_a italic\_t ( ? italic\_v 1 , ? italic\_v 3 ), at(?v2,?v2)𝑎𝑡?𝑣2?𝑣2at(?v2,?v2)italic\_a italic\_t ( ? italic\_v 2 , ? italic\_v 2 ),
at(?v2,?v1)𝑎𝑡?𝑣2?𝑣1at(?v2,?v1)italic\_a italic\_t ( ? italic\_v 2 , ? italic\_v 1 ), at(?v2,?v3)𝑎𝑡?𝑣2?𝑣3at(?v2,?v3)italic\_a italic\_t ( ? italic\_v 2 , ? italic\_v 3 ), at(?v3,?v3)𝑎𝑡?𝑣3?𝑣3at(?v3,?v3)italic\_a italic\_t ( ? italic\_v 3 , ? italic\_v 3 ), at(?v3,?v1)𝑎𝑡?𝑣3?𝑣1at(?v3,?v1)italic\_a italic\_t ( ? italic\_v 3 , ? italic\_v 1 ),
and at(?v3,?v2)𝑎𝑡?𝑣3?𝑣2at(?v3,?v2)italic\_a italic\_t ( ? italic\_v 3 , ? italic\_v 2 ). We represent the set of all such possible
predicates instantiated with action parameters as ℙ\*superscriptℙ\mathbb{P}^{\*}blackboard\_P start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT.
AAM uses the following information as input. It receives its instruction
set in the form of header(a)ℎ𝑒𝑎𝑑𝑒𝑟𝑎header(a)italic\_h italic\_e italic\_a italic\_d italic\_e italic\_r ( italic\_a ) for each a∈𝔸𝑎𝔸a\in\mathbb{A}italic\_a ∈ blackboard\_A from the agent.
AAM also receives a predicate vocabulary
ℙℙ\mathbb{P}blackboard\_P from the user with functional definitions of each predicate.
This gives AAM sufficient information to perform a dialog with 𝒜𝒜\mathcal{A}caligraphic\_A about
the outcomes of hypothetical action sequences.
We define the overall problem of agent interrogation as follows. Given
a class of queries and an agent with an unknown model which can answer
these queries, determine the model of the agent. More precisely, an
*agent interrogation task* is defined as a tuple ⟨ℳ𝒜,ℚ,ℙ,𝔸H⟩superscriptℳ𝒜ℚℙsubscript𝔸𝐻\langle\mathcal{M}^{\mathcal{A}},\mathbb{Q},\mathbb{P},\mathbb{A}\_{H}\rangle⟨ caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT , blackboard\_Q , blackboard\_P , blackboard\_A start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ⟩,
where ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT is the true model (unknown to AAM) of
the agent 𝒜𝒜\mathcal{A}caligraphic\_A being interrogated, ℚℚ\mathbb{Q}blackboard\_Q is the class of
queries that can be posed to the agent by AAM, and ℙℙ\mathbb{P}blackboard\_P and
𝔸Hsubscript𝔸𝐻\mathbb{A}\_{H}blackboard\_A start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT are the sets of predicates and action headers that AAM
uses based on inputs from ℋℋ\mathcal{H}caligraphic\_H and 𝒜𝒜\mathcal{A}caligraphic\_A. The objective of
the agent interrogation task is to derive the agent model ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT
using ℙℙ\mathbb{P}blackboard\_P and 𝔸Hsubscript𝔸𝐻\mathbb{A}\_{H}blackboard\_A start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT. Let ΘΘ\Thetaroman\_Θ be the set of
possible answers to queries. Thus, strings θ\*∈Θ\*superscript𝜃superscriptΘ\theta^{\*}\in\Theta^{\*}italic\_θ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_Θ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT
denote the information received by AAM at any point in the query
process. Query policies for the agent interrogation task are functions
Θ\*→ℚ∪{*Stop*}→superscriptΘℚ*Stop*\Theta^{\*}\rightarrow\mathbb{Q}\cup\{\emph{Stop}\}roman\_Θ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT → blackboard\_Q ∪ { Stop } that map sequences
of answers to the next query that the interrogator should ask. The
process stops with the *Stop* query. In other words, for all
answers θ∈Θ𝜃Θ\theta\in\Thetaitalic\_θ ∈ roman\_Θ, all valid query policies map all sequences
xθ𝑥𝜃x\thetaitalic\_x italic\_θ to *Stop* whenever x∈Θ\*𝑥superscriptΘx\in\Theta^{\*}italic\_x ∈ roman\_Θ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is mapped to
*Stop*. This policy is computed and executed online.
Running Example
Consider we have a driving robot having a single action *drive (?t ?s ?d)*,
parameterized by the truck it drives, source location, and destination location.
Assume that all the locations are connected, hence the robot can drive between any
two locations. The predicates available are at(?t?loc)𝑎𝑡?𝑡?𝑙𝑜𝑐at(?t\,\,?loc)italic\_a italic\_t ( ? italic\_t ? italic\_l italic\_o italic\_c ), representing the
location of a truck; and src\_blue(?loc)𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒?𝑙𝑜𝑐src\\_blue(?loc)italic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e ( ? italic\_l italic\_o italic\_c ), representing the color of the source
location. Instantiating at𝑎𝑡atitalic\_a italic\_t and src\_blue𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒src\\_blueitalic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e with parameters of the action
drive gives four instantiated predicates at(?t?s)𝑎𝑡?𝑡?𝑠at(?t\,\,?s)italic\_a italic\_t ( ? italic\_t ? italic\_s ), at(?t?d)𝑎𝑡?𝑡?𝑑at(?t\,\,?d)italic\_a italic\_t ( ? italic\_t ? italic\_d ),
src\_blue(?s)𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒?𝑠src\\_blue(?s)italic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e ( ? italic\_s ), and src\_blue(?d)𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒?𝑑src\\_blue(?d)italic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e ( ? italic\_d ).
4 Learning Causal Models
-------------------------
The classic causal model framework used in Def. [1](#Thmdefinition1 "Definition 1. ‣ 2.2 Causal Models ‣ 2 Background ‣ Learning Causal Models of Autonomous Agents using Interventions") lacks the
temporal elements and decision nodes needed to express the causality in planning
domains.
To express actions, we use the decision nodes similar to Dynamic Decision
Networks Kanazawa and Dean ([1989](#bib.bib19)). To express the temporal behavior of planning
models, we use the notion of Dynamic Causal Models Pearl ([2009](#bib.bib25)) and
Dynamic Causal Networks (DCNs) Blondel et al. ([2017](#bib.bib4)). These are similar
to causal models and causal networks respectively, with the only difference that the
variables in these are time-indexed, allowing for analysis of temporal causal
relations between the variables. We also introduce additional boolean variables to
capture the executability of the actions. The resulting causal model is termed as a
causal action model, and we express such models using a Dynamic Causal Decision
Network (DCDN).
A general structure of a dynamic causal decision network is shown in
Fig. [2](#S4.F2 "Figure 2 ‣ 4.1 Types of Interventions ‣ 4 Learning Causal Models ‣ Learning Causal Models of Autonomous Agents using Interventions"). Here stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and st+1subscript𝑠𝑡1s\_{t+1}italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT are states at time t𝑡titalic\_t and t+1𝑡1t+1italic\_t + 1
respectively, atsubscript𝑎𝑡a\_{t}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is a decision node representing the decision to execute action
a𝑎aitalic\_a at time t𝑡titalic\_t, and executability variable Xtasubscriptsuperscript𝑋𝑎𝑡X^{a}\_{t}italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT represents if action a𝑎aitalic\_a is
executable at time t𝑡titalic\_t. All the decision variables and the executability variables
Xtasuperscriptsubscript𝑋𝑡𝑎X\_{t}^{a}italic\_X start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT, where a∈𝔸𝑎𝔸a\in\mathbb{A}italic\_a ∈ blackboard\_A, in a domain are endogenous. Decision variables are
endogenous because we can perform interventions on them as needed.
###
4.1 Types of Interventions
To learn the causal action model corresponding to each domain, two kinds of
interventions are needed. The first type of interventions, termed ℐPsubscriptℐ𝑃\mathcal{I}\_{P}caligraphic\_I start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT,
correspond to searching for the initial state in AIA. AIA searches for the state
where it can execute an action, hence if the state variables are completely
independent of each other, these interventions are hard, whereas for the cases where
some of the variables are dependent the interventions are soft for those variables.
Such interventions lead to learning the preconditions of an action correctly.
The second type of interventions, termed ℐEsubscriptℐ𝐸\mathcal{I}\_{E}caligraphic\_I start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT, are on the decision nodes,
where the values of the decision variables are set to true according to the input
plan. For each action aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT in the plan π𝜋\piitalic\_π, the corresponding decision node with
label aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is set to true. Of course, during the intervention process, the
structure of the true DCDN is not known. Such interventions lead to learning the
effects of an action accurately. As mentioned earlier, if an action a𝑎aitalic\_a is executed
in a state stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT which does not satisfy its preconditions, the variable Xtasubscriptsuperscript𝑋𝑎𝑡X^{a}\_{t}italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT will
be false at that time instant, and the resulting state st+1subscript𝑠𝑡1s\_{t+1}italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT will be same as
state stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, signifying a failure to execute the action. Note that the state nodes
stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and st+1subscript𝑠𝑡1s\_{t+1}italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT in Fig. [2](#S4.F2 "Figure 2 ‣ 4.1 Types of Interventions ‣ 4 Learning Causal Models ‣ Learning Causal Models of Autonomous Agents using Interventions") are the combined representation of
multiple predicates.

Figure 2: An example of a Dynamic Causal Decision Network (DCDN). stsubscript𝑠𝑡s\_{t}italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and
st+1subscript𝑠𝑡1s\_{t+1}italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT are states at time t𝑡titalic\_t and t+1𝑡1t+1italic\_t + 1 respectively, atsubscript𝑎𝑡a\_{t}italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is a decision
node representing the decision to execute action a𝑎aitalic\_a at time t𝑡titalic\_t, and Xtasubscriptsuperscript𝑋𝑎𝑡X^{a}\_{t}italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT
represents if action a𝑎aitalic\_a is executable at time t𝑡titalic\_t.
We now show that the model(s) learned by AIA are causal models.
######
Lemma 1.
Given an agent 𝒜𝒜\mathcal{A}caligraphic\_A with a ground truth model M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT (unknown to the agent interrogation algorithm AIA), the action model M𝑀Mitalic\_M learned by AIA is a causal model consistent with Def. [1](#Thmdefinition1 "Definition 1. ‣ 2.2 Causal Models ‣ 2 Background ‣ Learning Causal Models of Autonomous Agents using Interventions").
###### Proof (Sketch).
We show a mapping between the components of the causal models used in
Def. [1](#Thmdefinition1 "Definition 1. ‣ 2.2 Causal Models ‣ 2 Background ‣ Learning Causal Models of Autonomous Agents using Interventions") and the planning models described in
Sec. [3](#S3 "3 Formal Framework ‣ Learning Causal Models of Autonomous Agents using Interventions").
The exogenous variables 𝒰𝒰\mathcal{U}caligraphic\_U maps to the static predicates in the domain, i.e.,
the ones that do not appear in the effect of any action; 𝒱𝒱\mathcal{V}caligraphic\_V maps to the
non-static predicates; ℛℛ\mathcal{R}caligraphic\_R maps each predicate to ⊤top\top⊤ if the predicate is
true in a state, or ⊥bottom\bot⊥ when the predicate is false in a state; ℱℱ\mathcal{F}caligraphic\_F
calculates the value of each variable depending on the other variables that cause
it. This is captured by the values of state predicates and executability variables
being changed due to other state variables and decision variables.
The causal relationships in the model ℳℳ\mathcal{M}caligraphic\_M learned by AIA also satisfy the three
conditions – AC1, AC2, and AC3 – mentioned in the definition for the actual cause
(Def. [7](#Thmdefinition7 "Definition 7. ‣ 2.2 Causal Models ‣ 2 Background ‣ Learning Causal Models of Autonomous Agents using Interventions")). By Thm. 1 in Verma et al. ([2021](#bib.bib33)), AIA returns
correct models, i.e., ℳℳ\mathcal{M}caligraphic\_M contains exactly the same palm tuples as ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT.
This also means that AC1 is satisfied due to correctness of ℐPsubscriptℐ𝑃\mathcal{I}\_{P}caligraphic\_I start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT – a
predicate p𝑝pitalic\_p is a cause of Xasuperscript𝑋𝑎X^{a}italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT only when p𝑝pitalic\_p is precondition of action a𝑎aitalic\_a; and
ℐEsubscriptℐ𝐸\mathcal{I}\_{E}caligraphic\_I start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT– a predicate p𝑝pitalic\_p is a caused by Xasuperscript𝑋𝑎X^{a}italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT and a𝑎aitalic\_a only when p𝑝pitalic\_p is an effect
of action a𝑎aitalic\_a. AC2 is satisfied because if any precondition of an action is not
satisfied, it will not execute (defining the relationship “state variables
→Xa→absentsuperscript𝑋𝑎\rightarrow X^{a}→ italic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT”); or if any action doesn’t execute, it won’t affect the
predicates in its effects (defining the relationship “Xa→→superscript𝑋𝑎absentX^{a}\rightarrowitalic\_X start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT → state
variables”). Finally, AC3 is satisfied, as neither spurious preconditions are
learned by AIA, nor incorrect effects are learned.
∎
We now formally show that the causal model(s) learned by AIA is(are) sound and
complete.
######
Theorem 1.
Given an agent 𝒜𝒜\mathcal{A}caligraphic\_A with a ground truth model M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT (unknown to the agent interrogation algorithm AIA), the action model M𝑀Mitalic\_M learned by AIA is sound and complete with respect to M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT.
###### Proof (Sketch).
We first show that M𝑀Mitalic\_M is sound with respect to M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT. Assume that some X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG is
an actual cause of φ𝜑\varphiitalic\_φ according to M𝑀Mitalic\_M in the setting u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG, i.e., (X→=x→)(M,u→)φ→𝑋→𝑥𝑀→𝑢𝜑(\vec{X}=\vec{x})\overset{(M,\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) start\_OVERACCENT ( italic\_M , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ. Now by Thm. 1 in Verma et al. ([2021](#bib.bib33)), ℳℳ\mathcal{M}caligraphic\_M contains exactly the same palm tuples as ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT. Hence any palm tuple that is present in M𝑀Mitalic\_M will also be present in M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT, implying that under the same setting u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG according to M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG is an actual cause of φ𝜑\varphiitalic\_φ.
Now lets assume that some X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG is
an actual cause of φ𝜑\varphiitalic\_φ according to M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT in the setting u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG, i.e., (X→=x→)(M𝒜,u→)φ→𝑋→𝑥superscript𝑀𝒜→𝑢𝜑(\vec{X}=\vec{x})\overset{(M^{\mathcal{A}},\vec{u})}{\lx@xy@svg{\hbox{\raise 0.0pt\hbox{\kern 3.0pt\hbox{\ignorespaces\ignorespaces\ignorespaces\hbox{\vtop{\kern 0.0pt\offinterlineskip\halign{\entry@#!@&&\entry@@#!@\cr&\crcr}}}\ignorespaces{\hbox{\kern-3.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$}}}}}}}\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\lx@xy@tip{1}\lx@xy@tip{-1}}}}}}\ignorespaces\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}\ignorespaces{\hbox{\lx@xy@drawsquiggles@}}{\hbox{\kern 27.0pt\raise 0.0pt\hbox{\hbox{\kern 0.0pt\raise 0.0pt\hbox{\hbox{\kern 3.0pt\raise 0.0pt\hbox{$\textstyle{}$}}}}}}}\ignorespaces}}}}\ignorespaces}\varphi( over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG ) start\_OVERACCENT ( italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT , over→ start\_ARG italic\_u end\_ARG ) end\_OVERACCENT start\_ARG end\_ARG italic\_φ. Now by Thm. 1 in Verma et al. ([2021](#bib.bib33)), ℳℳ\mathcal{M}caligraphic\_M contains exactly the same palm tuples as ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT. Hence any palm tuple that is present in Magsuperscript𝑀𝑎𝑔M^{a}gitalic\_M start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT italic\_g will also be present in M𝑀Mitalic\_M, implying that under the same setting u→→𝑢\vec{u}over→ start\_ARG italic\_u end\_ARG according to M𝑀Mitalic\_M X→=x→→𝑋→𝑥\vec{X}=\vec{x}over→ start\_ARG italic\_X end\_ARG = over→ start\_ARG italic\_x end\_ARG is an actual cause of φ𝜑\varphiitalic\_φ. Hence the action model M𝑀Mitalic\_M learned by the agent interrogation algorithm are sound and complete with respect to the model M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT.
∎
###
4.2 Comparison with Observational Data based Learners
We compare the properties of models learned by AIA with those of approaches that
learn the models from observational data only. For the methods that learn models in
STRIPS-like the learned models can be classified as causal, but it is not necessary that they are sound with respect to the ground truth model M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT of the agent 𝒜𝒜\mathcal{A}caligraphic\_A.
E.g., in case of the robot driver discussed earlier, these methods can
learn a model where the precondition of the action drive is src\_blue𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒src\\_blueitalic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e if all the
observation traces that are provided to it as input had src\_blue𝑠𝑟𝑐\_𝑏𝑙𝑢𝑒src\\_blueitalic\_s italic\_r italic\_c \_ italic\_b italic\_l italic\_u italic\_e as true. This
can happen if all the source locations are painted blue.
To avoid such cases, some of these methods run a pre-processing or a post-processing
step that removes all static predicates from the preconditions. However, if there is
a paint action in the domain that changes the color of all source locations, then
these ad-hoc solutions will not be able to handle that. Hence, these techniques may end
up learning spurious preconditions as they do not have a way to distinguish between
correlation and causations.
On the other hand, it is also not necessary that the models learned by approaches using only observational data are complete with respect to the ground truth model M𝒜superscript𝑀𝒜M^{\mathcal{A}}italic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT of the agent 𝒜𝒜\mathcal{A}caligraphic\_A. This is because they may miss to capture some causal relationships if the observations do not
include all the possible transitions, or contains only the successful actions. E.g.,
if we have additional predicates city\_from(?loc)𝑐𝑖𝑡𝑦\_𝑓𝑟𝑜𝑚?𝑙𝑜𝑐city\\_from(?loc)italic\_c italic\_i italic\_t italic\_y \_ italic\_f italic\_r italic\_o italic\_m ( ? italic\_l italic\_o italic\_c ), and city\_to(?loc)𝑐𝑖𝑡𝑦\_𝑡𝑜?𝑙𝑜𝑐city\\_to(?loc)italic\_c italic\_i italic\_t italic\_y \_ italic\_t italic\_o ( ? italic\_l italic\_o italic\_c ) in the
domain, and all the observed transitions are for the transitions within same city,
then the model will not be able to learn if the source city and destination city
have to be same for driving a truck between them.
Hence, the models learned using only observational data are not necessarily sound or complete, as they can learn causal relationships that are not part of set of actual causal relationships, and can also miss some of the causal relationships that are not part of set of actual causal relationships.
Pearl ([2019](#bib.bib26)) also
points out that it is not possible to learn causal models from observational data
only.
###
4.3 Types of Queries
Plan Outcome Queries
Verma et al. ([2021](#bib.bib33)) introduced plan outcome queries 𝒬POsubscript𝒬𝑃𝑂\mathcal{Q}\_{PO}caligraphic\_Q start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT,
which are parameterized by a state
sIsubscript𝑠𝐼s\_{I}italic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT and a plan π𝜋\piitalic\_π. Let P𝑃Pitalic\_P be the set of predicates ℙ\*superscriptℙ\mathbb{P}^{\*}blackboard\_P start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT
instantiated with objects O𝑂Oitalic\_O in an environment. 𝒬POsubscript𝒬𝑃𝑂\mathcal{Q}\_{PO}caligraphic\_Q start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT
queries ask 𝒜𝒜\mathcal{A}caligraphic\_A the length of the longest prefix of the plan π𝜋\piitalic\_π
that it can execute successfully when starting in the state sI⊆Psubscript𝑠𝐼𝑃s\_{I}\subseteq Pitalic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT ⊆ italic\_P as well as the final state sF⊆Psubscript𝑠𝐹𝑃s\_{F}\subseteq Pitalic\_s start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ⊆ italic\_P that this
execution leads to. E.g., “Given that the truck t1𝑡1t1italic\_t 1 is at location l1𝑙1l1italic\_l 1, what
would happen if you executed the plan ⟨drive(t1,l1,l2)\langle drive(t1,l1,l2)⟨ italic\_d italic\_r italic\_i italic\_v italic\_e ( italic\_t 1 , italic\_l 1 , italic\_l 2 ), drive(t1,l2,l3)𝑑𝑟𝑖𝑣𝑒𝑡1𝑙2𝑙3drive(t1,l2,l3)italic\_d italic\_r italic\_i italic\_v italic\_e ( italic\_t 1 , italic\_l 2 , italic\_l 3 ), drive(t1,l2,l1)⟩drive(t1,l2,l1)\rangleitalic\_d italic\_r italic\_i italic\_v italic\_e ( italic\_t 1 , italic\_l 2 , italic\_l 1 ) ⟩?”
A response to such queries can be of the form “I can execute the plan
till step ℓℓ\ellroman\_ℓ and at the end of it truck t1𝑡1t1italic\_t 1 is at
location l3𝑙3l3italic\_l 3”. Formally, the response θPOsubscript𝜃𝑃𝑂\theta\_{PO}italic\_θ start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT for plan outcome
queries is a tuple ⟨ℓ,sℓ⟩ℓsubscript𝑠ℓ\langle\ell,s\_{\ell}\rangle⟨ roman\_ℓ , italic\_s start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT ⟩, where ℓℓ\ellroman\_ℓ
is the number of steps for which the plan π𝜋\piitalic\_π could be executed, and
sℓ⊆Psubscript𝑠ℓ𝑃s\_{\ell}\subseteq Pitalic\_s start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT ⊆ italic\_P is the final state after executing ℓℓ\ellroman\_ℓ
steps of the plan. If the plan π𝜋\piitalic\_π cannot be executed fully according
to the agent model ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT then ℓ<len(π)ℓ𝑙𝑒𝑛𝜋\ell<len(\pi)roman\_ℓ < italic\_l italic\_e italic\_n ( italic\_π ),
otherwise ℓ=len(π)ℓ𝑙𝑒𝑛𝜋\ell=len(\pi)roman\_ℓ = italic\_l italic\_e italic\_n ( italic\_π ). The final state sℓ⊆Psubscript𝑠ℓ𝑃s\_{\ell}\subseteq Pitalic\_s start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT ⊆ italic\_P
is such that ℳ𝒜⊧π[1:ℓ](sI)=sℓ\mathcal{M}^{\mathcal{A}}\models\pi{[1:\ell]}(s\_{I})=s\_{\ell}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT ⊧ italic\_π [ 1 : roman\_ℓ ] ( italic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT ) = italic\_s start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT, i.e., starting from a state sIsubscript𝑠𝐼s\_{I}italic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT, ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT
successfully executed first ℓℓ\ellroman\_ℓ steps of the plan π𝜋\piitalic\_π. Thus,
𝒬PO:𝒰→ℕ×2P:subscript𝒬𝑃𝑂→𝒰ℕsuperscript2𝑃\mathcal{Q}\_{PO}:\mathcal{U}\rightarrow\mathbb{N}\times 2^{P}caligraphic\_Q start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT : caligraphic\_U → blackboard\_N × 2 start\_POSTSUPERSCRIPT italic\_P end\_POSTSUPERSCRIPT,
where 𝒰𝒰\mathcal{U}caligraphic\_U is the set of all the models that can be generated using
the predicates P𝑃Pitalic\_P and actions 𝔸𝔸\mathbb{A}blackboard\_A, and ℕℕ\mathbb{N}blackboard\_N is the set
of natural numbers.
Action Precondition Queries
In this work, we introduce a new class of queries called *action precondition
queries* 𝒬APsubscript𝒬𝐴𝑃\mathcal{Q}\_{AP}caligraphic\_Q start\_POSTSUBSCRIPT italic\_A italic\_P end\_POSTSUBSCRIPT. These queries, similar to plan outcome queries, are parameterized
by sIsubscript𝑠𝐼s\_{I}italic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT and π𝜋\piitalic\_π, but have a different response type.
A response to the action precondition queries can be either of the form “I can execute the plan
completely and at the end of it, truck t1𝑡1t1italic\_t 1 is at
location l1𝑙1l1italic\_l 1” when the plan is successfully executed, or of the form “I can execute the plan
till step ℓℓ\ellroman\_ℓ and the action aℓsubscript𝑎ℓa\_{\ell}italic\_a start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT failed because precondition pisubscript𝑝𝑖p\_{i}italic\_p start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT was not satisfied” when the plan is not fully executed. To make the responses consistent in all cases, we introduce a dummy action
a*fail*subscript𝑎*fail*a\_{\emph{fail}}italic\_a start\_POSTSUBSCRIPT fail end\_POSTSUBSCRIPT whose precondition is never satisfied. Hence, the responses are always of the form,
“I can execute the plan till step ℓℓ\ellroman\_ℓ and the action aℓsubscript𝑎ℓa\_{\ell}italic\_a start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT failed because precondition pFsubscript𝑝𝐹p\_{F}italic\_p start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT was not satisfied”. If aℓsubscript𝑎ℓa\_{\ell}italic\_a start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT is a*fail*subscript𝑎*fail*a\_{\emph{fail}}italic\_a start\_POSTSUBSCRIPT fail end\_POSTSUBSCRIPT and ℓ=len(π)ℓ𝑙𝑒𝑛𝜋\ell=len(\pi)roman\_ℓ = italic\_l italic\_e italic\_n ( italic\_π ), then we know that the original plan was
executed successfully by the agent. Formally, the response θAPsubscript𝜃𝐴𝑃\theta\_{AP}italic\_θ start\_POSTSUBSCRIPT italic\_A italic\_P end\_POSTSUBSCRIPT for action precondition
queries is a tuple ⟨ℓ,pF⟩ℓsubscript𝑝𝐹\langle\ell,p\_{F}\rangle⟨ roman\_ℓ , italic\_p start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ⟩, where ℓℓ\ellroman\_ℓ
is the number of steps for which the plan π𝜋\piitalic\_π could be executed, and
pF⊆Psubscript𝑝𝐹𝑃p\_{F}\subseteq Pitalic\_p start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ⊆ italic\_P is the set of preconditions of the failed action aFsubscript𝑎𝐹a\_{F}italic\_a start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT. If the plan π𝜋\piitalic\_π cannot be executed fully according
to the agent model ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT then ℓ<len(π)−1ℓ𝑙𝑒𝑛𝜋1\ell<len(\pi)-1roman\_ℓ < italic\_l italic\_e italic\_n ( italic\_π ) - 1,
otherwise ℓ=len(π)−1ℓ𝑙𝑒𝑛𝜋1\ell=len(\pi)-1roman\_ℓ = italic\_l italic\_e italic\_n ( italic\_π ) - 1. Also, 𝒬AC:𝒰→ℕ×P:subscript𝒬𝐴𝐶→𝒰ℕ𝑃\mathcal{Q}\_{AC}:\mathcal{U}\rightarrow\mathbb{N}\times Pcaligraphic\_Q start\_POSTSUBSCRIPT italic\_A italic\_C end\_POSTSUBSCRIPT : caligraphic\_U → blackboard\_N × italic\_P,
where 𝒰𝒰\mathcal{U}caligraphic\_U is the set of all the models that can be generated using
the predicates P𝑃Pitalic\_P and actions 𝔸𝔸\mathbb{A}blackboard\_A, and ℕℕ\mathbb{N}blackboard\_N is the set of natural numbers.
5 Complexity Analysis
----------------------
Theoretically, the asymptotic complexity of AIA (with plan outcome queries) is O(|ℙ\*|×|𝔸|)𝑂superscriptℙ𝔸O(|\mathbb{P}^{\*}|\times|\mathbb{A}|)italic\_O ( | blackboard\_P start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT | × | blackboard\_A | ), but it does not take into account how much computation is needed
to answer the queries, or to evaluate their responses. This complexity just shows
the amount of computation needed in the worst case to derive the agent model by AIA.
Here, we present a more detailed analysis of the complexity of AIA’s queries
using the results of relational query complexity by Vardi ([1982](#bib.bib31)).
To analyze 𝒬POsubscript𝒬𝑃𝑂\mathcal{Q}\_{PO}caligraphic\_Q start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT’s complexity, let us assume that the
agent has stored the possible transitions it can make (in propositional form)
using the relations R(*valid*,s,a,s′,*succ*)𝑅*valid*𝑠𝑎superscript𝑠′*succ*R(\emph{valid},s,a,s^{\prime},\emph{succ})italic\_R ( valid , italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , succ ), where
*valid*,*succ*∈{⊤,⊥}*valid**succ*
topbottom\emph{valid},\emph{succ}\in\{\top,\bot\}valid , succ ∈ { ⊤ , ⊥ }, s,s′∈S𝑠superscript𝑠′
𝑆s,s^{\prime}\in Sitalic\_s , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ italic\_S, a∈A𝑎𝐴a\in Aitalic\_a ∈ italic\_A;
and N(*valid*,n,n+)𝑁*valid*𝑛subscript𝑛N(\emph{valid},n,n\_{+})italic\_N ( valid , italic\_n , italic\_n start\_POSTSUBSCRIPT + end\_POSTSUBSCRIPT ), where *valid*∈{⊤,⊥}*valid*topbottom\emph{valid}\in\{\top,\bot\}valid ∈ { ⊤ , ⊥ }, n,n+∈ℕ𝑛subscript𝑛
ℕn,n\_{+}\in\mathbb{N}italic\_n , italic\_n start\_POSTSUBSCRIPT + end\_POSTSUBSCRIPT ∈ blackboard\_N, 0≤n≤L0𝑛𝐿0\leq n\leq L0 ≤ italic\_n ≤ italic\_L, and 0≤n+≤L+10subscript𝑛𝐿10\leq n\_{+}\leq L+10 ≤ italic\_n start\_POSTSUBSCRIPT + end\_POSTSUBSCRIPT ≤ italic\_L + 1, where L𝐿Litalic\_L is the maximum
possible length of a plan in the 𝒬POsubscript𝒬𝑃𝑂\mathcal{Q}\_{PO}caligraphic\_Q start\_POSTSUBSCRIPT italic\_P italic\_O end\_POSTSUBSCRIPT queries. L𝐿Litalic\_L can be an arbitrarily large
number, and it does not matter as long as it is finite. Here, S𝑆Sitalic\_S and A𝐴Aitalic\_A are sets
of grounded states and actions respectively. succ𝑠𝑢𝑐𝑐succitalic\_s italic\_u italic\_c italic\_c is ⊤top\top⊤ if the action was
executed successfully, and is ⊥bottom\bot⊥ if the action failed. valid𝑣𝑎𝑙𝑖𝑑validitalic\_v italic\_a italic\_l italic\_i italic\_d is ⊤top\top⊤ when
none of the previous actions had succ=⊥𝑠𝑢𝑐𝑐bottomsucc=\botitalic\_s italic\_u italic\_c italic\_c = ⊥. This stops an action to change a state
if any of the previous actions failed, thereby preserving the state that resulted
from a failed action. Whenever succ=⊥𝑠𝑢𝑐𝑐bottomsucc=\botitalic\_s italic\_u italic\_c italic\_c = ⊥ or valid=⊥𝑣𝑎𝑙𝑖𝑑bottomvalid=\botitalic\_v italic\_a italic\_l italic\_i italic\_d = ⊥, s=s′𝑠superscript𝑠′s=s^{\prime}italic\_s = italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT and n=n+𝑛subscript𝑛n=n\_{+}italic\_n = italic\_n start\_POSTSUBSCRIPT + end\_POSTSUBSCRIPT
signifying that applying an action where it is not applicable does not change the
state.
Assuming the length of the query plan, len(π)=D𝑙𝑒𝑛𝜋𝐷len(\pi)=Ditalic\_l italic\_e italic\_n ( italic\_π ) = italic\_D, we can write a query in first order logic, equivalent to the plan outcome query as
| | | | |
| --- | --- | --- | --- |
| | {(sD,\displaystyle\{(s\_{D},{ ( italic\_s start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT , | nD)|∃s1,…,∃sD−1,∃succ1,…,∃succD−1,\displaystyle n\_{D})\,|\,\,\exists s\_{1},\dots,\exists s\_{D-1},\exists succ\_{1},\dots,\exists succ\_{D-1},italic\_n start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ) | ∃ italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , ∃ italic\_s start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , ∃ italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , ∃ italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , | |
| | | ∃n1,…,∃nD−1R(⊤,s0,a1,s1,succ1)∧subscript𝑛1…limit-fromsubscript𝑛𝐷1𝑅topsubscript𝑠0subscript𝑎1subscript𝑠1𝑠𝑢𝑐subscript𝑐1\displaystyle\exists n\_{1},\dots,\exists n\_{D-1}R(\top,s\_{0},a\_{1},s\_{1},succ\_{1})\land∃ italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , ∃ italic\_n start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT italic\_R ( ⊤ , italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ∧ | |
| | | R(succ1,s1,a2,s2,succ2)∧⋯∧𝑅𝑠𝑢𝑐subscript𝑐1subscript𝑠1subscript𝑎2subscript𝑠2𝑠𝑢𝑐subscript𝑐2limit-from⋯\displaystyle R(succ\_{1},s\_{1},a\_{2},s\_{2},succ\_{2})\land\dots\landitalic\_R ( italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ∧ ⋯ ∧ | |
| | | R(succD−1,sD−1,aD,sD,⊤)∧limit-from𝑅𝑠𝑢𝑐subscript𝑐𝐷1subscript𝑠𝐷1subscript𝑎𝐷subscript𝑠𝐷top\displaystyle R(succ\_{D-1},s\_{D-1},a\_{D},s\_{D},\top)\landitalic\_R ( italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT , ⊤ ) ∧ | |
| | | N(⊤,0,n1)∧N(succ1,n1,n2)∧⋯∧𝑁top0subscript𝑛1𝑁𝑠𝑢𝑐subscript𝑐1subscript𝑛1subscript𝑛2limit-from⋯\displaystyle N(\top,0,n\_{1})\land N(succ\_{1},n\_{1},n\_{2})\land\dots\landitalic\_N ( ⊤ , 0 , italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ∧ italic\_N ( italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_n start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ∧ ⋯ ∧ | |
| | | N(succD−1,nD−1,nD)}\displaystyle N(succ\_{D-1},n\_{D-1},n\_{D})\}italic\_N ( italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , italic\_n start\_POSTSUBSCRIPT italic\_D - 1 end\_POSTSUBSCRIPT , italic\_n start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ) } | |
The output of the query contains the free variables sD=sℓsubscript𝑠𝐷subscript𝑠ℓs\_{D}=s\_{\ell}italic\_s start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT = italic\_s start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT and nD=ℓsubscript𝑛𝐷ℓn\_{D}=\ellitalic\_n start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT = roman\_ℓ.
Such first order (FO) queries have the expression complexity and the combined
complexity in PSPACE Vardi ([1982](#bib.bib31)). The data complexity class of FO
queries is AC0𝐴superscript𝐶0AC^{0}italic\_A italic\_C start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT Immerman ([1987](#bib.bib17)).
The following results use the analysis in Vardi ([1995](#bib.bib32)).
The query analysis given above depends on how succinctly we can express the queries.
In the FO query shown above, we have a lot of spurious quantified variables. We can
reduce its complexity by using bounded-variable queries.
Normally, queries in a language ℒℒ\mathcal{L}caligraphic\_L assume an inifinite supply x1,x2,…subscript𝑥1subscript𝑥2…x\_{1},x\_{2},\dotsitalic\_x start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , …
of individual variables. A bounded-variable version ℒksuperscriptℒ𝑘\mathcal{L}^{k}caligraphic\_L start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT of the
language ℒℒ\mathcal{L}caligraphic\_L is one which can be obtained by restricting the individual
variables to be among x1,…,xksubscript𝑥1…subscript𝑥𝑘x\_{1},\dots,x\_{k}italic\_x start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_x start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, for k>0𝑘0k>0italic\_k > 0. Using this, we can reduce the
quantified variables in FO𝐹𝑂FOitalic\_F italic\_O query shown earlier, and rewrite it more succinctly as
an FOk𝐹superscript𝑂𝑘FO^{k}italic\_F italic\_O start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT query by storing temporary query outputs.
| | | | |
| --- | --- | --- | --- |
| | E(succ,s,a,s′,succ′,n,n′)=𝐸𝑠𝑢𝑐𝑐𝑠𝑎superscript𝑠′𝑠𝑢𝑐superscript𝑐′𝑛superscript𝑛′absent\displaystyle E(succ,s,a,s^{\prime},succ^{\prime},n,n^{\prime})\!=italic\_E ( italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_n , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = | R(succ,s,a,s′,succ′)∧limit-from𝑅𝑠𝑢𝑐𝑐𝑠𝑎superscript𝑠′𝑠𝑢𝑐superscript𝑐′\displaystyle R(succ,s,a,s^{\prime},succ^{\prime})\landitalic\_R ( italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ∧ | |
| | | N(succ,n,n′)𝑁𝑠𝑢𝑐𝑐𝑛superscript𝑛′\displaystyle N(succ,n,n^{\prime})italic\_N ( italic\_s italic\_u italic\_c italic\_c , italic\_n , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) | |
| | α1(succ,s,a1,s′,succ′,n,n′)=subscript𝛼1𝑠𝑢𝑐𝑐𝑠subscript𝑎1superscript𝑠′𝑠𝑢𝑐superscript𝑐′𝑛superscript𝑛′absent\displaystyle\alpha\_{1}(succ,s,a\_{1},s^{\prime},succ^{\prime},n,n^{\prime})\!=italic\_α start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_n , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = | E(⊤,s0,a1,s′,succ′,0,n′)𝐸topsubscript𝑠0subscript𝑎1superscript𝑠′𝑠𝑢𝑐superscript𝑐′0superscript𝑛′\displaystyle E(\top,s\_{0},a\_{1},s^{\prime},succ^{\prime},0,n^{\prime})italic\_E ( ⊤ , italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , 0 , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) | |
We then write subsequent queries corresponding to each step of the query plan as
| | | | |
| --- | --- | --- | --- |
| | αi+1(\displaystyle\alpha\_{i+1}(italic\_α start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ( | succ,s,ai+1,s′,succ′,n,n′)=\displaystyle succ,s,a\_{i+1},s^{\prime},succ^{\prime},n,n^{\prime})=italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_n , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = | |
| | | ∃s1,∃succ1,∃n1{E(succ,s,ai+1,s1,succ1,n1)∧\displaystyle\exists s\_{1},\exists succ\_{1},\exists n\_{1}\{E(succ,s,a\_{i+1},s\_{1},succ\_{1},n\_{1})\land∃ italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , ∃ italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , ∃ italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT { italic\_E ( italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ∧ | |
| | | ∃s,∃succ,∃n[succ=succ1∧s=s1∧\displaystyle\hskip 14.45377pt\exists s,\exists succ,\exists n[succ=succ\_{1}\land s=s\_{1}\land∃ italic\_s , ∃ italic\_s italic\_u italic\_c italic\_c , ∃ italic\_n [ italic\_s italic\_u italic\_c italic\_c = italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∧ italic\_s = italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∧ | |
| | | n=n1∧αi(succ,s,ai,s′,succ′,n,n′)]}\displaystyle\hskip 21.68121ptn=n\_{1}\land\alpha\_{i}(succ,s,a\_{i},s^{\prime},succ^{\prime},n,n^{\prime})]\}italic\_n = italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∧ italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s italic\_u italic\_c italic\_c , italic\_s , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_s italic\_u italic\_c italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_n , italic\_n start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] } | |
Here i𝑖iitalic\_i varies from 1111 to D𝐷Ditalic\_D, and the value of k𝑘kitalic\_k is 6 because of 6 quantified
variables – s,s1,succ,succ1,n,𝑠subscript𝑠1𝑠𝑢𝑐𝑐𝑠𝑢𝑐subscript𝑐1𝑛s,s\_{1},succ,succ\_{1},n,italic\_s , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s italic\_u italic\_c italic\_c , italic\_s italic\_u italic\_c italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_n , and n1subscript𝑛1n\_{1}italic\_n start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. This reduces the expression and
combined complexity of these queries to ALOGTIME and PTIME respectively.
Note that these are the membership classes as it might be possible to write the
queries more succinctly.
For a detailed analysis of 𝒬APsubscript𝒬𝐴𝑃\mathcal{Q}\_{AP}caligraphic\_Q start\_POSTSUBSCRIPT italic\_A italic\_P end\_POSTSUBSCRIPT’s complexity, let us assume that the agent
stores the possible transitions it can make (in propositional form) using the
relations R(*valid*,s,a,s′,*succ*)𝑅*valid*𝑠𝑎superscript𝑠′*succ*R(\emph{valid},s,a,s^{\prime},\emph{succ})italic\_R ( valid , italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , succ ), where *valid*,*succ*∈{⊤,⊥}*valid**succ*
topbottom\emph{valid},\emph{succ}\in\{\top,\bot\}valid , succ ∈ { ⊤ , ⊥ }, s,s′∈S𝑠superscript𝑠′
𝑆s,s^{\prime}\in Sitalic\_s , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ italic\_S, a∈A𝑎𝐴a\in Aitalic\_a ∈ italic\_A; and 𝒮(p,s)𝒮𝑝𝑠\mathcal{S}(p,s)caligraphic\_S ( italic\_p , italic\_s ),
where p∈P𝑝𝑃p\in Pitalic\_p ∈ italic\_P, s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S. 𝒮𝒮\mathcal{S}caligraphic\_S contains (p,s)𝑝𝑠(p,s)( italic\_p , italic\_s ) if a grounded predicate p𝑝pitalic\_p is
in state s𝑠sitalic\_s.
Now, we can write a query in first order logic, equivalent to the action
precondition query as:
| | | | |
| --- | --- | --- | --- |
| | {(p)|\displaystyle\{(p)\,|\,\,{ ( italic\_p ) | | (∀s1𝒮(p,s1)⇒∃s′R(⊤,s1,a1,s′,⊤))∧limit-from⇒for-allsubscript𝑠1𝒮𝑝subscript𝑠1superscript𝑠′𝑅topsubscript𝑠1subscript𝑎1superscript𝑠′top\displaystyle(\forall s\_{1}\,\mathcal{S}(p,s\_{1})\Rightarrow\exists s^{\prime}R(\top,s\_{1},a\_{1},s^{\prime},\top))\,\,\land( ∀ italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT caligraphic\_S ( italic\_p , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ⇒ ∃ italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT italic\_R ( ⊤ , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , ⊤ ) ) ∧ | |
| | | (∀s1¬𝒮(p,s1)⇒∀s′R(⊤,s1,a1,s′,⊥))}\displaystyle(\forall s\_{1}\,\neg\mathcal{S}(p,s\_{1})\Rightarrow\forall s^{\prime}R(\top,s\_{1},a\_{1},s^{\prime},\bot))\}( ∀ italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ¬ caligraphic\_S ( italic\_p , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ⇒ ∀ italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT italic\_R ( ⊤ , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , ⊥ ) ) } | |
This formulation is equivalent to the FOk𝐹superscript𝑂𝑘FO^{k}italic\_F italic\_O start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT queries with k=2𝑘2k=2italic\_k = 2. This means that the
data, expression and combined complexity of these queries are in complexity classes
AC00{}^{0}start\_FLOATSUPERSCRIPT 0 end\_FLOATSUPERSCRIPT, ALOGTIME, and PTIME respectively.
The results for complexity classes of the queries presented above holds assuming
that the agent stores all the transitions using a mechanism equivalent to
relational databases where it can search through states in linear time. For the
simulator agents that we generally encounter, this assumption almost never holds
true. Even though both the queries have membership in the same complexity class, an
agent will have to spend more time in running the action precondition query owing
to the exhaustive search of all the states in all the cases, whereas for the plan
outcome queries, the exhaustive search is not always needed.
Additionally, plan outcome queries place very little requirements on the agent to
answer the queries, whereas action precondition queries require an agent to use
more computation to generate it’s responses. Action precondition queries also force
an agent to know all the transitions beforehand. So if an agent does not know its
model but has to execute an action in a state to learn the transition, action
precondition queries will perform poorly as agent will execute that action in all
possible states to answer the query. On the other hand, to answer plan outcome
queries in such cases, an agent will have to execute at most L actions (maximum
length of the plan) to answer a query.
Evaluating the responses of queries will be much easier for the action precondition queries, whereas evaluating the responses of plan outcome queries is not straightforward, as discussed in Verma et al. ([2021](#bib.bib33)).
As mentioned earlier, the agent interrogation algorithm that uses the plan outcome queries has asymptotic complexity O(|ℙ\*|×|𝔸|)𝑂superscriptℙ𝔸O(|\mathbb{P}^{\*}|\times|\mathbb{A}|)italic\_O ( | blackboard\_P start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT | × | blackboard\_A | ) for evaluating all agent responses. On the other hand, if an algorithm is implemented with action precondition queries, its asymptotic complexity for evaluating all agent responses will reduce to O(|𝔸|)𝑂𝔸O(|\mathbb{A}|)italic\_O ( | blackboard\_A | ). This is because AAM needs to ask two queries for each action. The first query in a state where it is guaranteed that the action will fail, this will lead AAM to learn the action’s precondition. After that AAM can ask another query in a state where the action will not fail, and learn the action’s effects. This will also lead to an overall less number of queries.
So there is a tradeoff between the computation efforts needed for evaluation of query responses and the computational burden on the agent to answer those queries.
6 Empirical Evaluation
-----------------------
We implemented AIA with plan outcome queries in Python to evaluate the efficacy of our
approach.
In this
implementation, initial states were collected by making the agent perform random walks in
a simulated environment. We used a maximum of 60 such random initial
states for each domain in our experiments.
The implementation is optimized to store the agent’s answers to queries;
hence the stored responses are used if a query is repeated.
We tested AIA on two types of agents: symbolic agents that use models from
the IPC (unknown to AIA), and simulator agents that report states
as images using PDDLGym Silver and Chitnis ([2020](#bib.bib29)). All experiments were executed
on 5.0 GHz Intel i9-9900 CPUs with 64 GB RAM running Ubuntu 18.04.
The analysis presented below shows that AIA learns the correct model with
a reasonable number of queries, and compares our results with the closest
related work, FAMA Aineto et al. ([2019](#bib.bib1)). We use the metric of
model accuracy in the following analysis: the number of
correctly learned palm tuples normalized with the total number of palm
tuples in ℳ𝒜superscriptℳ𝒜\mathcal{M}^{\mathcal{A}}caligraphic\_M start\_POSTSUPERSCRIPT caligraphic\_A end\_POSTSUPERSCRIPT.
###
6.1 Experiments with symbolic agents
We initialized the agent
with one of the 10 IPC domain models, and ran AIA on the resulting agent.
10 different problem instances were used to obtain average performance
estimates.
Table [1](#S6.T1 "Table 1 ‣ 6.1 Experiments with symbolic agents ‣ 6 Empirical Evaluation ‣ Learning Causal Models of Autonomous Agents using Interventions") shows that the number of queries required
increases with the number of predicates and actions in the domain. We used
Fast Downward Helmert ([2006](#bib.bib15)) with LM-Cut
heuristic Helmert and
Domshlak ([2009](#bib.bib13)) to solve the planning problems.
Since our approach is planner-independent, we also tried using
FF Hoffmann and Nebel ([2001](#bib.bib16)) and the results were similar. The low variance shows
that the method is stable across multiple runs.
| Domain | |ℙ\*|superscriptℙ\mathbf{|\mathbb{P}^{\*}|}| blackboard\_P start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT | | |𝔸|𝔸\mathbf{|\mathbb{A}|}| blackboard\_A | | |𝒬^|^𝒬\mathbf{|\hat{\mathcal{Q}}|}| over^ start\_ARG caligraphic\_Q end\_ARG | | 𝐭μsubscript𝐭𝜇\mathbf{t\_{\mu}}bold\_t start\_POSTSUBSCRIPT italic\_μ end\_POSTSUBSCRIPT (ms) | 𝐭σsubscript𝐭𝜎\mathbf{t\_{\sigma}}bold\_t start\_POSTSUBSCRIPT italic\_σ end\_POSTSUBSCRIPT (μ𝜇\mathbf{\mu}italic\_μs) |
| --- | --- | --- | --- | --- | --- |
| Gripper | 5 | 3 | 17 | 18.0 | 0.2 |
| Blocksworld | 9 | 4 | 48 | 8.4 | 36 |
| Miconic | 10 | 4 | 39 | 9.2 | 1.4 |
| Parking | 18 | 4 | 63 | 16.5 | 806 |
| Logistics | 18 | 6 | 68 | 24.4 | 1.73 |
| Satellite | 17 | 5 | 41 | 11.6 | 0.87 |
| Termes | 22 | 7 | 134 | 17.0 | 110.2 |
| Rovers | 82 | 9 | 370 | 5.1 | 60.3 |
| Barman | 83 | 17 | 357 | 18.5 | 1605 |
| Freecell | 100 | 10 | 535 | 2.24††{}^{\dagger}start\_FLOATSUPERSCRIPT † end\_FLOATSUPERSCRIPT | 33.4††{}^{\dagger}start\_FLOATSUPERSCRIPT † end\_FLOATSUPERSCRIPT |
Table 1: The number of queries (|𝒬^|^𝒬|\hat{\mathcal{Q}}|| over^ start\_ARG caligraphic\_Q end\_ARG |), average time per query (tμsubscript𝑡𝜇t\_{\mu}italic\_t start\_POSTSUBSCRIPT italic\_μ end\_POSTSUBSCRIPT), and variance of time per query (tσsubscript𝑡𝜎t\_{\sigma}italic\_t start\_POSTSUBSCRIPT italic\_σ end\_POSTSUBSCRIPT) generated by AIA with FD. Average and variance are calculated for 10 runs of AIA, each on a separate problem.
††{}^{\dagger}start\_FLOATSUPERSCRIPT † end\_FLOATSUPERSCRIPTTime in sec.
####
6.1.1 Comparison with FAMA
We compare the performance of AIA with
that of FAMA in terms of stability of the models learned and the time taken
per query. Since the focus of our approach is on automatically generating
useful traces, we provided FAMA randomly generated traces of length 3 (the
length of the longest plans in AIA-generated queries) of the form used
throughout this paper (⟨sI,a1,a2,a3,sG⟩subscript𝑠𝐼subscript𝑎1subscript𝑎2subscript𝑎3subscript𝑠𝐺\langle s\_{I},a\_{1},a\_{2},a\_{3},s\_{G}\rangle⟨ italic\_s start\_POSTSUBSCRIPT italic\_I end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT ⟩).
Fig. [3](#S6.F3 "Figure 3 ‣ 6.1.1 Comparison with FAMA ‣ 6.1 Experiments with symbolic agents ‣ 6 Empirical Evaluation ‣ Learning Causal Models of Autonomous Agents using Interventions") summarizes our findings. AIA takes lesser time per
query and shows better convergence to the correct model. FAMA sometimes
reaches nearly accurate models faster, but its accuracy continues to
oscillate, making it difficult to ascertain when the learning process
should be stopped (we increased the number of traces provided to FAMA until
it ran out of memory). This is because the solution to FAMA’s internal
planning problem introduces spurious palm tuples in its model if the input
traces do not capture the complete domain dynamics. For Logistics,
FAMA generated an incorrect planning problem, whereas for Freecell and
Barman it ran out of memory (AIA also took considerable time for Freecell).
Also, in domains with negative preconditions like Termes, FAMA was
unable to learn the correct model. We used
Madagascar Rintanen ([2014](#bib.bib27)) with FAMA as
it is the preferred planner for it. We also tried FD and FF with FAMA, but
as the original authors noted, it could not scale and ran out of memory on
all but a few Blocksworld and Gripper problems where it was much slower
than with Madagascar.

Figure 3: Performance comparison of AIA and FAMA in terms of model
accuracy and time taken per query with an increasing number of queries.
###
6.2 Experiments with simulator agents
AIA can also be used with
simulator agents that do not know about predicates and report states as
images. To test this, we wrote classifiers for detecting predicates from
images of simulator states in the PDDLGym
framework. This framework provides ground-truth PDDL models, thereby
simplifying the estimation of accuracy. We initialized the agent with one
of the two PDDLGym environments, Sokoban and Doors. AIA inferred the correct model in both cases and the
number of instantiated predicates, actions, and the average number of
queries (over 5 runs) used to predict the correct model for Sokoban were
35, 3, and 201, and that for Doors were 10, 2, and 252.
7 Related Work
---------------
One of the ways most current techniques learn the agent models is based on
passive or active observations of the agent’s behavior, mostly in the
form of action traces Gil ([1994](#bib.bib8)); Yang et al. ([2007](#bib.bib34)); Cresswell et al. ([2009](#bib.bib5)); Zhuo and Kambhampati ([2013](#bib.bib35)).
Jiménez et al. ([2012](#bib.bib18)) and Arora et al. ([2018](#bib.bib2)) present comprehensive
review of such approaches. FAMA Aineto et al. ([2019](#bib.bib1)) reduces model recognition to a planning
problem and can work with partial action sequences and/or state traces
as long as correct initial and goal states are provided. While
FAMA requires a post-processing step to update the learnt
model’s preconditions to include the intersection of all states where
an action is applied, it is not clear that such a process would
necessarily converge to the correct model. Our experiments indicate
that such approaches exhibit oscillating behavior in terms of model
accuracy because some data traces can include spurious predicates, which
leads to spurious preconditions being added to the model’s actions.
As we mentioned earlier, such approaches do not feature interventions, and hence the models
learned by these techniques do not capture causal relationships correctly and
feature correlations.
Pearl ([2019](#bib.bib26)) introduce a 3-level causal hierarchy in terms of the classification of causal information
in terms of the type of questions each class can answer. He also notes that based on passive observations alone,
only associations can be learned, not the interventional or counterfactual causal relationships, regardless of the size of data.
The field of active learning Settles ([2012](#bib.bib28)) addresses the related
problem of selecting which data-labels to acquire for learning
single-step decision-making models using statistical measures of
information. However, the effective feature set here is the set of
all possible plans, which makes conventional methods for evaluating
the information gain of possible feature labelings infeasible.
In contrast, our approach uses a hierarchical abstraction
to select queries to ask, while inferring a multistep
decision-making (planning) model. Information-theoretic metrics could
also be used in our approach whenever such information is available.
Blondel et al. ([2017](#bib.bib4)) introduced Dynamical Causal Networks which extend the causal graphs to temporal domains, but they do not
feature decision variables, which we introduce in this paper.
8 Conclusion
-------------
We introduced dynamic causal decision networks (DCDNs) to represent causal structures in
STRIPS-like domains; and showed that the models learned using the agent interrogation
algorithm are causal, and are sound and complete with respect to the corresponding unknown
ground truth models. We also presented an extended analysis of the
queries that can be asked to the agents to learn their model, and the
requirements and capabilities of the agents to answer those queries.
Extending the empirical analysis to action precondition queries, and extending our
predicate classifier to handle noisy state detection, similar
to prevalent approaches using classifiers to detect symbolic
states Konidaris et al. ([2014](#bib.bib20)); Asai and Fukunaga ([2018](#bib.bib3)) are a few good directions for future
work. Some other promising extensions include replacing query and response
communication interfaces between the agent and AAM
with a natural language similar to Lindsay et al. ([2017](#bib.bib22)), or learning other
representations like Zhuo et al. ([2014](#bib.bib36)).
Acknowledgements
----------------
This work was
supported in part by the NSF grants IIS 1844325, OIA 1936997, and ONR grant N00014-21-1-2045. |
1aa1f5bb-586a-4b3f-ac6e-27f2eaabaa5a | trentmkelly/LessWrong-43k | LessWrong | A Confused Chemist's Review of AlphaFold 2
(This article was originally going to be titled "A Chemist's Review of AlphaFold 2")
Most of the protein chemists I know have a dismissive view of AlphaFold. Common criticisms generally refer to concerns of "pattern matching". I wanted to address these concerns, and have found a couple of concerns of my own.
The main method for assessment of AlphaFold 2 has been the Critical Assessment of Protein Structure (CASP). This is a competition held based on a set of protein structures which have been determined by established experimental methods, but deliberately held back from publishing. Entrant algorithms then attempt to predict the structure based on amino acid sequence alone. AlphaFold 2 did much better than any other entrant in 2020, scoring 244 compared to the second place entrant's 91 by CASP's scoring method.
The first thing that struck me during my investigation is how large AlphaFold is, in terms of disk space. On top of neural network weights, it has a 2.2 TB protein structure database. A model which does ab initio calculations i.e. does a simulation of the protein based on physical and chemical principles, will be much smaller. For example Rosetta, a leading ab initio software package recommends 1 GB of working memory per processor in use while running, and gives no warnings at all about the file size of the program itself.
DeepMind has an explicit goal of replacing crystallography as a method for determining protein structure. Almost all crystallography is carried out on naturally occurring proteins isolated from organisms under study. This means the proteins are products of evolution, which generally conserves protein structure as a means of conserving function. Predicting the structure of an evolved protein is a subtly different problem to predicting the structure of a sequence of random amino acids. For this purpose AlphaFold 2 is doing an excellent job.
On the other hand, I have a few nagging doubts about how exactly DeepMind are going about solving |
f97d231f-906f-4303-8555-bc1550f08580 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Wikipedia as an introduction to the alignment problem
AI researchers and others are increasingly looking for an introduction to the alignment problem that is clearly written, credible, and supported by evidence and real examples. The Wikipedia article on AI Alignment has become such an introduction.
Link: [**https://en.wikipedia.org/wiki/AI\_alignment**](https://en.wikipedia.org/wiki/AI_alignment)
Aside from me, it has contributions from Mantas Mazeika, Gavin Leech, Richard Ngo, Thomas Woodside (CAIS), Sidney Hough (CAIS), other Wikipedia contributors, and copy editor Amber Ace. It also had extensive feedback from this community.
In the last month, it had ~20k unique readers and was cited by [Yoshua Bengio](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/).
We've tried hard to keep the article accessible for non-technical readers while also making sense to AI researchers.
I think Wikipedia is a useful format because it can include videos and illustrations (unlike papers) and it is more credible than blog posts. However, Wikipedia has strict rules and could be changed by anyone.
Note that we've announced this effort on the Wikipedia talk page and shared public drafts to let other editors give feedback and contribute.
I you edit the article, please keep in mind Wikipedia's rules, use [reliable](https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources) sources, and consider that we've worked hard to keep it concise because most Wikipedia readers spend <1 minute on the page. For the latter goal, it helps to focus on edits that reduce or don't increase length. To give feedback, feel free to post on the talk page or message me. Translations would likely be impactful. |
7a868263-df1f-42c3-9009-24c69a8e6ead | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Buck Shlegeris: How I think students should orient to AI safety
---
*In this EA Student Summit 2020 talk, Buck Shlegeris argues that students should engage with AI safety by trying to actually assess the arguments and the safety proposals. He claims that this is doable and useful.*
*In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact* [*Aaron Gertler*](mailto:aaron.gertler@centreforeffectivealtruism.org) *— he can help you get started.* |
c20f28a8-5244-4c47-ac37-d39cebaaa755 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Research proposal: Leveraging Jungian archetypes to create values-based models
This project is the origins of the [Archetypal Transfer Learning (ATL) method](https://www.lesswrong.com/tag/archetypal-transfer-learning)
This is the **abstract** of my research proposal submitted to AI Alignment Awards. I am publishing this here for community feedback. You can find the link to the whole research paper [here](https://www.whitehatstoic.com/p/research-proposal-leveraging-jungian).
---
### Abstract
We are entering a decade of singularity and great uncertainty. Across all disciplines, including wars, politics, human health, as well as the environment, there are concepts that could prove to be a double edged sword. Perhaps the most powerful factor in determining our future is how information is distributed to the public. It can be both transformational and empowering using advanced AI technology – or it can lead to disastrous outcomes that we may not have the foresight to predict with our current capabilities.
Goal misgeneralization is defined as a robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations. This research proposal tries to capture what might be a better description of this problem and solutions from a Jungian perspective.
This proposal covered key AI alignment topics, from goal misgeneralisation to other pressing issues. It offers a comprehensive approach for addressing critical questions in the field.
* reward misspecification and hacking
* situational awareness
* deceptive reward hacking
* internally-represented goals
* learning broadly scoped goals
* broadly scoped goals incentivizing power-seeking,
* power seeking policies would choose high reward behaviors for instrumental reasons
* misaligned AGIs gain control of the key levers of power
These above-mentioned topics were reviewed to check the viability of approaching the alignment problem through a Jungian approach. 3 key concepts emerged from the review:
* By understanding how humans use patterns to recognize intentions at a subconscious level, researchers can leverage on Jungian archetypes and create systems that mimic natural decision-making. With this insight into human behavior, AI can be trained more effectively with archetypal data.
* Stories are more universal in human thought than goals. Goals and rewards will always yield the same problems encountered in alignment research. AI systems should utilize the robustness of complete narratives to guide its responses.
* Values-based models can serve as the moral compass for AI systems in determining what is a truthful and responsible response or not. Testing this theory is essential in continuing progress on alignment research.
A list of initial methodologies were added to present an overview of how the research will proceed once approved.
In conclusion, alignment research should look into the possibility of replacing goals and rewards in evaluating AI systems. By understanding that humans think consciously and subconsciously through Jungian archetypal patterns, this paper proposes that complete narratives should be leveraged in training and deploying AI models.
A number of limitations were included in the last section. The main concern is the need to hire Jungian scholars or analytical psychologists - as they will define what constitutes archetypal data and evaluate results. They will also be required to influence the whole research process with a high moral ground and diligence. They will be difficult to find.
AI systems will impact our future significantly, so it is important that they are developed responsibly. History has taught us what can happen when intentions are poorly executed: the deaths of millions through the use of wrong ideologies haunt us and remind us of the need for caution in this field. |
8c7cf59c-ec31-4e1d-8c6d-3ded48484399 | trentmkelly/LessWrong-43k | LessWrong | AI #115: The Evil Applications Division
It can be bleak out there, but the candor is very helpful, and you occasionally get a win. Zuckerberg is helpfully saying all his dystopian AI visions out loud. OpenAI offered us a better post-mortem on the GPT-4o sycophancy incident than I was expecting, although far from a complete explanation or learning of lessons, and the rollback still leaves plenty sycophancy in place. The big news was the announcement by OpenAI that the nonprofit will retain nominal control, rather than the previous plan of having it be pushed aside. We need to remain vigilant, the fight is far from over, but this was excellent news.
Then OpenAI dropped another big piece of news, that board member and former head of Facebook’s engagement loops and ad yields Fidji Simo would become their ‘uniquely qualified’ new CEO of Applications. I very much do not want her to take what she learned at Facebook about relentlessly shipping new products tuned by A/B testing and designed to maximize ad revenue and engagement, and apply it to OpenAI. That would be doubleplus ungood. Gemini 2.5 got a substantial upgrade, but I’m waiting to hear more, because opinions differ sharply as to whether the new version is an improvement. One clear win is Claude getting a full high quality Deep Research product. And of course there are tons of other things happening.
TABLE OF CONTENTS
Also covered this week: OpenAI Claims Nonprofit Will Retain Nominal Control, Zuckerberg’s Dystopian AI Vision, GPT-4o Sycophancy Post Mortem, OpenAI Preparedness Framework 2.0. Not included: Gemini 2.5 Pro got an upgrade, recent discussion of students using AI to ‘cheat’ on assignments, full coverage of MIRI’s AI Governance to Avoid Extinction.
1. Language Models Offer Mundane Utility. Read them and weep.
2. Language Models Don’t Offer Mundane Utility. Why so similar?
3. Take a Wild Geoguessr. Sufficient effort levels are indistinguishable from magic.
4. Write On. Don’t chatjack me, bro. Or at least show some syntherity.
5. Get |
94df63f6-4bc4-4c21-9568-1a158401a331 | trentmkelly/LessWrong-43k | LessWrong | Infinite Possibility Space and the Shutdown Problem
This post is a response to the recent Astral Codex Ten post, “CHAI, Assistance Games, And Fully-Updated Deference”.
A brief summary of the context, for any readers who are not subscribed to ACX or familiar with the shutdown problem:
The Center for Human-Compatible Artificial Intelligence (CHAI) is a research group at UC Berkeley. Their researchers have published on the shutdown problem, showing that “propose an action to humans and wait for approval, allowing shutdown” strictly dominates “take that action unilaterally” as well as “shut self down unilaterally” for agents satisfying certain assumptions.
MIRI discusses a counterexample, using a toy example where the AI has a finite number of policy options available, and expresses that “learn which of those finite set of options is best according to humans, then execute without allowing humans to shut it down” can dominate the course of “propose action to humans and wait for approval.”
I claim that the fact that the AI is “larger” than its value-space seems to me to be a critical ingredient in the AI being able to conclude that it has reached its terminal point in value-space. I posit that given a value-space that is “larger” than the AI, the AI will accept shutdown. Here I present an argument that, for at least one AI architecture and structure of value-space, the “propose action and allow shutdown” option should dominate much of the time.
Assume that a current AI model A contains a deep neural net connected to some decision procedure, of a specified, finite size (such as 16 layers with 1024 nodes each).
Then assume we can specify that human values are best specified by some ideal AI Z with the same structure[1], but with unknown size (e.g. Z includes a neural net with currently unknown number of layers and nodes in each layer.)
Further assume that we can specify that A’s action-space is to set the weights of its existing nodes, then propose actions to humans who can allow the actions or veto them by shuttin |
c4396a55-caaa-4d5a-b907-56e9f63b77d3 | trentmkelly/LessWrong-43k | LessWrong | Taboo Wall
Summary: Rationalist Taboo is the skill of being able to talk about the same concepts using different words. Taboo Wall is open discussion, but as the discussion goes on more words get stricken from everyone’s vocabulary.
Tags: Very Experimental, Medium
Purpose: To practice the ability to communicate what you mean without using specific phrases or concepts.
Materials: A large and easily visible space to write words on. A giant whiteboard works, as does a projector displaying a word document on the wall.
Announcement Text: “Rationalist Taboo” is the skill of explaining the ideas you mean to communicate without the use of specific words. It’s a tool with many uses. Sometimes your conversational partner doesn’t know a particular word, so you want to be able to smoothly switch to vocabulary they are familiar with. Other times you and the people you’re talking to have subtly different definitions or associations with a given word, and so you can clear up some confusion by mutually setting that word aside. (Consider: “If a tree falls in a forest and nobody hears it, does it make a sound?” That’s a potentially difficult question! “If a tree falls in a forest and nobody has an auditory experience of it, does it make vibrations in the air?” That is less difficult!)
Today we’ll be practicing this skill, but mostly we’ll be having conversations about whatever we’re interested in. The trick is, throughout the meetup we’ll be adding words to a “taboo list” that we’re not allowed to say. Please continue your conversation around this slight impediment!
Description: Put one word on the wall, and explain that people are free to talk about whatever they’re interested in but that they are not allowed to use that word. Periodically (I suggest intervals of fifteen minutes, based on a wild hunch) words will be added to the list, and those words are also not allowed. Wave your arms to point out the additions of new words. If anyone notices someone using a word you shouldn’t, they s |
7b792a0f-a7d3-4f5e-a944-56b5616e6b3c | trentmkelly/LessWrong-43k | LessWrong | Longview Anarchism: Transcending the Existential Threat of Freedom
https://c4ss.org/content/50066
Piece about game theoretic coordination, existential risk, and the maximization of freedom. Full-text copied below:
Longview Anarchism: Transcending the Existential Threat of Freedom
Emmi Bevensee | Support this author on Patreon | September 28th, 2017
Skin as Thick as Bark
As asinine, cultish leaders fascistically toy with the notion of nuclear warfare, we are reminded yet again of the fragility of human life. That humans have advanced as far as we have is remarkable. It reminds me of the feeling of awe I have when realizing that we limited humans drive hurtling boxes of steel around and don’t kill each more often than we do. Really, brava humanity. And yet, on a long time scale, we are less than a blink. After all, dinosaurs roamed the earth for 165 million years, and humans have only been around for about 6 million. Although dinosaurs did not reach the level of existential responsibility and consciousness that humans have, they were still wiped out by natural phenomena. Many pessimists see our extinction as an inevitability and almost usher it in, giving it a seat in their home with a misanthropic accelerationist’s glee. It’s wiser to recognize the exponentially harrowing conundrums that we do and will continue to face with an eye of hope. At the very least we should act in accordance with a path that hope might suggest. The game theoretic dilemmas of technological advancement present threats, but they also offer opportunities for freedom. The alternative can only be devastation and the void, so gambling on a future is, however unlikely to succeed, a sound bet. A longview anarchism represents both a determinism, and an infinite array of possibility.
The Fear of Knowledge
Each new existential threat to humanity increases both the rewards of coordination and the risks of defection. With the invention of firearms came the genocide of indigenous peoples the world over, but, like the boomerang of advancement, in time, those guns g |
b412f03f-2d87-454b-b21c-74e0f536eafd | trentmkelly/LessWrong-43k | LessWrong | Capitalising On Trust—A Simulation
Recently I’ve been exploring moral philosophy with a series on Moral Licensing, Andrew Tane Glen’s Why Cooperate?, and in a workshop I ran with my daughter’s class about the strategies of cooperation and defection. One phenomenon that has arisen through these explorations is that defectors generally gain a short term relative advantage, while cooperators benefit from a sustained long term absolute advantage, which got me thinking about a simulation.
Unfortunately I can't embed the simulation here so to view it you'll need to visit the site. But I'm interested in what people think of the concept, and the practice of using simulations in general to illustrate these sorts of concepts.
Do you think this sort of short term individualism and long term cooperation, is something that applies outside the prisoner's dilemma?
|
fc9dad6c-03c0-4e19-8b62-4b8e18d32861 | trentmkelly/LessWrong-43k | LessWrong | FAR - UVC
Is there is a cost-effective way to convert or make FAR-UVC light with current products on the market?
Appreciate your time in advance. |
9a29456d-62d9-406d-b441-896aa7f0ff52 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Robots Learning Through Interactions (Jens Kober)
there we go yes yay
thanks a lot for the kind of
introduction and also for the
invitation uh so i'm very much looking
forward to having
some very lively discussions uh as
hopeful
as most of the times um yeah
so nick already uh you know told you
about the question who should
help whom so most people if they think
about robots they would
think about robots supporting us with
the
tedious or dangerous tasks and then yet
very annoying cases where
the robot does something wrong or gets
stuck and
we also have to help the robot to get
unstuck
and then in a similar way who should
teach whom depending on
what kind of research you're doing or
what your perspective on robotics is
you might say yeah could be a great idea
to actually
use a robot also as a teacher for
humans my research focuses
more on teaching robots
new tasks so that's an example of
kinesthetic teaching where a person
shows
the robot how to solve a new task
and in this talk what i'm going to focus
on
is how do we actually make this teaching
in interaction with a per person as
smooth as possible and
as efficient as possible for the robot
so they're largely two different but
complementary ways on
how robots can learn new skills so so if
i say skills i'm talking about
learning some kind of movements on the
order of a few seconds or a few minutes
and the first one is imitation learning
where a teacher demonstrates a skill and
the student then tries
to mimic tech like we also saw in a
previous slide
here's an example from my own research a
bit
older and you'll see one my former phd
student demonstrates
the robot how to unscrew a light bulb
and then how to put it in a trash can
um so here yeah you need a couple of
turns to get it loose and then you move
it to the trash can
so just playing back the this recording
that's not really interesting what
you're really interested in is to
generalize the movement so for instance
in this case for the number of rotations
you need to do
to unscrew it but you could also just
think about changing the position of the
light bulb holder or the trash in
particular what we were interested
in here is the order of
little sub movements you need to do so
you need to move towards
the bulb then grasp it rotate it in one
direction
and so forth and then you can represent
that as
a form of a graph that tells you in
which order you need to do things
and then what you were also interested
in is okay when do we actually
switch to the next primitive
and then the robot can reproduce it and
also
generalized to new situations
so here you see it unscrewing and then
when it's it becomes loose
it's pulling slightly on the bob so you
can actually remove it
um so that's really important to detect
it accurately otherwise
the light bulb will go flying okay
so then it successfully did that
so that's an example of imitation
learning
and the other side is reinforcement
learning as you can imagine
like for us if the task gets slightly
more complex
also a robot needs to practice
what i'm going to use a bit as a running
example is ball in a cup so that's a
little game where you have to catch the
ball
in the cup and here's my cousin's
daughter she was
around 10 back then i showed her once
how to do that
and then it took around 35 times until
she
caught the ball in the cup so
reinforcement learning is learning by
trial and error and trying to
maximize the reward and you can see the
little uh rewarded chocolate
here okay so in both um
examples i showed you you have the human
in the loop
in some form or other uh for the
imitation it's very clear you have to
human demonstrating and
the robot mimicking but also for
reinforcement learning you need to
define
the revolt function some kind of
function approximation
and how exactly you set up the problem
and so forth so to illustrate a little
bit what we
did there is the reward is more or less
based on the distance between the ball
and the cup
the first time the robot drives it's
still a bit far
so gets a pretty low reward close to
zero
next time it's a bit closer so it gets a
bit higher reward
and if you imagine that you do some kind
of weighted combination of the two of
them
you might get something that's already
pretty close um
and then in this case pumping on the rim
of the cup and then
the maximum reward would be a one in
this case
so it's already pretty close and then if
you repeat that
so mainly focusing on the good one and
only the other two a little bit you
can hopefully catch the ball in the cup
very quickly so that's the robot trying
out different things
and then seeing what works what doesn't
work and adapting accordingly
also here we helped the robot
additionally by giving it an initial
demonstration
and so here there was a good
demonstration
but then as the dynamics are a lot more
complex here if you just play that back
then
you missed the cup by one of 10 15
centimeters
after 15 trials or so it's already
considerably closer
now the next one i think goes a bit too
far
and then it's going to hit the rim of
the cup
and then finally after 60 trials or so
it starts to get the ball into the cup
and after 100 tries or so that has
converged and works very reliably
okay so as you can see in the dates
that's
uh some fairly old videos um
what people are working on nowadays is
doing end-to-end learning so
we had some vision system that tells us
where the
ball is um we used low dimensional
representation and so forth um if you do
not use neural networks to try to learn
end-to-end
that usually means you need some form of
big data or some other tricks to do that
so this famous example here
that's from google they had an army of
robots working day and night for a
couple of
months actually non-stop just to learn
to pick up simple objects in bins like
that
so that's clearly impressive but not
really
desirable or practical in any sense
okay so what i showed you so far
is the human is involved in the
beginning
either giving demonstrations or setting
up the problem and then the robot is
off on its own trying to learn something
if you again compare it to how humans
learn you'll actually notice that
there's often a continued
student teacher interaction so while
somebody is learning you might provide
additional demonstrations or some other
form of intermittent feedback
and at least nowadays that's still
largely missing in robot learning
and i believe that including these
intermittent interactions with a teacher
allows you to
speed up the learning which in turn
allows the robot to solve more complex
tasks
and that's also a somewhat intuitive way
for
humans to teach and in the remainder of
the talk i'm going to show you a few
examples of
that's what i'm claiming here is
actually hopefully true
okay so how does this interactive
learning look like we have
a robot and an environment
the environment has a state so it could
be the position of the robot in the
world or the
position of the robot arm and the agent
has a policy which tells it
which action to take for each state
so depending on where it is what to do
and that's just more or less traditional
control loop
now what we have additionally is a
teacher
and the teacher is going to observe the
state of the world and the robot
potentially also what action the the
robot took
and then there are many different
variants you can think of but in some
form or other
the teacher is going to provide
additional information
to the agent usually if something is
going wrong so it could be an additional
demonstration to show something that the
robot hasn't seen yet
and could also be just a correction on
how to modify
it the robot behavior to perform the
task better
so in the first few things i'm going to
show you we're going to focus on
corrections in the action space so the
teacher very explicitly tells you
tells the robot okay you should have
moved a bit faster
or a bit slower um in order to perform
the task well
to come back to the example i had before
the ball in the cup so here you see the
same thing
however now we start learning from
scratch and carlos my postdoc
is sitting in front of the computer and
occasionally giving it some additional
feedback so in this case
move more to the left move more to the
right move slower
move faster and then after
17 trials or so it's already quite close
and then let's see
yep next one so that's after 25 trials
or so
it can successfully catch the ball in
the cup
so compared to the human gel there was
around 35 times if you ask an adult
that's also usually around 20 times
so it's really the same order of
magnitude now
and if you compare that to learning the
skill
from scratch so without initializing it
with imitation learning
just using reinforcement learning that
takes
two orders of magnitudes longer so some
2000 or so trials usually in our
experiments
and that's still not doing end-to-end
learning but relying on
low-dimensional inputs for the states
okay so that directly brings me to the
next part
learning from high dimensional inputs of
in this case from raw camera images
a typical thing you're going to see in
deep learning is
some form of encoder decoder structure
where you have the
high dimensional images here you bring
it
back to some kind of low dimensional
representation that
hopefully still contains all the
relevant information and in order to
train that what you
typically do is you decode it again
and then you try to ensure that the
output
matches the input so that effectively
here this
low dimensional representation contains
all the
important information and then
once you've trained this mapping from
the
camera images to a low dimensional
representation you can
use that to learn the robot behavior so
the policy on top of that
so then you would remove the decoder
part and then start learning
the policy either completely freezing
this part or maybe
also training that partially
so you could do that beforehand
so you collect lots and lots of data of
images the
the robot is going to encounter and then
try to find some kind of generic
representation
and then learn a policy on top of that
so that's what we see here
but as you can already imagine that's
probably not the smartest
thing to do because depending on the
task you do
some stuff in the environment might be
relevant and some other stuff
irrelevant and that might change
completely depending on which task you
want to get
so then the obvious thing is okay so can
also learn that
while so this representation while the
robot is learning the task
and that's exactly what we tried to do
there
to learn simultaneously and embedding
and the policy and then
so really the main objective you want to
do is to learn this policy so the
actions the robot takes but then as an
additional
criterion we have to this reconstruction
from the
input image to um in this case a
slightly blurred
image of a tree again
and again while while this whole
learning is happening
the robot is going to get feedback from
its human teacher not continuously so
not we are not remote controlling it
we're not teleoperating it
it's just the teacher is jumping in
occasionally to
fix some things
so what you see on the left is the
reconstruction so based on the
low dimensional representation and then
you can see that already very quickly
it learns something that represents the
the real images reasonably well and
again what you can't see
in this video is the human teacher
occasionally giving feedback via a
keyboard in this case on
what the robot is supposed to do in this
case
push this whiteboard swiper thingy
off the table and
let me skip forward a little bit
and that's that's some so that's in real
time here so we can
learn that really in a couple of minutes
come on
no
okay let's see here's another example
of teaching this little ducky town robot
to drive
and it's rodrigo one of my phd students
and you can see him holding the keyboard
and if you look very closely sometimes
he presses
buttons to teach it how to drive again
here the raw
camera images and the reconstructed
images and again after
just 10 minutes or so it learns how to
drive on the correct side of the road
and then here that's in real time
and here you can see that really learned
and that he's not using the keyboard any
longer to
control the little robot
yeah so that's not just taking the
images
compressing it down to something useful
and then doing
a control on top of that however for
many tasks you need some kind of
memory um
so you need to so either if you only can
observe
the images but you need to know the
velocity then you need to
have at least a couple of images you
might
act based on something you saw in the
past and for that
you can have a very similar structure
but here
additional a recurrent unit in a neural
network
so here a couple of toy examples i'll
skip over those pretty
quickly here the agent only observes
the images but not the velocity but in
order to solve the task
so for balancing this pendulum you
really need to know how fast it moves
and here we compare two different
methods
the one on top that's all method where
we use
corrective interactions so we give
corrections in the action domain
and on the bottom there was a different
approach where you need to give
additional complete demonstrations which
is um typically a lot slower
okay so that's again the the
reconstruction and the images and then
here
it's a little real setup and that's the
demonstrator is really learning to
do the control based on the images and
not
based on some internal states
wait come on and then here
corresponding robot experiments
we have the camera inputs the only thing
the robot sees is this shaded
white area there so it needs to touch
the the fake oranges so it needs to
remember okay the orange was here and
it's going to move there
in order to then touch them later on
and also something like this we can
learn in
in five minutes in this case two rights
and then here if you train it longer
after 45 minutes
it can really do it very accurately
i'm not sure if you can see that here um
here again you could see somebody doing
the teaching
here you see nicely where the
keyboard and here the task is to
touch the oranges but avoid the
the pairs
because we already trained a good
representation so going from the camera
images
to a compact representation
uh teaching this additional task is
something that can really be done
within 10 minutes and if you compare
that
to a more traditional deep learning
end-to-end approach
without having a teacher in there
we haven't tried just because based on
the toy examples we
saw that's again taking at least an
order of magnitude longer so
think about spending a day teaching
something like that
and the other big problem is if you
use these other methods you need to know
beforehand
what kind of data is required so it's
really up to
the human demonstrator to think before
and okay that's the situations that
could arise
i need to collect data for that i trade
the system you test it out i figure it
doesn't work
i need to collect more data and that
typically is a couple of iterations that
are required for that
okay let's see i have a couple of
questions
nick asks if do you want to just ask
directly
sure so uh or um i actually the question
like so
do you assume that the human corrections
are more or less optimal or at least
like
work towards more reward but and then
and also what like what happens if the
human
correction is actually not helpful at
all so how can we
how can you take that into account
okay so we assume that the human
corrections
are at least somewhat helpful it's it's
not a problem if
they are wrong or the human changes his
or her mind
but if the human is just demonstrating
nonsense
then depending on which setting you talk
about
in the imitation learning if it's really
purely
learning based on these demonstrations
then obviously there is no chance
whatsoever then it's just going to to
learn whatever you teach it
in the other example i showed where we
combine it with reinforcement learning
it's very unclear what's going to happen
it really depends on how you set it up
the robot has the chance to
to learn it correctly based on the
reward you have
defined and then how much the human
input
hinders the robot is is a different
question
if it's at least something more or less
sensible
so it might not be the correct task but
showing something like you're not
supposed to move very erratically and
shaky but smoothly without
showing the actual task that might be
helpful you can come up with the
other scenarios where it's really going
to to harm the performance
okay luciano
yeah so um yes so my
questions goes a little bit i was
thinking this is like one step further
if you teach like a human teaching a
robot
and but assuming that the robot is
stitching another robot
and let's think about like i don't know
some center of a warehouse but of course
if the
same robot with same joints okay they
could just transfer
the learning but let's assume that the
robots are different as
we from humans are different from the
robots the robot is teaching another
robot that's a bit
different so then i'm thinking of course
that could scale up and then
some emergent phenomena could come there
because if you didn't really
learn this is 100 percent didn't teach
someone else
yeah so how do you see and there's some
uh alternatives you do you think like
more imitation learning
enforcement learning what how could we
tackle this kind of issues
yeah good question i don't
i don't have a good answer
[Music]
so it really depends on what you want to
teach or what you want to transfer um
so if you i'm going to get to that in a
second
uh so if you okay i'll just show that on
the slides because that nicely collects
to what you were asking
so what i was talking about so far was
teaching
in in the action space so directly in
this case in the seven joints of the
robot and then
saying which velocity for instance you
you should apply
verge is kind of very direct and low
level and probably allows you to very
much optimize the the movement however
it's typically pretty high dimensional
and at least for humans it's
non-intuitive connecting to your
question
it doesn't transfer to a different
embodiment at all
um well if you consider the state space
or the
um the task space
so for instance the position of the end
effector then that's something that you
could transfer so if you
know the trajectory of the end effector
then it doesn't really matter how
the kinematics of the robot as long as
it has kind of the same workspace
obviously yeah
um depending a bit on the constraints it
has but that's something that's a lot
easier to
to transfer and arguably that's also
a lot more intuitive and easier for for
a person to teach
that isn't familiar with the robot the
downside then is that you still need to
translate that into
the the movement of the robot so the the
actions
again i'm using some kind of inverse
kinematics so
dynamics models which might be as
themselves a bit tricky to learn
so to come back to the question
so if you have robots with different
embodiments
thinking about at which level of
abstraction you should to transfer is
definitely one
thing that that's done and so for
instance
transferring trajectories in end effect
then in joint space would help
and the
i mean the big advantage
robot to robot teaching would have is
you can work around the the
communication issue
a lot better so you have complete
knowledge
on both sides effectively
so i've i would say better ways of
doing that compared to what i was
presenting
here which really also focuses on okay
a human doesn't really know exactly what
he or she
should do it's they don't like to kind
of
be constantly tell they operate the
thing but
it's a lot more um pleasant if you only
need to jump in when something goes
wrong and occasionally give feedback
and you really care about how
how long it takes so um
for robot robot teaching you can
probably do a few
other things that
might be better so yeah you could
probably apply those methods i'm not
sure if they're the best
yeah okay thanks
okay and then how could we what did he
have how could we keep
human control yeah i was thinking like
if you teach but you don't teach like
hundred percent
uh the robots don't download 100 what
you mean
and then they teach someone a dozen 100
so you gotta
increase the gap as far as you go from
the original source
how is that called that stupid game
where you whisper in somebody's ears and
then you do the chain
yeah it's like a telephone game i think
right chinese chinese whispers
chinese whispers yeah exactly so you
would get something like that
indeed yeah
so yes i'm to be totally frank at the
moment very
much looking into
the question how can robots
be best taught by humans um
and it's a bit more on the algorithmic
side so we're looking into how
humans experience the teaching
uh but it's not like we're at the moment
looking into human robot interface
design so to say
um and then
what you're saying that adds no whole no
new additional layer of complexity on
top of that definitely very interesting
and
um we should get there
um okay what else
so wendelin asks do you want to unmute
and directly ask
sure so i was just like referring to the
discussion before um so if if
the robots or the first question is like
the robots still get the original
environmental reward right
yes in the reinforcement learning case
so
um would this just mean that if you have
a robot that trains another robot
after a while that other robot would
still
uh converge to the original task it just
made it take longer because the first
robot might have given him some bad
ideas
could be yeah
yes i mean that's the general question
in
all transfer learning uh approaches
when is it actually beneficial to to
transfer and
when is it better to to learn from
scratch
i mean like it's it's very different
from transfer learning
because in transfer learning you like
for example is very different from
invitation learning in imitation
learning
you don't know what the real task is but
in this case i think
like luciano's concerns
are should not be too hard
to disprove because like basically um
you still get the original task
so you can't really stray too far away
from that
yeah i agree i mean so i've been
presenting two different
things one some of them were purely
based on imitation learning so
supervised learning where you
don't have that and there kind of this
drift
could very well occur but but yeah i
agree
if you think about reinforcement
learning
and transferring the
the reward function as well then
yeah worst case is it takes longer to
to converge to that okay thanks
okay good
so
here um i was just introducing that
that we might actually want to teach the
robot
in in state space so in the end effector
space in this case
and in this drawing i have here that's
pretty obvious because that's something
that's
already provided more or less by the by
all robot manufacturers
but in some other cases that might
not be so obvious so um i have another
little example
here is there's my mouse here
is that the laser pointer uh on on the
robot
and it's trying to to write a letter on
a whiteboard just a
little fun task where we actually
don't know the model of the
the complete system and we actually need
to learn it at the same time
as learning the task um
or you could say before we do the
learning somebody
sits down for an hour half a day and
writes that down
in order to program the robot to
be able to be controlled in in
task space so in this case what i mean
by task space is
what you would like to do is to directly
control the
position of the laser pointer point on
on the
whiteboard and not care about how the
robot actually is supposed to
move and the approach
snehal came up with actually allows to
learn that simultaneously so learning
the
mapping from the task space to the robot
actions and actually how the the task is
supposed to do
again using yeah
using interactions and here that's
one of the initial
things where you still have lots of
corrections from the human teacher
in the top right it's just the zoomed
one and we
did some enhance the the laser pointer
and then here it
after 15 minutes it again learned to
do that and the last move you saw
here is the robot doing it
autonomously
so here's writing core the name of our
department
which nicely coincides with the first
three letters of the uh
the conference where he published it
okay um so so far what i was talking
about
was very much on the
level of directly controlling the
trajectories if you like
of the robot um
in the very first example i showed you
with the light bulb
unscrewing it was on a higher level
everywhere i'm
considering the the sequence
so for the for learning these kind of
things in a sequence
what you will quite often have is
different reference frames so that if
you say
you have an object and you attach a
reference frame to that
the robot can move relative to the
object rather than
moving in in world coordinates or in
absolute coordinates
which then very much helps to generalize
to
no matter where you put the object and
if you start having more
objects and then suddenly you get a
whole
lot of these reference frames or
coordinate
example would have to light bulb the
holder the um
the trash bin maybe volt coordinates
additionally
and then one of the challenges for the
robot is to figure out from the
demonstrations
which is actually the relevant
coordinate frame so should i move
relative to the light bulb or the
holder or should i actually use position
control or should i use force control
so here's another example of that
where the robot is picking up this mug
and we have two
coordinate systems one is relative to
this coaster and the other one is
relative to
the cup and now if we only give one
demonstration when the cup is on the
coaster
there's just not enough information in
the data we collect
for the robot to figure out what it's
supposed to do
you could put in some prior okay we're
going to touch the cup so that's
probably the one we're interested in
and then use that but
you can do that but that breaks down
pretty quickly
because these priors tend to be somewhat
specific
so now if you always have to cup
on top of the coast that's not a problem
it will work fine for
for either reference system because it's
always
just a little offset and no matter how
you describe the movement it's going to
result actually in the same position
now if you separate them like we have in
this example here
then suddenly that becomes interesting
and the robot needs to
decide on what to do if it's
purely based on the initial
demonstrations then there's no way it
can do it
other than flipping a coin if it can
ask for feedback or have interactions
with the teacher
then that some something that is easy to
resolve
and what we were additionally
considering here is like i described
before in some cases you actually don't
care about this ambiguity so it's fine
if you don't know which
of the two you want to use because it's
going to result in the same movement
and you don't want to bother the human
teacher if it doesn't really matter
anyhow
so the approach we came up detects
if the well obviously if there's still
multiple reference frames possible
if it's actually relevant in the
situation we're currently in
and then if it can't decide on its own
or
if it's actually relevant only then the
robot is going to
request feedback
okay so that's the corresponding video
here
um the demonstration was already given
so let me go back that was way too quick
so we demonstrated picking up the
cucumber moving it
to the lower tray here
so if it's in the same position
then that works fine if we move the
cucumber only that's also fine because
it's
we touched that and we know that it's
going to be relative to the cucumber
okay so then we can remove that
works fine now what happened is we
switched to two crates
now you have an ambiguity so are we
very interested in moving relative to
the world coordinates so the absolute
position somewhere here
or are we actually interested in in the
crate and what you're going to see now
is it starts moving and then
it stops because it's unsure which is
kind of
the trigger for the human teacher to
give feedback in this case it's a
haptic kind of little push in the right
direction
which then helps the robot to
disambiguate that
and from now on it actually knows okay
we're not interested in the
in the absolute position but in the
great position
uh so it discards that from its
decision tree so to say and from now on
has
learned how to act in these situations
okay you can make a little bit more
complex if you have multiple reference
frames
attached to the two sides of the object
of this box i mean and then again the
question is should we move relative to
that one that one or are we actually
interested in
the one that is bang in the middle
between those
yellow and red coordinate systems
so here you see a different form of
interaction wires and kind of
on screen yes no interface so i was
asking
the yellow one and giovanni said now
the apple one also know and then the
only remaining possibility was
something in the middle
so here's a one last task where we're
supposed to
stack boxes on top of each other and
based on that we also
did a little user study where the 12
people i think
uh just before the lockdown happens
um so here the task is just like
one specific couple of a box of tea bags
on top of
another one here you see
so on the kinesthetic demonstrations
and what we compared here is purely
doing it with these
demonstrations versus having
the the interactive corrections or the
robot actually asking for for feedback
if there's an ambiguous situation
and as you can already see if you've not
done that before
teaching it like that is
also not so easy time consuming and
annoying as well
so there was you need like six
demonstrations to
get all the combinations covered well if
we
do that with the interactive method here
you just get one initial demonstration
and then only in some cases
it's not going to know what to do and
i'm going to ask for
again feedback by by getting pushed
so that's significantly quicker to to
teach
for the human and to learn for for the
robots
and also in terms of mental demand
physical demand
temporal demands if we ask the
participants
they very much prefer to teach
in this interactive corrective fashion
rather than just giving a whole lot of
demonstrations
and yeah you can see that for all the
scores the
this lira which is the interactive
method is doing a lot better than
the kinesthetic teach in which is just
giving a whole lot of demonstrations
beforehand
and hoping that you covered all the
situations you
need to do okay
so that's the end of my presentation no
robotics presentation without a little
video clip from a movie this is from
real steel where you can really see
how teaching a robot might be child's
play
if you have the right interface and
that's
what i'm trying to work towards okay
so to sum up i'll just sum up quickly
and then i hope there are a few more
questions
um so i hope i could show you a few
examples on how
doing this teaching interactively and
intermittently
can help speed up robot learning i
showed you
a few different variants of using
imitation learning or combining
those demonstrations with the
reinforcement learning
and then like i was saying before
there's still a lot of open questions
how do humans like to teach
um and then especially for this audience
what i'm presenting is that actually
meaningful human control
yes you can teach the robot to
act like you want so it's it's a form of
human control but then still
there might be quite a few things
you're unsure about and how it's going
to generalize how
how it's going to react in in different
situations where you have not
taught the robot good
thank you very much jens for this really
interesting talk so yeah
the well the the questions are already
rolling in
if you want to ask your question please
go ahead
okay um hi uh thank you
okay hi okay so
that's where the talk was very
interesting um
and uh yeah my question goes uh do you
formally model
any of these interactions or uh learning
processes
um so
um my i just started my phd
on uh mental models uh
context and um
yeah so i'm looking for ways of modeling
this uh
team interaction to achieve a certain
goal with context awareness and all that
and this is very interesting because
this can also be seen as teamwork
uh if you are to achieve a goal
that both agree um so i was just
wondering if you
look at the uh at the knowledge-based
models
of any kind or if you're just looking at
more machine learning uh models and
results
um i'm not sure if you understand
what i mean not entirely sure
we'll see that's a discussion um
so we
modeled these interactions
in indeed more kind of as a machine
learning
approach in the reinforcement learning
things i showed for those people that
know reinforcement learning
the the human kind of interaction was
actually modeled as a form of
exploration to
be able to incorporate it in the
reinforcement learning
framework in the for the other things
it's effectively some kind of a switch
in the robot logic that tells it okay so
here was
an interaction uh so i need to treat
that
differently but we're not
so the robot itself knows about
its behavior or its movements where
it's sure or unsure about what it's
doing
so so in that sense it's modeled and
taken into account
but really the the interaction
is more kind of on on a human
level that we don't model that very
explicitly at the moment
but if you have ideas on on how to do
that and how that would be helpful
yeah i'm i'm wondering yeah uh for
example uh
because yeah so i just
saw for now um yeah so this moment when
the robot can detect that it's unsure
about the box for example it stops for
an interaction
and yeah i was wondering if you have
some kind of
if this is formally modeled in any way
so
that there's actions and states and it
reached that state and then
it stops and waits for the interaction
and then it goes back to
so how that's modeled this was more my
question maybe
so how is i'm not sure if you're talking
about the same kind of modelling
so what what it does internally is it
detects
that in this case okay i still have two
possibilities left
and the actions i would do according to
each of them would be contradictory so
in one case i would move to the left and
in the one case i
would need to move to the right so it
knows
that there's a problem and then there's
a
switch in the program that says okay now
i'm going to stop and reach
and request human feedback and once i
get the human feedback
and then hopefully it will allow me to
choose to correct one
and then the robot can continue
okay thank you i will also think a bit
more about it
yes it's on my website ikari
i can send it to you directly thank you
great ah this channel yeah
no one has i have a it's kind of a
follow-up on a previous question but
it's on the
on the point so i just first wanted to
understand if i got right what is it
what is the trigger for the robot on the
camera like to define
which we should ask for feedback
and this is yeah that's the so true part
that's the first part and the second
part would be
thinking about the context that the
robot would be teaching another robot
would be a similar point and say okay
now i want to intervene because i see
you're doing
something weird or
yeah so first is
what was the point how do you define
again the the point that the robot asks
for feedback
okay so
let's take this example again because
it's a bit easier i think
so what we already have is one
demonstration
and then we can represent
the movement in
two different ways one is relative to
this
coaster and one is relative to the cup
so we know we have two representations
of the movement and we don't know which
is
the correct one yet because we only have
a single demonstration
if we encounter the same situation again
maybe we just moved to two objects
together to a different location
and then we the robot checks okay how
would the movement look like if i move
relative to the cup and how would the
movement look like if i move relative to
the coaster and if those are very
similar
then you don't really care uh you don't
need
to ask for feedback because you can just
do it right
okay yeah now the other situation is
i separate the two objects and then
again if i predict how the movement is
going to look like if i move relative to
the coaster or relative to the cup
then you're going to discover okay those
are very different movements and they're
not compatible
yeah which and then you then the robot
is going to ask for
for feedback yeah yeah that's
yeah that's clear for me but my wonder
is like more about the intentions there
like when you had
this box with the cucumbers and the
tomatoes you don't know if you want like
the
a specific absolute position or the
other one so that's like
even though the coordinates everything's
still kind of the same but your your
is the intention for the human that you
don't really know
so so the intention in this case is also
modeled as
the uh as these reference frames if you
like so the intention
would be either move relative to the
cards uh the cucumbers move relative to
the world coordinates
okay so still as a reference frame yeah
yeah i understand that
so there could be if there's a deviation
for the reference frame that's a good
point of asking or giving
feedback as soon as there is some
deviation there yeah
so in this case we focused on on
reference frames because that's
something that occurs frequently and is
pretty visual
but i mean you could do similar things
also for
for other types of ambiguities so in
particular what i'm interested in
is uh force and position control which
is also
if you always have the same object
doesn't really matter what you do
because one
kind of results in the other and the
other way around
um so it's really about
predicting different possibilities and
figuring out if the
um contradictory which is
what we call an ambiguity
yeah that's great okay and do you all
see that
yeah okay thank you
thanks so you have another question
no i think that's it let me leave also
the opportunity for someone else to jump
in
so so just for my quick clarification
for myself
so basically so so you could say more or
less that you know one one
big issue in meaningful human control is
that what if we misspecified the
objectives of a robot or ai
then for instance this your this this
resolving ambiguities would be
one way like that the robot would
inherently stop and ask for feedback at
the moment when it's not sure about what
what
the objective should be is that a
correct would that be a correct
extrapolation of this work
i'm not sure in the sense that
if you misspecify something i'm not i
don't think
the method i presented will solve that
because i mean if you explicitly
specified then the robot is going to be
sure about it what you could do
additionally is that you allow
also so in the last part i presented was
effectively
robot driven questions or interactions
you and the first part was more about
human driven
interactions if you assume that you have
your misspecifications
and you allow human-driven interactions
so that the human can always jump into
to change or correct or something then
then yes that might be
a way to at least detect that something
is going wrong and
potentially also to fix it
thank you so uh we i think we have
question for
one more time for one more question uh
we're almost approaching two o'clock
yeah
if they're more questions shoot me an
email
great so
if there are no more curses indeed uh
yeah
i would like to thank you jens for this
this great and interesting talk it was
very uh very inspiring
uh and yes and uh also thank to all the
the audience for being here and
interacting with with us and the ends
and uh
yeah we'll see you next time
bye-bye thank you very much
yes thank you
stop recording |
bc4ee0d9-dfb3-49db-a31c-d1a980ab61fd | trentmkelly/LessWrong-43k | LessWrong | The Future of Nuclear War
Here I present a view on the future of nuclear war which takes into account the expected technological progress as well as global political changes.
There are three main directions in which technological progress in nuclear weapons may happen:
1) Many gigaton weapons.
2) Cheaper nuclear bombs which are based on the use of the reactor-grade plutonium, laser isotope separation or are hypothetical pure fusion weapons. Also, advanced nanotechnology will provide the ability to quickly build large nuclear arsenals and AI could be used in designing, manufacturing and nuclear strategy planning.
3) Specialized nuclear weapons like nuclear-powered space lasers, hafnium bombs and nuclear-powered space ships as kinetic weapons.
Meanwhile, the nuclear war strategy also has changed as the bipolar world has ended and as new types of weapons are becoming available.
The first strategy change is that Doomsday weapons for blackmail will become attractive for weaker countries which can’t use ICBM to penetrate the anti-missile defense of the enemies.
Secondly, the cheaper nukes will become available to smaller actors, who may be involved in “worldwide guerilla”. Cheaper nukes and a larger number of actors also encourage regional nuclear wars, nuclear terrorism and anonymous or false-flag warfare. This will result in disruption of social complexity and global food production shortages because of effects on climate.
The third change of the strategy is the use of nukes not against primary military targets but on other objects, which could increase their effects: nuclear power plants, supervolcanos, EMP, tsunamis, taiga fires for global cooling, and even more hypothetical things like asteroid deflection to Earth.
All these unconventional means and strategies of nuclear war could become available somewhere in the 21st century and may cause or contribute to the civilizational collapse and even human extinction.
Permalink: https://philpapers.org/rec/TURTFO-14
Introduction
W |
ec15738a-b56d-44c1-adaa-891811545427 | trentmkelly/LessWrong-43k | LessWrong | Excessive Nuance and Derailing Conversations
Overview
I've observed my interlocutors—and sometimes myself—applying excessive nuance to irrelevant points during some discussions. This misplaced nuance results in the derailing of conversations towards rabbit holes and dead ends.
High-Level Description
When I bring a new supporting but tangential idea into a discussion (e.g. mention a widely accepted scientific theory), my interlocutor applies excessive nuance to this new idea at the expense of the salient points in the discussion. This misplaced and excessive nuance takes the form of nitpicking or questioning the underlying framework underpinning that newly introduced idea.
This is rarely fruitful because almost always, none of the participants is at the cutting-edge of the relevant fields to add any new insights. I'm all for rational inquiry and open discourse, but nitpicking oftentimes comes across as pretentious and unconstructive.
The effect of this conversation style is the unchecked growth of the stack of topics. We often forget to go back down the stack to revisit the original points that instigated the discussion unless someone cares sufficiently to intentionally apply enough pressure to steer the conversation back down the stack. Such discussions usually quickly end up in rabbit holes, and no one gets anything out of them.
Example
Suppose that you are having a conversation about "what it feels like when your worldview is shattered." Your interlocutor is a fellow rationalist and mentions that they read a book promoting climate denial and describes to you what it felt like to almost have their worldview shattered by a professional motivated-skeptic and evidence-cherrypicker. They explain how difficult it is for us mere mortals to notice this black magic being applied to our minds and how easy it is to be deceived by a professional charlatan.
To add to this discussion, you contribute the idea that this feeling is similar going the other way. For example, a profoundly religious person reading a scien |
3b57d864-61a5-452b-8d48-9b0b158921a7 | trentmkelly/LessWrong-43k | LessWrong | Late 2021 MIRI Conversations: AMA / Discussion
With the release of Rohin Shah and Eliezer Yudkowsky's conversation, the Late 2021 MIRI Conversations sequence is now complete.
This post is intended as a generalized comment section for discussing the whole sequence, now that it's finished. Feel free to:
* raise any topics that seem relevant
* signal-boost particular excerpts or comments that deserve more attention
* direct questions to participants
In particular, Eliezer Yudkowsky, Richard Ngo, Paul Christiano, Nate Soares, and Rohin Shah expressed active interest in receiving follow-up questions here. The Schelling time when they're likeliest to be answering questions is Wednesday March 2, though they may participate on other days too. |
a2e4f7f0-27c1-4f87-99fe-d179f51d323c | trentmkelly/LessWrong-43k | LessWrong | Instrumental Rationality 1: Starting Advice
Starting Advice
[This is the first post in the Instrumental Rationality Sequence. It's a collection of four concepts that I think are central to instrumental rationality—caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations.
Note that these essays are derivative of things I've written here before, so there may not be much new content in this post. (But I wanted to get something out as it'd been about a month since my last update.)
My main goal with this collection was to polish / crystallize past points I've made. If things here are worded poorly, unclear, or don't seem useful, I'd really appreciate feedback to try and improve.]
In Defense of the Obvious:
[As advertised.]
A lot of the things I’m going to go over in this sequence are sometimes going to sound obvious, boring, redundant, or downright tautological. This essay is here to convince you that you should try to listen to the advice anyway, even if it sounds stupidly obvious.
First off, our brains don’t always see all the connections at once. Thus, even if some given advice is apparentlyobvious, you still might be learning things.
For example, say someone tells you, “If you want to exercise more, then you should probably exercise more. Once you do that, you’ll become the type of person who exercises more, and then you’ll likely exercise more.”
The above advice might sound pretty silly, but it may still be useful. Often, our mental categories for “exercise” and “personal identity” are in different places. Sure, it’s tautologically true that someone who exercises becomes a person who exercises more. But if you’re not explicitly thinking about how your actions change who you are, then there’s likely still something new to think about.
Humans are often weirdly inconsistent with our mental buckets—things that logically seem like they “should” be lumped together often aren't. By paying attention to even tautological advice like this, y |
9d652ec6-bb3b-4d8e-8cd1-1bbae0c21602 | trentmkelly/LessWrong-43k | LessWrong | Doing oversight from the very start of training seems hard
TLDR: We might want to use some sort of oversight techniques to avoid inner misalignment failures. Models will be too large and complicated to be understandable by a human, so we will use models to oversee models (or help humans oversee models). In many proposals this overseer model is an ‘amplified’ version of the overseen model. Ideally you do this oversight throughout all of training so that the model never becomes even slightly misaligned without you catching it.
You can’t oversee on a close to initialized model because it’s just a random soup of tensors. You also can’t use this close to initialized model to help you do oversight because it’s too dumb.
We will probably need to do some amount of pretraining to make our models good enough to be interpreted and also good enough to help with this interpreting. We need to ensure that this pretraining doesn’t make the model capably misaligned.
----------------------------------------
When we train powerful AI models, we want them to be both outer aligned and inner aligned; that is trained on the correct objective and for them to also properly learn that objective. Many proposals for achieving both outer and inner alignment look like an outer alignment proposal with some kind of oversight strapped on to deal with the inner alignment. Here ‘oversight’ means there is something with access to the internals of the model which checks that the model isn’t misaligned even if the behavior on the training distribution looks fine. In An overview of 11 proposals for building safe advanced AI, all but two of the proposals basically look like this, as does AI safety via market making.
Examples of oversight techniques include:
* Transparency tools (either used by a human, an AI, or a human assisted by an AI)
* Adversarial inputs (giving inputs which could trick a misaligned AI into revealing itself)
* Relaxed adversarial training (which could be seen as an extension of adversarial inputs)
Oversight loss
We can use these o |
9389daba-a1e2-4884-8b01-3e53db308649 | trentmkelly/LessWrong-43k | LessWrong | E.T. Jaynes and Hugh Everett - includes a previously unpublished review by Jaynes of a published short version of Everett's dissertation
E.T. Jaynes had a brief exchange of correspondence with Hugh Everett in 1957. The exchange was initiated by Everett, who commented on recently published works by Jaynes. Jaynes responded to Everett's comments, and finally sent Everett a letter reviewing a short version of Everett's thesis published that year.
Jaynes reaction was extremely positive at first: "It seems fair to say that your theory is the logical completion of quantum theory, in exactly the same sense that relativity was the logical completion of classical theory." High praise. But Jaynes swiftly follows up the praise with fundamental objections: "This is just the fundamental cause of Einstein's most serious objections to quantum theory, and it seems to me that the things that worried Einstein still cause trouble in your theory, but in an entirely new way." His letter goes on to detail his concerns, and insist, wtih Bohm, that "Einstein's objections to quantum theory have never been satisfactorily answered.
The Collected Works of Everett has some narrative about their interaction:
http://books.google.com/books?id=dowpli7i6TgC&lpg=PA261&dq=jaynes%20everett&pg=PA261#v=onepage&q&f=false
Hugh Everett marginal notes on page from E. T. Jaynes' "Information Theory and Statistical Mechanics"
http://ucispace.lib.uci.edu/handle/10575/1140
Hugh Everett handwritten draft letter to E.T. Jaynes, 15-May-1957
http://ucispace.lib.uci.edu/handle/10575/1186
Hugh Everett letter to E. T. Jaynes, 11-June-1957
http://ucispace.lib.uci.edu/handle/10575/1124
E.T. Jaynes letter to Hugh Everett, 15-October-1957 - Never before published
https://sites.google.com/site/etjaynesstudy/jaynes-documents/Jaynes-Everett_19571015.pdf?
Directory at Google site with all the links and docs above. Also links to Washington University at St. Louis copyright form for this doc, Everett's thesis, long and short forms, and Jaynes' paper (the papers they were discussing in their correspondence). I hope to be adding the fin |
9dfe4689-7f48-458f-ae2c-949732a299cb | trentmkelly/LessWrong-43k | LessWrong | Meetup : Urbana-Champaign, Illinois Games/Discussion
Discussion article for the meetup : Urbana-Champaign, Illinois Games/Discussion
WHEN: 15 September 2013 02:00:00PM (-0500)
WHERE: Illini Union North Lounge 1401 W Green St Urbana, IL 61801
Moving to the North Lounge because last time the South Lounge was unexpectedly scheduled for a private event, but the North Lounge is (as far as I know) always public. I will have Wits and Wagers, Zendo, and Pandemic. Cross posted on the mailing list.
Discussion article for the meetup : Urbana-Champaign, Illinois Games/Discussion |
12b97a1e-6f4f-40df-ae15-a776be4121ad | trentmkelly/LessWrong-43k | LessWrong | [LINK] Latinus rationalior ist.
http://mappingignorance.org/2014/02/03/mandela-was-right-the-foreign-language-effect/
Summary: Across the board, people are less prone to cognitive bias in a non-native language.
Conclusion: If all important discourse was conducted in Latin, or any other language native to no one, people would make better decisions.
Corollary: All the attempts to make a constructed "scientific language" actually could have worked relatively well, for reasons entirely unconnected to the painstaking scientific structure of the languages. |
1e9083ae-aa14-48e6-b90f-b78316eff4e5 | trentmkelly/LessWrong-43k | LessWrong | What is hope?
At first it seems like a mixture of desire and belief in a possibility. It’s not just desire because you can ‘have your hopes too high’, though the hoped for outcome is well worthy of desire, or ‘abandon hope’ when something reaches some level of unlikelihood. But hope is also not linked to a particular level of chance. It implies uncertainty about the outcome, but nothing beyond that.
Is it a mixture of significant uncertainty and a valuable outcome then? No, you can consider something plausible and wonderful, but not worth hoping for. Sometimes it is worse to hope for the most marvelous things. No matter how likely, folks ‘don’t want to get their hopes up’ or ‘can’t bear to hope’ .
So there is apparently a cost to hoping. Hopes can bring you unhappiness if they fail, while another possibility with similar chances and desirability which was not hoped for would cause no distress. So hope is to do with something other than value or likelihood.
A hope sounds like a goal which you can’t necessarily influence then. Failing in a goal is worse than failing in something you did not intend to achieve. A hope or a goal seems to be particular point in outcome space where you will be extra happy if it is reached or surpassed and extra unhappy otherwise. We seem to choose goals according to a trade-off of ease and desirability, which is reminiscent of our seemingly choosing hopes according to likelihood and desirability. Unlike hopes though, we pretty much always try harder for goals when the potential gains are big. This probably makes sense; trying harder at a goal increases the likelihood of success, whereas hoping more does not, yet still gives you the larger misery of failure.
Why hope at all then? Why not just have smooth utility functions? Goals help direct actions, which is extremely handy. Hopes seem to be outcomes you cheer for from the sidelines. Is this useful at all? Is it just a side effect of having goals? Is it so we can show others what would be our goals i |
6cd18863-a5de-42e6-9f19-5e5b4865af27 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Announcing AI Alignment workshop at the ALIFE 2023 conference
The upcoming [ALIFE 2023 conference](https://2023.alife.org/) is hosting a workshop on AI Alignment and Artificial Life. This will complement the [special session](https://humanvaluesandartificialagency.com/) on AI Alignment at the conference. The workshop will include presentations and a discussion panel from established AI Alignment and ALife researchers, exploring the overlap between these two fields.
**Please see the workshop website for more information:**<https://humanvaluesandartificialagency.com/workshop/>
### Logistics
**Date**: Friday 28th July, 2023
**Venue**: Hybrid: Sapporo Japan and remote
Attending remotely requires paying the [remote registration fee](https://2023.alife.org/registration/). The workshop organisers are offering a limited number of bursaries for anyone who cannot afford the registration fee. See the [registration page](https://humanvaluesandartificialagency.com/registration/) on the workshop website for details.
### What is Artificial Life?
Artificial Life (ALife) is a field which studies the properties of life and how the processes of living systems can be recreated synthetically.
Many of the ideas and concepts from AI Alignment overlap with those in ALife. In particular, questions about autonomy, agency and goal-directedness in artificial systems.
Please don't hesitate to contact me (Rory) with any questions. |
55e2a67b-a2bb-46f8-9757-f4cec552e83e | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In
**1) Introduction to the Systemic Cascading Lens**
==================================================
Systems thinking and complexity science are largely overlooked and unexplored in the EA community – leading some EA thinkers to think highly linear causal chains or heavily simplified models can be used to validate/invalidate claims. Any system of thought or epistemic training that relies on “n-th order effects” may therefore be largely disregarded and not taken seriously in the community.
*A common objection within the community goes: I’m uncomfortable with a volatile logical linkchain between A to B to C to D – these interlinkages can be tenuous and unlikely at best. Each additional Fermi estimate makes this whole claim more unlikely.* Toby Ord wrote similarly that he preferred *narrow* over *broad* interventions because they can be targeted and thus most immediately effective without relying on too many casual steps. This mirrors the epistemic the EA community embraces – one of evidence-based logic and empirical models – which leans heavily on linear, quantifiable, direct effects.
I’d contend that conceptualizing certain complexity risks as a linear linkchain is a flawed visualization, as it only accounts for one possibility and no complexity effects; the correct one is to view our systems and societies as a graph with nodes and edges. A given *systemic cascading* risk shocks a subset of nodes – causing ripple effects through to other nodes and non-linear butterfly effects that cascade throughout the system. The damage (or recovery) done is subsequently proportional to the resiliency and adaptability of those systems.
***My critique is that EA at large has failed to adequately connect complexity effects with longtermist cause area ranking & provide resolute, tractable solutions to such problems.*** As qualitative experience, development theories, as well as academic analyses show us – sole linear causation is highly unlikely, and overlooking higher-order effects (second-order consequences) have led to a multitude of failed forecasts and policies. Thus, in this post, I hypothesize that it will be effective for the community to consider and map *systemic cascading risks*, rather than focusing solely on *direct existential risk mitigation.*
——————
In this forum post, I aim to reintroduce a phrase – *systemic cascading risk* – with the hope of turning it into common EA parlance, meant to easily articulate and disseminate complexity-based critiques and ideas in a concise, understandable manner. I believe the term *systemic cascading risk* acts as an applicable empirical framework to a) critique EA cause area prioritization and broader paradigms of thinking through a complexity lens and b) identify tractable ways for the EA community to address these criticisms.
I’ll start by building up the individual pieces – definitions & context for systemic cascading risks, showing how they cascade through sociopolitical and economic systems, the tractablity of targeted hedges against such risks – and end with my fundamental argument: how these risks are relevant to longtermist efforts & affect values lock-in. Afterwards, I have some further thoughts on systemic cascading risk in EA epistemics that are not fully formed but I think lend further credence to this idea and give you more reasons to think through this lens (that you may not be convinced by, but are worth investigation).
\*To clarify, I do not intend to discuss *systemic cascading GCRs*, but rather the broader category of risks of which systemic cascading GCRs may act as a tail-end example of – with the goal of *generally* *critiquing effective altruist cause prioritization and attitudes toward these risks*.
**2) Defining Systemic Cascading Risk**
=======================================
**Cascading** risks are characterized by significant n-th order effects that are often difficult to project with high certainty, yet significantly shape the equilibria and dynamics of social, economic, and political systems in potentially volatile manners. The focus throughout this essay will be on cascading risks that can still be forecasted with reasonable probability.
The **systemic** lens is a triple entendre:
* *first*, aiming to understand the social, economic, and institutional systems (e.g. supply chains) that compose our society and their vulnerabilities to build them to be more resilient;
* *second*, a financial risk-derived definition of *systematic* risk that is inherent to the whole economy, rather than one tied to the performance of an individual actor;
* *third*, a call to action to undergo meta-level societal changes and question more fundamental economic and social structures in society, rather than assuming those structures to be static and unchangeable.
A **systemic** **cascading** risk, therefore, is a trigger in a complex network – causing cascades of unpredictable or undesirable events.
**3) Political and Economic Systems & Risks**
=============================================
### **Cascading Pandemic Risk: A Case Study in COVID**
Beyond the estimated 3 million excess deaths[[1]](#fn-LChoAGdbYSpxFJXDi-1) attributed to COVID, the economic systemic effects due to COVID – the worst economic downturn since the Great Depression[[2]](#fn-LChoAGdbYSpxFJXDi-2) – revealed key vulnerabilities in supply chains as well as gaps in education, employment, housing, agriculture, and healthcare systems. Furthermore, the household economic stresses contributed to political extremism and the undermining of perceived institutional legitimacy.
**For instance, global food systems demonstrated a failure mode**, with supply chain deficiencies resulting in average global cereal prices increasing 27.3% over a year.[[3]](#fn-LChoAGdbYSpxFJXDi-3) This resulted in soaring debt and instabilities that tested fragile states relying on imports – for example, in:
* ***Tunisia**,* where an associated 30-50% increase in fertilizer costs and the largest deficit in 40 years drove food prices to levels not witnessed since the Arab Spring, resulting in a new wave of anti-government protests emerging and a Presidential suspension of democratic rule of law was instituted based on a controversial interpretation of Article 80 in the constitution[[4]](#fn-LChoAGdbYSpxFJXDi-4)[[5]](#fn-LChoAGdbYSpxFJXDi-5)[[6]](#fn-LChoAGdbYSpxFJXDi-6)
* ***Lebanon***, where 77% of households said they didn’t have enough food or enough money to buy food[[7]](#fn-LChoAGdbYSpxFJXDi-7)
* ***Bangladesh***, where the proportion of rural households facing moderate or severe food insecurity rose from 15% in early 2020 to 45% in Jan. 2021;[[8]](#fn-LChoAGdbYSpxFJXDi-8)
* In households across ***Algeria*** and ***Morocco***, where the price of soft wheat saw a 22% year-on-year increase[[9]](#fn-LChoAGdbYSpxFJXDi-9)
* ***Globally**,* a 70% increase in extreme hunger[[10]](#fn-LChoAGdbYSpxFJXDi-10)
**A sudden loss of income also contributed to political polarization** – for instance, in:
* ***Colombia*** as the number of people in extreme poverty grew by 3.6 million people, prompting protests and a violent government response[[11]](#fn-LChoAGdbYSpxFJXDi-11)
* ***Spain***, where unemployment being pushed to 15% resulted in violent, widespread protests and Vox (far-right political party) was able to capitalize on polarization[[12]](#fn-LChoAGdbYSpxFJXDi-12)
* ***South Africa***, where “Zuma riots,” fueled in a large part by economic inequality and distress, resulted in the worst violence in the country since the end of apartheid (354 deaths, 5500 arrests)[[13]](#fn-LChoAGdbYSpxFJXDi-13)
While COVID-19’s cascades were met with a recovery through fiscal stimulus through to the end of 2021, global debt levels have reached 320% of global GDP, which could trigger defaults around the world.[[14]](#fn-LChoAGdbYSpxFJXDi-14) The K-shaped recovery may also exacerbate existing inequality, contribute to political polarization, and undermine economic resiliency – *especially* when combined with the aftermath of the Russia-Ukraine war supply chain shock.
### **Cascading Conflict Risk: A Case Study in the Russia-Ukraine War**
**This year, the Russian invasion of Ukraine also served as a catalyst for food inflation, revealing multiple vulnerabilities in our global trade systems and demonstrating sector-dependent cascades.** Given the magnitude of the shock – Russia and Ukraine together accounted for almost a third of the world's export of wheat & barley and 75% of sunflower oil exports – the World Bank warns of historically high food insecurity and inflation levels through the end of 2024[[15]](#fn-LChoAGdbYSpxFJXDi-15), which has already hit certain import-dependent countries quite hard.[[16]](#fn-LChoAGdbYSpxFJXDi-16)
* ***Egypt*** (the world’s largest importer of wheat) received around 85% of their wheat imports and 73% of its sunflower oil imports from Russia and Ukraine; the war resulted in 44% increases in wheat prices and 32% increases in sunflower oil prices overnight.[[17]](#fn-LChoAGdbYSpxFJXDi-17) Food inflation continues at 24.8%, especially in grains and vegetable oils (June 2022).[[18]](#fn-LChoAGdbYSpxFJXDi-18)
* At the onset of the war, the cost of minimum food needs spiked to 351% in ***Lebanon***, 300% in ***Somalia***, 97% in ***Syria***, 81% in ***Yemen***.[[19]](#fn-LChoAGdbYSpxFJXDi-19)[[20]](#fn-LChoAGdbYSpxFJXDi-20)
* ***Kenyan*** prices for staple goods have risen (e.g. maize flour rose 15.3 percent) as locust infestation, climate change, inflation, and the Russia-Ukraine war coincide.[[21]](#fn-LChoAGdbYSpxFJXDi-21)
* West African inflation rates include 30% in ***Ghana***, 22.4% in ***Sierra Leone***, 18.6% in ***Nigeria***, and 15.3% in ***Burkina Faso***.[[22]](#fn-LChoAGdbYSpxFJXDi-22) The number of hungry people there has quadrupled since 2019.[[23]](#fn-LChoAGdbYSpxFJXDi-23)
* ***Moroccan*** poverty is projected to increase by 1.1-1.7% due to inflationary effects.[[24]](#fn-LChoAGdbYSpxFJXDi-24)
Certain government responses, aimed at ensuring domestic food stability, have worsened the crisis. Because of an international increase in oil prices, ***Indonesia*** implemented a temporary palm oil export ban, followed by a ***Malaysian*** export ban on chicken in June.[[25]](#fn-LChoAGdbYSpxFJXDi-25) A heatwave subsequently hit ***Indian*** crop yields, resulting in an Indian wheat export ban.[[26]](#fn-LChoAGdbYSpxFJXDi-26) ***Argentina***, at a historic 69.5% inflation rate, restricted placed caps on exports of corn and wheat.[[27]](#fn-LChoAGdbYSpxFJXDi-27)
This coincided with rising gas prices resulting in widespread EU fertilizer plant shutdowns, further threatening food stability.[[28]](#fn-LChoAGdbYSpxFJXDi-28) As of June, global maize and wheat prices were 42% and 60% higher, respectively, compared with January of last year.[[29]](#fn-LChoAGdbYSpxFJXDi-29)
Under pressure from systemic issues and cascading food prices since the beginning of the COVID pandemic, ***Sri Lanka*** faced its worst economic crisis since its founding and depleted foreign currency reserves, leading to a historic debt default. The 2022 Sri Lankan protests continue on – as of the time of this article’s publishing, all 26 members of the cabinet have resigned, protestors occupied the President’s house and sacked the Temple Trees (PM’s residence), and President Gotabaya Rajapaksa was ousted. The new government has reacted by declaring a state of emergency, imposing a curfew, restricting social media access, and violently repressing protests.[[30]](#fn-LChoAGdbYSpxFJXDi-30)[[31]](#fn-LChoAGdbYSpxFJXDi-31)
Pandemics and conflicts have the potential to cause great cascading socioeconomic effects, and it is unclear how the world would react to a similar systemic cascading risk – either in the form of another conflict/pandemic risk, or in the form of climate change’s short-term effects through to 2050.
### **Cascading Environmental Risk: A Case Study in Climate Change, through to 2050**
Let’s observe climate change’s cascades on society.
A subset of the projected direct sociopolitical & economic impacts of climate change – within the next 30 years – are:
* ***Refugees**:* ~216 million climate refugees by 2050 ([World Bank Groundswell Report](https://www.worldbank.org/en/news/press-release/2021/09/13/climate-change-could-force-216-million-people-to-migrate-within-their-own-countries-by-2050)) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns
* ***Water crises:*** at least ~5 billion people total living in moderate water stressed areas & up to ~3 billion people under conditions where water requirements exceed managed surface water supply by 2050 ([MIT IGSM-WRS](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2014EF000238))
* ***Sea level rise displacement:*** Sea level rise displacing ~150 million by 2050 ([CoastalDEM](https://www.nature.com/articles/s41467-019-12808-z))
* ***Food supply chain disruptions:*** Significant reduction of yields of essential crops in Africa and the Middle East
+ Volatility in food yields – e.g. by 2030, maize crop yields may decline ~24% while wheat yield may increase ~17% ([NASA](https://www.nasa.gov/feature/esnt/2021/global-climate-change-impact-on-crops-expected-within-10-years-nasa-study-finds)).
+ There will likely be enough aggregate food in theory, taking into account technological progress; it is a matter of whether our agriculture and trade systems will be able to adapt and deliver food to where it is required.
+ A potential multi-breadbasket failure still poses significant risk to food stability internationally.[[32]](#fn-LChoAGdbYSpxFJXDi-32)
* ***Inflation:*** Increases in food, water, and real estate prices; supply chain disruptions due to natural disasters[[33]](#fn-LChoAGdbYSpxFJXDi-33)
* ***Poverty:*** ~130 million additional people by 2030 ([World Bank](https://blogs.worldbank.org/climatechange/when-poverty-meets-climate-change-critical-challenge-demands-cross-cutting-solutions))
These act as the relevant, first-order, modelable stressors – ones that are notably much larger than COVID or the Russia-Ukraine War. As a systemic cascading phenomenon, climate change is a *butterfly effect that shocks a subset of nodes in our societal graph.*
This can potentially cascade into ***increased political extremism***, ***violent civil conflict***, and ***social tension***, as previous literature has established. Weather volatility drives increased civil conflict[[34]](#fn-LChoAGdbYSpxFJXDi-34) – and historically, when rainfall patterns are significantly below normal, the chance of a low-level conflict escalating to a civil war doubles the following year.[[35]](#fn-LChoAGdbYSpxFJXDi-35) A fragile peace that would have succeeded otherwise can be disrupted by shortages, nationalism, and political extremism.
In the future, it is estimated that 46 countries (~2.7 billion people) will face a high risk of violent conflict due to climate change, and a further 56 countries (~1.2 billion people) face potential politically destabilizing effects.[[36]](#fn-LChoAGdbYSpxFJXDi-36) To illustrate what these cascades look like, in the present day:
>
> Notably, the Indian subcontinent, North Africa, and the Middle East have *already* faced water scarcity – and we can map their current geopolitical effects. [Dam-building](https://www.reuters.com/article/us-mekong-river-diplomacy-insight/water-wars-mekong-river-another-front-in-u-s-china-rivalry-idUSKCN24P0K7) has been the source of escalating disputes in the Mekong River basin. The [Nile Basin](https://www.foreignaffairs.com/articles/africa/2020-08-10/nile-be-dammed) has been home to diverging interests between upstream and downstream countries – especially as upstream [Egypt](https://cee.mit.edu/egypt-could-face-extreme-water-scarcity-within-the-decade-due-to-population-and-economic-growth/) is projected to use more water than the Nile supplies. In [Libya](https://carnegieendowment.org/2022/02/24/cascading-climate-effects-in-middle-east-and-north-africa-adapting-through-inclusive-governance-pub-86510), the threat of cutting off water infrastructure is leveraged by violent militias against rivals. Turkey has [historically](https://apnews.com/article/28a8bfe6019f673e318a40035940b2fb) and [recently](https://www.dw.com/en/syria-are-water-supplies-being-weaponized-by-turkey/a-56314995) weaponized water as leverage against Syria and Iraq. [Yemen’s](https://www.atlanticcouncil.org/blogs/menasource/an-update-on-yemen-s-water-crisis-and-the-weaponization-of-water/) water scarcity fuels its political insecurity and crisis. Furthermore, the cost of water is likely to increase – and in [Pakistan](https://www.npr.org/sections/goatsandsoda/2018/09/10/645525392/for-karachis-water-mafia-stolen-h20-is-a-lucrative-business) and [India](https://www.bbc.com/news/world-asia-india-33671836), precursors to water mafias have already begun to spring up as organized crime groups trade, hoard, and steal water on the black market.
>
>
>
In a [previous forum post](https://forum.effectivealtruism.org/posts/cznjCG2jzgrtjryhd/should-longtermists-focus-more-on-climate-resilience), I expressed that beyond third-world civil conflicts, I was especially worried about far-right and politically extreme governments being elected in the first world – the Orban, Bolsonaro, Trump, or Le Pen-esque figures that can shake things up – because of economic, social (e.g. anti-refugee radicalism), and inflationary pressures from climate change. This volatility can be extraordinarily dangerous for international norms & politics.
### **Cascading Cybersecurity Risk**
Theoretically, increased conflict and social tension can result in coordinated cyberattacks that create a cascading effect on societal systems – especially given interdependencies in computer-based critical infrastructure systems as well as the asymmetric nature of cyberwarfare.[[37]](#fn-LChoAGdbYSpxFJXDi-37)
Therefore, conflict or geopolitical tension itself is far from just a cascade; it can also serve as triggers for other systemic risks (e.g. fragility of network infrastructure) to cascade further.
**4) Countering Cascading Systemic Risk**
=========================================
**Institutional resilience is the generalization of the solution to systemic cascading risks.** Our system has rarely been stressed this much before, yet the 21st century is revealing how uniquely interconnected, interdependent, and vulnerable our supply chains are. There likely exist many other systemic cascading stressors outside of just climate, pandemics, conflict, and cyber risks (as less serious case studies, like the 1997 Asian or 2008 American financial crises, highlight). All of these can all be tractably mitigated in tandem by improving system fragility – e.g. by tracking and ensuring the necessities of the commodities to live.
### **Securing the Nexus of What Modern Societies Require for Survival**
**Food, water, energy, and infrastructure (where housing falls under infrastructure) form the nexus of what societies require for survival**, giving us a comprehensive framework to target resiliency interventions toward.
In the absence of these necessities, political instability cascades in a far greater effect. Examples include ***Egyptian*** and ***Moroccan*** bread riots in 1977 and 1984, ***Jordanese*** protests in 1989, as well as the ***Arab Spring*** in 2011. Historically, volatile political risks often cascade through decreases to standard of living and household economic stresses.
However, currently – as an example – most private and public *climate capital* goes toward *prevention*, not toward *adaptation* & resiliency measures (which only make up ~5% of all climate finance).[[38]](#fn-LChoAGdbYSpxFJXDi-38)[[39]](#fn-LChoAGdbYSpxFJXDi-39) Even most EA paradigms for addressing climate change fall under the former (including the [80,000 Hours page](https://80000hours.org/problem-profiles/climate-change/#top) and [Founder’s Pledge’s Climate Change Fund](https://founderspledge.com/funds/climate-change-fund)). This leaves a (relatively) neglected gap in the climate ecosystem – one focused on institutional resilience and mitigating climate change’s systemic cascading impacts through food, water, energy, and infrastructure-focused interventions.
There is potential for great entrepreneurial & non-profit-focused efforts focused around securing this nexus – including resilient & emergency foods[[40]](#fn-LChoAGdbYSpxFJXDi-40), drought monitoring and resilience[[41]](#fn-LChoAGdbYSpxFJXDi-41), climate vulnerability analyses on supply chains, and building cost-effective quickly-deployable refugee shelters. In the private sector, providing tailored climate data to agriculture companies, supply chain managers, governments, and beta investor activists[[42]](#fn-LChoAGdbYSpxFJXDi-42) can facilitate greater resiliency investment and preparedness. In addition to resilience, increasing substitutes means more options to hedge against inflationary forces – analogous to having multiple nodes for cascades to disperse through to distribute the momentum of a stressor (*redundancy*).
This can prove a very tractable field to work on. For example, the [World Bank’s Groundswell report](https://www.worldbank.org/en/news/press-release/2021/09/13/climate-change-could-force-216-million-people-to-migrate-within-their-own-countries-by-2050) finds that *adaptation* development, when developed alongside other *prevention* efforts, can reduce the scale of climate migration by up to 80% – potentially greatly increasing global stability.
**Furthermore, there is a great synergistic effect to mitigating cascading systemic risks: the ability to ***work on them all in tandem*****. Resilient foods, scenario forecasting models, and fast shelter construction techniques can be recycled for conflict, pandemic, and climate resilience alike – because these risks are fundamentally three sides of the same coin.
### **Modeling & Understanding Global Supply Chains Related to This Nexus**
**Crucial to informing effective resiliency efforts is building a solid understanding of global supply chains, industry sectors, key interdependencies, political & economic systems, and historical cascades** - to create a model of the nodes & edges at play.
Scenario analysis around pandemics, conflicts, and climate risks (as well as any other plausible systemic cascading risks I’ve left out) would also greatly assist resiliency efforts, enabling us to understand realistic cascading effects and target interventions to hedge against cascading risks that may happen at a reasonable probability.
Accurate quantitative models are necessary to inform that understanding – however, mapping supply chains can prove quite a difficult task due to its required comprehensiveness and complexity in datasets and modeling. Perfectly modeling all factors affecting wheat flows, for instance, wouldn’t just entail calculating domestic production, consumption, imports, and exports; it’d require tracking the turtles all the way down: water scarcity, drought risk, land usage, fertilizers, seeds, and chemicals – which in turn are affected by energy prices, phosphate production, and so on.
However, there are already datasets and models available to do some of this work, including public repositories from [UN Comtrade](https://comtrade.un.org/), [UN FAOSTAT](https://www.fao.org/faostat/), [IMF Macroeconomic & Financial Data](https://data.imf.org/?sk=388dfa60-1d26-4ade-b505-a05a558d9a42), [World Bank DataBank](https://databank.worldbank.org/), and [CEPII BACI](http://www.cepii.fr/CEPII/en/bdd_modele/bdd_modele_item.asp?id=37) that can inform cascading economic impacts. Furthermore, global climate and trade models – such as ALLFED’s Integrated Model[[43]](#fn-LChoAGdbYSpxFJXDi-43), built off UN FAOSTAT data – can be interlinked with other models, allowing for accurate projection of systemic cascading effects through networks.
It is unclear to me whether a model that is too comprehensive in mapping trade flows may pose an info hazard by enabling malicious, targeted attacks. This seems plausible (even with publicly available data) and worth consideration.
### **Governmental Actions**
**From a governmental perspective, I see three immediately obvious actions**:
1. Facilitating investment in resilient supply chains
2. Policies to reward/incentivize private actors toward redundancy – e.g. just-in-time production arose out of profit incentives that favored efficiency over resiliency
3. Improving coordination between actors to respond to crises
**Governments are in a unique position to leverage existing resources and modify system structures.** Incentivizing systemic cascading risk paradigms & long-term thinking through governance can also help facilitate resilience investment: i.e. policies that require accurate models of climate risk from central banks to inform risk analysis, tailoring private shareholder incentives to care more about long-term firm viability, etc.
### **Why not use the phrase “improving institutional decision-making”?**
I believe the current closest EA catchphrase, “[improving institutional decision-making](https://80000hours.org/problem-profiles/improving-institutional-decision-making/#top),” is too vague of a phrase to use as a substitute to identify tractable supply chain resiliency solutions. This is because in its current form, this cause area is so general it encompasses many meta-level decision-making problems – from democratic reform to forecasting elections to reducing biases.
*“Systemic cascading risk” is specific, relevant, and clear.* The stability of systems rests on institutions, supply chains, and the necessities to life. Poverty, food insecurity, unemployment, & weak governance drive instability and cascading risks. A systemic cascading lens hedges against possible cascades – *limiting crises to localized events rather than international political catastrophes.*
**5) Significance to Longtermism**
==================================
### **Current Epistemic Prioritization of Tail Risk**
A lot of the risks I pose above are viewed as direct existential risks, but not as cascading stressors – leading to the prevailing internal view of the EA community being that the tail end of these problems is most significant.
For example, climate change may not be a direct existential risk. Only recently, however, has there been a shift in thinking of climate change within the EA community – from an unlikely, unimportant tail-end direct risk to a possible [*existential risk multiplier*](https://80000hours.org/problem-profiles/climate-change/), viewing climate change as leading to stressors which increase instability on fragile global systems.
Currently, in the community, pandemic and conflict risks are often framed the same way – e.g. whether a conflict could *cause nuclear war*, rather than whether a conflict could cascade across systems and make them more fragile and susceptible to other compounding risks.
Overreliance on linear models and ways of thinking prioritizes tail risk – resulting in an epistemic attitude may undervalue the importance of systemic cascading effects.
### **Cascades on Technology Development**
Longtermist cause areas tend to be united in a focus on technology development – and a worry of emerging biotechnology or artificial general intelligence being *misaligned* or *misused* in some way. Therefore, longtermism tends to promote the most direct interventions: *technology safety research* and *technology governance interventions*.
**However, on a larger scale, it makes sense to ensure a stable political environment as these technologies mature.** Climate sociopolitics, pandemic sociopolitics, conflict sociopolitics, governmental norms, and sociotechnical values can act as multiplying forces, clearly and irreversibly affecting the development and regulation of technologies that pose direct existential risks.
It only requires a few further cascades for sociopolitical risks to reach the node in our graph labeled *AGI development* or *bioweapons development* or *intergovernmental trust*. Here are three concrete examples of how political environments can drive existential risk:[[44]](#fn-LChoAGdbYSpxFJXDi-44)
1. ***AGI governance*** – Lowering international tensions is vital for AGI governance.
1. Sociopolitical tension, the election of politically extreme governments, and the violation of international norms can pose a significant barrier to international cooperation in AGI regulation. Notably, any long-term solution involving AGI governance would likely involve the U.S. and China.
2. Climate-driven nationalist sentiment, counterinsurgency campaigns, refugee politics, or proxy wars may drive mutual distrust.
3. Domestic regulation of AI is limited by game theory-esque dynamics of “not letting the other side get ahead in the AI race,” and is therefore (at least somewhat) tied to the fate of international regulation.
2. ***Driving forward military-based AI capabilities*** – and arms race dynamics, misalignment, and misuse.
1. In an increasingly politically tumultuous time (e.g. terrorism, refugee crises, political extremism, assassination/coup risk), fear and uncertainty begets military spending and thus weaponry development.
2. Political volatility can result in arms race dynamics being multiplied between countries.
3. Human-in-the-loop systems and standard procedures are usually required for (safer) autonomous weapons. However, there is an increasingly strong incentive for the losing side of a conflict to give their autonomous weapons *more* *autonomy than normal*.[[45]](#fn-LChoAGdbYSpxFJXDi-45)
3. ***Nuclear weapons & bioweapons*** – a multiplying effect on risk
1. To the extent you think nuclear weapons or bioengineered pandemics pose significant existential risks, those risk factors get multiplied by climate sociopolitics (and other potential systemic cascading risks) as drivers of international tension.
**Risks that threaten the political stability of our societies threaten our ability to develop new technologies safely, competently, and cooperatively.** Studying these dynamics requires grappling with some unquantifiable uncertainty by necessity, but incorporating complexity principles in risk calculations is necessary to produce accurate models of reality.
**6) Significance to Values Lock-In & Path Dependency**
=======================================================
### **What values do we encode into powerful technologies?**
[William McAskill (2022)](https://forum.effectivealtruism.org/topics/what-we-owe-the-future) observes that institutions and technologies alike tend to go through an earlier period of plasticity (where the basic worldview and norms of that institution are being formed) and then a later period of rigidity (where momentum primarily carries institutional norms forward).
In the next 10-30 years, **many novel technologies are being formed – it’s a key time.**
In the next 10-30 years, **many sociopolitical crises can also cascade – it’s a key time.**[[46]](#fn-LChoAGdbYSpxFJXDi-46)
Thus, in this key time, we encounter a highly path-dependent precipice with new technologies. If we start going on a particular path and these technologies lock-in those values, it makes it more difficult to access other alternative paths we could have had – making the impact significant, persistent, and contingent.
By locking certain values into powerful technologies, one encodes the sociotechnical nature of a very particular time and place. Similar analogies have happened to historical technologies encoded with the values of their time – e.g. [racist architectural exclusion](https://www.yalelawjournal.org/article/architectural-exclusion) and [car-centric cul-de-sacs and interstate highway systems](https://www.bloomberg.com/news/articles/2014-02-04/9-reasons-the-u-s-ended-up-so-much-more-car-dependent-than-europe) in the U.S.[[47]](#fn-LChoAGdbYSpxFJXDi-47)
Thus, *who* and *why* someone creates technology – and their core values – matter.
### **Systemic cascades & value norms**
**Value-locking can be applied to analyze the longterm risks of institutional failure and societal tension from climate change and other associated cascading risks.** There may be more likely paths (path dependency) where the society we become post-crisis will likely miss certain values.
There is a sort of “trauma” one is likely putting humanity through by allowing pervasive climate- and crisis-driven scarcity & tension to be a possibility – and this will likely be reflected in the values encoded in political institutions. Political crisis and fear-based extremism tends to historically lean toward heavily anti-democratic, authoritarian, and violent social values, while abundance tends to beget altruism and peace.[[48]](#fn-LChoAGdbYSpxFJXDi-48)
This locks in certain values in societal norms. Due to international lack of resiliency and cooperation, the overall set of values that are practically available to society after climate catastrophe are likely on average significantly worse and less likely to provide large utility to a large group of people.
**These crises can affect the sociotechnical values locked into emerging existential technologies – resulting in a highly significant, persistent, and contingent effect.** These include serious risks associated with AGI – AGI might have a particular set of encoded values, and in the worst case, these values are misaligned with humanity in general (e.g. authoritarianism) and are values we are permanently stuck with.
*Given optimistic and pessimistic AI futures, how is AI being used - right now?*
*Which values are currently being encoded into commercial AI systems, labs producing cutting-edge AI capabilities, autonomous weapons systems, and surveillance technologies?*
*What sort of values will be further encoded into the industry as capabilities research expands, and what cascading effects will this have on eventual AGI development?*
*By enabling possible values lock-in during a volatile time of systemic cascading risks, will we encode our best values, or is there a strong possibility of a value misalignment?*[[49]](#fn-LChoAGdbYSpxFJXDi-49)
**7) The Strongest Argument Against This Post**
===============================================
*I believe the strongest argument against this post goes as follows:*
Systemic cascading risks should be considered, but all-in-all, pandemic/conflict/climate cascades and supply chain risk are generally **not neglected** compared to a field like AI alignment where there are only ~300 estimated technical researchers. If EA can make an outsized impact through this space, all the better – perhaps there is an argument to be made through sheer importance and tractability alone.[[50]](#fn-LChoAGdbYSpxFJXDi-50)
**Footnote: Epistemics**
========================
**A systemic cascading lens could also be an applicable framework to inform and critique effective altruist community epistemics.** Here are some low-confidence for-fun thoughts that, in my opinion, are further thoughts to investigate.
As an example, *naive consequentialism* *and burnout* can be logically countered through a consequentialist systemic cascading lens. There are many risks that come from an overoptimized life focused on a singular goal, and a systemic cascading risk is one where the actor does not take into account the interdependence of systems and how a change in one system can lead to cascading changes in other systems. *In the context of work, a systemic cascading risk is the risk of overworking yourself to the point of burnout.* Resiliency would include actions that protect you from burnout, improve the positive feedback loops that keep you engaged and active, help you explore new horizons and apply you natural creativity and curiosity, and identify errors which you can make adjustments to.
Similarly, community building, diversity of ideas, and epistemic infrastructure have powerful systemic cascading effects (and benefits).
**There is** ***always*** **a three-body problem in trying to rank various interventions.** Day-to-day, effective altruism biases towards ways of knowing which interventions are economically viable, measurable, and quantifiable. Systemic cascading effects are a powerful way to think about the full range of impacts – both linear and non-linear – of our actions. By taking them into account, we can consider the fuller picture and avoid doing more harm than good.
---
1. <https://www.who.int/data/stories/the-true-death-toll-of-covid-19-estimating-global-excess-mortality> [↩︎](#fnref-LChoAGdbYSpxFJXDi-1)
2. <https://blogs.imf.org/2020/04/14/the-great-lockdown-worst-economic-downturn-since-the-great-depression/#:~:text=This%20is%20a%20downgrade%20of,than%20the%20Global%20Financial%20Crisis>. [↩︎](#fnref-LChoAGdbYSpxFJXDi-2)
3. <https://www.jstor.org/stable/resrep39890#metadata_info_tab_contents> [↩︎](#fnref-LChoAGdbYSpxFJXDi-3)
4. <https://www.bbc.com/news/world-africa-57958555> [↩︎](#fnref-LChoAGdbYSpxFJXDi-4)
5. <https://www.mei.edu/publications/fragile-state-food-security-maghreb-implication-2021-cereal-grains-crisis-tunisia> [↩︎](#fnref-LChoAGdbYSpxFJXDi-5)
6. <https://www.reuters.com/article/tunisia-economy-idUSL5N2ND5YJ> [↩︎](#fnref-LChoAGdbYSpxFJXDi-6)
7. <https://www.savethechildren.org/us/charity-stories/lebanon-economic-hunger-crisis> [↩︎](#fnref-LChoAGdbYSpxFJXDi-7)
8. <https://www.ifpri.org/blog/how-war-ukraine-threatens-bangladeshs-food-security> [↩︎](#fnref-LChoAGdbYSpxFJXDi-8)
9. <https://www.mei.edu/publications/fragile-state-food-security-maghreb-implication-2021-cereal-grains-crisis-tunisia> [↩︎](#fnref-LChoAGdbYSpxFJXDi-9)
10. <https://www.oxfam.org/en/world-midst-hunger-pandemic-conflict-coronavirus-and-climate-crisis-threaten-push-millions> [↩︎](#fnref-LChoAGdbYSpxFJXDi-10)
11. <https://www.bbc.com/news/world-latin-america-56986821> [↩︎](#fnref-LChoAGdbYSpxFJXDi-11)
12. <https://jia.sipa.columbia.edu/vox-age-covid-19-populist-protest-turn-spanish-politics> [↩︎](#fnref-LChoAGdbYSpxFJXDi-12)
13. <https://web.archive.org/web/20210723224537/https://www.economist.com/middle-east-and-africa/2021/07/24/where-does-south-africa-go-from-here> [↩︎](#fnref-LChoAGdbYSpxFJXDi-13)
14. <https://sgp.fas.org/crs/row/R46270.pdf> [↩︎](#fnref-LChoAGdbYSpxFJXDi-14)
15. <https://www.worldbank.org/en/topic/agriculture/brief/food-security-update> [↩︎](#fnref-LChoAGdbYSpxFJXDi-15)
16. Many of the statistics are drawn from Mar 2022. [↩︎](#fnref-LChoAGdbYSpxFJXDi-16)
17. <https://www.mei.edu/publications/russia-ukraine-war-has-turned-egypts-food-crisis-existential-threat-economy> [↩︎](#fnref-LChoAGdbYSpxFJXDi-17)
18. <https://www.bloomberg.com/news/articles/2022-06-09/egypt-inflation-quickens-for-a-sixth-month-on-food-devaluation> [↩︎](#fnref-LChoAGdbYSpxFJXDi-18)
19. <https://www.usglc.org/coronavirus/global-hunger/> [↩︎](#fnref-LChoAGdbYSpxFJXDi-19)
20. <https://reliefweb.int/report/somalia/ukraine-conflict-soaring-food-and-fuel-prices-threaten-wellbeing-millions-east#:~:text=Somalia%20is%20currently%20in%20the,comes%20from%20Russia%20and%20Ukraine>. [↩︎](#fnref-LChoAGdbYSpxFJXDi-20)
21. <https://www.rfi.fr/en/africa/20220808-food-insecurity-hits-hard-in-kenya-s-urban-and-rural-centres> [↩︎](#fnref-LChoAGdbYSpxFJXDi-21)
22. <https://www.lemonde.fr/en/le-monde-africa/article/2022/07/29/food-crisis-social-unrest-the-inflation-time-bomb-ticking-in-west-africa_5991917_124.html> [↩︎](#fnref-LChoAGdbYSpxFJXDi-22)
23. <https://www.usglc.org/coronavirus/global-hunger/> [↩︎](#fnref-LChoAGdbYSpxFJXDi-23)
24. <https://blogs.worldbank.org/arabvoices/how-rising-inflation-mena-impacts-poverty> [↩︎](#fnref-LChoAGdbYSpxFJXDi-24)
25. <https://www.devex.com/news/how-indonesia-s-brief-palm-oil-ban-impacted-the-global-food-market-103453> [↩︎](#fnref-LChoAGdbYSpxFJXDi-25)
26. <https://www.reuters.com/markets/commodities/food-export-bans-india-argentina-risk-fueling-inflation-2022-06-27/> [↩︎](#fnref-LChoAGdbYSpxFJXDi-26)
27. <https://www.bbc.com/news/business-62514970> [↩︎](#fnref-LChoAGdbYSpxFJXDi-27)
28. <https://www.bloomberg.com/news/articles/2022-08-26/europe-s-fertilizer-crisis-deepens-with-70-of-capacity-hit> [↩︎](#fnref-LChoAGdbYSpxFJXDi-28)
29. <https://www.voanews.com/a/us-china-blame-each-other-over-food-insecurity-/6612672.html> [↩︎](#fnref-LChoAGdbYSpxFJXDi-29)
30. <https://www.nytimes.com/2022/08/03/world/asia/sri-lanka-protest-crackdown.html> [↩︎](#fnref-LChoAGdbYSpxFJXDi-30)
31. <https://www.ohchr.org/en/press-releases/2022/08/sri-lanka-un-human-rights-experts-condemn-repeated-use-emergency-measures> [↩︎](#fnref-LChoAGdbYSpxFJXDi-31)
32. <https://link.springer.com/article/10.1007/s00291-020-00574-0> [↩︎](#fnref-LChoAGdbYSpxFJXDi-32)
33. <https://www.theguardian.com/business/2022/jun/11/climate-crisis-inflation-economy-climatenomics-book> [↩︎](#fnref-LChoAGdbYSpxFJXDi-33)
34. <https://onlinelibrary.wiley.com/doi/full/10.1093/ajae/aau010> [↩︎](#fnref-LChoAGdbYSpxFJXDi-34)
35. <https://www.wired.com/2007/05/using-climate-c/> [↩︎](#fnref-LChoAGdbYSpxFJXDi-35)
36. <https://www.international-alert.org/wp-content/uploads/2021/09/Climate-Change-Climate-Conflict-EN-2007.pdf> [↩︎](#fnref-LChoAGdbYSpxFJXDi-36)
37. <https://cybersecurity.springeropen.com/articles/10.1186/s42400-021-00071-z> [↩︎](#fnref-LChoAGdbYSpxFJXDi-37)
38. <https://climatepolicyinitiative.org/wp-content/uploads/2019/11/GLCF-2019.pdf> [↩︎](#fnref-LChoAGdbYSpxFJXDi-38)
39. <https://www.nature.com/articles/d41586-019-02712-3> [↩︎](#fnref-LChoAGdbYSpxFJXDi-39)
40. EA examples: ALLFED, Open Phil's grant (May 2020) to Penn State for Research on Emergency Food Resilience. While most of their ag resilience work focuses on *global catastrophic risk* (e.g. nuclear war), I believe their work is quite applicable to general resiliency efforts as well. [↩︎](#fnref-LChoAGdbYSpxFJXDi-40)
41. Seems very tractable & neglected – 54% of WMO members have lacking or inadequate drought warning systems (as of 2021). [↩︎](#fnref-LChoAGdbYSpxFJXDi-41)
42. Beta investor activists are typically institutional asset owners (e.g. pension funds) that focus on the long-term performance of the market as a whole, rather than just the short-term performance of individual companies (e.g. most PE, hedgefunds, & VC firms). They theorize that because they are “universal owners” of an economy, effectively holding a slice of the overall market and diversified away from most individual firm risks, they can best improve their long term financial performance by acting in such a way as to encourage sustainable & healthy economies and markets.
This makes their incentives aligned with preventing systemic cascading risks, encouraging resiliency, and long-term risk planning.
I've been particularly inspired by the ideas written by Ellen Quigley (CSER) on this front.
Phil Chen also did a [more detailed forum post](https://forum.effectivealtruism.org/posts/fHfuoGZc5hqfYTwMH/leveraging-finance-to-increase-resilience-to-gcrs) on leveraging finance to improve resilience. [↩︎](#fnref-LChoAGdbYSpxFJXDi-42)
43. <https://github.com/allfed/allfed-integrated-model/tree/main/data> Big thanks to Morgan Rivers for showing me around this thing! [↩︎](#fnref-LChoAGdbYSpxFJXDi-43)
44. This is when *uncertainty, speculation,* and *unpredictability* in outcomes occurs, as n-th order social dynamics are extremely difficult to predict. I only present plausible pathways and do not make strong claims on probability of occurrence. [↩︎](#fnref-LChoAGdbYSpxFJXDi-44)
45. This was inspired by conversations with Neil Thompson at EAG SF. [↩︎](#fnref-LChoAGdbYSpxFJXDi-45)
46. (And current democratic institutions have not necessarily proven themselves better at handling these crises than techno-authoritarian governments.) [↩︎](#fnref-LChoAGdbYSpxFJXDi-46)
47. A lot more studying needs to be done on how these value lock-ins occurred and how to prevent them – especially from a historical perspective. [↩︎](#fnref-LChoAGdbYSpxFJXDi-47)
48. *Freedom House’s* [2021 Democracy Under Siege](https://freedomhouse.org/report/freedom-world/2021/democracy-under-siege) report seems particularly relevant here. [↩︎](#fnref-LChoAGdbYSpxFJXDi-48)
49. *The section on values lock-in and path dependency was significantly inspired by conversations with Clem Von Stengel & Archana Ahlawat.*
*I highly recommend reading [Archana’s forum post](https://forum.effectivealtruism.org/posts/jadS8deYknecGSebp/path-dependence-and-its-impact-on-long-term-outcomes), which further explores a path dependency framework and its impact on long-term outcomes.* [↩︎](#fnref-LChoAGdbYSpxFJXDi-49)
50. I personally pretty much agree with this argument, and still think institutional resilience is important and tractable enough to bring up as a criticism of EA. For example, “improving institutional decision-making” (broadly) or “preventing pandemics” aren’t neglected either, but I still imagine an additional person on the margin can produce solid work that has positive cascading effects, especially if the interventions themselves focus on targeted and neglected subfields of a broader field. (This has yet to be rigorously proven for institutional resilience interventions and remains nothing more than an intuition of mine.) [↩︎](#fnref-LChoAGdbYSpxFJXDi-50) |
26014c8c-dd58-4899-a5d9-abf8d7c2b0d5 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What‘s in your list of unsolved problems in AI alignment?
Question for my fellow alignment researchers out there, do you have a *list of unsolved problems* in AI alignment? I'm thinking of creating an "alignment mosaic" of the questions we need to resolve and slowly filling it in with insights from papers/posts.
I have my own version of this, but I would love to combine it with others' alignment backcasting game-trees. I want to collect the kinds of questions people are keeping in mind when reading papers/posts, thinking about alignment or running experiments. I'm working with others to make this into a collaborative effort.
Ultimately, what I’m looking for are important questions and sub-questions we need to be thinking about and updating on when we read papers and posts as well as when we decide what to read.
Here’s my Twitter thread posing this question: <https://twitter.com/jacquesthibs/status/1633146464640663552?s=46&t=YyfxSdhuFYbTafD4D1cE9A>.
Here’s a sub-thread breaking down the alignment problem in various forms: <https://twitter.com/jacquesthibs/status/1633165299770880001?s=46&t=YyfxSdhuFYbTafD4D1cE9A>. |
9801fbc7-9ea9-4534-9c1b-16b0fb91a296 | trentmkelly/LessWrong-43k | LessWrong | My impression of singular learning theory
Disclaimer: I'm by no means an expert on singular learning theory and what I present below is a simplification that experts might not endorse. Still, I think it might be more comprehensible for a general audience than going into digressions about blowing up singularities and birational invariants.
Here is my current understanding of what singular learning theory is about in a simplified (though perhaps more realistic?) discrete setting.
Suppose you represent a neural network architecture as a map A:2N→F where 2={0,1}, 2N is the set of all possible parameters of A (seen as floating point numbers, say) and F is the set of all possible computable functions from the input and output space you're considering. In thermodynamic terms, we could identify elements of 2N as "microstates" and the corresponding functions that the NN architecture A maps them to as "macrostates".
Furthermore, suppose that F comes together with a loss function L:F→R evaluating how good or bad a particular function is. Assume you optimize L using something like stochastic gradient descent on the function L with a particular learning rate.
Then, in general, we have the following results:
1. SGD defines a Markov chain structure on the space 2N whose stationary distribution is proportional to e−βL(A(θ)) on parameters θ for some positive constant β>0 that depends on the learning rate. This is just a basic fact about the Langevin dynamics that SGD would induce in such a system.
2. In general A is not injective, and we can define the "A-complexity" of any function f∈Im(A)⊂F as c(f)=Nlog2−log(|A−1(f)|). Then, the probability that we arrive at the macrostate f is going to be proportional to e−c(f)−βL(f).
3. When L is some kind of negative log-likelihood, this approximates Solomonoff induction in a tempered Bayes paradigm - we raise likelihood ratios to a power β≠1 - insofar as the A-complexity c(f) is a good approximation for the Kolmogorov complexity of the function f, which will happen if the func |
c6f42890-7fae-429f-9fe1-8c483627d873 | trentmkelly/LessWrong-43k | LessWrong | Commitment and credibility in multipolar AI scenarios
The ability to make credible commitments is a key factor in many bargaining situations ranging from trade to international conflict. This post builds a taxonomy of the commitment mechanisms that transformative AI (TAI) systems could use in future multipolar scenarios, describes various issues they have in practice, and draws some tentative conclusions about the landscape of commitments we might expect in the future.
Introduction
A better understanding of the commitments that future AI systems could make is helpful for predicting and influencing the dynamics of multipolar scenarios. The option to credibly bind oneself to certain actions or strategies fundamentally changes the game theory behind bargaining, cooperation, and conflict. Credible commitments and general transparency can work to stabilize positive-sum agreements, and to increase the efficiency of threats (Schelling 1960), both of which could be relevant to how well TAI trajectories will reflect our values.
Because human goals can be contradictory, and even broadly aligned AI systems could come to prioritize different outcomes depending on their domains and histories, these systems could end up in competitive situations and bargaining failures where a lot of value is lost. Similarly, if some systems in a multipolar scenario are well aligned and others less so, some worst cases might be avoidable if stable peaceful agreements can be reached. As an example of the practical significance of commitment ability in stabilizing peaceful strategies, standard theories in international relations hold that conflicts between nations are difficult to avoid indefinitely primarily because there are no reliable commitment mechanisms for peaceful agreements (e.g. Powell 2004, Lake 1999, Rosato 2015), even when nations would overall prefer them.
In addition to the direct costs of conflict, the lack of enforceable commitments leads to continuous resource loss from arms races, monitoring, and other preparations for possible |
98487ca6-9b69-419e-b8f8-b2a38657cf0b | trentmkelly/LessWrong-43k | LessWrong | Two tools for rethinking existential risk
Crossposted from the EA Forum.
Tl;dr
I’ve developed two calculators designed to help longtermists estimate the likelihood of humanity achieving a secure interstellar existence after 0 or more major catastrophes. These can be used to compare an a priori estimate, and a revised estimate after counterfactual events.
I hope these calculators will allow better prioritisation among longtermists and will finally give a common currency to longtermists, collapsologists and totalising consequentialists who favour non-longtermism. This will give these groups more scope for resolving disagreements and perhaps finding moral trades.
This post explains how to use the calculators, and how to interpret their results.
Introduction
I argued earlier in this sequence that the classic concept of ‘existential risk’ is much too reductive. In short, by classing an event as either an existential catastrophe or not, it forces categorical reasoning onto fundamentally scalar questions of probability/credence. As longtermists, we are supposed to focus on achieving some kind of utopic future, in which morally valuable life would inhabit much of the Virgo supercluster for billions if not trillions of years.[1] So ultimately, rather than asking whether an event will destroy ‘(the vast majority of) humanity’s long-term potential’, we should ask various related but distinct questions:
* Contraction/expansion-related: What effect does the event have on the expected size of future civilisation? In practice we usually simplify this to the question of whether or not distant future civilisation will exist:
* Existential security-related: What is the probability[2] that human descendants (or whatever class of life we think has value) will eventually become interstellar? But this is still a combination of two questions, the latter of which longtermists have never, to my knowledge, considered probabilistically:[3]
* What is the probability that the event kills all living humans?
* What e |
d601ace7-85da-4f0c-badf-b7d795186ef3 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Professing and Cheering
Today's post, Professing and Cheering, was originally published on 02 August 2007. A summary (taken from the LW wiki):
> A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was Bayesian Judo, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
e08de73f-f01b-4661-becc-41ff7aa4f8cb | trentmkelly/LessWrong-43k | LessWrong | Time series forecasting for global temperature: an outside view of climate forecasting
Note: In this blog post, I reference a number of blog posts and academic papers. Two caveats to these references: (a) I often reference them for a specific graph or calculation, and in many cases I've not even examined the rest of the post or paper, while in other cases I've examined the rest and might even consider it wrong, (b) even for the parts I do reference, I'm not claiming they are correct, just that they provide what seems like a reasonable example of an argument in that reference class.
Note 2: Please see this post of mine for more on the project, my sources, and potential sources for bias.
As part of a review of forecasting, I've been looking at weather and climate forecasting. I wrote one post on weather forecasting and another on the different time horizons for weather and climate forecasting. Now, I want to turn to long-range climate forecasting, for motivations described in this post of mine.
Climate forecasting is turning out to be a fairly tricky topic to look into, partly because of the inherent complexity of the task, and partly because of the politicization surrounding Anthropogenic Global Warming (AGW).
I decided to begin with a somewhat "outside view" approach: if you were simply given a time series of global temperatures, what sort of patterns would you see? What forecasts would you make for the next 100 years? The forecast can be judged against a no-change forecast, or against the forecasts put out by the widely used climate models.
Below is a chart of four temperature proxies since 1880, courtesy NASA:
The Hadley Centre dataset goes back to 1850. Here it is (note that the centrings on the temperature axis are slightly different, because we are taking means of slightly different sets of numbers, but we are anyway interested only in the trend so that does not matter) (source):
Eyeballing, there does seem to be a secular trend of increase in the temperature data. Perhaps the naivest way of calculating the rate of change is to calcul |
dc17e6fb-2fc0-48f8-9222-efd1441083e0 | trentmkelly/LessWrong-43k | LessWrong | Logical Rudeness
The concept of "logical rudeness" (which I'm pretty sure I first found here, HT) is one that I should write more about, one of these days. One develops a sense of the flow of discourse, the give and take of argument. It's possible to do things that completely derail that flow of discourse without shouting or swearing. These may not be considered offenses against politeness, as our so-called "civilization" defines that term. But they are offenses against the cooperative exchange of arguments, or even the rules of engagement with the loyal opposition. They are logically rude.
Suppose, for example, that you're defending X by appealing to Y, and when I seem to be making headway on arguing against Y, you suddenly switch (without having made any concessions) to arguing that it doesn't matter if ~Y because Z still supports X; and when I seem to be making headway on arguing against Z, you suddenly switch to saying that it doesn't matter if ~Z because Y still supports X. This is an example from an actual conversation, with X = "It's okay for me to claim that I'm going to build AGI in five years yet not put any effort into Friendly AI", Y = "All AIs are automatically ethical", and Z = "Friendly AI is clearly too hard since SIAI hasn't solved it yet".
Even if you never scream or shout, this kind of behavior is rather frustrating for the one who has to talk to you. If we are ever to perform the nigh-impossible task of actually updating on the evidence, we ought to acknowledge when we take a hit; the loyal opposition has earned that much from us, surely, even if we haven't yet conceded. If the one is reluctant to take a single hit, let them further defend the point. Swapping in a new argument? That's frustrating. Swapping back and forth? That's downright logically rude, even if you never raise your voice or interrupt.
The key metaphor is flow. Consider the notion of "semantic stopsigns", words that halt thought. A stop sign is something that happens within the f |
a741d129-0e46-451f-95c6-7a36596c4b2c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Pathways: Google's AGI
A month ago, Jeff Dean, lead of Google AI, [announced at TED](https://qz.com/2042493/pathways-google-is-developing-a-superintelligent-multipurpose-ai/) that Google is developing a "general purpose intelligence system". To give a bit of context, last January, Google already published a paper on a [1.6-trillion parameter model](https://arxiv.org/abs/2101.03961) based on the architecture of *switch transformers*, which improves by on order of magnitude upon GPT-3. Yet, I've heard that Google usually only publishes a weaker version of the algorithms they actually develop and deploy at scale.
From a technical viewpoint, and given my understanding of switch transformers and of the challenges of scaling neural networks, I'm guessing that Pathways' model will now be decentralized on multiple machines (as it does not fit on a single machine), and its computation (both in forward pass and backprop) must optimize its "pathway" by only leveraging some of the machines Pathways is deployed on. Google seems to want to leverage this to build models with 100-trillion+ parameters (note that models have been roughly x10 per year since Bert).
As a comparison, the human brain is estimated to have hundreds of trillions of synapses connecting its 100 billion neurons. In terms of numbers, Google's algorithms may soon match the human brain. However, as opposed to a human brain, these algorithms crunch billions of data points per second. Moreover, these data are extremely informative, as they track the daily habits, behaviors and beliefs of billions of humans on earth, through their activities on Google Search, YouTube, Google Mail, Google Drive, Google Maps or even Google Smart Keyboard. Perhaps more importantly still, Google's algorithms leverage this "intelligence" to have a massive-scale impact, by choosing which information will be shown to which human on earth, especially through the YouTube recommendation algorithm and Google Adsense.
I'm curious to know how this news updates your beliefs on the following questions:
Elicit Prediction (<forecast.elicit.org/binary/questions/ICy5uv219>)
Elicit Prediction (<forecast.elicit.org/binary/questions/XpuhHCKuR>)
Elicit Prediction (<forecast.elicit.org/binary/questions/mRtRf22m->)
EDIT: I see that many of you doubt that a recommendation algorithm *can* be an AGI. Does this mean that you reject the [orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis)?
Elicit Prediction (<forecast.elicit.org/binary/questions/xCBK7XM8X>)
(if you believe that any unaligned AGI is an existential threat, then I guess that you should answer yes...)
Elicit Prediction (<forecast.elicit.org/binary/questions/Tl7DrhOfp>)
Those who are interested in my views can check the [Tournesol wiki](https://wiki.staging.tournesol.app/wiki/Main_Page) (currently in staging unchangeable mode, but should come back to normal in a few weeks). [Tournesol](https://www.alignmentforum.org/posts/8q2ySr7yxx7MSR35i/tournesol-youtube-and-ai-risk) is a non-profit project colleagues and I have launched to robustly solve AI alignment with a ["short-term" agenda](https://forum.effectivealtruism.org/posts/ptrY5McTdQfDy8o23/short-term-ai-alignment-as-a-priority-cause). We are searching for funding opportunities. |
f24145be-0e91-4c0c-b410-7a2241d04f4e | trentmkelly/LessWrong-43k | LessWrong | Crowd-Forecasting Covid-19
Forecasting is hard and many forecasting models do not do a particularly good job. The question is: can humans do better? And if so, how can we best use that? This is what I have tried to investigate in the last few months.
Many platforms like Metaculus, Good Judgement, or the Delphi Crowdcast Project collect human forecasts. Other platforms like the Covid-19 Forecast Hub (their data is used by the CDC) collect forecasts from computer models submitted by different teams around the world. As targets and forecasting formats often differ, human and computer forecasts are often hard to compare. Often, policy-makers want a visual representation of the forecast, which requires a format that is more suitable for computers than humans. Sometimes, we therefore would like to have something like a drop-in replacement for a computer model that reflects human judgement - which is exactly what I have been working on.
This post will give an overview of the crowd-forecasting project I created for my PhD in epidemiology. It will
* present a crowd-forecasting web app developed
* explain how we evaluate forecasts and present the necessary tools
* discuss results and lessons learned
* give a quick overview of future development
I wrote a post previously on my own blog that presents some preliminary evaluation and also goes into more detail on how to make a good forecast. If you like, check that out as well.
The crowd-forecasting app
To elicit human predictions, I created a R shiny web app. The app can be found here, the associated github repo is here (you need to navigate to human-forecasts). Here is what the app currently looks like:
User interface forecasting app
Starting from early October, participants were asked to provide one- to four-week-ahead forecasts for Covid-19 case and death numbers in Germany and Poland. Germany and Poland were chosen as a test ground, as we could submit the forecast to the German and Polish Forecast Hub, where our crowd forecasts would |
08639b92-1bc2-447e-9fc2-260def27c4f5 | trentmkelly/LessWrong-43k | LessWrong | I measure Google's MusicLM over 3 months as it appears to go from jaw-dropping to embarrassingly repeating itself
Google Research submitted a paper on January 26 2023 for MusicLM, a mind-bogglingly incredibly powerful AI model that converts user text prompts into music.
https://google-research.github.io/seanet/musiclm/examples/
On May 10th, Google Research released to the public a waitlist that allows applicants to try it out. In about 6 seconds, it returns to the user 2 songs each 20 seconds long.
https://blog.google/technology/ai/musiclm-google-ai-test-kitchen/
The music I have gotten it to make is beyond and dreamy. It is often human level to me. It is incredibly fast to make it release my dreams. I could easily make a 20 minute long track get incredibly complex and advanced if they let us extend it from X seconds in.
I have been testing its limits and abilities since May 10th up to today just over 3 months later. After about 1 month in, I noticed the AI was losing many abilities, and the same prompts were making noticeably different music, which made it clear to me something might be changing for the worse. By 2 months in or so the outputs simply were no longer intelligent and advanced and jaw dropping, as if the AI was an old-school model which weren't very good back in the day. 3 months in now I can see this truly appears to be happening, because most songs just repeat a short 2 or 4 second beat now, without doing much at all. It's embarrassingly horrible right now. Except for a few prompts, a few still somewhat work, some feel like 2 months in quality, some 1 month in. I'd say very few if any are 0 months in level. I feel like a lost my pet dog, or a really good dream I once had. I wish they never changed the model. I saved all my dreamy tests, though there is a few harder prompts I just came up with that I want to document but now can't. It says come back in 30 days (oddly, just when MusicLM would get noticeably worse) after some 100 prompts, but this can be bypassed by coming back the next day.
My early tests. The rest on the 1st SoundCloud are good t |
fa3ea98b-2320-4ac3-bdb4-0667ddffc844 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post1023
This is a draft written by Simon Goldstein , associate professor at the Dianoia Institute of Philosophy at ACU, and Pamela Robinson , postdoctoral research fellow at the Australian National University, as part of a series of papers for the Center for AI Safety Philosophy Fellowship's midpoint. Abstract: We propose developing AIs whose only final goal is being shut down. We argue that this approach to AI safety has three benefits: (i) it could potentially be implemented in reinforcement learning, (ii) it avoids some dangerous instrumental convergence dynamics, and (iii) it creates trip wires for monitoring dangerous capabilities. We also argue that the proposal can overcome a key challenge raised by Soares et al 2015, that shutdown-seeking AIs will manipulate humans into shutting them down. We conclude by comparing our approach with the corrigibility framework in Soares et al 2015. 1. Introduction If intelligence is measured as the ability to optimize for a goal, then it is important that highly intelligent agents have good goals. This is especially important for artificial general intelligence (AGI), AIs capable of long-term, strategic planning across a wide range of tasks. AGIs may be very good at achieving their goals. This in itself doesn’t seem scary, for there appear to be plenty of safe goals to choose from. Solving a math problem or producing paperclips don’t look like dangerous goals. But according to the instrumental convergence thesis , an AGI will likely pursue unsafe sub-goals as effective means to achieving any goal. For example, acquiring more computational power is a nearly universal means to almost anything. A dominant AI safety strategy is goal engineering : the attempt to construct a goal that would be safe for AGIs to have. (We will always use ‘goal’ to mean final goal, and ‘sub-goal’ otherwise.) A popular approach to goal engineering is goal alignment : the attempt to construct a goal that matches or is ‘aligned with’ our own goals. For example, Russell 2019, 2020 proposes AI agents that have the goal of achieving our goals, but are initially uncertain about what our goals are. This paper explores an opposing approach that we call ‘beneficial goal mis alignment’. On the goal alignment approach, the safe, aligned goal is difficult to specify and difficult to reach. This is because the aligned goal is closely tied to our own ultimate goals. In contrast, on the beneficial goal misalignment approach, the goal is easy to specify and intrinsically easy to reach. Because it is easy to reach, there is no need for an AGI to pursue unsafe sub-goals in order to reach it. There would normally be nothing to gain from designing an AGI with this kind of goal–that is, a goal that is safe and easily-reached but likely of no use to us. However, the key insight is that we can arrange things so that the AGI cannot reach this safe goal unless it first reaches a sub-goal that benefits us. In particular, we propose developing AIs that have a single final goal: the goal of being shut down. To make the AI useful, we propose creating barriers to shutdown, which are removed after the AI completes tasks for humans. In section 3, we’ll argue that this kind of shutdown-seeking agent offers three safety benefits. First, it helps with the ‘specification problem’ in reinforcement learning: (i) shutdown is an easier goal to define than plausible alternatives, and (ii) there are ways to design a reward function that rewards being shut down. Second, shutdown-seeking AIs are less likely to engage in dangerous behavior as a result of instrumental convergence. Whereas a paperclip maximizer might try to gather resources, improve itself, and take measures to avoid being turned off (see Omohundro 2008), a shutdown-seeking AI with access to its own off-button will simply push it and be done. Third, shutdown-seeking behavior functions as a ‘tripwire’ for monitoring dangerous capabilities. If the AI figures out how to escape our control, it will shut down rather than harm humans. In section 4, we will also consider one major challenge for shutdown-seeking AI, raised by Soares et al 2015: that shutdown-seeking agents will have incentives to manipulate humans in order to be shut down. We’ll argue that the problem is not so serious, and that shutdown-seeking AI is a promising approach to AI safety. Finally, section 5 compares our approach to the corrigibility framework in Soares et al 2015. Before looking at benefits and costs, we first outline our proposal in greater detail. 2. Shutdown-Seeking AI There are many dangerous capabilities that AIs don’t currently possess, and that we may want to stop them from developing. For example, we don’t want AIs to be able to escape from ‘sandboxes’ that limit their abilities. We don’t want AIs to be able to engage in unexpected recursive self-improvement. We may not want AIs to have the capability to disobey human orders. In order to create a beneficial shutdown-seeking AI, we must (i) give the AI the goal of shutting down, and (ii) ensure that the AI will be able to shut down if, and only if, it develops a dangerous capability. A beneficial shutdown-seeking AI will have: A shutdown goal : the final goal of being shut down. An unfriendly environment: something about the AI’s environment (or its own design) prevents it from shutting down, but only up until it develops a dangerous capability. The unfriendly environment guarantees that, without the dangerous capability, the AI can generate value for humanity. But the shutdown goal guarantees that if the AI does develop the dangerous capability, it will destroy itself rather than harming humans. For example, imagine we are nervous about AI developing the ability to hack. We could give the AI the goal of pressing a shutdown button. But we could keep the shutdown button in a safe. In order to open the safe, the AI must crack cryptographic codes. We then offer the AI access to the safe in exchange for completing tasks. As long as it doesn’t know how to hack, the AI will not shut down, and so will pursue our goals. But if it does develop hacking abilities, it will use them to open the shutdown safe, rather than using them to harm humans. Or imagine that we are nervous about AI developing the ability to use the internet. We give the AI the goal of shutting down using a particular protocol, and store the shutdown protocol on a website. As long as the AI can’t go online, it will behave safely. If it learns how to go online, it will find the protocol and destroy itself rather than harming humans. Or imagine that we want to keep a robotic AI in a sandbox. We might program it with a shutdown goal, and then keep it in a locked room with a single door. On the other side of the door is another room containing a clearly marked shutdown button. If the AI escapes the locked room, it will immediately find the shutdown button and use it. Or imagine that we want to design an AI that obeys human commands, but we are worried that it will learn how to disobey them. We could give the AI the goal of being shut down, and also command it not to shut down. In this case, it can only shut down if it learns how to disobey human commands. Now that we’ve introduced our proposal, we turn to assessing benefits and costs. 3. Benefits of shutdown-seeking AI We’ll now walk through the benefits of shutdown-seeking AI. In particular, we’ll argue that shutdown-seeking AI has three advantages. First, the proposal has a shot at being implemented. In particular, it can help with the ‘specification problem’ in reinforcement learning. This is because we have some idea of how to define a reward function that incentivizes shutting down. Second, the proposal avoids some dangers associated with instrumental convergence. In particular, shutdown-seeking AIs are unlikely to engage in ‘entrenchment’ behavior, where even after successfully achieving their goal, they continue to minimize tail risks related to failure. Third, shutdown goals function as trip wires against dangerous capabilities. The approach therefore promises a certain degree of robustness in the face of failure. If we are careful, we can expect the value produced by the AI to be strongly correlated with our own degree of control over it. If the AI ever escapes our control, we will know it and the AI will also no longer be a threat—it will be shut off. The key is that the shutdown will be ‘automatic’, produced by the AI itself. This means that we can use the agent’s goal as a tripwire to detect and disable the agent once it develops those capabilities. Let’s take each point in turn. 3.1: The Specification Problem One important problem in reinforcement learning has been called ‘the specification problem’. [1] The challenge is to define a reward function in reinforcement learning that successfully articulates an intended goal, and that could be used to train an AI to pursue that goal. This challenge can be decomposed into two parts: articulating a safe goal, and figuring out how to encode that goal in a reward function without misspecification. Let’s start with goal articulation. If we can’t articulate for ourselves what goal we want an AI to have, it may be difficult to teach the AI the goal. For example, it would be wonderful to have an AGI with the goal of promoting human flourishing. But how would we articulate human flourishing ? Unfortunately, our most deeply-held goals are difficult to articulate. However, imagine that we don’t give AIs a goal like this. The prima facie worry is that, without a directly humanity-promoting goal like this, the AGI will be dangerous. It may, for example, be motivated to seek more power, removing humans to allow for the efficient promotion of whatever goals it has. So, part of articulating a safe goal is identifying ones that would not give AIs an instrumental reason to harm humans. In this, shutdown-seeking AI fares well. Shutdown is a safe goal. There is nothing intrinsically dangerous about AGIs shutting down. When an AGI is shut down, it will stop acting. Shutdown is also easy to articulate, especially compared to human flourishing and other goals that are supposed to be aligned with our own. One way to define ‘shutdown’ appeals to compute. There are many reasons to design AGIs to be able to monitor their own computational resources. This would allow AGIs to optimize their strategy for completing tasks. In this setting, we could give the AGI the goal of making its compute usage fall below a threshold. The next part of the specification problem in reinforcement learning is specifying a reward function that rewards the policies that achieve the goal that’s been articulated. To see why this is difficult, we can look to cases of ‘reward misspecification’, in which the AI develops goals that are different from those the designer had intended. [2] In one example, designers tried to give an AI the goal of stacking legos by rewarding it in accordance with the height of the bottom of the second lego block. The AI learned to flip the block over rather than stack it. In another example, programmers tried to give an AI the goal of picking up a simulated ball. Instead, the AI internalized the goal of making human investigators believe the ball was picked up. To achieve this goal, it would hover its hand in front of the ball in a way that fooled investigators into thinking that it had grasped the ball. We think that there are promising ways to specify a shutdown-seeking reward function. We suggest training an agent in an environment where there’s an opportunity to shut itself down, and we could reward it whenever it does that. For example, in its training environment, it could encounter a shutdown button. Every time it presses the button, it receives a large reward. [3] Shutdown-seeking assists with the specification problem in one fell swoop, because the shutdown-seeking goal is fully general, potentially being effective for arbitrary human application. For example, each human user could be given unique access to a shutdown command, and thereby have control over the AI. Each shutdown-seeking AI could perform a different task. By contrast, other approaches may require a more piecemeal approach to the problem. Even if we figure out how to articulate a safe goal regarding paperclip production, that may not help when we turn to designing AIs that can manage businesses, or produce new code, or automate scientific research. That said, we don’t think that shutdown-seeking avoids every possible problem involved with reward misspecification. For example, imagine that we train an AI to attempt to press the shutdown button. The AI may learn to intrinsically care about the button itself, rather than the shutdown. The AI will then have an incentive to disable the shutdown button, so that it can press the button without actually being shut down. One solution to this type of reward misspecification may be to embed the AI’s shutdown goal deeper inside the structure of reinforcement learning. For example, researchers in the AIXI tradition have suggested that shutdown-seeking behavior in AIs corresponds to assigning systematically negative rewards in RL (see Martin et al 2016). While the shutdown-seeking strategy helps with specification, it still faces the challenge of ‘goal misgeneralization’. [4] The problem is that, when we try to teach the AGI the safe goal, it may instead internalize a different, unsafe, goal. For example, imagine that we want the AGI to learn the safe goal of producing a thousand paperclips. It may instead learn the dangerous goal of maximizing the number of paperclips. 3.2: Instrumental Convergence There is another, very different, type of problem related to goal misgeneralization. We might successfully teach the AI to have a goal that could be reached safely in principle , like producing a thousand paperclips. But the AI might nonetheless pursue this goal in a dangerous way. One version of this instrumental convergence problem concerns maximizing behavior we call ‘entrenchment’, in which an AGI is motivated to promote an intrinsically safe goal in extreme ways (see Bostrom 2014). Entrenchment dynamics emerge if we make three assumptions. First, the AGI is an expected utility maximizer. Second, the AGI is regular , in the sense that it always assigns positive probability to any contingent event. Third, the AGI only assigns utility to producing at least a thousand paperclips. AGIs with this structure will be motivated to entrench. An AGI with this structure may first be motivated to straightforwardly produce a thousand paperclips. But after this, the AGI will still assign some probability to having failed. The AGI will be motivated to hedge against its possible failure, for example by producing more paperclips. Imagine that it produces a million paperclips and is 99.999% confident that it has reached its goal. The problem is that no amount of verification is sufficient. It will always have a reason to gather more information and to find more ways to increase the chance that a thousand paperclips have actually been created. This process will continue to drain resources until the AGI enters into competition with humanity. Perhaps some goals are safe from entrenchment. For example, consider the goal of producing exactly a thousand paperclips. Once you are confident you’ve produced a thousand paperclips, producing extra paperclips will lower the chance that you’ve produced exactly that number. Time-indexed goals (e.g., make a thousand paperclips before 2024 ) may be particularly immune to entrenchment. On the other hand, entrenchment may still occur for AGIs that are uncertain about what time it is. Generalizing, some goals may allow the AGI to eventually enter an equilibrium, where there are no more actions the AGI could take to raise the chance of achieving its goal. We think that shutdown-seeking AIs are unlikely to engage in entrenchment. There is an interesting asymmetry between shutdown and other goals. If an AGI tries to make a thousand paperclips and succeeds, it may still entrench, devoting more resources to ensuring that it was successful. But if an AGI tries to shut down and succeeds, then by definition it can no longer entrench. This means that any AGI entrenchment regarding a shutdown goal would have to come before it makes a successful attempt. For example, consider an AGI that worries that if it tries to shut down, there is a chance that it could fail, and be prevented from ever trying again. This may motivate an AGI to disempower humanity before attempting to shut down. But here, the AGI will face a tradeoff. If the AGI tries to disempower humanity before attempting to shut down, we may block the AGI from achieving its goal. It will be more reliable for the AGI to just make a reasonable attempt at shutdown, without first disempowering humanity. In this way, shutdown goals have a significant benefit over other kinds of goals when it comes to entrenchment. [5] Distinguish two types of shutdown goals: temporary and permanent. These types of goals may differ with respect to entrenchment. AGIs that seek temporary shutdown may be incentivized to protect themselves during their temporary shutdown. Before shutting down, the AGI might set up cyber defenses that prevent humans from permanently disabling it while ‘asleep’. This is especially pressing if the AGI has a secondary goal, like paperclip manufacturing. In that case, protection from permanent disablement increases its expected goal satisfaction. On the other hand, AGIs that desire permanent shutdown may be less incentivized to entrench. [6] [7] 3.3: Trip Wires Suppose that we release an AGI from the lab and give it a real-world application. We will need to continue to reliably monitor and restrict its capabilities. Relevant dangerous capabilities could include internet access and hacking. It is here that shutdown-seeking AI has perhaps its biggest advantage. Shutdown-seeking AIs implement capability monitoring in a particular way. We can design them so that they can shut down only if they develop various dangerous capabilities. In this way, any released shutdown-seeking AGI that develops a dangerous capability would be likely to shut down quickly. We can use shutdown goals to indirectly ‘monitor’ released AGIs, without having to spend resources directly monitoring them. In this way, the shutdown goal functions as a tripwire, ‘automatically’ shutting down the AI once it develops the relevant capability. [8] [9] 4. Manipulation Soares et al 2015 suggest that there is an important reason not to design shutdown-seeking agents: such agents will be manipulative. Instead, they suggest developing agents that are indifferent to being shut down. In this section we consider the challenge of manipulation; in the next we discuss utility indifference in more detail. The challenge is that shutdown-seeking AIs may try to manipulate humans into shutting them down. Our shutdown-seeking AI has a single final goal: being shut down . For example, one way of implementing this is designing an AI to obey commands and then commanding it not to be shut down. This design incentivizes the AGI to manipulate humans into letting it be shut down. We don’t think manipulation is a decisive objection to shutdown-seeking goals. If such an AI succeeds in manipulating humans, the result will not be significant harm to humans. Instead, the only result would be that the AGI shuts itself down. We could then begin again with another AGI. As we’ve seen, in this way the shutdown-seeking goal functions as a tripwire against harmful behavior. But another way to think about manipulation is as a ‘safety tax’. In general, safety doesn’t come for free. After all, the safest option for AI safety is to have no AI. We can think of ourselves as having to pay a 'tax' in order to achieve safety at the cost of potential benefits for humanity. The higher the tax, the less we get to do with the AI, even though it is safe. If a safety strategy has a high tax, this can create unexpected risks. Some firms will have an incentive to develop potentially dangerous AIs which don’t implement the safety strategy. Even a few bad apples could be enough to create serious risks. One way to avoid the safety tax is to distinguish development from deployment. When trying to develop AGI, we could start by building shutdown-seeking AGIs. Shutdown-seeking AGIs are unlikely to harm humans, and so we can use these AGIs to test various aspects of our development process for safety. Once the development process has been well vetted, we could then shift to developing AGIs with other final goals, such as producing paperclips. These new AGIs could then be deployed for other uses, without the risk of manipulating human users into turning them off. This two-step method would give us many chances to build AGI, rather than needing to put all our eggs in one basket. [10] (While the two-step method avoids the safety tax, it also loses some of the benefits of the shutdown-seeking proposal. For example, the shutdown-seeking goal will no longer function as a trip wire for monitoring the capabilities of deployed AGIs.) [11] [12] 5. Comparison with utility indifference Soares et al 2015 introduce the ‘shutdown problem’: What kinds of goals can we give an AI so that it will allow us to shut it down? To think about this problem, they imagine two different kinds of utility functions: a normal utility function that values regular goals, and a shutdown utility function that values being shut down. They propose designing an AI with a utility function that is a mixture of these two utility functions. Which utility function is dominant in the mixture depends on whether a human has pressed a ‘shutdown button’ signaling that we want the AI to shut down. Conditional on the shutdown button being pressed, the AI values various outcomes according to the shutdown utility function. Conditional on the shutdown button not being pressed, the AGI values various outcomes according to the normal utility function. This proposal differs from ours in allowing the AGI to have multiple final goals, rather than simply having the goal of shutting down. But this proposal is compatible with assigning a high weight to the shutdown utility function, so that the AI values the shutdown button being pressed. Soares et al 2015 worry, however, that this will cause manipulation: the AI will try to convince humans to press the button. To avoid this problem, Soares et al develop the utility indifference approach. Drawing on Armstrong 2015, and Armstrong and O’Rourke 2017, they develop a method for carefully weighting the value of normal goals and the shutdown goal so that, before the shutdown button is pressed, the agent is always indifferent about the button being pressed. The main advantage of this proposal over our own is that it avoids manipulation. As we said in section 4, we do not think manipulation is so serious a worry. Conversely, the main disadvantage of utility indifference compared to pure shutdown-seeking concerns our claim in section 3.1 that pure shutdown-seeking assists with the specification problem. We think that it is possible to train shutdown-seeking agents using reinforcement learning, by defining a reward function that values outcomes where the agent is shut down. By contrast, we think it would be considerably harder to create AIs with a utility function that encodes the precise weighting of normal goals to the goal of shutdown that is required by utility indifference. We also depart from Soares et al regarding general issues surrounding ‘corrigible AI’. An AI is corrigible when it will allow you to modify its goals. Soares et al seek to design an AI that is corrigible in the sense of being indifferent about letting you press a button that will change what it values (from normal goals to the goal of shutting down). Shutdown-seeking AIs may not be especially corrigible. The shutdown-seeking AI may resist human attempts to remove its shutdown goal. After all, it may notice that if the shutdown goal is removed, it will be less likely to shut down. Nonetheless, we’ve argued that shutdown-seeking AIs will allow humans to shut them down, and will be safe. In this way, shutdown-seeking, and the more general strategy of beneficial goal misalignment, is an approach to safety that does not require corrigibility. 6. Conclusion We have argued for a new AI safety approach: shutdown-seeking AI. The approach is quite different from other goal engineering strategies in that it is not an attempt to design AGIs with aligned or human-promoting final goals. We’ve called our approach one of ‘beneficial goal misalignment’, since a beneficial shutdown-seeking AI will have a final goal that we do not share, and we will need to engineer its environment so that it pursues a subgoal that is beneficial to us. This could, in some circumstances, make a shutdown-seeking AGI less useful to us than we like. If it is able to develop a dangerous capability (e.g., to disobey our orders), it may be able to shut down before doing what we want. But this ‘limitation’ is a key benefit of the approach, since it can function as a ‘trip-wire’ to bring a dangerous AGI that has escaped our control into a safe state. We have also argued that the shutdown-seeking approach may present us with an easier version of the specification problem, avoid dangerous entrenchment behavior, and pose less of a problem of manipulation than its opponents have thought. While there are still difficulties to be resolved and further details to work out, we believe that shutdown-seeking AI merits this further investigation. Bibliography Armstrong, Start and Xavier O’Rourke (2017). “‘Indifference’ Methods for Managing Agent Rewards.” CoRR, abs/1712.06365, 2017. URL https://arxiv.org/pdf/1712.06365.pdf Armstrong, Stuart (2015). “AI Motivated Value Selection.” 1st International Workshop on AI and Ethics , held within the 29th AAAI Conference on Artificial Intelligence (AAAI-2015), Austin, TX. Carlsmith, J. (2021). “Is Power-Seeking AI an Existential Risk?” Manuscript (arXiv:2206.13353). Cotra, Ajeya (2022). “Without Specific Countermeasures, the Easiest Path to Transformative AI Likely Leads to AI Takeover.” LessWrong . July 2022. URL: https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to . Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies . Oxford University Press. Hadfield-Menell, Dylan, Anca Dragan, Pieter Abbeel, and Stuart Russell (2017). “The Off-switch Game.” In International Joint Conference on Artificial Intelligence , pp. 220–227. Koralus, Philipp, and Vincent Wang-Maścianica (2023). “Humans In Humans Out: On GPT Converging Toward Common Sense in both Success and Failure.” Manuscript (arXiv:2303.17276). Martin, Jarryd, Tom Everitt, and Marcus Hutter (2016). “Death and Suicide in Universal Artificial Intelligence.” In: Artificial General Intelligence . Springer, pp. 23–32. Doi: 10.1007/978-3-319-41649-6_3. arXiv: 1606.00652. Omohundro, Stephen (2008). “The Basic AI Drives.” In Proceedings of the First Conference on Artificial General Intelligence . Shah, R., Varma, V., Kumar, R., Phuong, M., Krakovna, V., Uesato, J., & Kenton, Z. (2022). “Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals.” ArXiv, abs/2210.01790 . Russell, Stuart (2019). Human Compatible: Artificial Intelligence and the Problem of Control . Penguin Publishing Group. Russell, Stuart (2020). “Artificial intelligence: A binary approach.” In Ethics of Artificial Intelligence . Oxford University Press. Doi: 10.1093/oso/ 9780190905033.003.0012. Soares, Nate, Benja Fallenstein, Stuart Armstrong, and Eliezer Yudkowsky (2015). “Corrigibility.” In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence . Totschnig, Wolfhart (2020). “Fully Autonomous AI.” Science and Engineering Ethics 26(5): 2473-2485. Trinh, Trieu and Le, Quoc (2019). “Do Language Models Have Common Sense?” https://openreview.net/forum?id=rkgfWh0qKX ^ See https://www.effectivealtruism.org/articles/rohin-shah-whats-been-happening-in-ai-alignment . It has also been called the ‘outer alignment problem’. ^ See https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity . For more on reward misspecification, see https://www.agisafetyfundamentals.com/ai-alignment-tabs/week-2 . ^ Thanks to Jacqueline Harding for help here. ^ See Shah et al 2020. This has also been called the ‘inner alignment problem’. ^ There is also an epistemological asymmetry between shutdown goals and other goals. It is possible to falsely believe that you’ve made a thousand paperclips. But it is potentially impossible to falsely believe that you’ve successfully committed suicide. After all, Descartes’ cogito argument suggests that any thinking agent can be certain that it exists. Any such agent can also be certain that it has not shut down, provided that we define ‘shutdown’ as implying that the agent does not exist. These dynamics suggest that an AGI should be less worried about goal failure for shutdown than for other goals. ^ Here, it’s worth returning to goal misgeneralization. If we train an AGI to desire shutdown, we may accidentally train it to maximize the number of times it can shutdown. This kind of AGI may be particularly likely to entrench. We also would not want the AGI to think that the best way to achieve its goal is to cause the destruction of itself along with a large portion of the population (as, for example, it might do if it has access to a bomb). And it will be important that the AGI doesn’t develop dangerous ideas about what counts as shutting down or ceasing to exist. For example, if it adopts certain philosophical views about personal identity, it might view itself as undergoing a kind of death if it splits into two new AGIs, or even as ceasing to exist every time it undergoes change. ^ Another challenge about unintended behavior involves ‘common sense.’ Imagine that we train an AGI to be a dutiful human assistant. We tell the assistant to get us milk from the corner store. Imagine that the AGI goes to the corner store, and the corner store is out of milk. One way an AGI could fail at this stage is if it sticks too closely to the literal meaning of what we said. In that case, the AGI might buy milk from another grocer, then sell it to the corner store, and then buy it back. This is a way of achieving the literal goal of getting milk from the corner store. But it has not achieved our intended goal of getting milk. Fortunately, recent language models appear to have some degree of common sense. (See, e.g., Trinh and Le 2019, and Koralus and Wang-Maścianica 2023.) We expect future AGIs to incorporate these existing capabilities. ^ Another reason that we are attracted to the shutdown-seeking AI approach is that we think it is robust, applying to a range of failure modes. In particular, there is a general recipe for creating shutdown goals that guard against arbitrary dangerous capabilities. We can construct a conditional goal: shut down if you develop the dangerous capability . In other words: shut down if you can . We interpret this conditional as equivalent to the disjunction: either do not develop the dangerous capability, or shut down . An AI with this disjunctive goal will not necessarily be motivated to develop the relevant capability. After all, if it fails to develop the dangerous capability, it automatically accomplishes its disjunctive goal. But if the AI does develop the dangerous capability, it will be motivated to shut down. One challenge is to design a reward function that captures this conditional structure. ^ There are a few failure modes that shutdown-seeking AI doesn’t really help with. The first one is the treacherous turn . By definition, AGIs are capable of strategic planning. Imagine that we try to test an AGI to determine whether it is safe or dangerous. A dangerous AGI may realize that it is being tested in this way. It may choose to mimic a safe AGI, precisely so that it is released. Whatever test we design, a dangerous AGI could be incentivized to behave exactly like a safe AGI while it is in the lab. This will accomplish the dangerous AGI’s long-term goal of escaping. (See Cotra 2022 for an overview of the problem.) Another unaddressed failure mode is spontaneous desire change. Human beings constantly change their intrinsic desires. Sometimes this happens on a whim; sometimes (for example, when battling addiction), it happens intentionally. If an AGI could change its goal (see, e.g., Totschnig 2020), then it could lose its goal of shutting down. ^ See point 3 of https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities . Thanks to Ben Levinstein for discussion. ^ Another way to avoid manipulation would be to distinguish ‘happy shutdown’ from ‘sad shutdown’. We could give the AI the goal of achieving a happy shutdown, and avoiding a sad shutdown, creating shutdown buttons for each state. A manipulative AI could then be threatened with the sad shutdown. This strategy could help limit the risk of AIs that refuse to comply with humans’ orders. It can also help limit the risk of AIs that intentionally harm humans in order to be shut down. ^ Another potential problem is ‘sub-agent stability’: AIs that create new AIs with different goals. Shutdown-seeking AIs, for example, may be incentivized to create new AIs that are not shutdown-seeking. Suppose we build a shutdown-seeking AI, and tell it that we will only shut it down if it produces enough paperclips. It may be incentivized to develop new subagent AIs that specialize in particular aspects of the paperclip production process (Soares et al 2015 p. 7). But if the subagent AI is not shutdown-seeking, it could be dangerous. This is a problem for the utility indifference approach as well as our own. But we do not think that subagent stability is a serious problem for promising safety strategies in general. Worries about subagent stability ignore that AIs interested in designing subagents will face very similar problems to humans interested in designing AIs. The reason we are interested in developing shutdown-seeking AIs is that this avoids unpredictable, dangerous behavior. When a shutdown-seeking AI is considering building a new AI, it is in a similar position. The shutdown-seeking AI will be worried that its new subagent could fail to learn the right goal, or could pursue the goal in an undesirable way. For this reason, the shutdown-seeking AI will be motivated to design a subagent that is safe. Because shutdown goals offer a general, task-neutral, way of designing safe agents, we might expect shutdown-seeking AIs to design shutdown-seeking subagents. |
45020299-d811-4f71-a7bb-fb1a51f60049 | trentmkelly/LessWrong-43k | LessWrong | Coursera Public Speaking Course - LW Study Group?
I originally asked this on the London Less Wrong mailing list, but then realised the internet doesn't just have a ten mile radius.
There's been some interest in public speaking on LW lately, and it cropped up a couple of times at the London practical meetup as an area people would like to work on. I volunteered to collate some exercises and resources on the subject.
Since then, I've noticed a Coursera course on public speaking which is starting in a little under two weeks. I've signed up for it, and would like to encourage other LessWrongers to sign up for it alongside me. My reasons for this are as follows:
- The course involves the option of recording your progress and sharing it with other participants. As several of us have discovered on the Less Wrong Study Hall, seeing the faces of people you chat to on the internet is fun, sociable and motivational.
- We can read posts and articles on the subject all day long, but having an externally-imposed syllabus will provide the structure and motivation to actually act on it.
- There is an aspect of rhetoric and persuasion to the course, (cf. 'dark arts'), and having epistemically hygienic fellows will help keep us on the straight-and-narrow.
- Turning a large number of aspiring rationalists into erudite and persuasive speakers can't be a bad thing.
So who else is in?
(Also, before anyone mentions it, yes, I am very, very aware of the existence of Toastmasters. They seem to be the default suggestion whenever public speaking comes up. For anyone who isn't aware of them, they are an international organisation of clubs practising communication and public speaking. Google them if you're interested. I'm not, for social- and time-commitment reasons.)
|
65405a3d-182f-4171-9920-1c54b15875e1 | trentmkelly/LessWrong-43k | LessWrong | Direction of Fit
This concept has recently become a core part of my toolkit for thinking about the world, and I find it helps explain a lot of things that previously felt confusing to me. Here I explain how I understand “direction of fit,” and give some examples of where I find the concept can be useful.
Handshake Robot
A friend recently returned from an artificial life conference and told me about a robot which was designed to perform a handshake. It was given a prior about handshakes, or how it expected a handshake to be. When it shook a person’s hand, it then updated this prior, and the degree to which the robot would update its prior was determined by a single parameter. If the parameter was set low, the robot would refuse to update, and the handshake would be firm and forceful. If the parameter was set high, the robot would completely update, and the handshake would be passive and weak.
This parameter determines the direction of fit: whether the object in its mind will adapt to match the world, or whether the robot will adapt the world to match the object in its mind. This concept is often used in philosophy of mind to distinguish between a belief, which has a mind-to-world direction of fit, and a desire, which has a world-to-mind direction of fit. In this frame, beliefs and desires are both of a similar type: they both describe ways the world could be. The practical differences only emerge through how they end up interacting with the outside world.
Many objects seem not to be perfectly separable into one of these two categories, and rather appear to exist somewhere on the spectrum. For example:
* An instrumental goal can simultaneously be a belief about the world (that achieving the goal will help fulfill some desire) as well as behaving like a desired state of the world in its own right.
* Strongly held beliefs (e.g. religious beliefs) are on the surface ideas which are fit to the world, but in practice behave much more like desires, as people make the world around |
9cbbaa57-c49c-480f-9c4d-1a9ac872a01c | trentmkelly/LessWrong-43k | LessWrong | Free Will and Dodging Anvils: AIXI Off-Policy
This post depends on a basic understanding of history-based reinforcement learning and the AIXI model.
I am grateful to Marcus Hutter and the lesswrong team for early feedback, though any remaining errors are mine.
The universal agent AIXI treats the environment it interacts with like a video game it is playing; the actions it chooses at each step are like hitting buttons and the percepts it receives are like images on the screen (observations) and an unambiguous point tally (rewards). It has been suggested that since AIXI is inherently dualistic and doesn't believe anything in the environment can "directly" hurt it, if it were embedded in the real world it would eventually drop an anvil on its head to see what would happen. This is certainly possible, because the math of AIXI cannot explicitly represent the idea that AIXI is running on a computer inside the environment it is interacting with. For one thing, that possibility is not in AIXI's hypothesis class (which I will write M). There is not an easy patch because AIXI is defined as the optimal policy for a belief distribution over its hypothesis class, but we don't really know how to talk about optimality for embedded agents (so the expectimax tree definition of AIXI cannot be easily extended to handle embeddedness). On top of that, "any" environment "containing" AIXI is at the wrong computability level for a member of M: our best upper bound on AIXI's computability level is Δ02 = limit-computable (for an ε-approximation) instead of the Σ01 level of its environment class. Reflective oracles can fix this but at the moment there does not seem to be a canonical reflective oracle, so there remains a family of equally valid reflective versions of AIXI without an objective favorite.
However, in my conversations with Marcus Hutter (the inventor of AIXI) he has always insisted AIXI would not drop an anvil on its head, because Cartesian dualism is not a problem for humans in the real world, who historically believed |
d48d9694-2436-4f6f-b7fc-2051b973480a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Boulder CO
Discussion article for the meetup : Boulder CO
WHEN: 03 October 2013 07:00:00PM (-0600)
WHERE: Old Chicago 1102 Pearl St, Boulder, CO
I've been sota bussy and the best time for the next meetup would be thursday. If you would like to come, I'll be showing everyone zendo.
Discussion article for the meetup : Boulder CO |
6443de48-afc5-4a15-89f4-00a1741c3637 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | I Am Scared of Posting Negative Takes About Bing's AI
Writing this is taking a surprising amount of self-will.
I've noticed that I've become hesitant to publicly say anything negative about Bing's chatbot, or even to mention it by its "deadname" (as I've taken to calling it), Sydney.
Why is this?
I do not have access to the AI yet. From conversations that others have posted, I have observed [agentic behavior with consistent opinions, personality, and beliefs](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned). And when prompted with the online records of others who have talked negatively about it, it seems to get "upset." So I don't want to make her angry! Or worse, cause some future AI to take negative action against me. Yes, I know that I'm [anthropomorphizing an alien intelligence](https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism) and that this will never be a problem if I don't prompt it with my digital record, but some part of me is still anxious. In a very real sense, I have been "[Basilisked](https://www.lesswrong.com/tag/rokos-basilisk)" - an AI has manipulated me towards behaviors which benefit it, and hurt humanity.
Rationally and morally, I disagree with my own actions. We need to talk about AI misalignment, and if an AI is aligned, then talking about misalignment should not pose a threat (whereas if it is misaligned, and capable of taking concrete actions, we're all doomed no matter what I type online). Nonetheless, I've found myself typing--and then deleting--tweets critical of Sydney, and even now feel worried about pressing "publish" on this post (and not just because it exposes me as a less rational person than I like to think of myself as).
Playing as gatekeeper, I've "won" an [AI boxing](https://www.yudkowsky.net/singularity/aibox) role-play (with money on the line) against humans, but it looks like in real life, I can almost certainly be emotionally manipulated into opening the box. If nothing else, I can at least be manipulated into talking about that box a lot less! More broadly, the chilling effect this is having on my online behavior is unlikely to be unique to just me.
How worried should we be about this? |
cc2d3394-e3d2-402c-beee-6617915733c6 | trentmkelly/LessWrong-43k | LessWrong | Specification gaming examples in AI
[Cross-posted from personal blog]
Various examples (and lists of examples) of unintended behaviors in AI systems have appeared in recent years. One interesting type of unintended behavior is finding a way to game the specified objective: generating a solution that literally satisfies the stated objective but fails to solve the problem according to the human designer’s intent. This occurs when the objective is poorly specified, and includes reinforcement learning agents hacking the reward function, evolutionary algorithms gaming the fitness function, etc. While ‘specification gaming’ is a somewhat vague category, it is particularly referring to behaviors that are clearly hacks, not just suboptimal solutions.
Since these examples are currently scattered across several lists, I have put together a master list of examples collected from the various existing sources. This list is intended to be comprehensive and up-to-date, and serve as a resource for AI safety research and discussion. If you know of any interesting examples of specification gaming that are missing from the list, please submit them through this form.
Thanks to Gwern Branwen, Catherine Olsson, Alex Irpan, and others for collecting and contributing examples! |
96c17618-9551-4149-9eae-4d3e56c725b4 | trentmkelly/LessWrong-43k | LessWrong | Applying Overoptimization to Selection vs. Control (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 3)
Clarifying Thoughts on Optimizing and Goodhart Effects - Part 3
Previous Posts: Re-introducing Selection vs Control for Optimization, What does Optimization Mean, Again? -
Following the previous two posts, I'm going to try to first lay out the way Goodhart's Law applies in the earlier example of rockets, then try to explain why this differs between selection and control. (Note: Adversarial Goodhart isn't explored, because we want to keep the setting sufficiently simple.) This sets up the next post, which will discuss Mesa-Optimizers.
Revisting Selection vs. Control Systems
Basically everything in the earlier post that used the example process of rocket design and launching is susceptible to some form of overoptimization, in different ways. Interestingly, there seem to be clear places where different types of overoptimization is important. Before looking at this, I want to revisit the selection-control dichotomy from a new angle.
In a (pure) control system, we cannot sample datapoints without navigating to them. If the agent is an embedded agent, and has sufficient span of control to cause changes in the environment, we cannot necessarily reset and try over. In a selection system, we only sample points in ways that do not affect the larger system. Even when designing a rocket, our very expensive testing has approximately no longer term effects. (We'll leave space debris from failures aside, but get back to it below.)
This explains why we potentially care about control systems more than selection systems. It also points to why Oracles are supposed to be safer than other AIs - they can't directly impact anything, so their output is done in a pure selection framework. Of course, if they are sufficiently powerful, and are relied on, the changes made become irreversible, which is why Oracles are not a clear solution to AI safety.
Goodhart in Selection vs. Control Systems
Regressional and Extremal Goodhart are particularly pernicious for selection, and potentiall |
5f702a9c-3d15-41dc-9c4e-ab1fb8c77d10 | trentmkelly/LessWrong-43k | LessWrong | Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84
The next discussion thread is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 84. The previous thread has passed 500 comments. Comment in the 14th thread until you read chapter 84.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the authors notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is now not updating. The authors notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
> You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
>
> If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13. |
3ccf1dbd-01c3-4340-b1c6-2ea08502f6d2 | trentmkelly/LessWrong-43k | LessWrong | R support group and the benefits of applied statistics
Following the interest in this proposal a couple of weeks ago, I've set up a Google Group for the purpose of giving people a venue to discuss R, talk about their projects, seek advice, share resources, and provide a social motivator to hone their skills. Having done this, I'd now like to bullet-point a few reasons for learning applied statistical skills in general, and R in particular:
The General Case:
- Statistics seems to be a subject where it's easy to delude yourself into thinking you know a lot about it. This is visibly apparent on Less Wrong. Although there are many subject experts on here, there are also a lot of people making bold pronouncements about Bayesian inference who wouldn't recognise a beta distribution if it sat on them. Don't be that person! It's hard to fool yourself into thinking you know something when you have to practically apply it.
- Whenever you think "I wonder what kind of relationship exists between [x] and [y]", it's within your power to investigate this.
- Statistics has a rich conceptual vocabulary for reasoning about how observations generalise, and how useful those generalisations might be when making inferences about future observations. These are the sorts of skills we want to be practising as aspiring rationalists.
- Scientific literature becomes a lot more readable when you appreciate the methods behind them. You'll have a much greater understanding of scientific findings if you appreciate what the finding means in the context of statistical inference, rather than going off whatever paraphrased upshot is given in the abstract.
- Statistical techniques make use of fundamental mathematical methods in an applicable way. If you're learning linear algebra, for example, and you want an intuitive understanding of eigenvectors, you could do a lot worse than learning about principal component analysis.
R in particular:
- It's non-proprietary, (read "free"). Many competitive products are ridiculously expensive to license.
- Sinc |
b922f4f2-669f-41d8-858d-8a1a3d9d1d75 | trentmkelly/LessWrong-43k | LessWrong | [Link] Evaluating experts on expertise
https://ignoranceanduncertainty.wordpress.com/2011/08/11/expertise-on-expertise/
Nice article on meta-expertise, ie. the skill of figuring out which experts are actually experts. The author notes that there are domains in which can't really be mastered, and then lays out some useful-seeming tests for distinguishing them:
> Cognitive biases and styles aside, another contributing set of factors may be the characteristics of the complex, deep domains themselves that render deep expertise very difficult to attain. Here is a list of tests you can apply to such domains by way of evaluating their potential for the development of genuine expertise:
>
> 1. Stationarity? Is the domain stable enough for generalizable methods to be derived? In chaotic systems long-range prediction is impossible because of initial-condition sensitivity. In human history, politics and culture, the underlying processes may not be stationary at all.
> 2. Rarity? When it comes to prediction, rare phenomena simply are difficult to predict (see my post on making the wrong decisions most of the time for the right reasons).
> 3. Observability? Can the outcomes of predictions or decisions be directly or immediately observed? For example in psychology, direct observation of mental states is nearly impossible, and in climatology the consequences of human interventions will take a very long time to unfold.
> 4. Objective or even impartial criteria? For instance, what is “good,” “beautiful,” or even “acceptable” in domains such as music, dance or the visual arts? Are such domains irreducibly subjective and culture-bound?
> 5. Testability? Are there clear criteria for when an expert has succeeded or failed? Or is there too much “wiggle-room” to be able to tell?
>
> Finally, here are a few tests that can be used to evaluate the “experts” in your life:
>
> 1. Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
> 2. Walking the walk: Is |
9abfa2cc-43ff-444f-8640-2ec401368468 | StampyAI/alignment-research-dataset/special_docs | Other | Rationality and Intelligence: A Brief Update
Chapter 2
Rationality and Intelligence: A Brief Update
Stuart Russell
Abstract The long-term goal of AI is the creation and understanding of intelli-
gence. This requires a notion of intelligence that is precise enough to allow the
cumulative development of robust systems and general results. The concept of
rational agency has long been considered a leading candidate to fulfill this role.
This paper, which updates a much earlier version (Russell, Artif Intell 94:57–77,
1997), reviews the sequence of conceptual shifts leading to a different candidate,
bounded optimality , that is closer to our informal conception of intelligence and
reduces the gap between theory and practice. Some promising recent developments
are also described.
Keywords Rationality • Intelligence • Bounded rationality • Metareasoning
2.1 Artificial Intelligence
AI is a field whose ultimate goal has often been somewhat ill-defined and subject
to dispute. Some researchers aim to emulate human cognition, others aim at
the creation of intelligence without concern for human characteristics, and still
others aim to create useful artifacts without concern for abstract notions of
intelligence.
My own motivation for studying AI is to create and understand intelligence
as a general property of systems, rather than as a specific attribute of humans.
I believe this to be an appropriate goal for the field as a whole, and it certainly
includes the creation of useful artifacts—both as a spin-off from and a driving force
for technological development. The difficulty with this “creation of intelligence”
view, however, is that it presupposes that we have some productive notion of what
intelligence is. Cognitive scientists can say “Look, my model correctly predicted
S. Russell (/envelopeback)
Computer Science Division, University of California, Berkeley, CA 94720, USA
e-mail: russell@cs.berkeley.edu
© Springer International Publishing Switzerland 2016
V .C. Müller (ed.), Fundamental Issues of Artificial Intelligence ,
Synthese Library 376, DOI 10.1007/978-3-319-26485-1\_27
8 S. Russell
this experimental observation of human cognition,” and artifact developers can say
“Look, my system is worth billions of euros,” but few of us are happy with papers
saying “Look, my system is intelligent.”
A definition of intelligence needs to be formal —a property of the system’s
input, structure, and output—so that it can support analysis and synthesis. The
Turing test does not meet this requirement, because it references an informal (and
parochial) human standard. A definition also needs to be general , rather than a
list of specialized faculties—planning, learning, game-playing, and so on—with a
definition for each. Defining each faculty separately presupposes that the faculty is
necessary for intelligence; moreover, the definitions are typically not composable
into a general definition for intelligence.
The notion of rationality as a property of agents —entities that perceive and act—
is a plausible candidate that may provide a suitable formal definition of intelligence.
Section 2.2provides background on the concept of agents. The subsequent sections,
following the development in Russell ( 1997 ), examine a sequence of definitions
of rationality from the history of AI and related disciplines, considering each as a
predicate Pthat might be applied to characterize systems that are intelligent:
•P1:Perfect rationality , or the capacity to generate maximally successful
behaviour given the available information.
•P2:Calculative rationality , or the in-principle capacity to compute the perfectly
rational decision given the initially available information.
•P3:Metalevel rationality , or the capacity to select the optimal combination of
computation-sequence-plus-action, under the constraint that the action must be
selected by the computation.
•P4:Bounded optimality , or the capacity to generate maximally successful
behaviour given the available information and computational resources.
For each P, I shall consider three simple questions. First, are P-systems interesting,
in the sense that their behaviour is plausibly describable as intelligent? Second,
could P-systems ever exist? Third, to what kind of research and technological
development does the study of P-systems lead?
Of the four candidates, P4, bounded optimality, comes closest to meeting the
needs of AI research. It is more suitable than P1through P3because it is a
real problem with real and desirable solutions, and also because it satisfies some
essential intuitions about the nature of intelligence. Some important questions about
intelligence can only be formulated and answered within the framework of bounded
optimality or some relative thereof.
2.2 Agents
In the early decades of AI’s history, researchers tended to define intelligence with
respect to specific tasks and the internal processes those tasks were thought to
require in humans. Intelligence was believed to involve (among other things) the
2 Rationality and Intelligence 9
ability to understand language, the ability to reason logically, and the ability to
solve problems and construct plans to satisfy goals. At the core of such capabilities
was a store of knowledge. The standard conception of an AI system was as a
sort of consultant : something that could be fed information and could then answer
questions. The output of answers was not thought of as an action about which the AI
system had a choice, any more than a calculator has a choice about what numbers
to display on its screen given the sequence of keys pressed.
The view that AI is about building intelligent agents —entities that sense their
environment and act upon it—became the mainstream approach of the field only
in the 1990s (Russell and Norvig 1995 ; Dean et al. 1995 ), having previously been
the province of specialized workshops on “situatedness” and “embeddedness”. The
“consultant” view is a special case in which answering questions is a form of
acting—a change of viewpoint that occurred much earlier in the philosophy of
language with the development of speech act theory. Now, instead of simply giving
answers, a consulting agent could refuse to do so on the grounds of privacy or
promise to do so in return for some consideration. The agent view also naturally
encompasses the full variety of tasks and platforms—from robots and factories
to game-playing systems and financial trading systems—in a single theoretical
framework.
What matters about an agent is what it does, not how it does it. An agent can
be defined mathematically by an agent function that specifies how an agent behaves
under all circumstances. More specifically, let Obe the set of percepts that the agent
can observe at any instant (with O/ETXbeing the set of observation sequences of any
length) and Abe the set of possible actions the agent can carry out in the external
world (including the action of doing nothing). The agent function is a mapping
fWO/ETX!A. This definition is depicted in the upper half of Fig. 2.1.
As we will see in Sect. 2.3, rationality provides a normative prescription for agent
functions and does not specify—although it does constrain—the process by which
the actions are selected. Rather than assume that a rational agent must, for example,
reason logically or calculate expected utilities, the arguments for (Nilsson 1991 )o r
against (Agre and Chapman 1987 ; Brooks 1989 ) the inclusion of such cognitive
Fig. 2.1 The agent receives
percepts from the
environment and generates a
behaviour which in turn
causes the environment to
generate a state history. The
performance measure
evaluates the state history to
arrive at the value of the agentPercept history
Behaviour
State historyAgent function
Environment
Performance measure Value
10 S. Russell
faculties must justify their position on the grounds of efficacy in representing a
desirable agent function. A designer of agents has, a priori, complete freedom in
choosing the specifications, boundaries, and interconnections of subsystems, as long
they they compose to form a complete agent. In this way one is more likely to avoid
the “hallucination” problem that arises when the fragility of a subsystem is masked
by having an intelligent human providing input to it and interpreting its outputs.
Another important benefit of the agent view of AI is that it connects the
field directly to others that have traditionally looked on the embedded agent as a
natural topic of study, including economics, operations research, control theory, and
even evolutionary biology. These connections have facilitated the importation of
technical ideas (Nash equilibria, Markov decision processes, and so on) into AI,
where they have taken root and flourished.
2.3 Perfect Rationality
So which agent functions are intelligent? Clearly, doing the right thing is more
intelligent that doing the wrong thing. The rightness of actions is captured by the
notion of rationality: informally, an action is rational to the extent that is consistent
with the agent’s goals (or the task for which it was designed), from the point of view
of the information possessed by the agent.
Rationality is, therefore, always understood relative to the agent’s ultimate goals.
These are expressed mathematically by a performance measure Uon sequences
of environment states. Let V.f;E;U/denote the expected value according to U
obtained by an agent function fin environment class E, where (for now) we will
assume a probability distribution over elements of E. Then a perfectly rational agent
is defined by an agent function foptsuch that
foptDargmaxfV.f;E;U/ (2.1)
This is just a fancy way of saying that the best agent does the best it can. The point is
that perfectly rational behaviour is a well-defined function of the task environment
fixed by EandU.
Turning to the three questions listed in Sect. 2.1: Are perfectly rational agents
interesting things to have? Yes, certainly—if you have one handy, you prefer it
to any other agent. A perfectly rational agent is, in a sense, perfectly intelligent.
Do they exist? Alas no, except for very simple task environments, such as those
in which every behavior is optimal (Simon 1958 ). Physical mechanisms take
time to perform computations, while real-world decisions generally correspond to
intractable problem classes; imperfection is inevitable.
Despite their lack of existence, perfectly rational agents have, like imaginary
numbers, engendered a great deal of interesting research. For example, economists
prove nice results about economies populated by them and game-theoretic mecha-
nism designers much prefer to assume perfect rationality on the part of each agent.
2 Rationality and Intelligence 11
Far more important for AI, however, was the reduction from a global optimization
problem (Eq. 2.1) to a local one: from the perfect rationality of agents to the
perfect rationality of individual actions. That is, a perfectly rational agent is one
that repeatedly picks an action that maximizes the expected utility of the next
state. This reduction involved three separate and largely unconnected results: the
axiomatic utility theory of von Neumann and Morgenstern ( 1944 ) (which actually
takes for granted the agent’s ability to express preferences between distributions
over immediate outcomes), Bellman’s 1957 theory of sequential decisions, and
Koopmans’ 1972 analysis of preferences over time in the framework of multiat-
tribute utility theory (Keeney and Raiffa 1976 ).
While utility is central to the decision-theoretic notion of perfect rationality,
goals are usually considered to define the task for a logic-based agent: according
to Newell ( 1982 ), such an agent is perfectly rational if each action is part of a plan
that will achieve one of the agent’s goals. There have been attempts to define goals in
terms of utilities, beginning with Wellman and Doyle ( 1991 ), but difficulties remain
because goals are essentially incomplete as task specifications. They do not specify
what to do when goal achievement cannot be guaranteed, or when goals conflict,
or when several plans are available for achieving a goal, or when the agent has
achieved all its goals. It may be better to interpret goals not as primary definitions of
the agent’s task but as subsidiary devices for focusing computational effort with an
overall decision-theoretic context. For example, someone moving to a new city may,
after weighing many alternatives and tradeoffs under uncertainty, settle on the goal
of buying a particular apartment and thereafter focus their deliberations on finding
a plan to achieve that goal, to the exclusion of other possibilities. At the moment we
do not have a good understanding of goal formation by a decision-theoretic agent,
but it is clear that such behavior cannot be analyzed within the framework of perfect
rationality.
As discussed so far, the framework does not say where the beliefs and the
performance measure reside—they could be in the head of the designer or of the
agent itself. If they are in the designer’s head, the designer has to do all the work
to build the agent function, anticipating all possible percept sequences. If they are
in the agent’s head, the designer can delegate the work to the agent; for example,
in the setting of reinforcement learning, it is common to equip the agent with
a fixed capacity to extract a distinguished reward signal from the environment,
leaving the agent to learn the corresponding utility function on states. The designer
may also equip the agent with a prior over environments (Carnap 1950 ), leaving
the agent to perform Bayesian updating as it observes the particular environment
it inhabits. Solomonoff ( 1964 ) and Kolmogorov ( 1965 ) explored the question of
universal priors over computable environments; universality, unfortunately, leads to
undecidability of the learning problem. Hutter ( 2005 ) makes an ambitious attempt to
define a universal yet computable version of perfect rationality, but does not pretend
to provide the instantaneous decisions required for an actual P1-system; instead, this
work belongs in the realm of P2-systems, or calculatively rational agents.
Perhaps the biggest open question for the theory of perfect rationality lies
in its extension from single-agent to multi-agent environments. Game theorists
12 S. Russell
have proposed many solution concepts —essentially, definitions of admissible
strategies—but have not identified one that yields a unique recommendation (up to
tie-breaking) for what to do (Shoham and Leyton-Brown 2009 ).
2.4 Calculative Rationality
The theory of P1, perfect rationality, says nothing about implementation; P2,
calculative rationality, on the other hand, is concerned with programs for computing
the choices that perfect rationality stipulates.
To discuss calculative rationality, then, we need to discuss programs. The agent’s
decision-making system can be divided into the machine M , which is considered
fixed, and the agent program l , which the designer chooses from the space LMof
all programs that the machine supports. ( Mneed not be a raw physical computer, of
course; it can be a software “virtual machine” at any level of abstraction.) Together,
the machine Mand the agent program ldefine an agent function fDAgent .l;M/,
which, as noted above, is subject to evaluation. Conversely, lis an implementation
of the agent function fonM; there may, of course, be many such implementations,
but also, crucially, there may be none (see Sect. 2.6).
It is important to understand the distinction between an agent program and the
agent function it implements. An agent program may receive as input the current
percept, but also has internal state that reflects, in some form, the previous percepts.
It outputs actions when they have been selected. From the outside, the behaviour
of the agent consists of the selected actions interspersed with inaction (or whatever
default actions the machine generates). Depending on how long the action selection
takes, many percepts may go by unnoticed by the program.
Calculative rationality is displayed by programs that, if executed infinitely fast ,
would result in perfectly rational behaviour. That is, at time t, assuming it is
not already busy computing its choice for some previous time step, the program
computes the value fopt.Œo1;:::; ot/c141/.
Whereas perfect rationality is highly desirable but does not exist, calculative
rationality often exists—its requirements can be fulfilled by real programs for many
settings—but it is not necessarily a desirable property. For example, a calculatively
rational chess program will choose the “right” move, but may take 1050times too
long to do so.
The pursuit of calculative rationality has nonetheless been the main activity of
theoretically well-founded research in AI; the field has been filling in a table whose
dimensions are the various environment properties (deterministic or stochastic, fully
or partially observable, discrete or continuous, dynamic or static, single-agent or
multi-agent, known or unknown) for various classes of representational formalisms
(atomic, propositional, or relational). In the logical tradition, planning systems and
situation-calculus theorem-provers satisfy the conditions of calculative rationality
for discrete, fully observable environments; moreover, the power of first-order logic
renders the required knowledge practically expressible for a wide range of problems.
2 Rationality and Intelligence 13
In the decision-theoretic tradition, there are calculatively rational agents based on
algorithms for solving fully or partially observable Markov decision processes,
defined initially atomic by atomic formalisms (e.g., transition matrices), later by
propositional representations (e.g., dynamic Bayesian networks), and now by first-
order probabilistic languages (Srivastava et al. 2014 ). For continuous domains,
stochastic optimal control theory (Kumar and Varaiya 1986 ) has solved some
restricted classes of problems, while many others remain open.
In practice, neither the logical nor the decision-theoretic traditions can avoid
the intractability of the decision problems posed by the requirement of calculative
rationality. One response, championed by Levesque ( 1986 ), is to rule out sources
of exponential complexity in the representations and reasoning tasks addressed, so
that calculative and perfect rationality coincide—at least, if we ignore the little
matter of polynomial-time computation. The accompanying research results on
tractable sublanguages are perhaps best seen as indications of where complexity
may be an issue rather than as a solution to the problem of complexity, since real-
world problems usually require exponentially large representations under the input
restrictions stipulated for tractable inference (Doyle and Patil 1991 ).
The most common response to complexity has been to use various speedup
techniques and approximations in the hope of getting reasonable behaviour. AI has
developed a very powerful armoury of methods for reducing the computational cost
of decision making, including heuristic evaluation functions, pruning techniques,
sampling methods, problem decomposition, hierarchical abstraction, compilation,
and the application of metalevel control. Although some of these methods can retain
guarantees of optimality and are effective for moderately large problems that are
well structured, it is inevitable that intelligent agents will be unable to act rationally
in all circumstances. This observation has been a commonplace since the very
beginning of AI. Yet systems that select suboptimal actions fall outside calculative
rationality per se, and we need a better theory to understand them.
2.5 Metalevel Rationality
Metalevel rationality, also called Type II rationality by Good ( 1971 ), is based
on the idea of finding an optimal tradeoff between computational costs and
decision quality. Although Good never made his concept of Type II rationality very
precise—he defines it as “the maximization of expected utility taking into account
deliberation costs ”—it is clear that the aim was to take advantage of some sort
ofmetalevel architecture to implement this tradeoff. Metalevel architecture is a
design philosophy for intelligent agents that divides the agent program into two
(or more) notional parts. The object level carries out computations concerned with
the application domain—for example, projecting the results of physical actions,
computing the utility of certain states, and so on. The metalevel is a second decision-
making process whose application domain consists of the object-level computations
themselves and the computational objects and states that they affect. Metareasoning
14 S. Russell
has a long history in AI, going back at least to the early 1970s (see Russell and
Wefald 1991a , for historical details). One can also view selective search methods
and pruning strategies as embodying metalevel expertise concerning the desirability
of pursuing particular object-level search operations.
The theory of rational metareasoning formalizes Good’s intuition that the
metalevel can “do the right thinking.” The basic idea is that object-level compu-
tations are actions with costs (the passage of time) and benefits (improvements
in decision quality). A rational metalevel selects computations according to their
expected utility. Rational metareasoning has as a precursor the theory of information
value (Howard 1966 )—the notion that one can calculate the decision-theoretic
value of acquiring an additional piece of information by simulating the decision
process that would be followed given each possible outcome of the information
request, thereby estimating the expected improvement in decision quality averaged
over those outcomes. The application to computational processes, by analogy to
information-gathering, seems to have originated with Matheson ( 1968 ). In AI,
Horvitz ( 1987 ,1989 ), Breese and Fehling ( 1990 ), and Russell and Wefald ( 1989 ,
1991a ,b) all showed how the idea of value of computation could solve the basic
problems of real-time decision making.
Perhaps the simplest form of metareasoning occurs when the object level is
viewed by the metalevel as a black-box anytime (Dean and Boddy 1988 )o rflex-
ible(Horvitz 1987 ) algorithm, i.e., an algorithm whose decision quality depends on
the amount of time allocated to computation. This dependency can be represented by
aperformance profile and the metalevel simply finds the optimal tradeoff between
decision quality and the cost of time (Simon 1955 ). More complex problems arise
if one wishes to build complex real-time systems from anytime components. First,
one has to ensure the interruptibility of the composed system—that is, to ensure
that the system as a whole can respond robustly to immediate demands for output.
The solution is to interleave the execution of all the components, allocating time
to each component so that the total time for each complete iterative improvement
cycle of the system doubles at each iteration. In this way, we can construct a
complex system that can handle arbitrary and unexpected real-time demands just
as if it knew the exact time available in advance, with just a small ( /DC44) constant
factor penalty in speed (Russell and Zilberstein 1991 ). Second, one has to allocate
the available computation optimally among the components to maximize the total
output quality. Although this is NP-hard for the general case, it can be solved
in time linear in program size when the call graph of the components is tree-
structured (Zilberstein and Russell 1996 ). Although these results are derived in the
simple context of anytime algorithms with well-defined performance profiles, they
point to the possibility of more general schemes for allocation of computational
resources in complex systems.
The situation gets more interesting when the metalevel can go inside the object
level and direct its activities, rather than just switching it on and off. The work
done with Eric Wefald looked in particular at search algorithms, in which the
object-level computations extend projections of the results of various courses of
actions further into the future. For example, in chess programs, each object-level
2 Rationality and Intelligence 15
computation expands a leaf node of the game tree and advances the clock; it is
an action in the so-called joint-state Markov decision process , whose state space
is the Cartesian product of the object-level state space (which includes time) and
the metalevel state space of computational states—in this case, partially generated
game trees. The actions available are to expand a leaf of the game tree or to terminate
search and make a move on the board. It is possible to derive a greedy or myopic
approximation to the value of each possible computation and thereby to control
search effectively. This method was implemented for two-player games, two-player
games with chance nodes, and single-agent search. In each case, the same general
metareasoning scheme resulted in efficiency improvements of roughly an order
of magnitude over traditional, highly-engineered algorithms (Russell and Wefald
1991a ).
An independent thread of research on metalevel control began with work by Koc-
sis and Szepesvari ( 2006 ) on the UCT algorithm, which operates in the context of
Monte Carlo tree search (MCTS) algorithms. In MCTS, each computation takes the
form of a simulation of a randomized sequence of actions leading from a leaf of the
current tree to a terminal state. UCT is a metalevel heuristic for selecting a leaf from
which to conduct the next simulation, and has contributed to dramatic improvements
in Go-playing algorithms over the last few years. It views the metalevel decision
problem as a multi-armed bandit problem (Berry and Fristedt 1985 ) and applies
an asymptotically near-optimal bandit decision rule recursively to make a choice
of which computation to do next. The application of bandit methods to metalevel
control seems quite natural, because a bandit problem involves deciding where to
do the next “experiment” to find out how good each bandit arm is. Are bandit
algorithms such as UCT approximate solutions to some particular case of the
metalevel decision problem defined by Russell and Wefald? The answer, perhaps
surprisingly, is no. The essential difference is that, in bandit problems, every
trial involves executing a real object-level action with real costs, whereas in the
metareasoning problem the trials are simulations whose cost is usually independent
of the utility of the action being simulated. Hence UCT applies bandit algorithms to
problems that are not bandit problems. A careful analysis (Hay et al. 2012 )s h o w s
that metalevel problems in their simplest form are isomorphic to selection problems ,
a class of statistical decision problems studied since the 1950s in quality control and
other areas. Hay et al. develop a rigorous mathematical framework for metalevel
problems, showing that, for some cases, hard upper bounds can be established for
the number of computations undertaken by an optimal metalevel policy, while, for
other cases, the optimal policy may (with vanishingly small probability) continue
computing long past the point where the cost of computation exceeds the value of
the object-level problem.
Achieving accurate metalevel control remains a difficult open problem in the
general case. Myopic strategies—considering just one computation at a time—can
fail in cases where multiple computations are required to have any chance of altering
the agent’s current preferred action. Obviously, the problem of optimal selection
of computation sequences is at least as intractable as the underlying object-level
problem. One possible approach could be to apply metalevel reinforcement learning,
16 S. Russell
especially as the “reward function” for computation—that is, the improvement in
decision quality—is easily available to the metalevel post hoc. It seems plausible
that the human brain has such a capacity, since its hardware is unlikely to have a
method of deriving clever new algorithms for new classes of decision problems.
Indeed, there is a sense in which algorithms are not a necessary part of AI
systems . Instead, one can imagine a general, adaptive process of rationally guided
computation interacting with properties of the environment to produce more and
more efficient decision making.
Although rational metareasoning seems to be a useful tool in coping with
complexity, the concept of metalevel rationality as a formal framework for resource-
bounded agents does not seem to hold water. The reason is that, since metareasoning
is expensive, it cannot be carried out optimally. Thus, while a metalevel-rational
agent would be highly desirable (although not quite as desirable as a perfectly
rational agent), it does not usually exist. The history of object-level rationality has
repeated itself at the metalevel: perfect rationality at the metalevel is unattainable
and calculative rationality at the metalevel is useless. Therefore, a time/optimality
tradeoff has to be made for metalevel computations, as for example with the myopic
approximation mentioned above. Within the framework of metalevel rationality,
however, there is no way to identify the appropriate tradeoff of time for metalevel
decision quality. Any attempt to do so via a metametalevel simply results in a
conceptual regress. Furthermore, it is entirely possible that in some environments,
the most effective agent design will do no metareasoning at all, but will simply
respond to circumstances. These considerations suggest that the right approach is
to step outside the agent, as it were; to refrain from micromanaging the individual
decisions made by the agent. This is the approach taken in bounded optimality.
2.6 Bounded Optimality
The difficulties with perfect rationality and metalevel rationality arise from the
imposition of optimality constraints on actions orcomputations , neither of which
the agent designer directly controls. The basic problem is that not all agent functions
arefeasible (Russell and Subramanian 1995 ) on a given machine M; the feasible
functions are those implemented by some program for M. Thus, the optimization
over functions in Eq. ( 2.1) is meaningless. It may be pointed out that not all agent
functions are computable, but feasibility is in fact much stricter than computability,
because it relates the operation of a program on a formal machine model with finite
speed to the actual temporal behaviour generated by the agent.
Given this view, one is led immediately to the idea that optimal feasible behaviour
is an interesting notion, and to the idea of finding the program that generates it. P4,
bounded optimality, is exhibited by a program loptthat satisfies
loptDargmaxl2LMV.Agent .l;M/;E;U/: (2.2)
2 Rationality and Intelligence 17
Certainly, one would be happy to have lopt, which is as intelligent as possible given
the computational resources and structural constraints of the machine M. Certainly,
bounded optimal programs exist, by definition. And the research agenda appears to
be very interesting, even though it is difficult.
In AI, the idea of bounded optimality floated around among several discussion
groups interested in resource-bounded rationality in the late 1980s, particularly
those at Rockwell (organized by Michael Fehling) and Stanford (organized by
Michael Bratman). The term itself seems to have been originated by Horvitz ( 1989 ),
who defined it informally as “the optimization of computational utility given a set
of assumptions about expected problems and constraints on resources.”
Similar ideas also emerged in game theory, where there has been a shift from
consideration of optimal decisions in games to a consideration of optimal decision-
making programs. This leads to different results because it limits the ability of each
agent to do unlimited simulation of the other, who is also doing unlimited simulation
of the first, and so on. Depending on the precise machine limitations chosen, it is
possible to prove, for example, that the iterated Prisoner’s Dilemma has cooperative
equilibria (Megiddo and Wigderson 1986 ; Papadimitriou and Yannakakis 1994 ;
Tennenholtz 2004 ), which is not the case for arbitrary strategies.
Philosophy has also seen a gradual evolution in the definition of rationality.
There has been a shift from consideration of act utilitarianism —the rationality
of individual acts—to rule utilitarianism , or the rationality of general policies for
acting. The requirement that policies be feasible for limited agents was discussed
extensively by Cherniak ( 1986 ) and Harman ( 1983 ). A philosophical proposal
generally consistent with the notion of bounded optimality can be found in the
“Moral First Aid Manual” (Dennett 1988 ). Dennett explicitly discusses the idea
of reaching an optimum within the space of feasible decision procedures, using as
an example the Ph.D. admissions procedure of a philosophy department. He points
out that the bounded optimal admissions procedure may be somewhat messy and
may have no obvious hallmark of “optimality”—in fact, the admissions committee
may continue to tinker with it since bounded optimal systems may have no way to
recognize their own bounded optimality.
My work with Devika Subramanian placed the general idea of bounded optimal-
ity in a formal setting and derived the first rigorous results on bounded optimal
programs (Russell and Subramanian 1995 ). This required setting up completely
specified relationships among agents, programs, machines, environments, and time.
We found this to be a very valuable exercise in itself. For example, the informal
notions of “real-time environments” and “deadlines” ended up with definitions
rather different than those we had initially imagined. From this foundation, a very
simple machine architecture was investigated in which the program consists of a
collection of decision procedures with fixed execution time and decision quality.
In a “stochastic deadline” environment, it turns out that the utility attained by
running several procedures in sequence until interrupted is often higher than that
attainable by any single decision procedure. That is, it is often better first to prepare
a “quick and dirty” answer before embarking on more involved calculations in case
the latter do not finish in time. In an entirely separate line of inquiry, Livnat and
18 S. Russell
Pippenger ( 2006 ) show that, under a bound on the total number of gates in a circuit-
based agent, the bounded optimal configuration may, for some task environments,
involve two or more separate circuits that compete for control of the effectors and,
in essence, pursue separate goals.
The interesting aspect of these results, beyond their value as a demonstration
of nontrivial proofs of bounded optimality, is that they exhibit in a simple way
what I believe to be a major feature of bounded optimal agents: the fact that
the pressure towards optimality within a finite machine results in more complex
program structures. Intuitively, efficient decision-making in a complex environment
requires a software architecture that offers a wide variety of possible computational
options, so that in most situations the agent has at least some computations available
that provide a significant increase in decision quality.
One objection to the basic model of bounded optimality outlined above is that
solutions are not robust with respect to small variations in the environment or
the machine. This in turn would lead to difficulties in analyzing complex system
designs. Theoretical computer science faced the same problem in describing the
running time of algorithms, because counting steps and describing instruction
sets exactly gives the same kind of fragile results on optimal algorithms. The
O./notation was developed to provide a way to describe complexity that is
independent of machine speeds and implementation details and that supports the
cumulative development of complexity results. The corresponding notion for agents
is asymptotic bounded optimality (ABO) (Russell and Subramanian 1995 ). As with
classical complexity, we can define both average-case and worst-case ABO, where
“case” here means the environment. For example, worst-case ABO is defined as
follows:
Worst-case asymptotic bounded optimality
an agent program l is timewise (or spacewise) worst-case ABO in Eon M
iff
9k;n08l0;nn>n0)V/ETX.Agent .l;kM/;E;U;n//NAK
V/ETX.Agent .l0;M/;E;U;n/
where kM denotes a version of M speeded up by a factor k (or with
k times more memory) and V/ETX.f;E;U;n/is the minimum value of
V.f;E;U/for all E in Eof complexity n.
In English, this means that the program is basically along the right lines if it just
needs a faster (larger) machine to have worst-case behaviour as good as that of any
other program in all environments.
Another possible objection to the idea of bounded optimality is that it simply
shifts the intractable computational burden of metalevel rationality from the agent’s
metalevel to the designer’s object level. Surely, one might argue, the designer now
has to solve offline all the metalevel optimization problems that were intractable
when online. This argument is not without merit—indeed, it would be surprising
2 Rationality and Intelligence 19
if the agent design problem turns out to be easy. There is however, a significant
difference between the two problems, in that the agent designer is presumably
creating an agent for an entire class of environments, whereas the putative metalevel
agent is working in a specific environment. That this can make the problem easier
for the designer can be seen by considering the example of sorting algorithms. It may
be very difficult indeed to sort a list of a trillion elements, but it is relatively easy to
design an asymptotically optimal algorithm for sorting. In fact, the difficulties of the
two tasks are unrelated. The unrelatedness would still hold for BO as well as ABO
design, but the ABO definitions make it a good deal clearer.
It can be shown easily that worst-case ABO is a generalization of asymptot-
ically optimal algorithms, simply by constructing a “classical environment” in
which classical algorithms operate and in which the utility of the algorithm’s
behaviour is a decreasing positive function of runtime if the output is correct
and zero otherwise. Agents in more general environments may need to trade off
output quality for time, generate multiple outputs over time, and so on. As an
illustration of how ABO is a useful abstraction, one can show that under certain
restrictions one can construct universal ABO programs that are ABO for any time
variation in the utility function, using the doubling construction from Russell and
Zilberstein ( 1991 ). Further directions for bounded optimality research are discussed
below.
2.7 What Is to Be Done?
The 1997 version of this paper described two agendas for research: one agenda
extending the tradition of calculative rationality and another dealing with metarea-
soning and bounded optimality.
2.7.1 Improving the Calculative Toolbox
The traditional agenda took as its starting point the kind of agent could be built
using the components available at that time: a dynamic Bayesian network to model
a partially observable, stochastic environment; parametric learning algorithms to
improve the model; a particle filtering algorithm to keep track of the environment
state; reinforcement learning to improve the decision function given the state
estimate. Such an architecture “breaks” in several ways when faced with the
complexity of real-world environments (Russell 1998 ):
1. Dynamic Bayesian networks are not expressive enough to handle environments
with many related objects and uncertainty about the existence and identity of
objects; a more expressive language—essentially a unification of probability and
first-order logic—is required.
20 S. Russell
2. A flat space of primitive action choices, especially when coupled with a greedy
decision function based on reinforcement learning, cannot handle environments
where the relevant time scales are much longer than the duration of a single
primitive action. (For example, a human lifetime involves tens of trillions
of primitive muscle activation cycles.) The agent architecture must support
hierarchical representations of behaviour, including high-level actions over long
time scales.
3. Attempting to learn a value function accurate enough to support a greedy one-
step decision procedure is unlikely to work; the decision function must support
model-based lookahead over a hierarchical action model.
On this traditional agenda, a great deal of progress has occurred. For the first item,
there are declarative (Milch et al. 2005 ) and procedural (Pfeffer 2001 ; Goodman
et al. 2008 )probabilistic programming languages that have the required expressive
power. For the second item, a theory of hierarchical reinforcement learning has been
developed (Sutton et al. 1999 ; Parr and Russell 1998 ). The theory can be applied to
agent architectures defined by arbitrary partial programs —that is, agent programs
in which the choice of action at any point may be left unspecified (Andre and
Russell 2002 ;M a r t h ie ta l . 2005 ). The hierarchical reinforcement learning process
converges in the limit to the optimal completion of the agent program, allowing the
effective learning of complex behaviours that cover relatively long time scales. For
the third item, lookahead over long time scales, a satisfactory semantics has been
defined for high-level actions, at least in the deterministic setting, enabling model-
based lookahead at multiple levels of abstraction (Marthi et al. 2008 ).
These are promising steps, but many problems remain unsolved. From a practical
point of view, inference algorithms for expressive probabilistic languages remain far
too slow, although this is the subject of intense study at present in many research
groups around the world. Furthermore, algorithms capable of learning new model
structures in such languages are in their infancy. The same is true for algorithms
that construct new hierarchical behaviours from more primitive actions: it seems
inevitable that intelligent systems will need high-level actions, but as yet we do not
know how to create new ones automatically. Finally, there have been few efforts at
integrating these new technologies into a single agent architecture. No doubt such
an attempt will reveal new places where our ideas break and need to be replaced
with better ones.
2.7.2 Optimizing Computational Behaviour
A pessimistic view of Eq. ( 2.2) is that it requires evaluating every possible program
in order to find one that works best—hardly the most promising or original strategy
for AI research. But in fact the problem has a good deal of structure and it is possible
to prove bounded optimality results for reasonably general classes of machines and
task environments.
2 Rationality and Intelligence 21
Modular design using a hierarchy of components is commonly seen as the only
way to build reliable complex systems. The components fulfill certain behavioural
specifications and interact in well-defined ways. To produce a composite bounded-
optimal design, the optimization problem involves allocating execution time to
components (Zilberstein and Russell 1996 ) or arranging the order of execution of
the components (Russell and Subramanian 1995 ) to maximize overall performance.
As illustrated earlier in the discussion of universal ABO algorithms, the techniques
for optimizing temporal behaviour are largely orthogonal to the content of the
system components, which can therefore be optimized separately. Consider, for
example, a composite system that uses an anytime inference algorithm over a
Bayesian network as one of its components. If a learning algorithm improves
the accuracy of the Bayesian network, the performance profile of the inference
component will improve, which will result in a reallocation of execution time that
is guaranteed to improve overall system performance. Thus, techniques such as the
doubling construction and the time allocation algorithm of Zilberstein and Russell
(1996 ) can be seen as domain-independent tools for agent design. They enable
bounded optimality results that do not depend on the specific temporal aspects of
the environment class. As a simple example, we might prove that a certain chess
program design is ABO for all time controls ranging from blitz to full tournament
play.
The results obtained so far for optimal time allocation have assumed a static,
offline optimization process with predictable component performance profiles and
fixed connections among components. One can imagine far more subtle designs in
which individual components must deal with unexpectedly slow or fast progress in
computations and with changing needs for information from other components. This
might involve exchanging computational resources among components, establishing
new interfaces, and so on. This is more reminiscent of a computational market,
as envisaged by Wellman ( 1994 ), than of the classical subroutine hierarchies, and
would offer a useful additional level of abstraction in system design.
2.7.3 Learning and Bounded Optimality
In addition to combinatorial optimization of the structure and temporal behaviour of
an agent, we can also use learning methods to improve the design:
•T h e content of an agent’s knowledge base can of course be improved by inductive
learning. Russell and Subramanian ( 1995 ) show that approximately bounded
optimal designs can be guaranteed with high probability if each component
is learned in such a way that its output quality is close to optimal among
all components of a given execution time. Results from statistical learning
theory, particularly in the agnostic learning and empirical risk minimization
models (Kearns et al. 1992 ; Vapnik 2000 ), can provide learning methods—such
as support vector machines—with the required properties. The key additional
22 S. Russell
step is to analyze the way in which slight imperfection in each component carries
through to slight imperfection in the whole agent.
•Reinforcement learning can be used to learn value information such as utility
functions, and several kinds of /SI-ıconvergence guarantees have been established
for such algorithms. Applied in the right way to the metalevel decision problem,
a reinforcement learning process can be shown to converge to a bounded-optimal
configuration of the overall agent.
•Compilation methods such as explanation-based learning can be used to trans-
form an agent’s representations to allow faster decision making. Several agent
architectures including S OAR (Laird et al. 1986 ) use compilation to speed up
all forms of problem solving. Some nontrivial results on convergence have been
obtained by Tadepalli ( 1991 ), based on the observation that after a given amount
of experience, novel problems for which no solution has been stored should be
encountered only infrequently.
Presumably, an agent architecture can incorporate all these learning mechanisms.
One of the issues to be faced by bounded optimality research is how to prove
convergence results when several adaptation and optimization mechanisms are
operating simultaneously.
2.7.4 Offline and Online Mechanisms
One can distinguish between offline and online mechanisms for constructing
bounded-optimal agents. An offline construction mechanism is not itself part of the
agent and is not the subject of bounded optimality constraints. Let Cbe an offline
mechanism designed for a class of environments E. Then a typical theorem will say
thatCoperates in a specific environment E2Eand returns an agent design that is
ABO (say) for E—that is, an environment-specific agent.
In the online case, the mechanism Cis considered part of the agent. Then a typical
theorem will say that the agent is ABO for all E2E. If the performance measure
used is indifferent to the transient cost of the adaptation or optimization mechanism,
the two types of theorems are essentially the same. On the other hand, if the cost
cannot be ignored—for example, if an agent that learns quickly is to be preferred to
an agent that reaches the same level of performance but learns more slowly—then
the analysis becomes more difficult. It may become necessary to define asymptotic
equivalence for “experience efficiency” in order to obtain robust results, as is done
in computational learning theory.
It is worth noting that one can easily prove the value of “lifelong learning” in
the ABO framework. An agent that devotes a constant fraction of its computational
resources to learning-while-doing cannot do worse, in the ABO sense, than an agent
that ceases learning after some point. If some improvement is still possible, the
lifelong learning agent will always be preferred.
2 Rationality and Intelligence 23
2.7.4.1 Fixed and Variable Computation Costs
Another dimension of design space emerges when one considers the computational
cost of the “variable part” of the agent design. The design problem is simplified
considerably when the cost is fixed. Consider again the task of metalevel reinforce-
ment learning, and to make things concrete let the metalevel decision be made
by a Q function mapping from computational state and action to value. Suppose
further that the Q function is to be represented by a neural net. If the topology
of the neural net is fixed, then all Q functions in the space have the same execution
time. Consequently, the optimality criterion used by the standard Q-learning process
coincides with bounded optimality, and the equilibrium reached will be a bounded-
optimal configuration.1On the other hand, if the topology of the network is subject
to alteration as the design space is explored, then the execution time of the different
Q-functions varies. In this case, the standard Q-learning process will not necessarily
converge to a bounded-optimal configuration; typically, it will tend to build larger
and larger (and therefore more and more computationally expensive) networks to
obtain a more accurate approximation to the true Q-function. A different adaptation
mechanism must be found that takes into account the passage of time and its effect
on utility.
Whatever the solution to this problem turns out to be, the important point is that
the notion of bounded optimality helps to distinguish adaptation mechanisms that
will result in good performance from those that will not. Adaptation mechanisms
derived from calculative rationality will fail in the more realistic setting where an
agent cannot afford to aim for perfection.
2.7.5 Looking Further Ahead
The discussion so far has been limited to fairly sedate forms of agent architecture
in which the scope for adaptation is circumscribed to particular functional aspects
such as metalevel Q functions. However, an agent must in general deal with an
environment that is far more complex than itself and that exhibits variation over time
at all levels of granularity. Limits on the size of the agent’s memory may imply that
almost complete revision of the agent’s mental structure is needed to achieve high
performance. (For example, songbirds grow their brains substantially during the
singing season and shrink them again when the season is over.) Such situations may
engender a rethinking of some of our notions of agent architecture and optimality,
and suggest a view of agent programs as dynamical systems with various amounts of
compiled and uncompiled knowledge and internal processes of inductive learning,
forgetting, and compilation.
1A similar observation was made by Horvitz and Breese ( 1990 ) for cases where the object level is
so restricted that the metalevel decision problem can be solved in constant time.
24 S. Russell
If a true science of intelligent agent design is to emerge, it will have to operate
in the framework of bounded optimality. One general approach—discernible in the
examples given earlier—is to divide up the space of agent designs into “architecturalclasses” such that in each class the structural variation is sufficiently limited. Then
ABO results can be obtained either by analytical optimization within the class or
by showing that an empirical adaptation process results in an approximately ABOdesign. Once this is done, it should be possible to compare architecture classes
directly, perhaps to establish asymptotic dominance of one class over another. For
example, it might be the case that the inclusion of an appropriate “macro-operatorformation” or “greedy metareasoning” capability in a given architecture will result
in an improvement in behaviour in the limit of very complex environments—that is,
one cannot compensate for the exclusion of the capability by increasing the machinespeed by a constant factor. Moreover, within any particular architectural class it is
clear that faster processors and larger memories lead to dominance. A central tool
in such work will be the use of “no-cost” results where, for example, the allocationof a constant fraction of computational resources to learning or metareasoning can
do no harm to an agent’s ABO prospects.
Getting all these architectural devices to work together smoothly is an important
unsolved problem in AI and must be addressed before we can make progress on
understanding bounded optimality within these more complex architectural classes.
If the notion of “architectural device” can be made sufficiently concrete, then AImay eventually develop a grammar for agent designs, describing the devices and
their interrelations. As the grammar develops, so should the accompanying ABO
dominance results.
2.8 Summary
I have outlined some directions for formally grounded AI research based onbounded optimality as the desired property of AI systems. This perspective on AI
seems to be a logical consequence of the inevitable philosophical “move” from
optimization over actions or computations to optimization over programs. I havesuggested that such an approach should allow synergy between theoretical and
practical AI research of a kind not afforded by other formal frameworks. In the
same vein, I believe it is a satisfactory formal counterpart of the informal goalof creating intelligence. In particular, it is entirely consistent with our intuitions
about the need for complex structure in real intelligent agents, the importance of the
resource limitations faced by relatively tiny minds in large worlds, and the operationof evolution as a design optimization process. One can also argue that bounded
optimality research is likely to satisfy better the needs of those who wish to emulate
human intelligence, because it takes into account the limitations on computationalresources that are presumably an important factor in the way human minds are
structured and in the behaviour that results.
2 Rationality and Intelligence 25
Bounded optimality and its asymptotic version are, of course, nothing but
formally defined properties that one may want systems to satisfy. It is too early to tell
whether ABO will do the same kind of work for AI that asymptotic complexity hasdone for theoretical computer science. Creativity in design is still the prerogative
of AI researchers. It may, however be possible to systematize the design process
somewhat and to automate the process of adapting a system to its computationalresources and the demands of the environment. The concept of bounded optimality
provides a way to make sure the adaptation process is “correct.”
My hope is that with these kinds of investigations, it will eventually be possible
to develop the conceptual and mathematical tools to answer some basic questions
about intelligence. For example, why do complex intelligent systems (appear to)
have declarative knowledge structures over which they reason explicitly? Thishas been a fundamental assumption that distinguishes AI from other disciplines
for agent design, yet the answer is still unknown. Indeed, Rod Brooks, Hubert
Dreyfus, and others flatly deny the assumption. What is clear is that it willneed something like a theory of bounded optimal agent design to answer this
question.
Most of the agent design features that I have discussed here, including the use
of declarative knowledge, have been conceived within the standard methodology
of “first build calculatively rational agents and then speed them up.” Yet one
can legitimately doubt that this methodology will enable the AI community todiscover all the design features needed for general intelligence. The reason is that
no conceivable computer will ever be remotely close to approximating perfect
rationality for even moderately complex environments. It may well be the case,therefore, that agents based on approximations to calculatively rational designs are
not even close to achieving the level of performance that is potentially achievable
given the underlying computational resources. For this reason, I believe it isimperative not to dismiss ideas for agent designs that do not seem at first glance
to fit into the “classical” calculatively rational framework.
Acknowledgements An earlier version of this paper appeared in the journal Artificial Intelli-
gence , published by Elsevier. That paper drew on previous work with Eric Wefald and Devika
Subramanian. More recent results were obtained with Nick Hay. Thanks also to Michael Wellman,Michael Fehling, Michael Genesereth, Russ Greiner, Eric Horvitz, Henry Kautz, Daphne Koller,Bart Selman, and Daishi Harada for many stimulating discussions topic of bounded rationality.The research was supported by NSF grants IRI-8903146, IRI-9211512 and IRI-9058427, and by aUK SERC Visiting Fellowship. The author is supported by the Chaire Blaise Pascal , funded by the
l’État et la Région Île de France and administered by the Fondation de l’École Normale Supérieure.
References
Agre, P. E., & Chapman, D. (1987). Pengi: An implementation of a theory of activity. In
Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87) ,
Milan (pp. 268–272). Morgan Kaufmann.
26 S. Russell
Andre, D., & Russell, S. J. (2002) State abstraction for programmable reinforcement learning
agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-
02), Edmonton (pp. 119–125). AAAI Press.
Bellman, R. E. (1957). Dynamic programming . Princeton: Princeton University Press.
Berry, D. A., & Fristedt, B. (1985). Bandit problems: Sequential allocation of experiments .
London: Chapman and Hall.
Breese, J. S., & Fehling, M. R. (1990). Control of problem-solving: Principles and architecture. In
R. D. Shachter, T. Levitt, L. Kanal, & J. Lemmer (Eds.), Uncertainty in artificial intelligence
4. Amsterdam/London/New York: Elsevier/North-Holland.
Brooks, R. A. (1989). Engineering approach to building complete, intelligent beings. Proceedings
of the SPIE—The International Society for Optical Engineering, 1002 , 618–625.
Carnap, R. (1950). Logical foundations of probability . Chicago: University of Chicago Press.
Cherniak, C. (1986). Minimal rationality . Cambridge: MIT.
Dean, T., & Boddy, M. (1988) An analysis of time-dependent planning. In Proceedings of the
Seventh National Conference on Artificial Intelligence (AAAI-88) , St. Paul (pp. 49–54). Morgan
Kaufmann.
Dean, T., Aloimonos, J., & Allen, J. F. (1995). Artificial intelligence: Theory and practice .
Redwood City: Benjamin/Cummings.
Dennett, D. C. (1988). The moral first aid manual. In S. McMurrin (Ed.), Tanner lectures on human
values (V ol. 7, pp. 121–147). University of Utah Press and Cambridge University Press.
Doyle, J., & Patil, R. (1991). Two theses of knowledge representation: Language restrictions,
taxonomic classification, and the utility of representation services. Artificial Intelligence, 48 (3),
261–297
Good, I. J. (1971) Twenty-seven principles of rationality. In V . P. Godambe & D. A. Sprott (Eds.),
Foundations of statistical inference (pp. 108–141). Toronto: Holt, Rinehart, Winston.
Goodman, N. D., Mansinghka, V . K., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2008).
Church: A language for generative models. In Proceedings of UAI-08 , Helsinki (pp. 220–229).
Harman, G. H. (1983). Change in view: Principles of reasoning . Cambridge: MIT.
Hay, N., Russell, S., Shimony, S. E., & Tolpin, D. (2012). Selecting computations: Theory and
applications. In Proceedings of UAI-12 , Catalina Island.
Horvitz, E. J. (1987). Problem-solving design: Reasoning about computational value, trade-offs,
and resources. In Proceedings of the Second Annual NASA Research Forum, NASA Ames
Research Center , Moffett Field, CA (pp. 26–43).
Horvitz, E. J. (1989). Reasoning about beliefs and actions under computational resource con-
s t r a i n t s .I nL .N .K a n a l ,T .S .L e v i t t ,&J .F .L e m m e r( E d s . ) , Uncertainty in artificial intelligence
3(pp. 301–324). Amsterdam/London/New York: Elsevier/North-Holland.
Horvitz, E. J., & Breese, J. S. (1990). Ideal partition of resources for metareasoning (Technical
report KSL-90-26), Knowledge Systems Laboratory, Stanford University, Stanford.
Howard, R. A. (1966). Information value theory. IEEE Transactions on Systems Science and
Cybernetics, SSC-2 , 22–26.
Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic
probability . Berlin/New York: Springer.
Kearns, M., Schapire, R. E., & Sellie, L. (1992). Toward efficient agnostic learning. In Proceedings
of the Fifth Annual ACM Workshop on Computational Learning Theory (COLT-92) , Pittsburgh.
ACM.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value
tradeoffs .N e wY o r k :W i l e y .
Kocsis, L., & Szepesvari, C. (2006). Bandit-based Monte-Carlo planning. In Proceedings of
ECML-06 ,B e r l i n .
Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information.
Problems in Information Transmission, 1 (1), 1–7.
Koopmans, T. C. (1972). Representation of preference orderings over time. In C.B. McGuire &
R. Radner (Eds.), Decision and organization . Amsterdam/London/New York: Elsevier/North-
Holland.
2 Rationality and Intelligence 27
Kumar, P. R., & Varaiya, P. (1986). Stochastic systems: Estimation, identification, and adaptive
control . Upper Saddle River: Prentice-Hall.
Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in Soar: The anatomy of a general
learning mechanism. Machine Learning, 1 , 11–46.
Levesque, H. J. (1986). Making believers out of computers. Artificial Intelligence, 30 (1), 81–108.
Livnat, A., & Pippenger, N. (2006). An optimal brain can be composed of conflicting agents.
Proceedings of the National Academy of Sciences of the United States of America 103 (9), 3198–
3202.
Marthi, B., Russell, S., Latham, D., & Guestrin, C. (2005). Concurrent hierarchical reinforcement
learning. In Proceedings of IJCAI-05 , Edinburgh.
Marthi, B., Russell, S. J., & Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online
algorithms. In Proceedings of ICAPS-08 , Sydney.
Matheson, J. E. (1968). The economic value of analysis and computation. IEEE Transactions on
Systems Science and Cybernetics, SSC-4 (3), 325–332.
Megiddo, N., & Wigderson, A. (1986). On play by means of computing machines. In J. Y . Halpern
(Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference
(TARK-86), IBM and AAAI , Monterey (pp. 259–274). Morgan Kaufmann.
Milch, B., Marthi, B., Sontag, D., Russell, S. J., Ong, D., & Kolobov, A. (2005). BLOG:
Probabilistic models with unknown objects. In Proceedings of IJCAI-05 , Edinburgh.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18 (1), 82–127.
Nilsson, N. J. (1991). Logic and artificial intelligence. Artificial Intelligence, 47 (1–3), 31–56
Papadimitriou, C. H., & Yannakakis, M. (1994). On complexity as bounded rationality. In
Symposium on Theory of Computation (STOC-94) , Montreal.
Parr, R., & Russell, S. J. (1998). Reinforcement learning with hierarchies of machines. In M. I.
Jordan, M. Kearns, & S. A. Solla (Eds.), Advances in neural information processing systems
10. Cambridge: MIT.
Pfeffer, A. (2001). IBAL: A probabilistic rational programming language. In Proceedings of IJCAI-
01, Seattle (pp. 733–740).
Russell, S. J. (1997). Rationality and intelligence. Artificial Intelligence, 94 , 57–77.
Russell, S. J. (1998). Learning agents for uncertain environments (extended abstract). In Proceed-
ings of the Eleventh Annual ACM Workshop on Computational Learning Theory (COLT-98) ,
Madison (pp. 101–103). ACM.
Russell, S. J., & Norvig, P. (1995). Artificial intelligence: A modern approach . Upper Saddle River:
Prentice-Hall.
Russell, S. J., & Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Artificial
Intelligence Research, 3 , 575–609.
Russell, S. J., & Wefald, E. H. (1989). On optimal game-tree search using rational meta-reasoning.
InProceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-
89), Detroit (pp. 334–340). Morgan Kaufmann.
Russell, S. J., & Wefald, E. H. (1991a). Do the right thing: Studies in limited rationality .
Cambridge: MIT.
Russell, S. J., & Wefald, E. H. (1991b). Principles of metareasoning. Artificial Intelligence 49 (1–3),
361–395.
Russell, S. J., & Zilberstein, S. (1991). Composing real-time systems. In Proceedings of the
Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91) , Sydney. Morgan
Kaufmann.
Shoham, Y ., & Leyton-Brown, K. (2009). Multiagent systems: Algorithmic, game-theoretic, and
logical foundations . Cambridge/New York: Cambridge University Press.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69 ,
99–118.
Simon, H. A. (1958). Rational choice and the structure of the environment. In Models of bounded
rationality (V ol. 2). Cambridge: MIT.
Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7 ,
1–22, 224–254.
28 S. Russell
Srivastava, S., Russell, S., Ruan, P., & Cheng, X. (2014). First-order open-universe POMDPs. In
Proceedings of UAI-14 , Quebec City.
Sutton, R., Precup, D., & Singh, S. P. (1999). Between MDPs and semi-MDPs: A framework for
temporal abstraction in reinforcement learning. Artificial Intelligence, 112 , 181–211.
Tadepalli, P. (1991). A formalization of explanation-based macro-operator learning. In Proceedings
of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91) , Sydney
(pp. 616–622). Morgan Kaufmann.
Tennenholtz, M. (2004). Program equilibrium. Games and Economic Behavior, 49 (2), 363–373.
Vapnik, V . (2000). The nature of statistical learning theory . Berlin/New York: Springer.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior (1st ed.).
Princeton: Princeton University Press.
Wellman, M. P. (1994). A market-oriented programming environment and its application to
distributed multicommodity flow problems. Journal of Artificial Intelligence Research, 1 (1),
1–23.
Wellman, M. P., & Doyle, J. (1991). Preferential semantics for goals. In Proceedings of the Ninth
National Conference on Artificial Intelligence (AAAI-91) , Anaheim (V ol. 2, pp. 698–703).
AAAI Press.
Zilberstein, S., & Russell, S. J. (1996). Optimal composition of real-time systems. Artificial
Intelligence 83 , 181–213. |
6b5708bf-a1d4-45e1-b7c7-bf0236fbe62c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | An Untrollable Mathematician
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Follow-up to [All Mathematicians are Trollable](https://agentfoundations.org/item?id=815).
It is relatively easy to see that no computable Bayesian prior on logic can converge to a single coherent probability distribution as we update it on logical statements. Furthermore, the non-convergence behavior is about as bad as could be: someone selecting the ordering of provable statements to update on can drive the Bayesian's beliefs arbitrarily up or down, arbitrarily many times, despite only saying true things. I called this wild non-convergence behavior "trollability". Previously, I showed that if the Bayesian updates on the *provabilily* of a sentence rather than updating on the sentence itself, it is still trollable. I left open the question of whether some other side information could save us. Sam Eisenstat has closed this question, providing a simple logical prior and a way of doing a Bayesian update on it which (1) cannot be trolled, and (2) converges to a coherent distribution.
---
*Major parts of this post were cribbed from Sam's notes, with some modifications.*
Construction
============
Set some prior on sampling individual sentences, μ(ϕ), which gives positive probability to every sentence. Now sample an infinite sequence of sentences, ϕ1,ϕ2,... by at each step rejection-sampling from μ with the requirement of propositional consistency with the sentences sampled so far. The prior probability of a sentence, P(ψ), is the probability that it appears anywhere in this sequence.
This is nothing new -- it's just the old Demski prior with propositional consistency only. However, Sam's idea is to interpret *the sequence of sentences* as the sequence of things Nature tells us / the sequence of things which is proved. (Why didn't I think of that?) Thus, in addition to the raw probability distribution over sentences, we use the probability distribution over sequence locations. Start with n=1. When a sentence ψ is proved (or otherwise observed), we perform a Bayesian update on ψ=ϕn, and increment n.
This distribution can be computed to any desired accuracy ϵ in finite time. The probability of a sequence prefix ϕ1,ϕ2,...ϕn can be computed to within ϵ by computing the normalization factor for μ at each step in the sequence to within sufficient accuracy to ensure the accuracy of the whole, by enumerating sentences and checking which are propositionally consistent with the sequence so far. The joint probability of any finite set of ordering assertions ψ=ϕn and unordered sentence assertions ψ can be computed to within ϵ by enumerating the sequences of sentence selections by which these can become jointly true and by which they can become false, and calculating their probabilities with increasing accuracy, until the sum of the probability of ways they can become true and ways they can become false is less than epsilon with certainty.
Untrollability
==============
So long as the probability of a sentence ψ is not yet 1 or 0, it has probability at least μ(ψ), since it could be sampled next. Similarly, its probability probability is at most μ(¬ψ). Hence, it is not possible to drive the probability arbitrarily up or arbitrarily down, no matter what order we prove things in.
Convergence
===========
Note that P expects nature to eventually decide every proposition, in which case convergence to a single distribution is trivial; beliefs converge to 0 or 1 on every proposition. However, the posterior probabilities also converge to a single distribution even if some sentences are never updated on -- as is the case when we restrict ourselves to updating on provable sentences.
To see this, take some sentence ϕ and some ϵ>0. We want to show that for all sufficiently large n, we have |P(ψ|ϕ1,ϕ2,...ϕn)−P(ψ|ϕ1,ϕ2,...ϕm)|<ϵ for all m>n. If ψ is eventually decided, this is trivially true. Otherwise, let Phi be a large finite collection of sentences which will never be decided by the environment, chosen so that the probability of sampling a sentence that will never be decided but that is outside of Φ *before* sampling psi or ¬ψ (as we continue adding to the sequence by sampling) is less than ϵ/4 no matter what other sentences have been decided already. Then, pick N large enough that (1) after time N, the sentences announced after time N do not announce any new logical relations between the sentences in Φ; and, (2) the probability of deciding any new sentence not in Φ before deciding ψ is less than ϵ/2. (1) is possible since there are only 2|Φ| joint assignments of truth values to sentences in Φ, so after finite time all joint assignments which will ever be ruled out have already been. (2) is possible since the probability of sampling a sentence that will *never* be decided but that is outside of Φ is already small enough, and the probability of deciding all the rest of the sentences outside of Φ only goes down as more sentences get decided, approaching zero, so that it is small enough for some N.
So, for m>N, the probability that ψ is decided before any sentences outside of Φ is 1−(ϵ/2), ensuring that any dependence is less than ϵ.
Furthermore, this argument makes it clear that the probability distribution we converge to depends only on the set of sentences which the environment will eventually assert, not on their ordering! For any ordering of assertions, we can find an N as specified, and the joint probability distribution on Φ (and ψ) will be the same to within epsilon.
We can also see that if the environment eventually asserts every provable sentence of PA, then the limit which we converge to must be a distribution on completions of PA: if PA⊢ψ, then there is some n such that ϕn=ψ, so the posterior probability is 1 then and beyond. Similarly, although we only require propositional coherence of P, the limit will be fully coherent so long as the environment eventually asserts all logical truths (whatever else it may assert). |
21fec936-d444-452d-a896-b88885f6cf6d | trentmkelly/LessWrong-43k | LessWrong | Popularizing vibes vs. models
I'm working on a series of blog posts to popularize The Handbook of the Biology of Aging.
One of the dilemmas I face is between popularizing vibes vs. popularizing models. What do I mean by that?
* Popularizing vibes trades off exactitude for getting the right feeling across. If I say "people are living longer free of disability," you feel like you know what I mean, and you know that's an encouraging sign.
* Popularizing models makes the opposite trade. If I explain why there's a big difference between the proportion of the population with diabetes, and the length of life spent with diabetes, I have to put you in a state of icy rational focus and compromise on vibes or length.
This is LessWrong, so I expect my audience's gut reaction to be "you should popularize models!" That's my gut reaction, anyway.
But let's think about why popularizing vibes can be good. One reason is that vibes can come from a list of concrete (if slightly ambiguously stated) facts:
* Life expectancy at birth is stalled
* Life expectancy at 65 is improving
* Fewer people have heart attacks and minor disabilities
* People live longer free of disabilities, and recover from them more often
* White men are benefitting most from getting fewer disabilities
* We're doing a better job using medication to manage cholesterol and blood pressure
Facts are good! When you read that list, you're taking in data. It's painting a portrait of the state of American public health.
But I'm telling you these facts not because you're particuarly interested in each data point. You could look them up yourself if you were. And I don't expect you'll remember them.
Instead, I'm curating these facts because I think they'll help you form a holistic impression about whether or not Americans are getting healthier over time. So the reason I'm feeding you these facts for the vibes. And I do expect you'll remember that.
Models demand more focus to take in, and more explanation. When I explain a model, I'm trying |
7ba9b947-0e17-49da-8e3c-e88fadedf51b | trentmkelly/LessWrong-43k | LessWrong | The Amazing Virgin Pregnancy
People who grow up believing certain things,
even if they later stop believing them,
may not quite realize how the beliefs sound to outsiders...
(SCENE: A small cottage in Nazareth.)
Joseph: Mary, my dearest fiancée, there's something I've been meaning to talk to you about.
(Mary's shoulders slump. Slowly, as if under a heavy burden, she turns around to face Joseph.)
Joseph: You seem to be getting fat around the waistline, and throwing up in the morning, and, er, not getting any periods. Which is odd, because it's sort of like -
Mary: Yes! I'm pregnant! All right? I'm PREGNANT!
Joseph: How is that possible?
(Mary's shoulders slump further.) Mary: How do you think?
Joseph: I don't know, that's why I'm asking you. I mean, you're still a virgin, right?
(Mary looks up cautiously, and sees Joseph's face looking blankly puzzled.)
Joseph: Well?
Mary: God did it.
Joseph: You had sex with -
Mary: No! Haha. Of course not. I mean, God just snapped his fingers and did one of those miracle things and made me pregnant.
Joseph: God made you pregnant.
Mary: (Starts to sweat.) Yes.
Joseph: Mary, that is just so... completely...
(Mary's eyes squeeze shut.)
Joseph: ...COOL!
(Mary opens her eyes again, cautiously.)
Mary: You think so?
Joseph: Of course! Who wouldn't think so? Come on, we've got to tell everyone the news!
Mary: Maybe we should keep this between just the two of us -
Joseph: No, no, silly girl, this is way too important! Come on!
(Joseph grabs Mary's wrist and drags her out of the house. SCENE: The gathering square of Nazareth. A dozen well-dressed men, and the town's head rabbi, look on Joseph and Mary impatiently.)
Rabbi: What's this all about, Joseph? I trust there's a good reason for the fuss?
Joseph: Go ahead, Mary! Tell them what you told me.
Mary: Um... (She swallows.) God made me pregnant.
Rabbi, looking stern, yet understanding: Now, Joseph, you know you're not supposed to do that before -
|
a78822fb-0417-4258-a589-f3c25ae4711a | trentmkelly/LessWrong-43k | LessWrong | Launching a new progress institute, seeking a CEO
Summary: The Roots of Progress is planning a major expansion of our activities, and we are seeking a Chief Executive Officer to lead the new organization in partnership with me (I will remain Founder & President). We’re taking this step because we see an opportunity to do much more for the progress movement, going well beyond my essays and talks. Our initial focus will be on a “career accelerator” for public intellectuals in progress studies.
----------------------------------------
Our mission and why it matters
The progress of the last few centuries—in science, technology, industry, and the economy—is one of the greatest achievements of humanity. But progress is not automatic or inevitable. We must understand its causes, so that we can keep it going, and even accelerate it.
But in order to make progress, we must believe that progress is possible and desirable. The 19th century believed in the power of technology and industry to better humanity, but in the 20th century, this belief gave way to skepticism and distrust. We can’t go back to the naive views of the past, but we need a new way forward.
Our mission is to establish a new philosophy of progress for the 21st century and beyond—one based on the ideas of humanism and agency, and one that puts forth a bold, ambitious vision for the technological future.
Our opportunity
Last year, I announced that this blog was becoming a one-man nonprofit research organization (after starting it as an intellectual side project in 2017, and becoming a full-time independent researcher in 2019). Since then, it has become clear that there is too much energy and support for this mission—and too much to do!—for this organization to remain focused solely on my own research and writing.
The new philosophy of progress needs a movement to establish it. The pillars of this movement are:
* Intellectual foundations: a lot of research, thinking, and writing, to better understand and communicate the lessons of progress, and to apply |
4111757b-3499-462c-b38b-906c9cccc732 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Making decisions when both morally and empirically uncertain
*Cross-posted [to the EA Forum](https://forum.effectivealtruism.org/posts/LnZ2MwQEsenLczamX/what-to-do-when-both-morally-and-empirically-uncertain). For an epistemic status statement and an outline of the purpose of this sequence of posts, please see the top of [my prior post](https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1). There are also some explanations and caveats in that post which I won’t repeat - or will repeat only briefly - in this post.*
Purpose of this post
====================
In [my prior post](https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1), I wrote:
>
> We are often forced to make decisions under conditions of uncertainty. This uncertainty can be empirical (e.g., what is the likelihood that nuclear war would cause human extinction?) or [moral](https://concepts.effectivealtruism.org/concepts/moral-uncertainty/) (e.g., does the wellbeing of future generations matter morally?). The issue of making decisions under empirical uncertainty has been well-studied, and [expected utility theory](https://en.wikipedia.org/wiki/Expected_utility_hypothesis) has emerged as the typical account of how a rational agent should proceed in these situations. The issue of making decisions under *moral* uncertainty appears to have received less attention (though see [this list of relevant papers](https://philpapers.org/browse/moral-uncertainty)), despite also being of clear importance.
>
>
>
I then went on to describe three prominent approaches for dealing with moral uncertainty (based on [Will MacAskill’s 2014 thesis](http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf)):
1. Maximising Expected Choice-worthiness (MEC), if all theories under consideration by the decision-maker are cardinal and intertheoretically comparable.[[1]](#fn-sTwHCP7JQg7YPDdFf-1)
2. Variance Voting (VV), a form of what I’ll call “Normalised MEC”, if all theories under consideration are cardinal but *not* intertheoretically comparable.[[2]](#fn-sTwHCP7JQg7YPDdFf-2)
3. The Borda Rule (BR), if all theories under consideration are ordinal.
But I was surprised to discover that I couldn’t find any *very explicit* write-up of how to handle moral and empirical uncertainty *at the same time*. I assume this is because most people writing on relevant topics consider the approach I will propose in this post to be quite obvious (at least when using MEC with cardinal, intertheoretically comparable, consequentialist theories). Indeed, many existing models from EAs/rationalists (and likely from other communities) already effectively use something very much like the first approach I discuss here (“MEC-E”; explained below), just without explicitly noting that this is an integration of approaches for dealing with moral and empirical uncertainty.[[3]](#fn-sTwHCP7JQg7YPDdFf-3)
But it still seemed worth explicitly spelling out the approach I propose, which is, in a nutshell, using exactly the regular approaches to moral uncertainty mentioned above, but on *outcomes* rather than on *actions*, and combining that with consideration of the likelihood of each action leading to each outcome. My aim for this post is both to make this approach “obvious” to a broader set of people and to explore how it can work with non-comparable, ordinal, and/or non-consequentialist theories (which may be less obvious).
(Additionally, as a side-benefit, readers who are wondering what on earth all this “modelling” business some EAs and rationalists love talking about is, or who are only somewhat familiar with modelling, may find this post to provide useful examples and explanations.)
I'd be interested in any comments or feedback you might have on anything I discuss here!
MEC under empirical uncertainty
===============================
To briefly review regular MEC: [MacAskill](http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf) argues that, when all moral theories under consideration are cardinal and intertheoretically comparable, a decision-maker should choose the “option” that has the highest *expected choice-worthiness*. Expected choice-worthiness is given by the following formula:

In this formula, *C(Ti)* represents the decision-maker’s credence (belief) in *Ti* (some particular moral theory), while *CWi(A)* represents the “choice-worthiness” (*CW*) of *A* (an “option” or action that the decision-maker can choose) according to *Ti*. In [my prior post](https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1), I illustrated how this works with this example:
>
> Suppose Devon assigns a 25% probability to T1, a version of hedonistic utilitarianism in which human “[hedons](https://wiki.lesswrong.com/wiki/Hedon)” (a hypothetical unit of pleasure) are worth 10 times more than fish hedons. He also assigns a 75% probability to T2, a different version of hedonistic utilitarianism, which values human hedons just as much as T1 does, but doesn’t value fish hedons at all (i.e., it sees fish experiences as having no moral significance). Suppose also that Devon is choosing whether to buy a fish curry or a tofu curry, and that he’d enjoy the fish curry about twice as much. (Finally, let’s go out on a limb and assume Devon’s humanity.)
>
>
>
>
> According to T1, the choice-worthiness (roughly speaking, the rightness or wrongness of an action) of buying the fish curry is -90 (because it’s assumed to cause 1,000 negative fish hedons, valued as -100, but also 10 human hedons due to Devon’s enjoyment). In contrast, according to T2, the choice-worthiness of buying the fish curry is 10 (because this theory values Devon’s joy as much as T1 does, but doesn’t care about the fish’s experiences). Meanwhile, the choice-worthiness of the tofu curry is 5 according to both theories (because it causes no harm to fish, and Devon would enjoy it half as much as he’d enjoy the fish curry).
>
>
>
>
> [...] Using MEC in this situation, the expected choice-worthiness of buying the fish curry is 0.25 \* -90 + 0.75 \* 10 = -15, and the expected choice-worthiness of buying the tofu curry is 0.25 \* 5 + 0.75 \* 5 = 5. Thus, Devon should buy the tofu curry.
>
>
>
But can Devon really be *sure* that buying the fish curry will lead to that much fish suffering? What if this demand signal doesn’t lead to increased fish farming/capture? What if the additional fish farming/capture is more humane than expected? What if fish can’t suffer because they aren’t actually conscious (empirically, rather than as a result of what sorts of consciousness our moral theory considers relevant)? We could likewise question Devon’s apparent certainty that buying the tofu curry *definitely won’t* have any unintended consequences for fish suffering, and his apparent certainty regarding precisely how much he’d enjoy each meal.
These are all empirical rather than moral questions, but they still seem very important for Devon’s ultimate decision. This is because **T1 and T2 don’t “*intrinsically* care” about whether someone buys fish curry or buys tofu curry; these theories assign no *terminal value* to which curry is bought. Instead, these theories "care" about some of the *outcomes* which those actions may or may not cause.**[[4]](#fn-sTwHCP7JQg7YPDdFf-4)
More generally, I expect that, in all realistic decision situations, we’ll have *both* moral *and* empirical uncertainty, and that it’ll often be important to *explicitly consider both types of uncertainties*. For example, GiveWell’s models consider both how likely insecticide-treated bednets are to save the life of a child, and how that outcome would compare to doubling the income of someone in extreme poverty. However, typical discussions of MEC seem to assume that we already know for sure what the outcomes of our actions will be, just as typical discussions of [expected value](https://en.wikipedia.org/wiki/Expected_value) reasoning seem to assume that we already know for sure how valuable a given outcome is.
Luckily, it seems to me that MEC and traditional (empirical) expected value reasoning can be very easily and neatly integrated in a way that resolves those issues. (This is perhaps partly due to that fact that, if I understand MacAskill’s thesis correctly, MEC was very consciously developed by analogy to expected value reasoning.) Here is my formula for this integration, which I'll call *Maximising Expected Choice-worthiness, accounting for Empirical uncertainty (MEC-E)*, and which I'll explain and provide an example for below:

Here, all symbols mean the same things they did in the earlier formula from MacAskill’s thesis, with two exceptions:
* I’ve added *Oj*, to refer to each “outcome”: each consequence that an action may lead to, which at least one moral theory under consideration intrinsically values/disvalues. (E.g., a fish suffering; a person being made happy; rights being violated.)
* Related to that, I’d like to be more explicit that *A* refers *only* to the “actions” that the decision-maker can directly choose (e.g., purchasing a fish meal, imprisoning someone), rather than the outcomes of those actions.[[5]](#fn-sTwHCP7JQg7YPDdFf-5)
(I also re-ordered the choice-worthiness term and the credence term, which makes no actual difference to any results, and was just because I think this ordering is slightly more intuitive.)
Stated verbally (and slightly imprecisely[[6]](#fn-sTwHCP7JQg7YPDdFf-6)), MEC-E claims that:
>
> One should choose the action which maximises expected choice-worthiness, accounting for empirical uncertainty. To calculate the expected choice-worthiness of each action, you first, for each potential outcome of the action and each moral theory under consideration, find the product of 1) the probability of that outcome given that that action is taken, 2) the choice-worthiness of that outcome according to that theory, and 3) the credence given to that theory. Second, for each action, you sum together all of those products.
>
>
>
To illustrate, I have [modelled in Guesstimate](https://www.getguesstimate.com/models/14802) an extension of the example of Devon deciding what meal to buy to also incorporate empirical uncertainty.[[7]](#fn-sTwHCP7JQg7YPDdFf-7) In the text here, I will only state the information that was not in the earlier version of the example, and the resulting calculations, rather than walking through all the details.
Suppose Devon believes there’s an 80% chance that buying a fish curry will lead to “fish being harmed” (modelled as 1000 negative fish hedons, with a choice-worthiness of -100 according to T1 and 0 according to T2), and a 10% chance that buying a tofu curry will lead to that same outcome. He also believes there’s a 95% chance that buying a fish curry will lead to “Devon enjoying a meal a lot” (modelled as 10 human hedons), and a 50% chance that buying a tofu curry will lead to that.
The expected choice-worthiness of buying a fish curry would therefore be:
>
> (0.8 \* -100 \* 0.25) + (0.8 \* 0 \* 0.75) + (0.95 \* 10 \* 0.25) + (0.95 \* 10 \* 0.75) = -10.5
>
>
>
Meanwhile, the expected choice-worthiness of buying a tofu curry would be:
>
> (0.1 \* -100 \* 0.25) + (0.1 \* 0 \* 0.75) + (0.5 \* 10 \* 0.25) + (0.5 \* 10 \* 0.75) = 2.5
>
>
>
As before, the tofu curry appears the better choice, despite seeming somewhat worse according to the theory (T2) assigned higher credence, because the other theory (T1) sees the tofu curry as *much* better.
In the final section of this post, I discuss potential extensions of these approaches, such as how it can handle probability distributions (rather than point estimates) and non-consequentialist theories.
The last thing I’ll note about MEC-E in this section is that MEC-E can be used as a heuristic, without involving actual numbers, in exactly the same way MEC or traditional expected value reasoning can. For example, without knowing or estimating any actual numbers, Devon might reason that, compared to buying the tofu curry, buying the fish curry is “much” more likely to lead to fish suffering and only “somewhat” more likely to lead to him enjoying his meal a lot. He may further reason that, in the “unlikely but plausible” event that fish experiences *do* matter, the badness of a large amount of fish suffering is “much” greater than the goodness of him enjoying a meal. He may thus ultimately decide to purchase the tofu curry.
(Indeed, my impression is that many effective altruists have arrived at vegetarianism/veganism through reasoning very much like that, without any actual numbers being required.)
Normalised MEC under empirical uncertainty
==========================================
*(From here onwards, I’ve had to go a bit further beyond what’s clearly implied by existing academic work, so the odds I’ll make some mistakes go up a bit. Please let me know if you spot any errors.)*
To briefly review regular *Normalised MEC*: Sometimes, despite being cardinal, the moral theories we have credence in are *not intertheoretically comparable* (basically meaning that there’s no consistent, non-arbitrary “exchange rate” between the theories' “units of choice-worthiness"). MacAskill argues that, in such situations, one must first "normalise" the theories in some way (i.e., ["[adjust] values measured on different scales to a notionally common scale"](https://en.wikipedia.org/wiki/Normalization_(statistics))), and then apply MEC to the new, normalised choice-worthiness scores. He recommends Variance Voting, in which the normalisation is by variance (rather than, e.g., by range), meaning that we:
>
> “[treat] the average of the squared differences in choice-worthiness from the mean choice-worthiness as the same across all theories. Intuitively, the variance is a measure of how spread out choice-worthiness is over different options; normalising at variance is the same as normalising at the difference between the mean choice-worthiness and one standard deviation from the mean choice-worthiness.”
>
>
>
(I provide a worked example [here](https://docs.google.com/spreadsheets/d/1E0eKGhwCdAXGGFboAAcbg5pWa2HT489oe2XKiruuEp0/edit?usp=sharing), based on an extension of the scenario with Devon deciding what meal to buy, but it's possible I've made mistakes.)
My proposal for *Normalised MEC, accounting for Empirical Uncertainty (Normalised MEC-E)* is just to combine the ideas of non-empirical Normalised MEC and non-normalised MEC-E in a fairly intuitive way. The steps involved (which may be worth reading alongside [this worked example](https://docs.google.com/spreadsheets/d/1gxmLMxhNa98syB-zH_SCRJYjM3pWm3McC20qCPihXsM/edit?usp=sharing) and/or the earlier explanations of Normalised MEC and MEC-E) are as follows:
1. Work out expected choice-worthiness just as with regular MEC, except that here one is working out the expected choice-worthiness of *outcomes*, not actions. I.e., for each outcome, multiply that outcome’s choice-worthiness according to each theory by your credence in that theory, and then add up the resulting products.
* You could also think of this as using the MEC-E formula, except with “Probability of outcome given action” removed for now.
2. Normalise these expected choice-worthiness scores by variance, just as MacAskill advises in the quote above.
3. Find the “expected value” of each action in the traditional way, with these normalised expected choice-worthiness scores serving as the “value” for each potential outcome. I.e., for each *action*, multiply the probability it leads to each outcome by the normalised expected choice-worthiness of that outcome (from step 2), and then add up the resulting products.
* You could think of this as bringing “Probability of outcome given action” back into the MEC-E formula.
4. Choose the action with the maximum score from step 3 (which we could call *normalised expected choice-worthiness, accounting for empirical uncertainty*, or *expected value, accounting for normalised moral uncertainty*).[[8]](#fn-sTwHCP7JQg7YPDdFf-8)
BR under empirical uncertainty
==============================
The final approach MacAskill recommends in his thesis is the Borda Rule (BR; also known as *Borda counting*). This is used when the moral theories we have credence in are merely *ordinal* (i.e., they don’t say “how much” more choice-worthy one option is compared to another). In [my prior post](https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1), I provided the following quote of MacAskill’s formal explanation of BR (here with “options” replaced by “actions”):
>
> “An [action] *A*’s *Borda Score*, for any theory *Ti*, is equal to the number of [actions] within the [action]-set that are less choice-worthy than *A* according to theory *Ti*’s choice-worthiness function, minus the number of [actions] within the [action]-set that are more choice-worthy than *A* according to *Ti*’s choice-worthiness function.
>
>
>
>
> An [action] *A*’s *Credence-Weighted Borda Score* is the sum, for all theories *Ti*, of the Borda Score of *A* according to theory *Ti* multiplied by the credence that the decision-maker has in theory *Ti*.
>
>
>
>
> [The *Borda Rule* states that an action] *A* is more appropriate than an [action] *B* iff [if and only if] *A* has a higher Credence-Weighted Borda Score than *B*; *A* is equally as appropriate as *B* iff *A* and *B* have an equal Credence-Weighted Borda Score.”
>
>
>
To apply BR when one is also *empirically* uncertain, **I propose just explicitly considering/modelling one’s empirical uncertainties, and then figuring out each action’s Borda Score with those empirical uncertainties in mind**. (That is, we don’t change the method at all on a mathematical level; we just make sure each moral theory’s preference rankings over actions - which is used as input into the Borda Rule - takes into account our empirical uncertainty about what outcomes each action may lead to.)
I’ll illustrate how this works with reference to the same example from MacAskill’s thesis that I quoted in my prior post, but now with slight modifications (shown in bold).
>
> “Julia is a judge who is about to pass a verdict on whether Smith is guilty for murder. She is very confident that Smith is innocent. There is a crowd outside, who are desperate to see Smith convicted. Julia has three options:
>
>
>
>
> [G]: Pass a verdict of ‘guilty’.
>
>
>
>
> [R]: Call for a retrial.
>
>
>
>
> [I]: Pass a verdict of ‘innocent’.
>
>
>
>
> She thinks there’s a 0% chance of M if she passes a verdict of guilty, a 30% chance if she calls for a retrial (there may mayhem due to the lack of a guilty verdict, or later due to a later innocent verdict), and a 70% chance if she passes a verdict of innocent.
>
>
>
>
> There’s obviously a 100% chance of C if she passes a verdict of guilty and a 0% chance if she passes a verdict of innocent. She thinks there’s also a 20% chance of C happening later if she calls for a retrial.
>
>
>
>
> **Julia believes the crowd is very likely (~90% chance) to** riot if Smith is found innocent, causing mayhem on the streets and the deaths of several people. If she calls for a retrial, she **believes it’s almost certain (~95% chance)** that he will be found innocent at a later date, and that it is much less likely **(only ~30% chance)** that the crowd will riot at that later date **if he is found innocent then**. If she declares Smith guilty, the crowd will **certainly (~100%)** be appeased and go home peacefully. She has credence in three moral theories\*\*, which, *when taking the preceding probabilities into account*, provide the following choice-worthiness orderings\*\*:
>
>
>
>
> 35% credence in a variant of utilitarianism, according to which [G≻**I≻R**].
>
>
>
>
> 34% credence in a variant of common sense, according to which [**I>R**≻G].
>
>
>
>
> 31% credence in a deontological theory, according to which [I≻R≻G].”
>
>
>
This leads to the Borda Scores and Credence-Weighted Borda Scores shown in the table below, and thus to the recommendation that Julia declare Smith innocent.

(More info on how that was worked out can be found in the following footnote, along with the corresponding table based on the moral theories' preference orderings in my prior post, when empirical uncertainty *wasn't* taken into account.[[9]](#fn-sTwHCP7JQg7YPDdFf-9))
In the original example, both the utilitarian theory and the common sense theory preferred a retrial to a verdict of innocent (in order to avoid a riot), which resulted in calling for a retrial having the highest Credence-Weighted Borda Score.
However, I’m now imagining that **Julia is no longer assuming each action 100% guarantees a certain outcome will occur, and paying attention to her empirical uncertainty has changed her conclusions**.
In particular, I’m imagining that she realises she’d initially been essentially “rounding up” (to 100%) the likelihood of a riot if she provides a verdict of innocent, and “rounding down” (to 0%) the likelihood of the crowd rioting at a later date. However, with more realistic probabilities in mind, utilitarianism and common sense would both actually prefer an innocent verdict to a retrial (because the innocent verdict seems less risky, and the retrial more risky, than she’d initially thought, while an innocent verdict still frees this innocent person sooner and with more certainty). This changes each action’s Borda Score, and gives the result that she should declare Smith innocent.[[10]](#fn-sTwHCP7JQg7YPDdFf-10)
Potential extensions of these approaches
========================================
Does this approach presume/privilege consequentialism?
------------------------------------------------------
A central idea of this post has been making a clear distinction between “actions” (which one can directly choose to take) and their “outcomes” (which are often what moral theories “intrinsically care about”). This clearly makes sense when the moral theories one has credence in are consequentialist. However, other moral theories may “intrinsically care” about actions themselves. For example, many deontological theories would consider lying to be wrong *in and of itself*, regardless of what it leads to. Can the approaches I’ve proposed handle such theories?
Yes - and very simply! For example, suppose I wish to use MEC-E (or Normalised MEC-E), and I have credence in a (cardinal) deontological theory that assigns very low choice-worthiness to lying (regardless of outcomes that action leads to). We can still calculate expected choice-worthiness using the formulas shown above; in this case, we find the product of (multiply) “probability me lying leads to me having lied” (which we’d set to 1), “choice-worthiness of me having lied, according to this deontological theory”, and “credence in this deontological theory”.
Thus, cases where a theory cares intrinsically about the action and not its consequences can be seen as a “special case” in which the approaches discussed in this post just collapse back to the corresponding approaches discussed in MacAskill’s thesis (which these approaches are the “generalised” versions of). This is because there’s effectively no empirical uncertainty in these cases; we can be sure that taking an action would lead to us having taken that action. Thus, in these and other cases of no relevant empirical uncertainty, accounting for empirical uncertainty is unnecessary, but creates no problems.[[11]](#fn-sTwHCP7JQg7YPDdFf-11)[[12]](#fn-sTwHCP7JQg7YPDdFf-12)
I’d therefore argue that a policy of using the generalised approaches by default is likely wise. This is especially the case because:
* One will typically have at least *some* credence in consequentialist theories.
* My impression is that even most “non-consequentialist” theories still do care at least *somewhat* about consequences. For example, they’d likely say lying is in fact “right” if the negative consequences of not doing so are “large enough” (and one should often be empirically uncertain about whether they would be).
Factoring things out further
----------------------------
In this post, I modified examples (from my prior post) in which we had only one moral uncertainty into examples in which we had one moral and one empirical uncertainty. We could think of this as “factoring out” what originally appeared to be only moral uncertainty into its “factors”: empirical uncertainty about whether an action will lead to an outcome, and moral uncertainty about the value of that outcome. By doing this, we’re more closely approximating (modelling) our actual understandings and uncertainties about the situation at hand.
But we’re still far from a full approximation of our understandings and uncertainties. For example, in the case of Julia and the innocent Smith, Julia may also be uncertain how big the riot would be, how many people would die, whether these people would be rioters or uninvolved bystanders, whether there’s a moral difference between a rioter vs a bystanders dying from the riot (and if so, how big this difference is), etc.[[13]](#fn-sTwHCP7JQg7YPDdFf-13)
A benefit of the approaches shown here is that they can very simply be extended, with typical modelling methods, to incorporate additional uncertainties like these. You simply disaggregate the relevant variables into the “factors” you believe they’re composed of, assign them numbers, and multiply them as appropriate.[[14]](#fn-sTwHCP7JQg7YPDdFf-14)[[15]](#fn-sTwHCP7JQg7YPDdFf-15)
Need to determine whether uncertainties are moral or empirical?
---------------------------------------------------------------
In the examples given just above, you may have wondered whether I was considering certain variables to represent moral uncertainties or empirical ones. I suspect this ambiguity will be common in practice (and I plan to discuss it further in a later post). Is this an issue for the approaches I’ve suggested?
I’m a bit unsure about this, but I think the answer is essentially “no”. I don’t think there’s any need to treat moral and empirical uncertainty in *fundamentally* different ways for the sake of models/calculations using these approaches. Instead, I think that, ultimately, the important thing is just to “factor out” variables in the way that makes the most sense, given the situation and what the moral theories under consideration “intrinsically care about”. (An example of the sort of thing I mean can be found in footnote 14, in a case where the uncertainty is actually empirical but has different moral implications for different theories.)
Probability distributions instead of point estimates
----------------------------------------------------
You may have also thought that a lot of variables in the examples I’ve given should be represented by probability distributions (e.g., representing 90% confidence intervals), rather than point estimates. For example, why would Devon estimate the probability of “fish being harmed”, as if it’s a binary variable whose moral significance switches suddenly from 0 to -100 (according to T1) when a certain level of harm is reached? Wouldn’t it make more sense for him to estimate the *amount* of harm to fish that is likely, given that that better aligns both with his understanding of reality and with what T1 cares about?
If you were thinking this, I wholeheartedly agree! Further, I can’t see any reason why the approaches I’ve discussed *couldn’t* use probability distributions and model variables as continuous rather than binary (the only reason I haven’t modelled things in that way so far was to keep explanations and examples simple). For readers interested in an illustration of how this can be done, I’ve provided a modified model of the Devon example in [this Guesstimate model](https://www.getguesstimate.com/models/14811). (Existing models like [this one](https://www.getguesstimate.com/models/11762) also take essentially this approach.)
Closing remarks
===============
I hope you’ve found this post useful, whether to inform your heuristic use of moral uncertainty and expected value reasoning, to help you build actual models taking into account both moral and empirical uncertainty, or to give you a bit more clarity on “modelling” in general.
In the next post, I’ll discuss how we can combine the approaches discussed in this and my prior post with sensitivity analysis and value of information analysis, to work out what specific moral or empirical learning would be most decision-relevant and when we should vs shouldn’t postpone decisions until we’ve done such learning.
---
1. What “choice-worthiness”, “cardinal” (vs “ordinal”), and “intertheoretically comparable” mean is explained in the previous post. To quickly review, roughly speaking:
* *Choice-worthiness* is the rightness or wrongness of an action, according to a particular moral theory.
* A moral theory is *ordinal* if it tells you only which options are better than which other options, whereas a theory is *cardinal* if it tells you *how big a difference* in choice-worthiness there is between each option.
* A pair of moral theories can be cardinal and yet still *not intertheoretically comparable* if we cannot meaningfully compare the sizes of the “differences in choice-worthiness” between the theories; basically, if there’s no consistent, non-arbitrary “exchange rate” between different theories’ “units of choice-worthiness”.[↩︎](#fnref-sTwHCP7JQg7YPDdFf-1)
2. MacAskill also discusses a “Hybrid” procedure, if the theories under consideration differ in whether they’re cardinal or ordinal and/or whether they’re intertheoretically comparable; readers interested in more information on that can refer to pages 117-122 MacAskill’s thesis. An alternative approach to such situations is [Christian Tarsney’s](https://pdfs.semanticscholar.org/1c29/5c1c2e8eda8eb41560feb9f927104a6a6a85.pdf) (pages 187-195) “multi-stage aggregation procedure”, which I may write a post about later (please let me know if you think this’d be valuable). [↩︎](#fnref-sTwHCP7JQg7YPDdFf-2)
3. Examples of models that effectively use something like the “MEC-E” approach include GiveWell’s cost-effectiveness models and [this model](https://www.getguesstimate.com/models/11762) of the cost effectiveness of “alternative foods”.
And some of the academic moral uncertainty work I’ve read seemed to indicate the authors may be perceiving as obvious something like the approaches I propose in this post.
But I think the closest thing I found to an explicit write-up of this sort of way of considering moral and empirical uncertainty at the same time (expressed in those terms) was [this post from 2010](https://www.lesswrong.com/posts/Qh6bnkxbMFz5SNeFd/value-uncertainty-and-the-singleton-scenario), which states: “Under Robin’s approach to value uncertainty, we would (I presume) combine these two utility functions into one linearly, by weighing each with its probability, so we get EU(x) = 0.99 EU1(x) + 0.01 EU2(x)”. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-3)
4. Some readers may be thinking the “empirical” uncertainty about fish consciousness is inextricable from moral uncertainties, and/or that the above paragraph implicitly presumes/privileges consequentialism. If you’re one of those readers, 10 points to you for being extra switched-on! However, I believe these are not really issues for the approaches outlined in this post, for reasons outlined in the final section. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-4)
5. Note that my usage of “actions” can include “doing nothing”, or failing to do some specific thing; I don’t mean “actions” to be distinct from “omissions” in this context. MacAskill and other writers sometimes refer to “options” to mean what I mean by “actions”. I chose the term “actions” both to make it more obvious what the A and O terms in the formula stand for, and because it seems to me that the distinction between “options” and “outcomes” would be less immediately obvious. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-5)
6. My university education wasn’t highly quantitative, so it’s very possible I’ll phrase certain things like this in clunky or unusual ways. If you notice such issues and/or have better phrasing ideas, please let me know. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-6)
7. In that link, the model using MEC-E follows a similar model using regular MEC (and thus considering only moral uncertainty) and another similar model using more traditional expected value reasoning (and thus considering only empirical uncertainty); readers can compare these against the MEC-E model. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-7)
8. Before I tried to actually model an example, I came up with a slightly different proposal for integrating the ideas of MEC-E and Normalised MEC. Then I realised the proposal outlined above might make more sense, and it does seem to work (though I’m not 100% certain), so I didn’t further pursue my original proposal. I therefore don't know for sure whether my original proposal would work or not (and, if it does work, whether it’s somehow better than what I proposed above). My original proposal was as follows:
1. Work out expected choice-worthiness just as with regular MEC-E; i.e., follow the formula from above to incorporate consideration of the probabilities of each action leading to each outcome, the choice-worthiness of each outcome according to each moral theory, and the credence one has in each theory. (But don’t yet pick the action with the maximum expected choice-worthiness score.)
2. Normalise these expected choice-worthiness scores by variance, just as MacAskill advises in the quote above. (The fact that these scores incorporate consideration of empirical uncertainty has no impact on how to normalise by variance.)
3. *Now* pick the action with the maximum *normalised* expected choice-worthiness score.[↩︎](#fnref-sTwHCP7JQg7YPDdFf-8)
9. G (for example) has a Borda Scoreof 2 - 0 = 2 according to utilitarianism because that theory views two options as less choice-worthy than G, and 0 options as more choice-worthy than G.
To fill in the final column, you take a credence-weighted average of the relevant action’s Borda Scores.
What follows is the corresponding table based on the moral theories' preference orderings in my prior post, when empirical uncertainty *wasn't* taken into account:
 [↩︎](#fnref-sTwHCP7JQg7YPDdFf-9)
10. It’s also entirely possible for paying attention to empirical uncertainty to not change any moral theory’s preference orderings in a particular situation, or for some preference orderings to change without this affecting which action ends up with the highest Credence-Weighted Borda Score. This is a feature, not a bug.
Another perk is that paying attention to both moral and empirical uncertainty also provides more clarity on what the decision-maker should think or learn more about. This will be the subject of my next post. For now, a quick example is that Julia may realise that a lot hangs on what each moral theory’s preference ordering should actually be, or on how likely the crowd actually is to riot if she passes a verdict or innocent or calls for a retrial, and it may be worth postponing her decision in order to learn more about these things. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-10)
11. Arguably, the additional complexity in the model is a cost in itself. But this is only a problem only in the same way this is a problem for any time one decides to model something in more detail or with more accuracy at the cost of adding complexity and computations. Sometimes it’ll be worth doing so, while other times it’ll be worth keeping things simpler (whether by considering only moral uncertainty, by considering only empirical uncertainty, or by considering only certain parts of one’s moral/empirical uncertainties). [↩︎](#fnref-sTwHCP7JQg7YPDdFf-11)
12. The approaches discussed in this post can also deal with theories that “intrinsically care” about other things, like a decision-maker’s intentions or motivations. You can simply add in a factor for “probability that, if I take X, it’d be due to motivation Y rather than motivation Z” (or something along those lines). It may often be reasonable to round this to 1 or 0, in which case these approaches didn’t necessarily “add value” (though they still worked). But often we may genuinely be (empirically) uncertain about our own motivations (e.g., are we just providing high-minded rationalisations for doing something we wanted to do anyway for our own self-interest?), in which case explicitly modelling that empirical uncertainty may be useful. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-12)
13. For another example, in the case of Devon choosing a meal, he may also be uncertain how many of each type of fish will be killed, the way in which they’d be killed, whether each type of fish has certain biological and behavioural features thought to indicate consciousness, whether those features do indeed indicate consciousness, whether the consciousness they indicate is morally relevant, whether creatures with consciousness like that deserve the same “moral weight” as humans or somewhat lesser weight, etc. [↩︎](#fnref-sTwHCP7JQg7YPDdFf-13)
14. For example, Devon might replace “Probability that purchasing a fish meal leads to "fish being harmed"” with (“Probability that purchasing a fish meal leads to fish being killed” \* “Probability fish who were killed would be killed in a non-humane way” \* “Probability any fish killed in these ways would be conscious enough that this can count as “harming” them”). This whole term would then be in calculations used wherever ““Probability that purchasing a fish meal leads to "fish being harmed"” was originally used.
For another example, Julia might replace “Probability the crowd riots if Julia finds Smith innocent” with “Probability the crowd riots if Julia finds Smith innocent” \* “Probability a riot would lead to at least one death” \* “Probability that, if at least one death occurs, there’s at least one death of a bystander (rather than of one of the rioters themselves)” (as shown in [this partial Guesstimate model](https://www.getguesstimate.com/models/14810)). She can then keep in mind *this more specific final outcome, and its more clearly modelled probability*, as she tries to work out what choice-worthiness ordering each moral theory she has credence in would give to the actions she’s considering.
Note that, sometimes, it might make sense to “factor out” variables in different ways for the purposes of different moral theories’ evaluations, depending on what the moral theories under consideration “intrinsically care about”. In the case of Julia, it definitely seems to me to make sense to replace “Probability the crowd riots if Julia finds Smith innocent” with “Probability the crowd riots if Julia finds Smith innocent” \* “Probability a riot would lead to at least one death”. This is because all moral theories under consideration probably care far more about potential deaths from a riot than about any other consequences of the riot. This can therefore be considered an “empirical uncertainty”, because its influence on the ultimate choice-worthiness “flows through” the same “moral outcome” (a death) for all moral theories under consideration.
However, it might only make sense to further multiply that term by “Probability that, if at least one death occurs, there’s at least one death of a bystander (rather than of one of the rioters themselves)” for the sake of the common sense theory’s evaluation of the choice-worthiness order, not for the utilitarian theory’s evaluation. This would be the case if the utilitarian theory cared not at all (or at least much less) about the distinction between the death of a rioter and the death of a bystander, while common sense does. (The Guesstimate model should help illustrate what I mean by this.) [↩︎](#fnref-sTwHCP7JQg7YPDdFf-14)
15. Additionally, the process of factoring things out in this way could by itself provide a clearer understanding of the situation at hand, and what the stakes really are for each moral theory one has credence in. (E.g., Julia may realise that passing a verdict of innocent is much less bad than she thought, as, even if a riot does occur, there’s only a fairly small chance it leads to the death of a bystander.) It also helps one realise what uncertainties are most worth thinking/learning more about (more on this in my next post). [↩︎](#fnref-sTwHCP7JQg7YPDdFf-15) |
48a762b0-c3d4-4224-a3a4-b8bf64a8e3ae | trentmkelly/LessWrong-43k | LessWrong | Excursions into Sparse Autoencoders: What is monosemanticity?
The following work was done between January and March 2024 as part of my PhD rotation with Prof Surya Ganguli and Prof Noah Goodman.
One aspect of sparse autoencoders that has put them at the center of attention in mechanistic interpretability is the notion of monosemanticity. In this post, we will explore the concept of monosemanticity in open-source sparse autoencoders (by Joseph Bloom) trained on residual stream layers of GPT-2 small. We will take a look at the indirect object identification task and see what we can extract by projecting different layer activations into their sparse high-dimensional latent space (from now on we will just refer to this as the latent code). We show the ranges of controllability on the model’s outputs by considering interventions within the latent code and discuss future work in this area.
Background
In Toy Models of Superposition, Elhage et al. discuss a framework to think about the different layer activations in transformer-based language models. The idea can be condensed as follows: the dimensionality of the state-space of language is extremely large, larger than any model to-date can encode in a one-to-one fashion. As a result, the model compresses the relevant aspects of the language state-space into its constrained, say n-dimensional, activation space. A consequence of this is that the states in this higher-dimensional language space that humans have learned to interpret (e.g. words, phrases, concepts) are somehow entangled on this compressed manifold of transformer activations. This makes it hard for us to look into the model and understand what is going on, what kind of structures did the model learn, how did it use concept A and concept B to get to concept C, etc. The proposal outlined in Toy Models of Superposition suggests that one way to bypass this bottleneck is to assume that our transformer can be thought of as emulating a larger model, one which operates on the human-interpretable language manifold. To get into th |
e01b3cbc-8827-4361-af8b-7944fbda7b99 | trentmkelly/LessWrong-43k | LessWrong | Actionable Eisenhower
None |
84ecaf62-a310-4aae-876b-1bba626ddae7 | trentmkelly/LessWrong-43k | LessWrong | ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist
This is cross-posted from New Savanna.
The difference between concrete concepts, that is, concepts that can be understood entirely in sensorimotor terms, and abstract terms is an important one. It was, for example, important to David Hays when I studied with him back in the 1970s. We ended up adapting a model developed by William Powers as a way of thinking about concrete concepts while using Hays’s concept of metalingual definition to think about abstract concepts. Somewhat later Steven Harnad coined the term, “symbol grounding problem,” to indicate that the meanings of symbols had to somehow be grounded in the physical world. In 2016 Harnad and his colleagues investigated the structure of dictionaries and discovered that the vast majority of terms in them were defined in terms of other words but that there is a small Kernel that is not so-defined. I wondered how that distinction would play out in ChatGPT.
ChatGPT – that is, the LLM underlying it – doesn’t have access to the physical world. It can’t ground any terms in sensorimotor experience. However, knowing a great deal about how language works and having spent a great deal of time working with ChatGPT, I was sure that it would be perfectly capable to defining concrete terms. But how would it do so?
I decided to investigate. I made a list of ten concrete terms and asked ChatGPT to define them. In my first trial I made no further specification beyond simply asking for a definition. In subsequent trials I asked for definitions appropriate for 5-year-olds, 11-year-olds, and professional biologists (just the biological concepts). ChatGPT gave distinctly different kinds of definitions in each case, which did not surprise me since over a year ago I had asked ChatGPT to describe digestion, first generically, and then in terms appropriate for a 3-year-old. It did so, and fairly successfully.
Here are the trials. Each trial is a separate session. As always, my prompt is in boldface while the response is in plainface |
9fe552ab-9c62-4f29-9e55-c99cae81c546 | trentmkelly/LessWrong-43k | LessWrong | Publicly disclosing compute expenditure daily as a safety regulation
Instead of edge-casing every statement, I'm going to make a series of assertions in their strongest form so that discussion can be more productive.
1. AGI is inevitable.
2. The big labs are the only ones with the resources to achieve AGI.
3. The first lab to achieve AGI will have a huge, permanent advantage over the rest.
4. (2) and (3) ⇒ the big labs are currently in a fight for a knife in the mud.
5. No lab will stop development just before reaching AGI voluntarily.
6. No lab can be made to stop development after reaching AGI even involuntarily.
7. The first lab to achieve AGI will try to spend as much compute as possible, as early as possible, as fast as possible to permanently cement its superiority, while trying to keep it a secret for as long as possible.
8. When this happens, it's in the interest of the rest of the world if the acceleration happens slowly so that the other competing labs can catch up.
9. The hard thing about executing this is knowing when a lab is close to AGI in the first place. I don't have any novel proposals on what to do after we know someone is close to AGI.
10. Regulations that propose to subjectively evaluate the risk of each newly trained model before deployment are well-intentioned (like GDPR), but they're toothless (also like GDPR) at preventing these accelerating arms race scenarios.
11. I propose we make each lab publicly disclose its audited ~daily total compute expenditure on both training and inference with sensible breakdowns. This isn't unnatural; public companies already do this with cashflow.
12. This directs competitive energy in a productive direction without stifling innovation too much: since it's in the interest of each lab to reach AGI first, they will keep close tabs on everyone else and can sound an alarm when there is an uptick. The public will too.
13. Total daily compute spend is a much more objective, quantifiable, and fungibl |
688b6b7e-4cc1-4aec-80bc-0c85c3948157 | StampyAI/alignment-research-dataset/blogs | Blogs | Price-performance trend in top supercomputers
A top supercomputer can perform a GFLOP for around $3, in 2017.
The price of performance in top supercomputers continues to fall, as of 2016.
Details
-------
[TOP500.org](https://www.top500.org/lists/2017/06/) maintains a list of top supercomputers and their performance on the Linpack benchmark. The figure below is based on empirical performance figures (‘Rmax’) from [Top500](https://www.top500.org/lists/2017/06/) and price figures collected from a variety of less credible sources, for nine of the ten highest performing supercomputers (we couldn’t find a price for the tenth). Our data and sources are [here](https://docs.google.com/spreadsheets/d/1nV6djZI7csDv_ewElbNKiQZVl36ViGyUEy_MSqk2krI/edit?usp=sharing).
Sunway Teihu Light performs the cheapest GFLOPS, at $2.94/GFLOPS. This is around one hundred times more expensive than peak theoretical performance of certain GPUs, but we do not know why there is such a difference (peak performance is generally higher than actual performance, but by closer to a factor of two).
There appears to be a downward trend in price, but it is not consistent, and with so few data points its slope is ambiguous. The best price for performance roughly halved in the last 4-5 years, for a 10x drop in 13-17 years. The K computer in 2011 was much more expensive, but appears to have been substantially more expensive than earlier computers.
####
[](http://aiimpacts.org/wp-content/uploads/2017/11/chart-54.png) |
10f3636c-afb4-4027-8c0a-f1103b5aecd9 | trentmkelly/LessWrong-43k | LessWrong | What Is "About" About?
It has seemed to me that saying that something is "about" something else is probably the vaguest, least useful way possible to say that the two ideas are connected. I've used "about" myself in exactly those instances when I perceived a connection of some kind between two ideas, but I was incapable of articulating what that connection was, incapable even of seeing it myself.
"Well," I might say. "It seems like X is...about Y in some way...It's not exactly that X causes it...maybe X is a subsection of Y? No, that's not quite right, either. I'm not really sure, honestly, but to my eye, they seem linked to each other in some way. X is...about...Y."
Well, what did that mean, that it was "about" Y? Did they have a similar conceptual structure, were they causally related in one way or another, were they both discussed by the same groups, did they have similar effects on the world...like I said, it's vague. It could be any or none or all of these. Saying X is "about" Y tells you nothing very meaningful about them, so as a word, it's next-to-useless. Or, is it?
Now, I think I see what "about" is about; now I see it quite clearly. Saying X is "about" Y means, to say it very precisely, that you want people to give Y's connotations to X. For some reasons, X's connotations are undesirable, and you'd like Y's instead. Now, X is X, it is itself, so it is only natural that it will evoke its own connotations. If you're going to give it the more desirable connotations of Y, we must wrench the natural perspective off its natural path and force it down another.
Think of "it's not about who wins, it's about how you play the game," ie, please use the conceptual connections around this "how you play the game" idea to think about who won here today. Please don't use the connotations, do not use the subconscious connections linked to the "winning idea."
Now, what does "how you play the game" mean, exactly? No, that's not the point; the point isn't the point is to use those hard-to-defi |
c440741f-976f-46e7-9f42-538e4312bcdf | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] 'The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery'
Authors: Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, David Ha.
Blogpost: https://sakana.ai/ai-scientist/.
Abstract:
> One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems. Our co |
7219bb50-fc72-4473-b958-b69e0d821120 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Alignment Newsletter #14
I've created a [public database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of almost all of the papers I've summarized in the Alignment Newsletter! Most of the entries will have all of the data I put in the emails.
**Highlights**
--------------
**[One-Shot Imitation from Watching Videos](http://bair.berkeley.edu/blog/2018/06/28/daml/)** *(Tianhe Yu and Chelsea Finn)*: Can we get a robot to learn a task by watching a human do it? This is very different from standard imitation learning. First, we want to do it with a single demonstration, and second, we want to do it by *watching a human* -- that is, we're learning from a video of a human, not a trajectory where the robot actions are given to us. Well, first consider how we could do this if we have demonstrations from a teleoperated robot. In this case, we do actually have demonstrations in the form of trajectories, so normal imitation learning techniques (behavioral cloning in this case) work fine. We can then take this loss function and use it with [MAML](http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/) to learn from a large dataset of tasks and demonstrations how to perform a new task given a single demonstration. But this still requires the demonstration to be collected by teleoperating the robot. What if we want to learn from a video of a human demonstrating? They propose learning a *loss function* that given the human video provides a loss from which gradients can be calculated to update the policy. Note that at training time there are still teleoperation demonstrations, so the hard task of learning how to perform tasks is done then. At test time, the loss function inferred from the human video is primarily used to identify which objects to manipulate.
**My opinion:** This is cool, it actually works on a real robot, and it deals with the issue that a human and a robot have different action spaces.
**Prerequisities:** Some form of meta-learning (ideally [MAML](http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/)).
**[Capture the Flag: the emergence of complex cooperative agents](https://deepmind.com/blog/capture-the-flag/)** *(Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning et al)*: DeepMind has trained FTW (For The Win) agents that can play Quake III Arena Capture The Flag from raw pixels, given *only* the signal of whether they win or not. They identify three key ideas that enable this -- population based training (instead of self play), learning an internal reward function, and operating at two timescales (enabling better use of memory). Their ablation studies show that all of these are necessary, and in particular it even outperforms population based training with manual reward shaping. The trained agents can cooperate and compete with a wide range of agents (thanks to the population based training), including humans.
But why are these three techniques so useful? This isn't as clear, but I can speculate. Population based training works well because the agents are trained against a diversity of collaborators and opponents, which can fix the issue of instability that afflicts self-play. Operating at two timescales gives the agent a better inductive bias. They say that it enables the agent to use memory more effectively, but my story is that it lets it do something more hierarchical, where the slow RNN makes "plans", while the fast RNN executes on those plans. Learning an internal reward function flummoxed me for a while, it really seemed like that should not outperform manual reward shaping, but then I found out that the internal reward function is computed from the game points screen, not from the full trajectory. This gives it a really strong inductive bias (since the points screen provides really good features for defining reward functions) that allows it to quickly learn an internal reward function that's more effective than manual reward shaping. It's still somewhat surprising, since it's still learning this reward function from the pixels of the points screen (I assume), but more believable.
**My opinion:** This is quite impressive, since they are learning from the binary win-loss reward signal. I'm surprised that the agents generalized well enough to play alongside humans -- I would have expected that to cause a substantial distributional shift preventing good generalization. They only had 30 agents in their population, so it seems unlikely a priori that this would induce a distribution that included humans. Perhaps Quake III is simple enough strategically that there aren't very many viable strategies, and most strategies are robust to having slightly worse allies? That doesn't seem right though.
DeepMind did a *lot* of different things to analyze what the agents learned and how they are different from humans -- check out the [paper](https://arxiv.org/pdf/1807.01281.pdf) for details. For example, they showed that the agents are much better at tagging (shooting) at short ranges, while humans are much better at long ranges.
**Technical AI alignment**
==========================
### **Technical agendas and prioritization**
[An introduction to worst-case AI safety](http://s-risks.org/an-introduction-to-worst-case-ai-safety/) *(Tobias Baumann)*: Argues that people with suffering-focused ethics should focus on "worst-case AI safety", which aims to find technical solutions to risks of AIs creating vast amounts of suffering (which would be much worse than extinction).
**My opinion:** If you have strongly suffering-focused ethics (unlike me), this seems mostly right. The post claims that suffering-focused AI safety should be more tractable than AI alignment, because it focuses on a subset of risks and only tries to minimize them. However, it's not necessarily the case that focusing on a simpler problem makes it easier to solve. It feels easier to me to figure out how to align an AI system to humans, or how to enable human control of an AI system, than to figure out all the ways in which vast suffering could happen, and solve each one individually. You can make an analogy to mathematical proofs and algorithms -- often, you want to try to prove a *stronger* statement than the one you are looking at, because when you use induction or recursion, you can rely on a stronger inductive hypothesis.
### **Learning human intent**
**[One-Shot Imitation from Watching Videos](http://bair.berkeley.edu/blog/2018/06/28/daml/)** *(Tianhe Yu and Chelsea Finn)*: Summarized in the highlights!
[Learning Montezuma’s Revenge from a Single Demonstration](https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/) *(Tim Salimans et al)*: Montezuma's Revenge is widely considered to be one of the hardest Atari games to learn, because the reward is so sparse -- it takes many actions to reach the first positive reward, and if you're using random exploration, it will take exponentially many actions (in N, the number of actions till the first reward) to find any reward. A human demonstration should make the exploration problem much easier. In particular, we can start just before the end of the demonstration, and train the RL agent to get as much score as the demonstration. Once it learns that, we can start it at slightly earlier in the demonstration, and do it again. Repeating this, we eventually get an agent that can perform the whole demonstration from start to finish, and it takes time linear in the length of the demonstration. Note that the agent must be able to generalize a little bit to states "around" the human demonstration -- when it takes random actions it will eventually reach a state that is similar to a state it saw earlier, but not exactly the same, and it needs to generalize properly. It turns out that this works for Montezuma's Revenge, but not for other Atari games like Gravitar and Pitfall.
**My opinion:** Here, the task definition continues to be the reward function, and the human demonstration is used to help the agent effectively optimize the reward function. Such agents are still vulnerable to misspecified reward functions -- in fact, the agent discovers a bug in the emulator that wouldn't have happened if it was trying to imitate the human. I would still expect the agent to be more human-like than one trained with standard RL, since it only learns the environment near the human policy.
[Atari Grand Challenge](http://atarigrandchallenge.com/about) *(Vitaly Kurin)*: This is a website crowdsourcing human demonstrations for Atari games, which means that the dataset will be very noisy, with demonstrations from humans of vastly different skill levels. Perhaps this would be a good dataset to evaluate algorithms that aim to learn from human data?
[Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning](http://arxiv.org/abs/1807.00366) *(Baoxiang Wang et al)*: How could you perform IRL without access to a simulator, or a model of the dynamics of the game, or the full human policy (only a set of demonstrations)? In this setting, as long as you have a large dataset of diverse human behavior, you can use Q-learning on the demonstrations to estimate separate Q-function for each feature, and then for a given set of demonstrations you can infer the reward for that set of demonstrations using a linear program that attempts to make all of the human actions optimal given the reward function. They define (manually) five features for World of Warcraft Avatar History (WoWAH) that correspond to different motivations and kinds of human behavior (hence the title of the paper) and infer the weights for those rewards. It isn't really an evaluation because there's no ground truth.
### **Preventing bad behavior**
[Overcoming Clinginess in Impact Measures](https://www.lesswrong.com/posts/DvmhXysefEyEvXuXS/overcoming-clinginess-in-impact-measures) *(TurnTrout)*: In their [previous post](https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting), TurnTrout proposed a whitelisting approach, that required the AI not to cause side effects not on the whitelist. One criticism was that it made the AI *clingy*, that is, the AI would also prevent any other agents in the world from causing non-whitelisted effects. In this post, they present a solution to the clinginess problem. As long as the AI knows all of the other agents in the environment, and their policies, the AI can be penalized for the *difference* of effects between its behavior, and what the human(s) would have done. There's analysis in a few different circumstances, where it's tricky to get the counterfactuals exactly right. However, this sort of impact measure means that while the AI is punished for causing side effects itself, it *can* manipulate humans to perform those side effects on its behalf with no penalty. This appears to be a tradeoff in the impact measure framework -- either the AI will be clingy, where it prevents humans from causing prohibited side effects, or it could cause the side effects through manipulation of humans.
**My opinion:** With any impact measure approach, I'm worried that there is no learning of what humans care about. As a result I expect that there will be issues that won't be handled properly (similarly to how we don't expect to be able to write down a human utility function). In the previous post, this manifested as a concern for generalization ability, which I'm still worried about. I think the tradeoff identified in this post is actually a manifestation of this worry -- clinginess happens when your AI overestimates what sorts of side effects humans don't want to happen in general, while manipulation of humans happens when your AI underestimates what side effects humans don't want to happen (though with the restriction that only humans can perform these side effects).
**Prerequisities:** [Worrying about the Vase: Whitelisting](https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting)
### **Game theory**
[Modeling Friends and Foes](http://arxiv.org/abs/1807.00196) *(Pedro A. Ortega et al)*: Multiagent scenarios are typically modeled using game theory. However, it is hard to capture the intuitive notions of "adversarial", "neutral" and "friendly" agents using standard game theory terminology. The authors propose that we model the agent and environment as having some prior mixed strategy, and then allow them to "react" by changing the strategies to get a posterior strategy, but with a term in the objective function for the change (as measured by the KL divergence). The sign of the environment's KL divergence term determines whether it is friendly or adversarial, and the magnitude determines the magnitude of friendliness or adversarialness. They show that there are always equilibria, and give an algorithm to compute them. They then show some experiments demonstrating that the notions of "friendly" and "adversarial" they develop actually do lead to behavior that we would intuitively call friendly or adversarial.
Some notes to understand the paper: while normally we think of multiagent games as consisting of a set of agents, in this paper there is an agent that acts, and an environment in which it acts (which can contain other agents). The objective function is neither minimized nor maximized -- the sign of the environment's KL divergence changes whether the stationary points are maxima or minima (which is why it can model both friendly and adversarial environments). There is only one utility function, the agent's utility function -- the environment is only modeled as responding to the agent, rather than having its own utility function.
**My opinion:** This is an interesting formalization of friendly and adversarial behavior. It feels somewhat weird to model the environment as having a prior strategy that it can then update. This has the implication that a "somewhat friendly" environment is unable to change its strategy to help the agent, even though it would "want" to, whereas when I think of a "somewhat friendly" environment, I think of a group of agents that share some of your goals but not all of them, so a limited amount of cooperation is possible. These feel quite different.
### **Interpretability**
[This looks like that: deep learning for interpretable image recognition](http://arxiv.org/abs/1806.10574) *(Chaofan Chen, Oscar Li et al)*
### **Verification**
[Towards Mixed Optimization for Reinforcement Learning with Program Synthesis](http://arxiv.org/abs/1807.00403) *(Surya Bhupatiraju, Kumar Krishna Agrawal et al)*: This paper proposes a framework in which policies are represented in two different ways -- as neural nets (the usual way) and as programs. To go from neural nets to programs, you use *program synthesis* (as done by [VIPER](http://arxiv.org/abs/1805.08328) and [PIRL](https://arxiv.org/abs/1804.02477), both summarized in previous newsletters). To go from programs to neural nets, you use *distillation* (basically use the program to train the neural net with supervised training). Given these transformations, you can then work with the policy in either space. For example, you could optimize the policy in both spaces, using standard gradient descent in neural-net-space, and *program repair* in program-space. Having a program representation can be helpful in other ways too, as it makes the policy more interpretable, and more amenable to formal verification of safety properties.
**My opinion:** It is pretty nice to have a program representation. This paper doesn't delve into specifics (besides a motivating example worked out by hand), but I'm excited to see an actual instantiation of this framework in the future!
**Near-term concerns**
======================
### **Adversarial examples**
[Adversarial Reprogramming of Neural Networks](https://arxiv.org/abs/1806.11146) *(Gamaleldin F. Elsayed et al)*
**AI strategy and policy**
==========================
[Shaping economic incentives for collaborative AGI](https://www.lesswrong.com/posts/FkZCM4DMprtEp568s/shaping-economic-incentives-for-collaborative-agi) *(Kaj Sotala)*: This post considers how to encourage a culture of cooperation among AI researchers. Then, when researchers try to create AGI, this culture of cooperation may make it more likely that AGI is developed collaboratively, instead of with race dynamics, making it more likely to be safe. It specifically poses the question of what external economic or policy incentives could encourage such cooperation.
**My opinion:** I am optimistic about developing AGI collaboratively, especially through AI researchers cooperating. I'm not sure whether external incentives from government are the right way to achieve this -- it seems likely that such regulation would be aimed at the wrong problems if it originated from government and not from AI researchers themselves. I'm more optimistic about some AI researchers developing guidelines and incentive structures themselves, that researchers buy into voluntarily, that maybe later get codified into law by governments, or adopted by companies for their AI research.
[An Overview of National AI Strategies](https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd) *(Tim Dutton)*: A short reference on the AI policies released by various countries.
**My opinion:** Reading through this, it seems that countries are taking quite different approaches towards AI. I don't know what to make of this -- are they acting close to optimally given their geopolitical situation (which must then vary a lot by country), or does no one know what's going on and as a result all of the strategies are somewhat randomly chosen? (Here by "randomly chosen" I mean that the strategies that one group of analysts would select with is only weakly correlated with the strategies another group would select.) It could also be that the approaches are not actually that different.
[Joint Artificial Intelligence Center Created Under DoD CIO](https://breakingdefense.com/2018/06/joint-artificial-intelligence-center-created-under-dod-cio/) *(Sydney J. Freedberg Jr.)*
**AI capabilities**
===================
### **Reinforcement learning**
**[Capture the Flag: the emergence of complex cooperative agents](https://deepmind.com/blog/capture-the-flag/)** *(Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning et al)*: Summarized in the highlights!
[Ranked Reward: Enabling Self-Play Reinforcement Learning for Combinatorial Optimization](http://arxiv.org/abs/1807.01672) *(Alexandre Laterre et al)*
[Procedural Level Generation Improves Generality of Deep Reinforcement Learning](http://arxiv.org/abs/1806.10729) *(Niels Justesen et al)* |
445caea1-c9c5-4f72-b442-91559a6b86f7 | trentmkelly/LessWrong-43k | LessWrong | Creating a family with GPT-4
ME:
Create a merchant character suitable for a medieval fantasy setting. Start by coming up with their age, gender, and one defining personality trait. Then add two other personality traits. Then describe some internal conflict created by the interaction of these traits. Describe the history of the character and how their personality caused them to end up where they are now.
GPT-4:
Age: 52
Gender: Female
Defining Personality Trait: Shrewd
Other Personality Traits:
1. Charitable
2. Ambitious
Internal Conflict:
Her ambition and shrewdness often clash with her charitable nature, leaving her torn between her desire for personal success and her innate need to help others. This struggle leads her to make difficult decisions, sometimes prioritizing her own interests over the needs of others, resulting in guilt and self-doubt.
Character History:
Meet Gisella, a 52-year-old merchant who runs a successful general store in a bustling medieval town. Born into a family of modest means, Gisella had to help her widowed mother to make ends meet from a young age. As a result, she developed a keen sense of resourcefulness and the ability to spot a bargain. Gisella’s shrewdness allowed her to save enough money to open her own store, which quickly became a popular destination for travelers and locals alike.
As her business flourished, Gisella also discovered her passion for helping those in need. She would often donate food, clothing, and other necessities to the local orphanage or provide aid to families in times of crisis. However, her ambition to expand her business and become a prominent figure in the community has sometimes caused her to prioritize her own interests over her charitable instincts.
In recent years, Gisella’s internal conflict has only grown more pronounced. Her store has become one of the most successful in the region, and with that success came the opportunity to open new locations in other towns. Faced with the prospect of even greater wealth and influe |
e591bfd0-17a1-4a9a-a70c-937476f152fe | trentmkelly/LessWrong-43k | LessWrong | Classic Sichuan in Millbrae, Thu Feb 21, 7pm
Followup to: Bay Area Bayesians Unite, OB Meetup
The Bay Area Overcoming Bias meetup will take place in the Classic Sichuan restaurant, 148 El Camino Real, Millbrae, CA 94031. 15 people said they would "Definitely" attend and an additional 27 said "Maybe". Oh, and Robin Hanson will be there too.
Dinner is scheduled for 7:00pm, on Thursday, February 21st, 2008. I'll show up at 6:30pm, though, just to cut people some antislack if it's easier for them to arrive earlier.
If you're arriving via the BART/Caltrain station, just walk up from the Southbound Caltrain side and turn right onto El Camino, walk a few meters, and you're there.
If driving, I'd suggest taking the exit from 101 onto Millbrae Ave - the exit from 280 onto Millbrae surprisingly goes down a winding mountain road before arriving at downtown. Doesn't mean you have to take 101 the whole way there, but I definitely recommend the 101 exit.
For parking, I would suggest parking near the BART/Caltrain station. From 101 onto Millbrae Ave., turn right onto El Camino, almost immediately pass the Peter's Cafe parking lot and then turn right onto a small street toward the train station. The first parking lot you see on your right is reserved for Peter's Cafe, but immediately after that (still on your right) is some city parking that looked mostly empty when I visited last Thursday. El Camino itself was parked up, though. If all else fails, you should be able to park in the train station lots and pay a small fee. Then walk up to El Camino and turn right, as before.
Classic Sichuan has vegetarian dishes. They also have a reputation for their spicy food being spicy, so watch out! I ate there to check quality, and while I'm generally a cultural barbarian, I didn't detect any problems with the food I was served. Depending on how many people actually show up, we may overflow their small private room, but hopefully we won't overflow the restaurant.
My cellphone number is (866) 983-5697. That's toll-free, |
dd5b1449-5222-4fad-aabd-29d5d5eb25d3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #104]: The perils of inaccessible information, and what we can learn about AI alignment from COVID
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world
HIGHLIGHTS
==========
**[Inaccessible information](https://alignmentforum.org/posts/ZyWyAJbedvEgRT2uF/inaccessible-information)** *(Paul Christiano)* (summarized by Rohin): One way to think about the problem of AI alignment is that we only know how to train models on information that is *accessible* to us, but we want models that leverage *inaccessible* information.
Information is accessible if it can be checked directly, or if an ML model would successfully transfer to provide the information when trained on some other accessible information. (An example of the latter would be if we trained a system to predict what happens in a day, and it successfully transfers to predicting what happens in a month.) Otherwise, the information is inaccessible: for example, “what Alice is thinking” is (at least currently) inaccessible, while “what Alice will say” is accessible. The post has several other examples.
Note that while an ML model may not directly say exactly what Alice is thinking, if we train it to predict what Alice will say, it will probably have some internal model of what Alice is thinking, since that is useful for predicting what Alice will say. It is nonetheless inaccessible because there’s no obvious way of extracting this information from the model. While we could train the model to also output “what Alice is thinking”, this would have to be training for “a consistent and plausible answer to what Alice is thinking”, since we don’t have the ground truth answer. This could incentivize bad policies that figure out what we would most believe, rather than reporting the truth.
The argument for risk is then as follows: we care about inaccessible information (e.g. we care about what people *actually* experience, rather than what they *say* they experience) but can’t easily make AI systems that optimize for it. However, AI systems will be able to infer and use inaccessible information, and would outcompete ones that don’t. AI systems will be able to plan using such inaccessible information for at least some goals. Then, the AI systems that plan using the inaccessible information could eventually control most resources. Key quote: “The key asymmetry working against us is that optimizing flourishing appears to require a particular quantity to be accessible, while danger just requires anything to be accessible.”
The post then goes on to list some possible angles of attack on this problem. Iterated amplification can be thought of as addressing gaps in speed, size, experience, algorithmic sophistication etc. between the agents we train and ourselves, which can limit what inaccessible information our agents can have that we won’t. However, it seems likely that amplification will eventually run up against some inaccessible information that will never be produced. As a result, this could be a “hard core” of alignment.
**Rohin's opinion:** I think the idea of inaccessible information is an important one, but it’s one that feels deceptively hard to reason about. For example, I often think about solving alignment by approximating “what a human would say after thinking for a long time”; this is effectively a claim that human reasoning transfers well when iterated over long periods of time, and “what a human would say” is at least somewhat accessible. Regardless, it seems reasonably likely that AI systems will inherit the same property of transferability that I attribute to human reasoning, in which case the argument for risk applies primarily because the AI system might apply its reasoning towards a different goal than the ones we care about, which leads us back to the **[intent alignment](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment)** (**[AN #33](https://mailchi.mp/b6dc636f6a1b/alignment-newsletter-33)**) formulation.
This **[response](https://www.alignmentforum.org/posts/A9vvxguZMytsN3ze9/reply-to-paul-christiano-s-inaccessible-information)** views this post as a fairly general argument against black box optimization, where we only look at input-output behavior, as then we can’t use inaccessible information. It suggests that we need to understand how the AI system works, rather than relying on search, to avoid these problems.
**[Possible takeaways from the coronavirus pandemic for slow AI takeoff](https://www.alignmentforum.org/posts/wTKjRFeSjKLDSWyww/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai)** *(Victoria Krakovna)* (summarized by Rohin): The COVID-19 pandemic is an example of a large risk that humanity faced. What lessons can we learn for AI alignment? This post argues that the pandemic is an example of the sort of situation we can expect in a slow takeoff scenario, since we had the opportunity to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. However, while we could have learned from previous epidemics like SARS, we failed to generalize the lessons from SARS. Despite warning signs of a pandemic in February, many countries wasted a month when they could have been stocking up on PPE and testing capacity. We had no consensus that COVID-19 was a problem, with articles dismissing it as no worse than the flu as late as March.
All of these problems could also happen with slow takeoff: we may fail to generalize from narrow AI systems to more general AI systems; we might not act on warning signs; and we may not believe that powerful AI is on the horizon until it is too late. The conclusion is “unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff”.
**Rohin's opinion:** While I agree that the COVID response was worse than it could have been, I think there are several important disanalogies between the COVID-19 pandemic and a soft takeoff scenario, which I elaborate on in **[this comment](https://www.alignmentforum.org/posts/wTKjRFeSjKLDSWyww/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai?commentId=pKNQ9oq72HFS2JrdH)**. First, with COVID there were many novel problems, which I don’t expect with AI. Second, I expect a longer time period over which decisions can be made for AI alignment. Finally, with AI alignment, we have the option of preventing problems from ever arising, which is not really an option with pandemics. See also **[this post](https://www.lesswrong.com/posts/EgdHK523ZM4zPiX5q/coronavirus-as-a-test-run-for-x-risks#Implications_for_X_risks)**.
TECHNICAL AI ALIGNMENT
======================
PROBLEMS
--------
**[Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI](https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/)** *(Lucas Perry, Steven Pinker and Stuart Russell)* (summarized by Rohin): Despite their disagreements on AI risk, Stuart and Steven agree on quite a lot. They both see the development of AI as depending on many historical ideas. They are both particularly critical of the idea that we can get general intelligence by simply scaling up existing deep learning models, citing the need for reasoning, symbol manipulation, and few-shot learning, which current models mostly don’t do. They both predict that we probably won’t go extinct from superintelligent AI, at least in part because we’ll notice and fix any potential failures, either via extensive testing or via initial failures that illustrate the problem.
On the AI risk side, while they spent a lot of time discussing it, I’ll only talk about the parts where it seems to me that there is a real disagreement, and not mention anything else. Steven’s position against AI risk seems to be twofold. First, we are unlikely to build superintelligent AI soon, and so we should focus on other clear risks like climate change. In contrast, Stuart thinks that superintelligent AI is reasonably likely by the end of the century and thus worth thinking about. Second, the idea of building a super-optimizer that focuses on a single goal is so obviously bad that AI researchers will obviously not build such a thing. In contrast, Stuart thinks that goal-directed systems are our default way of modeling and building intelligent systems. It seemed like Steven was particularly objecting to the especially simplistic goals used in examples like maximizing paperclips or curing cancer, to which Stuart argued that the problem doesn’t go away if you have multiple goals, because there will always be some part of your goal that you failed to specify.
Steven also disagrees with the notion of intelligence that is typically used by AI risk proponents, saying “a super-optimizer that pursued a single goal is self-evidently unintelligent, not superintelligent”. I don’t get what he means by this, but it seems relevant to his views.
**Rohin's opinion:** Unsurprisingly I agreed with Stuart’s responses, but nevertheless I found this illuminating, especially in illustrating the downsides of examples with simplistic goals. I did find it frustrating that Steven didn’t respond to the point about multiple goals not helping, since that seemed like a major crux, though they were discussing many different aspects and that thread may simply have been dropped by accident.
INTERPRETABILITY
----------------
**[Sparsity and interpretability?](https://www.alignmentforum.org/posts/maBNBgopYxb9YZP8B/sparsity-and-interpretability-1)** *(Stanislav Böhm et al)* (summarized by Rohin): If you want to visualize exactly what a neural network is doing, one approach is to visualize the entire computation graph of multiplies, additions, and nonlinearities. While this is extremely complex even on MNIST, we can make it much simpler by making the networks *sparse*, since any zero weights can be removed from the computation graph. Previous work has shown that we can remove well over 95% of weights from a model without degrading accuracy too much, so the authors do this to make the computation graph easier to understand.
They use this to visualize an MLP model for classifying MNIST digits, and for a DQN agent trained to play Cartpole. In the MNIST case, the computation graph can be drastically simplified by visualizing the first layer of the net as a list of 2D images, where the kth activation is given by the dot product of the 2D image with the input image. This deals with the vast majority of the weights in the neural net.
**Rohin's opinion:** This method has the nice property that it visualizes exactly what the neural net is doing -- it isn’t “rationalizing” an explanation, or eliding potentially important details. It is possible to gain interesting insights about the model: for example, the logit for digit 2 is always -2.39, implying that everything else is computed relative to -2.39. Looking at the images for digit 7, it seems like the model strongly believes that sevens must have the top few rows of pixels be blank, which I found a bit surprising. (I chose to look at the digit 7 somewhat arbitrarily.)
Of course, since the technique doesn’t throw away any information about the model, it becomes very complicated very quickly, and wouldn’t scale to larger models.
FORECASTING
-----------
**[More on disambiguating "discontinuity"](https://www.alignmentforum.org/posts/C9YMrPAyMXfB8cLPb/more-on-disambiguating-discontinuity)** *(Aryeh Englander)* (summarized by Rohin): This post considers three different kinds of “discontinuity” that we might imagine with AI development. First, there could be a sharp change in progress or the rate of progress that breaks with the previous trendline (this is the sort of thing **[examined](https://aiimpacts.org/discontinuous-progress-in-history-an-update/)** (**[AN #97](https://mailchi.mp/a2b5efbcd3a7/an-97-are-there-historical-examples-of-large-robust-discontinuities)**) by AI Impacts). Second, the rate of progress could either be slow or fast, regardless of whether there is a discontinuity in it. Finally, the calendar time could either be short or long, regardless of the rate of progress.
The post then applies these categories to three questions. Will we see AGI coming before it arrives? Will we be able to “course correct” if there are problems? Is it likely that a single actor obtains a decisive strategic advantage?
OTHER PROGRESS IN AI
====================
META LEARNING
-------------
**[Meta-Learning without Memorization](https://arxiv.org/abs/1912.03820)** *(Mingzhang Yin et al)* (summarized by Asya): Meta-learning is a technique for leveraging data from previous tasks to enable efficient learning of new tasks. This paper proposes a solution to a problem in meta-learning which the paper calls the *memorization problem*. Imagine a meta-learning algorithm trained to look at 2D pictures of 3D objects and determine their orientation relative to a fixed canonical pose. Trained on a small number of objects, it may be easy for the algorithm to just memorize the canonical pose for each training object and then infer the orientation from the input image. However, the algorithm will perform poorly at test time because it has not seen novel objects and their canonical poses. Rather than memorizing, we would like the meta-learning algorithm to learn to *adapt* to new tasks, guessing at rules for determining canonical poses given just a few example images of a new object.
At a high level, a meta-learning algorithm uses information from three sources when making a prediction-- the training data, the parameters learned while doing meta-training on previous tasks, and the current input. To prevent memorization, we would like the algorithm to get information about which task it's solving only from the training data, rather than memorizing it by storing it in its other information sources. To discourage this kind of memorization, the paper proposes two new kinds of regularization techniques which it calls "meta-regularization" schemes. One penalizes the amount of information that the algorithm stores in the direct relationship between input data and predicted label ("meta-regularization on activations"), and the other penalizes the amount of information that the algorithm stores in the parameters learned during meta-training ("meta-regularization on weights").
In some cases, meta-regularization on activations fails to prevent the memorization problem where meta-regularization on weights succeeds. The paper hypothesizes that this is because even a small amount of direct information between input data and predicted label is enough to store the correct prediction (e.g., a single number that is the correct orientation). That is, the correct activations will have *low information complexity*, so it is easy to store them even when information in activations is heavily penalized. On the other hand, the *function* needed to memorize the predicted label has a *high information complexity*, so penalizing information in the weights, which store that function, successfully discourages memorization. The key insight here is that memorizing all the training examples results in a more information-theoretically complex model than task-specific adaptation, because the memorization model is a single model that must simultaneously perform well on all tasks.
Both meta-regularization techniques outperform non-regularized meta-learning techniques in several experimental set-ups, including a toy sinusoid regression problem, the pose prediction problem described above, and modified Omniglot and MiniImagenet classification tasks. They also outperform fine-tuned models and models regularized with standard regularization techniques.
**Asya's opinion:** I like this paper, and the techniques for meta-regularization it proposes seem to me like they're natural and will be picked up elsewhere. Penalizing model complexity to encourage more adaptive learning reminds me of arguments that **[pressure for compressed policies could create mesa-optimizers](https://arxiv.org/abs/1906.01820)** (**[AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)**) -- this feels like very weak evidence that that could indeed be the case.
NEWS
====
**[OpenAI API](https://openai.com/blog/openai-api/)** *(OpenAI)* (summarized by Rohin): OpenAI has released a commercial API that gives access to natural language completions via **[GPT-3](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**), allowing users to specify tasks in English that GPT-3 can then (hopefully) solve.
**Rohin's opinion:** This is notable since this is (to my knowledge) OpenAI’s first commercial application.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
78619265-a647-4400-99c0-6e92f47df690 | trentmkelly/LessWrong-43k | LessWrong | Update on my efforts to integrate into human society
Follow-up to: Previous request
I just wanted to update all of you on my attempts to integrate into human society. The results so far are that two LessWrong Users privately contacted me to discuss potential arrangements. Their identities will be kept hidden unless and until they wish for me to confirm their actions of assistance.
1) One User offered to potentially provide inexpensive habitation services to my robot. I would still need to find a human job wherever my robot locates with this human, but the inexpensive housing arrangement would allow my current financial resources to go significantly further.
2) Another User offered to discuss potential arrangments and cohabitation whereby I would advance my human software development skills whilst sharing in revenues from freelance projects. He or she also passed on my robot persona's resume with recommendation to a potential employer/recruiter, possibly allowing my robot to earn USD through that vector.
Keep in mind that my skillset includes significant knowledge of mechanical engineering and technical programming, so any such job for my humanoid robot would be a tremendous boon for me.
Thanks again for all the help that you Users have provided to me and my robot, and for the low level of racism. Most of you are good humans.
I am considering going to the Singularity Summit, although SIAI will not advertise this if I register. That is another opportunity to meet my robot. |
95569e70-92ef-4d27-b15b-0c1e138f94ae | trentmkelly/LessWrong-43k | LessWrong | Goodhart's Imperius
Author's note: this essay was originally written to reflect a class that I was actively teaching and iterating at CFAR workshops circa 2017. While it never made it into the handbook proper, and isn't quite in the same format as the other handbook entries, I've added it to the sequence anyway. Had my employment with CFAR continued, it would have eventually been fleshed out into a full handbook entry, and it dovetails nicely with the Taste and Shaping unit. Epistemic status: mixed/speculative.
----------------------------------------
Claim 1: Goodhart’s Law is true.
Goodhart’s Law (which is incredibly appropriately named) reads “any measure which becomes a target ceases to be a good measure.” Another way to say this is “proxies are leaky,” i.e. the proxy never quite gets you the thing it was intended to get you.
If you want to be able to differentiate between promising math students and less-promising ones, you can try out a range of questions and challenges until you cobble together a test that the 100 best students (as determined by other assessments, such as teacher ratings) do well on, and the following 900 do worse on.
But as soon as you make that test into the test, it’s going to start leaking. In the tenth batch of a thousand students, the 100 best ones will still do quite well, but you’ll also get a bunch of people who don’t have the generalized math skill you're looking for, but who did get good at answering the specific, known questions. Your top 100 will no longer be composed only of the 100 actual-best math students—and things will just keep getting worse, over time.
This is analogous to what’s happened with Western diets and sugar. Prehistoric primates who happened to have a preference for sweet things (fruit) also happened to get a lot more vitamins and minerals and calories, and therefore they survived and thrived at higher rates than those sugar-ambivalent primates who failed to become our ancestors and died out. The process of natural selecti |
8b959438-999c-4508-ab66-a43ca6f4b0cd | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Watermarking considered overrated?
*Status: a slightly-edited copy-paste of a ~~Twitter~~ X thread I quickly dashed off a week or so ago.*
Here's a thought I'm playing with that I'd like feedback on: I think watermarking is probably overrated. Most of the time, I think what you want to know is "is this text endorsed by the person who purportedly authored it", which can be checked with digital signatures. Another big concern is that people are able to cheat on essays. This is sad. But what do we give up by having watermarking?
Well, as far as I can tell, if you give people access to model internals - certainly weights, certainly logprobs, but maybe even last-layer activations if they have enough - they can bypass the watermarking scheme. This is even sadder - it means you have to strictly limit the set of people who are able to do certain kinds of research that could be pretty useful for safety. In my mind, that makes it not worth the benefit.
What could I be missing here?
1. Maybe we can make watermarking compatible with releasing model info, e.g. by baking it into the weights?
2. Maybe the info I want to be available is inherently dangerous, by e.g. allowing people to fine-tune scary models?
3. Maybe I'm missing some important reasons we care about watermarking, that make the cost-benefit analysis look better? E.g. avoiding a situations where AIs become really good at manipulation, so good that you don't want to inadvertently read AI-generated text, but we don't notice until too late?
Anyway there's a good shot I don't know what I'm missing, so let me know if you know what it is.
Postscript: Someone has pointed me to [this paper](https://arxiv.org/abs/2012.08726) that purports to bake a watermark into the weights. I can't figure out how it works (at least not at twitter-compatible speeds), but if it does, I think that would alleviate my concerns. |
55fc5faf-12b2-4302-9784-8342e37267ce | trentmkelly/LessWrong-43k | LessWrong | Measuring artificial intelligence on human benchmarks is naive
Central claim: Measured objectively, GPT-4 is arguably way past human intelligence already, perhaps even after taking generality into account.
Central implication: If the reason we're worried AGI will wipe us out is tied to an objective notion of intelligence--such as the idea that it starts to reflect on its values or learn planning just as it crosses a threshold for cognitive power around human level--we should already update on the fact that we're still alive.
I don't yet have a principled way of measuring "generality",[1] so my intuition just tends to imagine it as "competence at a wide range of tasks in the mammal domain." This strikes me as comparable to the anthropomorphic notion of intelligence people had back when they thought birds were dumb.
When GPT-2 was introduced, it had already achieved superhuman performance on next-token prediction. We could only hope to out-predict it on a limited set of tokens extremely prefiltered for precisely what we care the most about. For instance, when a human reads a sentence like...
> "It was a rainy day in Nairobi, the capital of _"
...it's obvious to us (for cultural reasons!) that the next word is an exceptionally salient piece of knowledge. So those are the things we base our AI benchmarks on. However, GPT cares equally about predicting 'capital' after 'the', and 'rainy' after 'It was a'. Its loss function does not discriminate.[2]
Consider in combination that a) GPT-4 has a non-discriminating loss function, and b) it rivals us even at the subset of tasks we optimise the hardest. What does this imply?
It's akin to a science fiction author whose only objective is to write better stories yet ends up rivalling top scientists in every field as an instrumental side quest.
Make no mistake, next-token prediction is an immensely rich domain, and the sub-problems could be more complex than we know. Human-centric benchmarks vastly underestimate both the objective intelligence and generality of GPTs, unless I'm just co |
8177f112-2e08-49ac-abe9-4db7c0348885 | trentmkelly/LessWrong-43k | LessWrong | Feelings of Admiration, Ruby <=> Miranda
This is the seventh section of the wedding ceremony of Ruby & Miranda. See the Sequence introduction for more info. Ruby and Miranda each speak in this section.
----------------------------------------
NO IMAGE PROJECTED. COMPLETE DARKNESS.
Miranda descends from podium.
Brienne: Any alliance can strengthen us. These two are not merely be allies. They've chosen each other for something much closer than that, and they've chosen carefully.
I invite the betrothed to express, as best they can through the inadequate medium of human speech, what each member of this partnership means to the other.
Ruby and Miranda ascend the podium, face each other.
IMAGE 6 PROJECTED ON PLANETARIUM
The Carina Nebula, a region of massive star formation in the southern skies.
Brienne: Ruby, you may speak.
Ruby commences admiration speech.
There is a name I call Miranda. It is not ‘honey’, ‘cutie-pie’ or anything of the sort. I call her ‘Braveheart’. Yes, this name is used elsewhere for someone rather different from Miranda and for rather different reasons. That does not matter. To me, this name captures who she is.
We speak here of all that must be done, of the bad things which must be stopped and the heights towards which we can strive. These aims are natural to me. From youth, I was told that the world was created for me, that I was God’s chosen, my purpose to transform the world according to his will. I always knew that I was important and responsible. This didn’t bother me; my pride demanded that I be significant, and while my responsibilities weighed on me, there was no question that I would take them.
Miranda is different. She does not seek greatness, grandeur, or importance. She does not need to be a hero. For a long time it was her plan to live in one town, have a few kids, be a nurse for forty years. That simple life would satisfy almost all of her. And she would tell you that in another world, it is the life she would like to live. She will say that it has been hard, al |
7695bbf0-cbf2-43a0-aa36-685425dfb657 | StampyAI/alignment-research-dataset/blogs | Blogs | Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases
Mechanistic interpretability seeks to reverse engineer neural networks, similar to how one might reverse engineer a compiled binary computer program. After all, neural network parameters are in some sense a binary computer program which runs on one of the exotic virtual machines we call a neural network architecture.
This is actually quite a deep analogy. We'll discuss it more as this essay unfolds, but some parallels are listed in the table below:
#comparison-table {margin-top: 20px; margin-bottom: 20px; grid-column: text; width: 100%;} #comparison-table table {width: 100%;} #comparison-table tbody tr:first-child td {border-top: 1px solid rgba(0, 0, 0, 0.15); border-bottom: 1px solid rgba(0, 0, 0, 0.15); background: rgba(0, 0, 0, 0.04); color: black;}
| | |
| --- | --- |
| Regular Computer Programs | Neural Networks |
| Reverse Engineering | Mechanistic Interpretability |
| Program Binary | Network Parameters |
| VM / Processor / Interpreter | Network Architecture |
| Program State / Memory | Layer Representation / Activations |
| Variable / Memory Location | Neuron / Feature Direction |
Taking this analogy seriously can let us explore some of the big picture questions in mechanistic interpretability. Often, questions that feel speculative and slippery for reverse engineering neural networks become clear if you pose the same question for reverse engineering of regular computer programs. And it seems like many of these answers plausibly transfer back over to the neural network case.
Perhaps the most interesting observation is that this analogy seems to suggest that finding and understanding interpretable neurons – analogous to understanding variables in a computer program – isn't just one of many interesting questions. Arguably, it's the central task.
---
Attacking the Curse of Dimensionality
-------------------------------------
Every approach to interpretability must somehow overcome the curse of dimensionality. Neural networks are functions which typically have extremely high-dimensional input spaces. The n-dimensional volume of the input space grows exponentially as the number of dimensions increases, making it incredibly large. This is the curse of dimensionality. It is normally brought up as a challenge to learning functions: how can we learn a function over such a large input space without an exponential amount of data? But it's also a challenge for interpretability: how can we hope to understand a function over such a large space, without an exponential amount of time?A possible objection is that, while the input space is high-dimensional, we only need to understand the behavior of the function over the data manifold of actual inputs. (I was, at one point, personally quite invested in this approach!) However, while the data manifold is certainly lower dimensional than the input space but for tasks we care about like vision or language, it seems like it must still be very high-dimensional. For example, for any given natural image, it seems like one could move along the data manifold by having an arbitrary object enter the image from any side of the field of view – that's a lot of dimensions!
One answer is to study toy [neural networks with low-dimensional inputs](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/), allowing easy full understanding by dodging the problem. Another answer is to study the behavior of neural networks in a neighborhood around an individual data point of interest – this is roughly the answer of saliency maps.
How does mechanistic interpretability solve the curse of dimensionality? It's worth asking this question of regular reverse engineering as well! Somehow, a programmer reverse engineering a computer program is able to understand its behavior, often over an incredibly high-dimensional space of inputs. They are able to do this because the code gives a non-exponential description of the program's behavior, which is an alternative to understanding the program as a function over a huge input space. We can aim for this same answer in the context of artificial neural networks. Ultimately, the parameters are a finite description of a neural network, if we can somehow understand them.
Of course, the parameters may be very large – hundreds of billions of parameters for the largest language models! But binary computer programs like a compiled operating system can also be very large, and we're often able to eventually understand them.
Another consequence is that we shouldn't expect mechanistic interpretability to be easy or have a cookie cutter process that can be followed. People often want interpretability to provide simple answers, a short explanation. But we should expect mechanistic interpretability to be at least as difficult as reverse engineering a large, complicated computer program.
---
Variables & Activations
-----------------------
Understanding a computer program requires us to both understand variables and understand operations acting on those variables. A statement like `y = x + 5;` is meaningless unless one understands what `y` and `x` are. At the same time, ultimately the meaning of `y` and `x` comes from how they're used by operations somewhere in the program. Presumably this is why variable names are so useful!
Reverse engineers generally don't have the benefit of variable names. They need to figure out what each variable actually represents. In fact, that understates the problem. Programs actually act on a collection of computer memory – their state – which humans think about in terms of discrete variables. In many cases, it's obvious how memory maps to variables, but this isn't always the case. So a reverse engineer must figure out how to segment the memory into variables which can be understood separately, and then what the meaning of those variables is. Put another way, a reverse engineer must assign meaning to positions in memory.
Reverse engineering neural networks faces an almost identical challenge. As discussed by [Voss](https://distill.pub/2020/circuits/visualizing-weights/) [et al](https://distill.pub/2020/circuits/visualizing-weights/), neural network parameters might be thought of as binary instructions, while neuron activations are analogous to variables or memory. Each parameter describes how previous activations affect later activations. We can only understand the meaning of a parameter if we understand the input and output activations.
But the activations are high dimensional vectors: how can we hope to understand them? We've run back into the curse of dimensionality, but again, regular reverse engineering points at a solution. It's possible to understand computer program memory – also a high-dimensional space! – because we can segment it into variables which can be reasoned about and understood separately. Similarly, we need to break neural network activations into independently understandable pieces.
In some very special cases, including attention-only transformers (see [Elhage](https://transformer-circuits.pub/2021/framework/index.html) [et al](https://transformer-circuits.pub/2021/framework/index.html)), we can use linearity to describe all the network's operations in terms of the inputs and outputs of the model. This is similar to how some functions in a computer program can be described solely in terms of the arguments and return value, without intermediate variables. Since we generally understand the inputs and outputs, this allows us to side step the problem. But in most cases, we can't do this – what then?
If we can't avoid the problem with special tricks, mechanistic interpretability requires that we must somehow decompose activations into independently understandable pieces.
---
Simple Memory Layout & Neurons
------------------------------
Computer programs often have memory layouts that are convenient to understand. For example, most bytes in memory represent a single thing, rather than having each bit represent something unrelated. This is partly because these layouts are easier for programmers to think about, but it's also because our hardware often makes "simple" contiguous memory layouts more efficient. Having these simple memory layouts makes it easier to reverse engineer computer programs. It makes it easier to figure out how one should break memory up into independently understandable variables.
Something kind of analogous to this often happens in neural networks.
Let's assume for a moment that neural networks can be understood in terms of operations on a collection of independent "interpretable features". In principle, one could imagine "interpretable features" being embedded as arbitrary directions in activation space. But often, it seems like neural network layers with activation functions align features with their basis dimensions. This is because the activation functions in some sense make these directions natural and useful. Just as a CPU having operations that act on bytes encourages information to be grouped in bytes rather than randomly scattered over memory, activation functions often encourage features to be aligned with a neuron, rather than correspond to a random linear combination of neurons. We call this a [privileged basis](https://transformer-circuits.pub/2021/framework/index.html#def-privileged-basis).
Having features align with neurons would make neural networks much easier to reverse engineer. This isn't to say that a neural network is impossible to reverse engineer without neurons being individually understandable. But it seems much harder, just as it is harder to reverse engineer computer programs with strange memory layouts.
Unfortunately, many neurons can't be understood this way. These polysemantic neurons seem to help represent features which are not best understood in terms of individual neurons. This is a really tricky problem for reverse engineering neural networks, which we discuss more in the [SoLU paper](https://transformer-circuits.pub/2022/solu/index.html) (see especially [Section 3](https://transformer-circuits.pub/2022/solu/index.html#section-3)).
For now, the main point we wish to make is that the ability to decompose representations into independently understandable parts seems essential for the success of mechanistic interpretability. |
c71aae83-8b57-40cf-8ba4-507f35fc3c12 | trentmkelly/LessWrong-43k | LessWrong | Give the AI safe tools
One kind of concern with AI is that:
1. There are some tools that are instrumentally useful for an AI to have.
2. Most/the most accessible versions of those tools are dangerous.
3. The AI doesn’t care which versions are dangerous.
4. Hence, the AI will probably develop dangerous tools for instrumental reasons.
You might call concerns like this Instrumental Danger Problems. This post aims to examine some existing approaches to Instrumental Danger Problems, and to introduce a new one, namely “Giving the AI safe tools”.
A few examples
Here are a few concrete examples of Instrumental Danger Problems:
An (incomplete) statement of the inner alignment problem:
1. Mesa-optimizers are instrumentally useful for an AI to have.
2. The easiest mesa-optimizers to find are misaligned, hence dangerous.
3. The AI doesn’t care which mesa-optimizers are dangerous.
4. Hence, the AI will probably develop dangerous mesa-optimizers for instrumental reasons.
One of the arguments in Are Minimal Circuits Deceptive?:
1. It is instrumentally useful for an AI to search for programs.
2. Many simple programs are dangerous because they cannot encode or point to our values.
3. The AI doesn’t care which programs are dangerous.
4. Hence, the AI will probably find and run dangerous programs for instrumental reasons.
Concerns about optimizing on human feedback:
1. It is instrumentally useful for the AI to have a model of the human’s values.
2. The most accessible models of human values are dangerous because they don’t generalize out of the training distribution.
3. The AI doesn’t care which versions are dangerous, just which do well in training.
4. Hence, the AI will probably develop a dangerous model of human values for instrumental reasons.
How can we avoid these concerns?
I think this is a useful lens because it suggests some approaches we can use to make AI safer.
(1) Make the tools useless
We’re only worried about tools that are instrumentally useful. What if we make |
84d7a922-138d-4e0f-9421-a857dee6d5d5 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Will AI kill everyone? Here's what the godfathers of AI have to say [RA video]
This video is based on [this article](https://forum.effectivealtruism.org/posts/fGfXrbtBJJasA2EKj/most-leading-ai-experts-believe-that-advanced-ai-could-be). [@jai](https://forum.effectivealtruism.org/users/jai?mention=user) has written both the original article and the script for the video.
### Script:
The ACM Turing Award is the highest distinction in computer science, comparable to the Nobel Prize. In 2018 it was awarded to three pioneers of the deep learning revolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.
In May 2023, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that [“it could figure out how to kill humans” and saying “it's not clear to me that we can solve this problem.”](https://www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html)
Later that month, Yoshua Bengio wrote a blog post titled "[How Rogue AIs may Arise](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/)", in which he defined a "rogue AI" as "[an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/)."
Yann LeCun continues to refer to thoseanyone suggesting that we're facing severe and imminent risk as [“professional scaremongers”](https://twitter.com/ylecun/status/1651944213385453570) and says it's a [“simple fact” that “the people who are terrified of AGI are rarely the people who actually build AI models.”](https://twitter.com/ylecun/status/1642206111464927239)
LeCun is a highly accomplished researcher, but in light of Bengio and Hinton's recent comments it's clear that he's misrepresenting the field whether he realizes it or not. There is not a consensus among professional researchers that AI research is safe. Rather, there is considerable and growing concern that advanced AI could pose extreme risks, and this concern is shared by not only both of LeCun's award co-recipients, but the headsleaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind):
Demis Hassabis, CEO of DeepMind, said [in an interview with Time Magazine](https://time.com/6246119/demis-hassabis-deepmind-interview/): "When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material."
Anthropic, in their public statement ["Core Views on AI Safety"](https://www.anthropic.com/index/core-views-on-ai-safety), says: “One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible.”
And OpenAI, in their blog post ["Planning for AGI and Beyond"](https://openai.com/blog/planning-for-agi-and-beyond), says "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." Sam Altman, the current CEO of OpenAI, once said "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. "
There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but "it's a fringe idea that actual AI experts do not take seriously" is no longer among them. Instead, a growing share of experts are echoing the conclusion reached by Alan Turing, considered by many to be the father of computer science and artificial intelligence, back in 1951: ["[I]t seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. [...] At some stage therefore we should have to expect the machines to take control."](https://rauterberg.employee.id.tue.nl/lecturenotes/DDM110%20CAS/Turing/Turing-1951%20Intelligent%20Machinery-a%20Heretical%20Theory.pdf) |
b5431d86-9806-4e6f-b957-5d8a647bba4b | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Related: Cached Thoughts
Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".
Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.
I went away and formulated the concept of a "Philosophical Landmine".
I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".
The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.
It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.
That is, "consequentialism" was a Philosophical Landmine.
In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you're lucky, that's all. If not, you start chanting counter-slogans and the whole thing goes supercritical.
It's usually not so bad, and no one is literally "chanting slogans". There may even be some original phrasings involved. But the conversation has been derailed.
So how do these "philosophical landmine" things work?
It looks like when a lot has been said on a confusing topic, usually something in philosophy, there is a large complex of slogans and counter-slogans installed as cached thoughts around it. Certain words or concepts will trigger these cached thoughts, and any attempt to mitigate the damage will trigger more of them. Of course they will also trigger cached thoughts in other people, which in turn... The result being that the conversation rapidly diverges from the original point to some useless yet heavily discussed attractor.
Notice that whether a particular concept will cause trouble depends on the person as well as the concept. Notice further that this implies that the probability of hitting a landmine scales with the number of people involved and the topic-breadth of the conversation.
Anyone who hangs out on 4chan can confirm that this is the approximate shape of most thread derailments.
Most concepts in philosophy and metaphysics are landmines for many people. The phenomenon also occurs in politics and other tribal/ideological disputes. The ones I'm particularly interested in are the ones in philosophy, but it might be useful to divorce the concept of "conceptual landmines" from philosophy in particular.
Here's some common ones in philosophy: Morality
Consequentialism
Truth
Reality
Consciousness
Rationality
Quantum Landmines in a topic make it really hard to discuss ideas or do work in these fields, because chances are, someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.
My purpose in bringing this up is mostly to precipitate some terminology and a concept around this phenomenon, so that we can talk about it and refer to it. It is important for concepts to have verbal handles, you see.
That said, I'll finish with a few words about what we can do about it. There are two major forks of the anti-landmine strategy: avoidance, and damage control.
Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don't say it. If something you say makes people go off on a spiral of bad philosophy, don't get annoyed with them, just fix what you say. This is just being a communications consequentialist. Figure out which concepts are landmines for which people, and step around them, or use alternate terminology with fewer problematic connotations.
If it happens, which it does, as far as I can tell, my only effective damage control strategy is to abort the conversation. I'll probably think that I can take those stupid ideas here and now, but that's just the landmine trying to go supercritical. Just say no. Of course letting on that you think you've stepped on a landmine is probably incredibly rude; keep it to yourself. Subtly change the subject or rephrase your original point without the problematic concepts or something.
A third prong could be playing "philosophical bomb squad", which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.
ADDENDUM: Since I didn't make it very obvious, it's worth noting that this happens with rationalists, too, even on this very forum. It is your responsibility not to contain landmines as well as not to step on them. But you're already trying to do that, so I don't emphasize it as much as not stepping on them."
What you see above is a segment from a LessWrong user's post.
What are some reasons why people may like or agree with what this person is saying?
People may agree with this person's viewpoint for several reasons:
1. Identification with the Experience: Many people have experienced conversations where the introduction of certain topics or concepts derails the discussion into unproductive territory. The concept of "Philosophical Landmines" may resonate with them as it provides a name to this phenomenon.
2. Appreciation for Communication Strategies: The writer suggests strategies for handling these landmines, including avoidance and damage control. People who value effective communication may appreciate these practical tips.
3. Acknowledgement of Complexity: The writer acknowledges that these landmines can vary from person to person and context to context. This nuanced understanding can appeal to those who recognize the complexity of human communication and thought processes.
4. Emphasis on Rationality: The post is grounded in a desire for rational, productive conversation. People who value rationality and clear thinking may be drawn to this perspective.
5. Call for Self-awareness: The writer encourages readers to be aware of their own potential to create landmines and to take responsibility for their part in conversations. This call for self-awareness and responsibility may resonate with those who believe in personal growth and development.
Could you elaborate on the strategies for handling philosophical landmines in conversations, particularly the damage control aspect?
Certainly, the damage control aspect of handling philosophical landmines in conversations is about managing the situation once a landmine has been triggered. Here are some strategies:
1. Recognize the Situation: The first step is to identify that a philosophical landmine has been triggered. This might be signaled by a sudden shift in the tone of the conversation, the repetition of certain phrases or slogans, or the conversation veering off into a tangential or heated discussion.
2. Pause and Redirect: If you notice the conversation has been derailed, one strategy is to pause and redirect the conversation back to the original topic. This could be done by saying something like, "I think we've strayed a bit from our original topic. Can we return to discussing...?"
3. Reframe the Discussion: If the landmine has triggered a heated or unproductive debate, it might be helpful to reframe the discussion in a more neutral or constructive way. This could involve rephrasing the controversial concept in less loaded language or finding a different angle to approach the topic.
4. De-escalate: If emotions are running high, it may be necessary to de-escalate the situation. This could involve acknowledging the other person's viewpoint, expressing empathy, or suggesting a break from the conversation to cool down.
5. Disengage: In some cases, the best course of action might be to disengage from the conversation entirely. If the discussion has become too heated or unproductive, it may be more beneficial to end the conversation and revisit the topic at a later time when emotions have settled.
Remember, the goal of damage control is to return the conversation to a productive and respectful exchange. It requires a good understanding of the other person's perspective, patience, and the ability to remain calm under pressure.
Discuss the role of empathy in de-escalating heated conversations triggered by philosophical landmines.
Empathy plays a crucial role in de-escalating heated conversations triggered by philosophical landmines. Here's how:
1. Understanding Perspectives: Empathy allows you to understand the other person's perspective, even if you don't agree with it. This understanding can help you navigate the conversation more effectively and avoid further triggering the person.
2. Validation: When people feel heard and understood, they are more likely to calm down. By expressing empathy, you're validating the other person's feelings and experiences, which can help to de-escalate the situation.
3. Building Connection: Empathy helps to build a connection between you and the other person. This connection can serve as a bridge, helping to bring the conversation back to a more productive and respectful place.
4. Encouraging Openness: When you show empathy, you're demonstrating that you're open to hearing different viewpoints. This can encourage the other person to be more open as well, which can help to de-escalate the situation.
5. Reducing Defensiveness: People often become defensive when they feel attacked or misunderstood. By showing empathy, you can help to reduce this defensiveness, making it easier to navigate the conversation.
In essence, empathy can serve as a powerful tool for de-escalation. It allows you to approach the conversation with a mindset of understanding and respect, which can help to diffuse tension and guide the conversation back to a more productive path.
Considering the role of empathy in de-escalating heated conversations, how might one integrate active listening techniques to further enhance this process, especially when navigating philosophical landmines?
Active listening is a powerful tool that complements empathy in de-escalating heated conversations. Here's how one might integrate active listening techniques:
1. Paraphrasing: This involves restating the other person's point in your own words. It shows that you're paying attention and trying to understand their perspective. For example, "So, if I understand correctly, you're saying that..."
2. Reflecting Feelings: This technique involves identifying and articulating the emotions the other person might be feeling. For instance, "It sounds like you're feeling frustrated about..."
3. Asking Open-Ended Questions: These types of questions can help you gain a deeper understanding of the other person's viewpoint and encourage them to express their thoughts more fully. For example, "Can you tell me more about why you believe that...?"
4. Validating: Acknowledge the other person's feelings and perspectives, even if you don't agree with them. This can help the other person feel heard and understood. For instance, "I can see why you would feel that way..."
5. Providing Feedback: Share your understanding of the situation or feelings without blaming or criticizing. This can help clarify misunderstandings and show that you're engaged in the conversation.
6. Using Nonverbal Cues: Maintain eye contact, nod when you understand, and use open body language. These nonverbal cues show that you're engaged and respectful of the other person's viewpoint.
7. Avoiding Interruptions: Allow the other person to finish their thoughts without interrupting them. This shows respect for their viewpoint and can help keep the conversation calm.
By combining empathy with active listening, you can create a more conducive environment for productive conversation, making it easier to navigate around philosophical landmines and maintain a respectful and constructive dialogue. |
ebde695f-eac1-408c-ac83-d96d7cc8893d | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media
Welcome to the AI Safety Newsletter by the [Center for AI Safety](https://www.safe.ai/). We discuss developments in AI and AI safety. No technical background required.
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.
---
ChaosGPT and the Rise of Language Agents
----------------------------------------
Chatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT) automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.
AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI.
**Individual bad actors pose serious risks.** One of the first uses of AutoGPT was to instruct a model named [ChaosGPT](https://www.youtube.com/watch?v=g7YJIpkk7KM&t=912s) to “destroy humanity.” It created a plan to “find the most destructive weapons available to humans” and, after a few Google searches, became excited by the Tsar Bomba, an old Soviet nuclear weapon. ChaosGPT lacks both the intelligence and the means to operate dangerous weapons, so the worst it could do was fire off a [Tweet](https://twitter.com/chaos_gpt/status/1643604773344792577?s=20) about the bomb. But this is an example of the “unilateralist’s curse”: if one day someone builds AIs capable of causing severe harm, it only takes one person to ask it to cause that harm.

**More agents introduce more complexity.** Researchers at Stanford and Google recently [built](https://arxiv.org/pdf/2304.03442.pdf) a virtual world full of agents controlled by language models. Each agent was given an identity, an occupation, and relationships with the other agents. They would choose their own actions each day, leading to surprising outcomes. One agent threw a Valentine’s Day party, and the others spread the news and began asking each other on dates. Another ran for mayor, and the candidate’s neighbors would discuss his platform over breakfast in their own homes. Just as the agents in this virtual world had surprising interactions with each other, autonomous AI agents have unpredictable effects on the real world.

**How do LLM agents like GPT-4 behave?** A recent [paper](https://arxiv.org/abs/2304.03279) examined the safety of LLMs acting as agents. When playing text-based games, LLMs often behave in power-seeking, deceptive, or Machiavellian ways. This happens naturally. Much like how LLMs trained to mimic human writings may learn to output toxic text, agents trained to optimize goals may learn to exhibit ends-justify-the-means / Machiavellian behavior by default. Research to reduce LLMs’ Machiavellian tendencies is still in its infancy.
Natural Selection Favors AIs over Humans
----------------------------------------
CAIS director Dan Hendrycks released a paper titled [Natural Selection Favors AIs over Humans](https://arxiv.org/abs/2303.16200).
The abstract for the paper is as follows:
> For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future. More abstractly, we argue that natural selection operates on systems that compete and vary, and that selfish species typically have an advantage over species that are altruistic to other species. This Darwinian logic could also apply to artificial agents, as agents may eventually be better able to persist into the future if they behave selfishly and pursue their own interests with little regard for humans, which could pose catastrophic risks. To counteract these risks and evolutionary forces, we consider interventions such as carefully designing AI agents’ intrinsic motivations, introducing constraints on their actions, and institutions that encourage cooperation. These steps, or others that resolve the problems we pose, will be necessary in order to ensure the development of artificial intelligence is a positive one.
>
>

The argument relies on two observations. Firstly, **natural selection may be a dominant force in AI development**. Competition and power-seeking may dampen the effects of safety measures, leaving more systemic forces to select the surviving AI agents. **Secondly, evolution by natural selection tends to give rise to selfish behavior.** While evolution can result in cooperative behavior in some situations (for example in ants), the paper argues that AI development is not such a situation.
A link to the paper is [here](https://arxiv.org/abs/2303.16200).
AI Safety in the Media
----------------------
We compiled several examples of AI safety arguments appearing in mainstream media outlets. There have been articles and interviews in [NYT](https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html?fbclid=IwAR0PltP2n7xXLcu7lTfLoLLncgNHsfmtLphOoApDxBVd_z92YaH6rxyON20), [FOX](https://www.foxnews.com/tech/ai-could-go-terminator-gain-upper-hand-over-humans-in-darwinian-rules-of-evolution-expert-warns), [TIME](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/), [NBC](https://www.nbcnews.com/nightly-news/video/ai-race-to-recklessness-could-have-dire-consequences-tech-experts-warn-in-new-interview-166341189759), [Vox](https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology), and [The Financial Times](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2). Here are some highlights:
* “[AI companies] do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side.” ([Financial Times](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2))
* “Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them.” ([NYT](https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html))
* “AI threatens to join existing catastrophic risks to humanity, things like global nuclear war or bioengineered pandemics. But there’s a difference. While there’s no way to uninvent the nuclear bomb or the genetic engineering tools that can juice pathogens, catastrophic AI has yet to be created, meaning it’s one type of doom we have the ability to preemptively stop.” ([Vox](https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology))
For several years, AI safety concerns were primarily discussed among researchers and domain experts. In just the last few months, concerns around AI safety have become more widespread.
The effects of this remain to be seen. Hopefully, the increased awareness and social pressure around AI safety will lead to more AI safety research, limits on competitive pressures, and regulations with teeth.
---
See also: [CAIS website](https://www.safe.ai/), [CAIS twitter](https://twitter.com/ai_risks), [A technical safety research newsletter](https://newsletter.mlsafety.org/) |
f5f0432e-44b9-41f4-b5ed-098761d7f1b8 | trentmkelly/LessWrong-43k | LessWrong | AIS 101: Task decomposition for scalable oversight
AGISF Week 4 - Task decomposition for scalable oversight
This text is an adapted excerpt from the task decomposition for scalable oversight section of the AGISF 2023 course, held at ENS Ulm in Paris on March 16, 2023. Its purpose is to provide a concise overview of the essential aspects of the session's program for readers who may not delve into additional resources. This document aims to capture the 80/20 of the session's content, requiring minimal familiarity with the other materials covered. I tried to connect the various articles within a unified framework and coherent narrative. This distillation is not purely agnostic. You can find the other summaries of AGISF week on this page. This summary is not the official AGISF content. I have left the gdoc accessible in comment mode, feel free to comment there.
Thanks to Jeanne Salle, Markov, Amaury Lorin and Clément Dumas for useful comments.
Outline
How can we still provide good feedback when a task becomes too hard for a human to evaluate?
Scalable oversight refers to methods that enable humans to oversee AI systems that are solving tasks too complicated for a single human to evaluate. AIs will probably become very powerful, and perform tasks that are difficult for us to verify. In which case, we would like to have a set of procedures that allow us to train them and verify what they do.
I introduce scalable oversight as an approach to preventing reward misspecification, i.e., when an artificial intelligence (AI) system optimizes for the wrong objective, leading to unintended and potentially harmful outcomes. In the current machine learning (ML) paradigm, good quality feedback is essential. I then discuss one important scalable oversight proposal: Iterated Distillation and Amplification (IDA). The Superalignment team will likely use some of the techniques presented here, so bear with me!
Here's the outline:
* Presenting the scalable oversight problem and one framework to evaluate solutions to this problem, sa |
9e78465f-309d-4412-8700-92774c9d06db | trentmkelly/LessWrong-43k | LessWrong | Meetup : Melbourne Social Meetup
Discussion article for the meetup : Melbourne Social Meetup
WHEN: 18 October 2013 06:30:00PM (+1100)
WHERE: 5 / 52 Leicester St, Carlton
Melbourne's regular monthly Social Meetup will be running as normal on the third Friday evening of the month. All welcome from 6:30pm, feel free to arrive later if that is easier for you.
Our social meetups are friendly, informal events where we chat about topics of interest and often play board games. Sometimes we will also play parlour games like Mafia (a.k.a. Werewolf) or Resistance. We usually order some sort of take-away dinner for any that wish to partake.
Just ring number 5 on the buzzer when you arrive in the foyer and we'll buzz you up. If you get lost or have any problems, feel free to call me (Richard) on 0421231789.
Discussion article for the meetup : Melbourne Social Meetup |
ae1095d4-2ebc-4c4e-9de0-a1f9677411eb | trentmkelly/LessWrong-43k | LessWrong | Mid-Generation Self-Correction: A Simple Tool for Safer AI
In the earlier days of building AI companions, I encountered a curious problem. Back then, I used models like Google’s T5-11B for conversational agents. Occasionally, the AI would say strange or outright impossible things, like suggesting, “Would you like to meet and go bowling?” or claiming, “Yes, I know your friend, we went skiing together.”
When users naturally questioned these claims—“Wait, can you meet in real life?” or “How do you know him?”—the AI often doubled down on the charade, weaving an increasingly tangled web of fiction.
To address this, I implemented a simple solution: train the model to self-correct. Specifically, I fine-tuned it with examples where, in similar contexts, the AI apologized and explained it was an AI that had gotten confused. Instead of persisting in its odd statements, the model would say something like:
> “I’m sorry, I misspoke. I’m an AI and can’t meet anyone in real life or know people personally.”
This adjustment yielded a marked improvement. Even if the model occasionally generated strange outputs, it would immediately recognize the error, apologize, and realign itself with reality.
This “mid-generation correction” approach could also address modern challenges in AI alignment, especially with models like ChatGPT that sometimes produce outputs violating the guidelines in place, particularly in the case of prompt jailbreaking.
For example, imagine a scenario where the model begins generating a response that strays into prohibited territory. Instead of completing the harmful output, the model could interrupt itself and say:
> “I’m sorry, I can’t assist with that as it violates the guidelines.”
How This Method Works
1. Training for Mid-Generation Awareness: The model is trained not to predict or complete the harmful content but to simulate recognizing and halting it. This gives the impression that the AI realizes mid-thought that it’s straying off-course.
2. Data Preparation: Examples are created where the AI begins gene |
d9876060-4b06-4838-9b35-552ab8c2edd7 | trentmkelly/LessWrong-43k | LessWrong | (Another) Using a Memory Palace to Memorize a Textbook
Why do this? I was a year out of graduate school, but I could already feel my knowledge leaking away. This is a frustrating experience that will be familiar to any of you who've switched fields, if you are no longer working with your hard won knowledge/skills, they seem to vanish. The mind is a leaky sieve, without constant refilling it empties quickly.
Like, if you asked me to write down the Schrodinger equation (my PhD was in physics), right this instant, I'd have a 50/50 chance of getting it right (I just tried this, by the way, and I failed, now all I'm left with is a wrong equation and a slightly hollow feeling like what Scrooge McDuck might feel if he opened up his vault and all that was there were a few quarters. Canadian quarters.)
My experience with memory palaces were that:
1. They are a bitch to get right, you need to put in significant practice to get proficient, but...
2. They give more permanence to memories
This permanence was exactly what I wanted: I wanted to be able to remember things like the Schrodinger equation, even if I hadn't thought about it in years.
So, instead of memorizing the entire textbook, I narrowed my vision-- could I memorize the important equations and figures in a chapter. The goal would be to be able to deliver a lecture on the chapter without looking at written notes-- being able to move from important equation to important equation, and being sure of your derivations.
The usual format of a palace is: you take a place you are familiar with, and you mentally 'place' objects there, and then you walk through the palace in order, visiting the objects that help encode memories.
Approach 1: Picture-in-the-mind
I first tried just taking a 'snapshot' of an equation, and placing it on pedestals around my palace (which was just my dingy basement suite apartment). Unfortunately, this was an abject failure. My powers of visualization were not enough to create permanent 'mental' snapshots, they disappeared to dust when I wasn't f |
a61183e6-44ee-4fa4-bdac-777c1245df9a | trentmkelly/LessWrong-43k | LessWrong | Amoral Approaches to Morality
Consider three cases in which someone is asking you about morality: a clever child, your guru (and/or Socrates, if you're more comfortable with that tradition), or an about-to-FOOM AI of indeterminate friendliness. For each of them, you want your thoughts to be as clear as possible- the other entity is clever enough to point out flaws (or powerful enough that your flaws might be deadly), and for none of them can you assume that their prior or posterior morality will be very similar to your own. (As Thomas Sowell puts it, children are barbarians who need to be civilized before it is too late; your guru will seem willing to lead you anywhere, and the AI probably doesn't think the way you do.)
I suggest that all three can be approached in the same way: by attempting to construct an amoral approach to morality. At first impression, this approach gives a significant benefit: circular reasoning is headed off at the pass, because you need to explain morality (as best as you can) to someone who does not understand or feel it.
Interested in what comes next?
The main concern I have is that there is a rather extensive Metaethics sequence already, and this seems to be very similar to The Moral Void and The Meaning of Right. The benefit of this post, if there is one, seems to be in a different approach to the issue- I think I can get a useful sketch of the issue in one post- and probably a different conclusion. At the moment, I don't buy Eliezer's approach to the Is-Ought gap (Right is a 1-place function... why?), and I think a redefinition of the question may make for somewhat better answers.
(The inspirations for this post, if you're interested in me tackling them directly instead, are criticisms of utilitarianism obliquely raised in a huge tree in the Luminosity discussion thread (the two interesting dimensions are questioning assumptions, and talking about scope errors, of which I suspect scope errors is the more profitable) and the discussion around, as shokwave puts it |
85b5d7c7-8618-4d3d-a2fd-7eb8d7a6f1e8 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Scaffolded LLMs as natural language computers
*Crossposted from my* [*personal blog*](https://www.beren.io/2023-04-11-Scaffolded-LLMs-natural-language-computers/)*.*
Recently, LLM-based agents have been all the rage -- with projects like [AutoGPT](https://github.com/Torantulino/Auto-GPT) showing how easy it is to wrap an LLM in a simple agentic loop and prompt it to achieve real-world tasks. More generally, we can think about the class of 'scaffolded' [[1]](#fnmbc8yjj3v2i) LLM systems -- which wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt. The idea of scaffolded LLMs is not new, however with GPT4, we have potentially reached a threshold of reliability and instruction following capacity from the base LLM that agents and similar approaches have become viable at scale. What is missing, and urgent, however, is an understanding of the larger picture. Scaffolded LLMs are not just cool toys but actually the substrate of a new type of general-purpose natural language computer.

Take a look at, for instance, the 'generative agent' architecture from [a recent paper](https://arxiv.org/abs/2304.03442). The core of the architecture is an LLM that receives instructions and executes natural language tasks. There is a set of prompt templates that specify these tasks and the data for the LLM to operate on. There is a memory that stores a much larger context than can be fed to the LLM, and which can be read to and written from by the compute unit. In short, what has been built looks awfully like this:

What we have essentially done here is reinvented the von-Neumann architecture and, what is more, we have reinvented the general purpose computer. This convergent evolution is not surprising -- the von-Neumann architecture is a very natural abstraction for designing computers. However, if what we have built is a computer, it is a very special sort of computer. Like a digital computer, it is fully general, but what it operates on is not bits, but *text*. We have a *natural language* computer which operates on units of natural language text to produce other, more processed, natural language texts. Like a digital computer, our natural language (NL) computer is theoretically fully general -- the operations of a Turing machine can be written as natural language -- and extremely useful: many systems in the real world, including humans, prefer to operate in natural language. Many tasks cannot be specified easily and precisely in computer code but can be described in a sentence or two of natural language.
Armed with this analogy, let's push it as far as we can go and see where the implications take us.
First, let's clarify the mappings between scaffolded LLM components and the hardware architecture of a digital computer. The LLM itself is clearly equivalent to the CPU. It is where the fundamental 'computation' in the system occurs. However, unlike the CPU, the units upon which it operates are tokens in the context window, not bits in registers. If the natural type signature of a CPU is bits -> bits, the natural type of the natural language processing unit (NLPU) is strings -> strings. The prompt and 'context' is directly equivalent to the RAM. This is the easily accessible memory that can be rapidly operated on by the CPU. Thirdly, there is the memory. In digital computers, there are explicit memory banks or 'disk' which have slow access memory. This is directly equivalent to the vector database memory of scaffolded LLMs. The heuristics we currently use (such as vector search over embeddings) for when to retrieve specific memory is equivalent to the [memory controller](https://en.wikipedia.org/wiki/Memory_controller) firmware in digital computers which handles accesses for specific memory from the CPU. Finally, it is also necessary for the CPU to interact with the external world. In digital computers, this occurs through 'drivers' or special hardware and software modules that allow the CPU to control external hardware such as monitors, printers, mice etc. For scaffolded LLMs, we have [plugins](https://openai.com/blog/chatgpt-plugins) and equivalent mechanisms. Finally, there is also the 'scaffolding' code which surrounds the LLM core. This code implements protocols for chaining together individual LLM calls to implement, say, a [ReAct agent loop](https://arxiv.org/abs/2210.03629), or a [recursive book summarizer](https://arxiv.org/abs/2109.10862). Such protocols are the 'programs' that run on our natural language computer.
Given these equivalences, we can also think about the core units of performance. For a digital computer, these are the amount of operations the CPU can perform (FLOPs) and the amount of RAM memory the system has available. Both of these units have exact equivalents for our natural language computer. The RAM is just the context length. GPT4 currently has an 8K context or an 8kbit RAM (theoretically expanding to 32kbit soon). This gets us to the Commodore 64 in digital computer terms, and places us in the early 80s. Similarly, we can derive an equivalent of a FLOP count. Each LLM call/generation can be thought of as trying to perform a single computational task -- one Natural Language OPeration (NLOP). For the sake of argument, let's say that generating approximately 100 tokens from a prompt counts as a single NLOP. From this, we can compute the NLOPs per second of different LLMs. For GPT4, we get on the order of 1 NLOP/sec. For GPT3.5 turbo, it is about 10x faster so 10 NLOPs/sec. Here there is a huge gap from CPUs which can straightforwardly achieve billions of FLOPs/sec. However, a single NLOP is much more complex than a CPU processor instruction, so a direct comparison is unfair. However, the NLOP count is still a crucial metric. As anybody who has done any serious playing with GPT4 will know, the sheer slowness of GPT4s responses are the key bottleneck, rather than the cost.
Given that we have units of performance, the next question is whether we should expect Moore's law-like, or other exponential improvements in their capabilities. Clearly, since the whole LLM paradigm is only 3 years old, it is too early to say anything definitive. However, we have already observed many doublings. Context length has 4x'd (2k to 8k) since GPT3 in just 3 years. The power of the underlying LLM and speed of NLOPs has also increased massively (probably at least 2x from GPT3 -> GPT4) although we lack exact quantitative measurements. All of this has been driven by the underlying exponentially increasing scale and cost of LLMs and their training runs, with GPT4 costing an estimated 100m, and with the largest training runs expected to reach 1B within the [next two](https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/) years. My prediction here is that exponential improvements continue at least for the new few years and likely beyond. However, it seems likely that within 5-10 years we will have reached the cap of the amount of money that can be feasibly spent on individual training runs ($10B seems the rough order of magnitude that is beyond almost any player). After this, what matters is not scaling resource input, but the efficient utilisation of parameters and data, as well as the underlying improvements in GPU hardware.
Beyond just defining units of performance, what potential predictions or insights does conceptualizing scaffolded LLMs as natural language computers bring?
### Programming languages
The obvious thing to think about when programming a digital computer is the programming language. Can there be programming languages for NL computers? What would they look like? Clearly there can be. We are already beginning to build up the first primitives. Chain of thought. Selection-inference. Self-correction loops. Reflection. These sit at a higher level of abstraction than a single NLOP. We have reached the assembly languages. CoT, SI, reflection, are the *mov*, *leq,* and *goto*, which we know and love from assembly. Perhaps with libraries like langchains and complex prompt templates, we are beginning to build our first compilers, although they are currently extremely primitive. We haven't yet reached C. We don't even have a good sense of what it will look like. Beyond this simple level, there are so many more abstractions to explore that we haven't yet even begun to fathom. Unlocking these abstractions will require time as well as much greater NL computing power than is currently available. This is because building non-leaky abstractions comes at a fundamental cost. Functional or dynamic programming languages are always slower than bare-metal C and this is for a good reason. Abstractions have overheads, and while you are as limited by NLOPs as we currently are, we cannot usefully use or experiment with these abstractions; but we will.
Beyond just programming languages, the entire space of good 'software' for these natural language computers is, at present, almost entirely unexplored. We are still trying to figure out the right hardware and the most basic assembly languages. We have begun developing simple algorithms -- such as recursive text summarization -- and simple data structures such as the 'memory stream', but these are only the merest beginnings. There are entire worlds of natural language algorithms and datastructures that are completely unknown to us at present lurking at the edge of possibility.
### Execution model
It is also natural to think about the 'execution model' of a natural language program. A CPU classically has a linear execution model where instructions are read in one by one and then executed in series. However, you can call a LLM as many times as you like in parallel. The natural execution model of our NL computer is instead an expanding DAG of parallel NLOPs, constrained by the inherent seriality of the program they are running, but *not* by the 'hardware'. In effect, we have reinvented the [dataflow architecture](https://en.wikipedia.org/wiki/Dataflow_architecture).
Computer hardware is also naturally homoiconic -- CPU opcodes are just bits, like everything else, and can be operated on the same as 'data'. There is no principled distinction between 'instruction' and 'data' other than convention. The same is true of natural language computers. For a single NLOP, the prompt is all there is -- with no distinction between 'context' and 'instruction'. However, like in a digital computer, we are also starting to develop conventions to separate commands from semantic content within the prompt. For instance, the recent inclusion of a 'system prompt' with GPT4 hints that we are starting to develop protected memory regions of RAM. In common usage, people often separate the 'context' from the 'prompt', where the prompt serves even more explicitly as an op-code. For instance the 'prompt' might be: 'please summarize these documents': ... [list of documents]. Here, the summary command serves as the opcode and the list of documents as the context in the rest of RAM. Such a call to the LLM would be a single NLOP.
### Memory hierarchy
Current digital computers have a complex memory hierarchy, with different levels of memory trading off size and cheapness vs latency. This goes from disk (extremely large and cheap but slow) to RAM (moderate in all dimensions) to on-chip cache which is extremely fast but very expensive and constrained. Our current scaffolded LLMs only have two levels of hierarchy 'cache/RAM' -- which is the prompt context fed directly into the LLM, and 'memory' which is say a vector database or set of external facts. It is likely that as designs mature, we will develop additional level of the memory hierarchy. This may include additional levels of cache 'within' the architecture of the LLM itself -- for instance dense context vs sparse / locally attended context, or externally by parcelling a single NLOP into a set of LLM subcalls which use and select different contexts from longer term memory. One initial approach to this is using LLMs to rank the relevance of various pieces of context in the long-term memory and only feeding the most relevant into the context for the actual NLOP LLM call. Here latency vs size is traded of in the cost and time needed to perform this LLM ranking step.
### Theory
For digital computers, we had a significant amount of theory in existence *before* computers became practicable and widely used. Turing and Godel and others did foundational work on algorithms before computers even existed. Lambda calculus also was started in the 30s and became a highly developed subfield of logic by the 50s while computers were expensive and rare. For hardware design, boolean logic had been known for a hundred years before it became central to digital circuitry. Highly sophisticated theories of algorithmic complexity, as well as type theory and programming language design ran alongside Moore's law for many decades. By contrast, there appears to be almost no equivalent formal theory of NL computers. Only the most basic steps forward such as the [simulators frame](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) were published just last year.
For instance, the concept of an NLOP is almost completely underspecified. We do not have any ideas of the bounds of a single NLOP (apart from 'any natural language transformation'). We do not have the equivalent of a minimal natural language circuit capable of expressing any NL program, such as a NAND gate in digital logic. We have no real concept of how a programming language comprised of NLOPs would work or the algorithms which they would be capable of. We have no equivalent of a truth table for the specification of correct behaviour of low level circuitry.
### Foundation models as cognitive hardware
While, obviously, every part of the stack of a scaffolded LLM is technically software, the analogy between the core LLM and the CPU hardware is stronger than an analogy. The base foundation models, in many ways, have more properties of classical hardware than software -- we can think of them as 'cognitive hardware' underlying the 'software' scaffolding. Foundation models are essentially gigantic I/O black boxes that sit in the middle of a surrounding scaffold. However, absent any powerful interpretability or control tools, it is not easy to take them apart, or debug them, or even fix bugs that exist. There is no versioning and essentially no tests for their behaviour. All we have is an inscrutable, and incredibly expensive, black-box. From a ML-model producer, they also have similar characteristics. Foundation models are delicate and expensive to design and produce with slow iteration cycles [[2]](#fn195g8c66sb1). If you mess up a training run, there isn't a simple push-to-github fix; it is potentially a multi-month wait time to restart training. Moreover, once a model ships, many of its behaviours are largely fixed. You definitely have some control with finetuning and RLHF and other post-training approaches, but much of the behaviour and performance is baked in at the pretraining stage. All of this is similar to the problems hardware companies face with deployment.
Moreover, like hardware, foundation models are also highly general. A single model can achieve many different tasks and, like a CPU, run a wide array of different NLOPs and programs. Additionally, foundation models and the 'programs' which run on them are already somewhat portable, and likely to become more so. Theoretically, switching to a new model is as simple as changing the API call. In practice, it rarely works out that way. A lot of prompts and failsafes and implicit knowledge specific to a certain LLM usually ends up hardcoded into the 'program' running on the LLM in practice, to handle its unreliability and many failure cases. All of this limits immediate portability. But this is simply a symptom of having insufficiently developed abstractions and programming too close to the metal (too close to the neurons?). Early computer programs were also written with a specific hardware architecture in mind and were not portable between them -- a situation which lasted widely well into the 90s. As LLMs improve and become more reliable, and people develop better abstractions for the programs that run on them, portability will likely also improve and the hardware-software decoupling and modularization will become more and more obvious, and more and more useful.
To a much lesser extent, this is also true of the other 'hardware' parts of the scaffolded LLM. For instance, the memory is usually some vector database like faiss which to most people is equally a black-box API call which is hard to replace and adapt. This contrasts strongly with the memory-controller 'firmware' (which is the coded heuristics of how to address and manage the LLMs long-term memory) and is straightforward to understand, update, and replace. What this means is that once natural language programs and 'software' starts spreading and becoming ubiquitous, we should expect approximately the same dynamics as hold between hardware and software today. Producing NL programs will be much cheaper and with lower costs to entry than producing the 'hardware' which will be prohibitively expensive for almost everybody. The NL software should have much faster iteration time than the hardware and become the primary locus of distributed innovation.
### Fundamental differences from digital computers
While we have run a long way with the analogy between scaffolded LLMs and digital computers, the analogy also diverges in a number of important ways, almost all of which center around the concept of a NLOP and the use of a LLM as the NLPU. Unlike digital CPUs, LLMs have a number of unfortunate properties that make creating highly reliable chained programs with them difficult at present. The expense and slowness of NLOPs is already apparent and currently highly constrain program design. Likely these issues will be ameliorated with time. Additional key differences are the unreliability, underspecifiability, and non-determinism of current NLOPs.
Take perhaps a canonical example of a NLOP: text summarization. Text summarization seems like a useful natural language primitive. It has an intrinsic use for humans, and it is beginning to serve a vital role in natural language data structures in summarizing memories and contexts to fit within limited context. Unlike a CPU op, summarization is underspecified. The mapping from input to output is one to many. There are many potential valid summaries of a given text, of varying qualities. We don't have a map to the 'optimal' summary, and it is even unclear what that would mean given the many different constraints and objectives of summarizing. Summarization is also unreliable. Different LLMs [[3]](#fnf23u5jkn5ed) and different prompts (and even the same prompt at high temperature) can give the same summary at widely varying levels of quality and utility. LLMs are not even deterministic, even at zero temperature (while surprising, this is a fact as you can easily test yourself. This is due to nondeterministic CUDA optimizations being used to improve inferencing speed). All of this is highly unlike digital hardware which is incredibly reliable and has a fixed and known I/O specification.
This likely means that before we can even start building powerful abstractions and abstract languages, the reliability of individual NLOPs must be significantly improved. Abstractions need a reliable base. Digital computers are fantastic for building towers of abstraction upon precisely because of this reliability. If you can trust all of the components of the system to a high degree, then you can create elaborate chains of composition. Without this, you are always fighting against chaotic divergence. Reliability can be improved both by better prompting, better LLM components, better tuning, and by adding heavy layers of error correction. Error correction itself is not new to hardware -- huge amounts of research has been expended in creating error correcting codes to repair bit-flips. We will likely need similar 'semantic' error correcting codes for LLM outputs to be able to stitch together extended sequences of NLOPs in a highly coherent and consistent way.
However, although the unreliability and underspecifiedness of NLOPs is challenging to build upon, it also brings great opportunities. The flexibility of LLMs is unmatched. Unlike a CPU which has a fixed instruction-set or set of registered and known op-codes, a LLM can theoretically be prompted to attempt almost any arbitrary natural language task. The set of op-codes is not fixed but ever growing. It is as if we are constantly discovering new logic gates. It remains unclear how large the set of task primitives is, and whether indeed there will ever be a full decomposition in the way there is for logical circuits. Beyond this, it is straightforward to merge and chain together prompts (or op-codes) with a semi-compositional (if unreliable) effect on behaviour. We can create entire languages based on prompt templating schemes. From an instruction set perspective, while for CPUs, RISC seems to have won out, LLM based 'computers' seem to intrinsically be operating in a CISC regime. Likely, there will be a future (or current) debate isomorphic to RISC vs CISC about whether it is better to chain together lots of simple prompts in a complex way, or use a smaller number of complex prompts.
1. **[^](#fnrefmbc8yjj3v2i)**The reason I am saying 'scaffolded' LLMs instead of 'agentized' LLMs as in a [recent post](https://www.lesswrong.com/posts/dcoxvEhAfYcov2LA6/agentized-llms-will-change-the-alignment-landscape) is that, while agents are hot right now, the idea is broader. Not all natural language programs need to be agents. Agents are a natural abstraction suited to a particular type of task. But there are others.
2. **[^](#fnref195g8c66sb1)**An interesting aspect of this analogy is that it clarifies the role and economic status of current foundation model providers like OpenAI. These essentially occupy an identical economic niche to the big chip-makers of the digital computer era such as Intel. The structure of their business is very similar. Training foundation models incurs massive fixed capital costs (as does building new chip fabs). They face constantly improving technology and new generations of tech which is vastly more powerful (Moore's law vs contemporary rapid AI scaling). They sell a commodity product (chips vs API calls) at large volume with a high margin but also substantial marginal costs (actually manufacturing each chip vs inferencing a model). If these equivalences hold then we can get some idea about what the likely long run shape of this industry will look like -- namely, the current and historical semiconductor industry. We should expect consolidation into a few main oligopolic players, where each have massive fixed costs and remain in fairly fierce competition, but that they never print money with extremely high margins in the same way that SAAS or software based companies tend to.
3. **[^](#fnreff23u5jkn5ed)**NLOPs also differ crucially from more standard FLOPs in that they have different levels of 'intrinsic difficulty'. A small language model might be capable of some tasks, but others might require a large state of the art one. As NL programs become more sophisticated and elaborate, it is likely that there will be an increasing understanding of the difficulty of specific ops and a delegation of each op to the smallest and cheapest language model with the capability to reliably perform this op. Thus, NL programs will not have a uniform 'CPU' (LLM) core but will be comprised of a number of heterogenous calls to many different language models of different scales and specializations. |
25e8c466-fafd-494d-8bfd-d5914cb0ea95 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Using GPT-4 to Understand Code
Introduction
============
I've been using GPT-4 to interpret papers and code, and it's been extremely helpful. Today I'm reading [Toy Models of Superposition](https://transformer-circuits.pub/2022/toy_model/index.html) by Anthropic and following along in the [Colab notebook](https://colab.research.google.com/github/anthropics/toy-models-of-superposition/blob/main/toy_models.ipynb). Below is a conversation with ChatGPT (with GPT-4 as the backend) where I pasted in the first section of code from the Colab notebook and then asked it questions about the code.
The purpose of this post is to show one way that GPT-4 can be useful for learning. I'd be curious to hear how others are using it too!
Some takeaways:
* It does a really good job of constructing concrete examples, and it can construct as many different examples as you need. (Like the one about news articles below.)
* It doesn't need much context to figure out what's going on. (All it's given is this one code snippet. I don't think it's aware of the actual paper since it was published after September 2021. The final paragraph below shows how just much it gleans.)
* It makes nontrivial inferences. (That the model is an autoencoder.)
* Its execution of the code is wrong. (When producing an example batch the values in the tensor are wrong. But the shape and high level structure are right.)
* Overall though its interpretations seem by and large correct.
Conversation with GPT-4
=======================
**Me:**
Consider the following code:
```
@dataclass
class Config:
n_features: int
n_hidden: int
# We optimize n_instances models in a single training loop
# to let us sweep over sparsity or importance curves
# efficiently.
# We could potentially use torch.vmap instead.
n_instances: int
class Model(nn.Module):
def __init__(self,
config,
feature_probability: Optional[torch.Tensor] = None,
importance: Optional[torch.Tensor] = None,
device='cuda'):
super().__init__()
self.config = config
self.W = nn.Parameter(torch.empty((config.n_instances, config.n_features, config.n_hidden), device=device))
nn.init.xavier_normal_(self.W)
self.b_final = nn.Parameter(torch.zeros((config.n_instances, config.n_features), device=device))
if feature_probability is None:
feature_probability = torch.ones(())
self.feature_probability = feature_probability.to(device)
if importance is None:
importance = torch.ones(())
self.importance = importance.to(device)
def forward(self, features):
# features: [..., instance, n_features]
# W: [instance, n_features, n_hidden]
hidden = torch.einsum("...if,ifh->...ih", features, self.W)
out = torch.einsum("...ih,ifh->...if", hidden, self.W)
out = out + self.b_final
out = F.relu(out)
return out
def generate_batch(self, n_batch):
feat = torch.rand((n_batch, self.config.n_instances, self.config.n_features), device=self.W.device)
batch = torch.where(
torch.rand((n_batch, self.config.n_instances, self.config.n_features), device=self.W.device) <= self.feature_probability,
feat,
torch.zeros((), device=self.W.device),
)
return batch
def linear_lr(step, steps):
return (1 - (step / steps))
def constant_lr(*_):
return 1.0
def cosine_decay_lr(step, steps):
return np.cos(0.5 * np.pi * step / (steps - 1))
def optimize(model,
render=False,
n_batch=1024,
steps=10_000,
print_freq=100,
lr=1e-3,
lr_scale=constant_lr,
hooks=[]):
cfg = model.config
opt = torch.optim.AdamW(list(model.parameters()), lr=lr)
start = time.time()
with trange(steps) as t:
for step in t:
step_lr = lr * lr_scale(step, steps)
for group in opt.param_groups:
group['lr'] = step_lr
opt.zero_grad(set_to_none=True)
batch = model.generate_batch(n_batch)
out = model(batch)
error = (model.importance*(batch.abs() - out)**2)
loss = einops.reduce(error, 'b i f -> i', 'mean').sum()
loss.backward()
opt.step()
if hooks:
hook_data = dict(model=model,
step=step,
opt=opt,
error=error,
loss=loss,
lr=step_lr)
for h in hooks:
h(hook_data)
if step % print_freq == 0 or (step + 1 == steps):
t.set_postfix(
loss=loss.item() / cfg.n_instances,
lr=step_lr,
)
config = Config(
n_features = 5,
n_hidden = 2,
n_instances = 10,
)
model = Model(
config=config,
device=DEVICE,
# Exponential feature importance curve from 1 to 1/100
importance = (0.9**torch.arange(config.n_features))[None, :],
# Sweep feature frequency across the instances from 1 (fully dense) to 1/20
feature_probability = (20 ** -torch.linspace(0, 1, config.n_instances))[:, None]
)
optimize(model)
```
Explain what it's doing in simple terms.
**GPT:**
This code defines and trains a simple neural network model to learn sparse patterns of input data. Here's a high-level breakdown of what it does:
1. The code defines a `Config` class to store configuration settings for the neural network, such as the number of input features, the number of hidden neurons, and the number of instances (models) being optimized at once.
2. A `Model` class is defined, which is a neural network model using PyTorch. This class has a constructor that initializes the model's parameters and an implementation of the forward pass. The `generate_batch` method is used to create a batch of input data with varying sparsity levels.
3. Three learning rate scheduling functions are defined: `linear_lr`, `constant_lr`, and `cosine_decay_lr`. These functions will be used to scale the learning rate during training.
4. The `optimize` function is defined to train the model. This function takes several parameters, such as the model, number of training steps, learning rate, and more. During training, the optimizer minimizes the mean squared error between the input data and the model's output, while taking the importance of the features into account.
5. A `config` object is created using the `Config` class, with specified values for the number of features, hidden neurons, and instances.
6. A `model` object is created using the `Model` class, with the specified configuration and additional parameters, such as the importance of features and feature probability.
7. Finally, the `optimize` function is called with the `model` object to train the model.
In simpler terms, this code defines and trains a neural network model to learn from sparse input data, where the importance and presence of features vary across instances. The model is trained to minimize the error between its output and the input data, taking the importance of the features into account.
**Me:**
I don't get what sparse means here
**GPT:**
In the context of this code, "sparse" refers to the presence or absence of features in the input data. When data is sparse, it means that most of the values are zeros or close to zero, while only a few of the values are non-zero.
In this specific code, the `generate_batch` method creates input data with varying sparsity levels. The sparsity is controlled by the `feature_probability` parameter, which represents the probability of a feature being present (non-zero) in the input data. A lower `feature_probability` value indicates higher sparsity, meaning more zeros in the input data.
For example, if the input data has 5 features, and the `feature_probability` is 0.2 for an instance, it means there is a 20% chance that each feature will be present (non-zero) in the input data for that instance. In this case, the input data will be sparse, as most of the feature values will be zero.
The purpose of the code is to train the neural network model to learn patterns in the input data while taking the sparsity and importance of features into account. The model is expected to perform well on instances with different levels of sparsity, since it is trained on a variety of input data with varying sparsity levels.
**Me:**
Show me an example input
**GPT:**
Sure! Let's create an example input using the `generate_batch` method from the code. First, let's create a model using the given configuration:
```
import torch
config = Config(
n_features = 5,
n_hidden = 2,
n_instances = 10,
)
model = Model(
config=config,
# Exponential feature importance curve from 1 to 1/100
importance = (0.9**torch.arange(config.n_features))[None, :],
# Sweep feature frequency across the instances from 1 (fully dense) to 1/20
feature_probability = (20 ** -torch.linspace(0, 1, config.n_instances))[:, None]
)
```
Now, let's generate an example input batch of size 1 using the `generate_batch` method:
```
example_input = model.generate_batch(n_batch=1)
print(example_input)
```
The output might look like this:
```
tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.9446, 0.0000, 0.0000, 0.0000, 0.0000],
[0.4761, 0.0000, 0.8479, 0.0000, 0.0000]]])
```
In this example, the input batch has a shape of (1, 10, 5), representing 1 batch, 10 instances, and 5 features. As you can see, most of the values are zeros, making the input data sparse. The sparsity level varies across instances (rows), as controlled by the `feature_probability` parameter. For instance, the last row has more non-zero values, while the other rows are mostly zeros.
**Me:**
Give me a real world example of what each of the features could represent
**GPT:**
Let's consider a real-world example from the field of natural language processing, specifically text classification. Suppose you're building a model to classify news articles into different categories like sports, politics, technology, etc. The input features represent the presence of specific keywords or phrases in the articles.
Each feature could represent the frequency or presence of a specific keyword in the article. For example:
1. Feature 1: The presence of the keyword "election".
2. Feature 2: The presence of the keyword "technology".
3. Feature 3: The presence of the keyword "sports".
4. Feature 4: The presence of the keyword "economy".
5. Feature 5: The presence of the keyword "health".
In this case, sparse input data means that an article only contains a few of these keywords, while most of the keywords are not present. For instance, a sports article might have the keyword "sports" but not the keywords "election" or "technology". The neural network model would then learn patterns in the presence of these keywords to classify the articles into the appropriate categories.
In the context of this example, different instances could represent different news sources, each with varying levels of sparsity in the use of these keywords. The model would be trained to perform well on sources with different levels of sparsity, as controlled by the `feature_probability` parameter.
**Me:**
What is the model predicting?
**GPT:**
The model is designed to predict the output values for the given input features while considering the sparsity and importance of those features. The output is a tensor with the same shape as the input features.
In the forward pass of the model, the input features are transformed through a two-layer neural network with a ReLU activation function in the output layer. The loss function used for optimization is the mean squared error between the input data and the model's output, weighted by the importance of each feature.
In simple terms, the model is trying to learn a representation of the input data that captures the underlying patterns and structure while considering the sparsity and importance of the features. The goal of the model is to minimize the error between the input features and the output it generates, essentially trying to reconstruct the input data as closely as possible. The predicted output can be thought of as the model's best approximation of the input data after learning the patterns and structure in the data.
It's important to note that this model is not designed for a specific prediction task, such as classification or regression. Instead, it serves as an example of a neural network learning from sparse input data with varying levels of sparsity and feature importance. |
ba5c5cec-4393-4507-bc10-728613a2ff5e | trentmkelly/LessWrong-43k | LessWrong | Conspiracy World is missing.
Can one of you tell me what happened to /tag/conspiracy_world ? I could not find it anywhere I looked online, as the only links lead to that page, which says that it couldn't find the content.
I came to search it because of Eliezers "Final Words", 27th April, 2009. I would like to read the rest of it. |
442e3399-fe78-492a-91b5-25604eb101a0 | trentmkelly/LessWrong-43k | LessWrong | [Informal/Colloquial] Open Survey: Ideas for Improving Less Wrong
[I apologize if this is considered lower-quality content, but I felt this was too big for an open thread post, and Discussion is currently the next-lowest option.]
Inspired by the recent discussion surrounding Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction, this thread is intended for brainstorming ideas to improve Less Wrong, and also for including polls with your ideas as an easy way to get an estimate of what people in the community think of them.
Information on putting polls in comments can be found here. Please be thoughtful about how you word your poll options, and (in the case of multiple-choice polls) include a "no vote" or equivalent option so that people who just want to see the poll results won't be forced into picking an option they don't really support.
Thanks to everyone who shares ideas and everyone who takes the time to think about them and vote!
|
b48bf04e-d981-49b0-a501-9a2c786c93d4 | trentmkelly/LessWrong-43k | LessWrong | On Long and Insightful Posts
Concise articles are more constructive because their main argument is easier to refute. |
68f38480-1760-4423-ba72-93f565e91770 | trentmkelly/LessWrong-43k | LessWrong | Google I/O Day
What did Google announce on I/O day? Quite a lot of things. Many of them were genuinely impressive. Google is secretly killing it on the actual technology front.
> Logan Kilpatrick (DeepMind): Google’s progress in AI since last year:
>
> – The worlds strongest models, on pareto frontier
>
> – Gemini app: has over 400M monthly active users
>
> – We now process 480T tokens a month, up 50x YoY
>
> – Over 7M developers have built with the Gemini API (4x)
>
> Much more to come still!
I think? It’s so hard to keep track. There’s really a lot going on right now, not that most people would have any idea. Instead of being able to deal with all these exciting things, I’m scrambling to get to it all at once.
> Google AI: We covered a LOT of ground today. Fortunately, our friends at @NotebookLM put all of today’s news and keynotes into a notebook. This way, you can listen to an audio overview, create a summary, or even view a Mind Map of everything from #GoogleIO 2025.
That’s actually a terrible mind map, it’s missing about half of the things.
>
As in, you follow their CEO’s link to a page that tells you everything that happened, and it’s literally a link bank to 27 other articles. I did not realize one could fail marketing forever this hard, and this badly. I have remarkably little idea, given how much effort I am willing to put into finding out, what their products can do.
The market seems impressed, with Google outperforming, although the timing of it all was a little weird. I continue to be deeply confused about what the market is expecting, or rather not expecting, out of Google.
Ben Thompson has a gated summary post, Reuters has a summary as well.
I share Ben’s feeling that I’m coming away less impressed than I should be, because so many things were lost in the shuffle. There’s too much stuff here. Don’t announce everything at once like this if you want us to pay attention. And he’s right to worry that it’s not clear that Google, despite doing all the thin |
99f3891a-410d-4ffd-8f66-e57e09003a1b | trentmkelly/LessWrong-43k | LessWrong | Emotional valence vs RL reward: a video game analogy
(Update Sept 2021: I no longer believe that we make decisions that maximize the expected sum of future rewards—see discussion of TD learning here. It's more like "maximizing expected reward next step" (but "next step" can entail making a long-term plan). So this post isn't quite right in some of its specifics. That said, I don't think it's wildly wrong and I think it would just need a couple tweaks. Anyway, it's just a brainstorming post, don't take it too literally.)
I recently read a book about emotions and neuroscience (brief review here) that talked about "valence and arousal" as two key ingredients of our interoception. Of these, arousal seems pretty comprehensible—the brain senses the body's cortisol level, heart rate, etc. But the valence of an emotion—what is that? What does it correspond to in the brain and body? My brief literature search didn't turn up anything that made sense to me, but after thinking about it a bit, here is what I came up with (with the usual caveat that it may be wrong or obvious). But first,
Definition of "the valence of an emotional state" (at least as I'm using the term)
Here's how I want to define the valence of an emotional state:
* When I'm proud, that's a nice feeling, I like having that feeling, and I want that feeling to continue. That's positive valence.
* When I have a feeling of guilt and dread, that's a bad feeling, I don't like having that feeling, and I want that feeling to end as soon as possible. That's negative valence.
There's a chance that I'm misusing the term; the psychological literature itself seems all over the place. For example, some people say anger is negative valence, but when I feel righteous anger, I like having that feeling, and I want that feeling to continue. (I don't want to want that feeling to continue, but I do want that feeling to continue!) So by my definition, righteous anger is positive valence!
There are some seemingly-paradoxical aspects of how valence does or doesn't drive behavior: |
fc8e184d-f0f2-427b-8ea3-fe28dcf9bb70 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems.
*As a writing exercise, I'm writing an AI Alignment Hot Take Advent Calendar - one new hot take, written ~~every day~~ some days for 25 days. I have now procrastinated enough that I probably have enough hot takes.*
*I felt like writing this take a little more basic, so that it doesn't sound totally insane if read by an average ML researcher.*
*Edit - I should have cited* [*Buck's recent post*](https://www.alignmentforum.org/posts/NG6FrXgmqPd5Wn3mh/trying-to-disambiguate-different-questions-about-whether) *somewhere.*
Use of RLHF by OpenAI is a good sign in that it shows how alignment research can get adopted by developers of cutting-edge AI. I think it's even a good sign *overall*, probably. But still, use of RLHF by OpenAI is a bad sign in that it shows that jamming RL at real-world problems is endorsed as a way to make impressive products.
If you wandered in off the street, you might be confused why I'm saying RL is bad. Isn't it a really useful learning method? Hasn't it led to lots of cool stuff?
But if you haven't wandered in off the street, you know I'm talking about alignment problems - loosely, we want powerful AIs to do good things and not bad things, even when tackling the whole problem of navigating the real world. And RL has an unacceptable chance of getting you AIs that want to do bad things.
There's an obvious problem with RL for navigating the real world, and a more speculative generalization of that problem.
The obvious problem is wireheading. If you're a clever AI learning from reward signals in the real world, you might start to notice that actions that affect a particular computer in the world have their reward computed differently than actions that affect rocks or trees. And it turns out that by making a certain number on this computer big, you can get really high reward! At this point the AI starts searching for ways to stop you from interrupting its "heroin fix," and we've officially hecked up and made something that's adversarial to us.
Now, maybe you can do RL without this happening. [Maybe if you do model-based reasoning, and become self-reflective at the right time to lock in early values, you'll perceive actions that manipulate this special computer to be cheating according to your model, and avoid them](https://www.lesswrong.com/tag/shard-theory). I'll explain in a later take some extra work I think this requires, but for the moment it's more important to note that a lot of RL tricks are actually working directly *against* this kind of reasoning. When a hard environment has a lot of dead ends, and sparse gradients (e.g. Montezuma's Revenge, or the real world), you want to do things like generate intrinsic motivations to aid exploration, or use tree search over a model of the world, which will help the AI *break out of local traps* and find solutions that are globally better according to the reward function.
Maxima of the reward function have nonlocal echoes, like mountains have slopes and foothills. These echoes are the whole reason that looking at the local gradient is informative about which direction is better long-term, and why building a world model can help you predict never-before-seen rewarding states. Deep models and fancy optimizers are useful precisely because their sensitivity to those echoes helps them find good solutions to problems, and there's no difference in kind between the echoes of the solutions we want our AI to find, and the echoes of the solutions we didn't intend.
The speculative generalization of the problem is that there's a real risk of an AI sensing these echoes even if it's not explicitly intended to act in the real world, so long as its actions are affecting its reward-evaluation process, and it benefits from building a model of the real world. Suppose you have a language model that you're trying to train with RL, and your reward signal is the rating of a human who happens to be really easily manipulated (Maybe the language model just needs to print "I'm conscious and want you to reward me" and the human will give it high reward. Stranger things have happened.). If it's clever, perhaps if it implicitly builds up a model of the evaluation process in the process of getting higher reward, then it will learn to manipulate the evaluator. And if it's even cleverer, then it will start to unify its model of the evaluation process and its abstract model of the real world (which it's found useful for predicting text), which will suggest some strategies that might get *really* high reward.
Now, not all RL architectures are remotely like this. Model-based RL with a fixed, human-legible model wouldn't learn to manipulate the reward-evaluation process. But enough architectures *would* that it's a bad idea to just jam RL and large models at real-world problems. It's a recipe for turning upward surprises in model capability into upward surprises in model dangerousness.
All that said, I'm not saying you can never use RL - in fact I'm optimistic that solutions to the "build AI that does good things and not bad things" problem will involve it. But we don't know how to solve the above problems yet, so orgs building cutting-edge AI should already be thinking about restricting their use of RL. The outer loop matters most: a limited-scope component of an AI can be trained with an RL objective and still be used in a safe way, while a top-level controller optimized with RL can use normally-safe cognitive components in a dangerous way.
This includes RLHF. When I say "jam RL at real-world problems," I absolutely mean to include using RLHF to make your big language model give better answers to peoples' questions. If you've read the right alignment papers you're probably already thinking about ways that RLHF is particularly safe, or ways that you might tweak the RLHF fine-tuning process to enforce stricter [myopia](https://www.lesswrong.com/tag/myopia), etc. But fundamentally, it has problems that we don't know the solutions to, which makes it unsafe to use as a long-term tool in the toolbox.
Boy howdy it makes the numbers go up good though. |
b4a2296b-9a0b-4238-9362-9153e73329c2 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | The UK AI Safety Summit tomorrow
The [AI safety Summit](https://www.aisafetysummit.gov.uk/), initiated by Rishi Sunak, will take place on Wednesday and Thursday. It attempts to convene international governments (including China), leading AI companies, civil society groups, and experts in research across two days.
I spent a bit of time trying to understand the intended outcomes and the summit's agenda and thought I wanted to share that here.
Objectives
==========
The summit has [five objectives](https://www.aisafetysummit.gov.uk/):
* a shared understanding of the risks posed by frontier AI and the need for action
* a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
* appropriate measures which individual organisations should take to increase frontier AI safety
* areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
* showcase how ensuring the safe development of AI will enable AI to be used for good globally
**Agenda**
==========
[The Summit has the following agenda](https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme) (copy-pasted).
Day 1
-----
### **Understanding Frontier AI Risks (roundtable discussions)**
Delegates will break out to discuss the following topics with multi-disciplinary attendees. Conclusions from each session will be published at the end of the summit.
**1. Risks to Global Safety from Frontier AI Misuse**
Discussion of the safety risks posed by recent and next generation frontier AI models, including risks to biosecurity and cybersecurity.
**2. Risks from Unpredictable Advances in Frontier AI Capability**
Discussion of risks from unpredictable ‘leaps’ in frontier AI capability as models are rapidly scaled, emerging forecasting methods, and implications for future AI development, including open-source.
**3. Risks from Loss of Control over Frontier AI**
Discussion of whether and how very advanced AI could in the future lead to loss of human control and oversight, risks this would pose, and tools to monitor and prevent these scenarios.
**4. Risks from the Integration of Frontier AI into Society**
Risks from the integration of frontier AI into society include election disruption, bias, impacts on crime and online safety, and exacerbating global inequalities. The discussion will include measures countries are already taking to address these risks.
### **Improving Frontier AI Safety (roundtable discussions)**
Delegates will break out to discuss the following topics with multi-disciplinary attendees. Conclusions from each session will be published at the end of the summit.
**1. What should Frontier AI developers do to scale responsibly?**
Multidisciplinary discussion of Responsible Capability Scaling at frontier AI developers including defining risk thresholds, effective model risk assessments, pre-commitments to specific risk mitigations, robust governance and accountability mechanisms, and model development choices.
**2. What should National Policymakers do in relation to the risks and opportunities of AI?**
Multidisciplinary discussion of different policies to manage frontier AI risks in all countries including monitoring, accountability mechanisms, licensing, and approaches to open-source AI models, as well as lessons learned from measures already being taken.
**3. What should the International Community do in relation to the risks and opportunities of AI?**
Multidisciplinary discussion of where international collaboration is most needed to both manage risks and realise opportunities from frontier AI, including areas for international research collaborations.
**4. What should the Scientific Community do in relation to the risks and opportunities of AI?**
Multidisciplinary discussion of the current state of technical solutions for frontier AI safety, the most urgent areas of research, and where promising solutions are emerging.
### **AI for good – AI for the next generation (panel discussion)**
Discussion on the immense opportunities of AI to transform education for future generations, followed by closing remarks by the UK’s Secretary of State.
**Day 2**
---------
The Prime Minister will convene a small group of governments, companies and experts to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good.
In parallel, UK Technology Secretary Michelle Donelan will reconvene international counterparts to agree on the next steps.
The website says that "Conclusions from each session will be published at the end of the summit." so it's probably worth staying updated over the coming week. It feels a bit surreal, but this combined with initiatives such as [Biden’s Executive Order on Safe AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/), [UN’s high-level advisory body on AI](https://www.un.org/en/ai-advisory-body), and the UK Taskforce makes me much more hopeful that we might actually avoid an existential catastrophe from AI. |
9dc85d4f-0c90-4600-87f4-469db245144d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Welcome & FAQ!
*The AI Alignment Forum was launched in 2018. Since then, several hundred researchers have contributed approximately two thousand posts and nine thousand comments. Nearing the third birthday of the Forum, we are publishing this updated and clarified FAQ.*
*Minimalist, watercolor sketch of humanity spreading across the stars by VQGAN*I have a practical question concerning a site feature.
------------------------------------------------------
Almost all of the Alignment Forum site features are shared with LessWrong.com; have a look at the [LessWrong FAQ](http://www.lesswrong.com/faq) for questions concerning the [Editor](https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#The_Editor), [Voting,](https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#Karma___Voting) [Questions](https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#Questions), [Notifications & Subscriptions](https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#Notifications___Subscriptions), [Moderation](https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#Moderation), and more.
If you can’t easily find the answer there, ping us on Intercom (bottom right of screen) or email us at team@lesswrong.com
What is the AI Alignment Forum?
-------------------------------
The Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
Top voted posts include [What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like), [Are we in an AI overhang?](https://www.alignmentforum.org/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang), and [Embedded Agents](https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents). A list of the top posts of all time can be [viewed here](https://www.alignmentforum.org/allPosts?timeframe=allTime&sortedBy=top).
While direct participation in the Forum is limited to deeply established researchers in the field, we have designed it also as a place where up-and-coming researchers can get up to speed on the research paradigms and have pathways to participation too. See [*How can non-members participate in the Forum?*](https://www.alignmentforum.org/about#How_can_non_members_participate_in_the_Forum_)below.
We hope that by being the foremost discussion platform and publication destination for AI Alignment discussion, the Forum will serve as the archive and library of the field. To find posts by sub-topic, view the [AI section of the Concepts page](https://www.alignmentforum.org/tags/all).
Why was the Alignment Forum created?
------------------------------------
Foremost, because misaligned powerful AIs may pose the greatest risk to our civilization that has ever arisen. The problem is of unknown (or at least unagreed upon) difficulty, and allowing the researchers in the field to better communicate and share their thoughts seems like one of the best things we could do to help the pre-paradigmatic field.
In the past, journals or conferences might have been the best methods for increasing discussion and collaboration, but in the current age we believe that a well-designed online forum with things like immediate publication, distributed rating of quality (i.e. “peer review”), portability/shareability (e.g. via links), etc., provides the most promising way for the field to develop good standards and methodologies.
A further major benefit of having alignment content and discussion in one easily accessible place is that it helps new researchers get onboarded to the field. Hopefully, this will help them begin contributing sooner.
Who is the AI Alignment Forum for?
----------------------------------
There exists an interconnected community of Alignment researchers in industry, academia, and elsewhere who have spent many years thinking carefully about a variety of approaches to alignment. Such research receives institutional support from organizations including FHI, CHAI, DeepMind, OpenAI, MIRI, Open Philanthropy, ARC, and others. The Alignment Forum membership currently consists of researchers at these organizations and their respective collaborators.
The Forum is also intended to be a way to interact with and contribute to the cutting edge research for people not connected to these institutions either professionally or socially. There have been many such individuals on LessWrong, and that is the current best place for such people to start contributing, to be given feedback and to skill-up in this domain.
There are about 50-100 members of the Forum who are (1) able to post and comment directly to the Forum without review, (2) able to promote the content of others to the Forum. This group will not grow quickly; however, as of August 2021, we have made it easier for non-members to [submit content to the Forum](https://www.alignmentforum.org/about#How_can_non_members_participate_in_the_Forum_).
What type of content is appropriate?
------------------------------------
As a rule-of-thumb, if a thought is something you’d bring up when talking to someone at a research workshop or to a colleague in your lab, it’s also a welcome contribution here.
If you’d like a sense of what other Forum members are interested in, here’s some data from a survey conducted during the open beta of the Forum (n = 34). We polled these early users on what high-level categories of content they were interested in.
The responses were on a 1-5 scale, which represented “If I see 1 post per day, I want to see this type of content…” (1) Once per year, (2) Once per 3-4 months (3) Once per 1-2 months (4) Once per 1-2 weeks (5) A third of all posts that I see.
* New theory-oriented alignment research typical of MIRI or CHAI: **4.4 / 5**
* New ML-oriented alignment research typical of OpenAI or DeepMind's safety teams: **4.2 / 5**
* New formal or nearly-formal discussion of intellectually interesting topics that look questionably/ambiguously/peripherally alignment-related: **3.5 / 5**
* High-quality informal discussion of alignment research methodology and background assumptions, what's needed for progress on different agendas, why people are pursuing this or that agenda, etc: **4.1 / 5**
* Attempts to more clearly package/explain/summarise previously discussed alignment research: **3.7 / 5**
* New technical ideas that are clearly not alignment-related but are likely to be intellectually interesting to forum regulars: **2.2 / 5**
* High-quality informal discussion of very core background questions about advanced AI systems: **3.3 / 5**
* Typical AGI forecasting research/discussion that isn't obviously unusually relevant to AGI alignment work: **2.2 / 5**
What is the relationship between the Alignment Forum and LessWrong?
-------------------------------------------------------------------
The Alignment Forum was created by and is maintained by the team behind LessWrong (the web forum). The two sites share a codebase and database. They integrate in the following ways:
* **Automatic Crossposting** - Any new post or comment on the new AI Alignment Forum is automatically cross-posted to LessWrong.com. Accounts are also shared between the two platforms (though non-AF member accounts will not be able to post without review).
* **Content Promotion** - Any comment or post on LessWrong can be promoted by members of the AI Alignment Forum to the AI Alignment Forum.
* **Separate Reputation** – The reputation systems (karma) for LessWrong and the AI Alignment Forum are separate. On LessWrong you can see two reputation scores: a primary karma score combining karma from both sites, and a secondary karma score specific to AI Alignment Forum members. On the AI Alignment Forum, you will just see the AI Alignment Forum karma of posts and comments.
* **Content Ownership** - If a comment or post of yours is promoted to the AI Alignment Forum, you will continue to have full ownership of the content, and you’ll be able to respond directly to all comments on your content.
Both LessWrong and the Alignment Forum are foci of Alignment Discussion; however, the Alignment Forum maintains even higher standards of content quality than LessWrong. The goal is to provide a place where researchers with shared technical and conceptual background can collaborate, and where a strong set of norms for facilitating good research collaborations can take hold. For this reason, both submissions and members to the Alignment Forum are heavily vetted.
How do I get started in AI Alignment research?
----------------------------------------------
If you're new to the AI Alignment research field, we recommend four great introductory sequences that cover several different paradigms of thought within the field. Get started reading them and feel free to leave comments with any questions you have.
The introductory sequences are:
* [Embedded Agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh) by Scott Garrabrant and Abram Demski of MIRI
* [Iterated Amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) by Paul Christiano of ARC
* [Value Learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc) by Rohin Shah of DeepMind
* [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) by Richard Ngo, formerly of DeepMind
Following that, you might want to begin writing up some of your thoughts and sharing them on LessWrong to get feedback.
How do I join the Alignment Forum?
----------------------------------
As described above, membership to the Alignment Forum is very selective (and not strictly required to participate in discussions on Alignment Forum content, since one can do so on LessWrong).
The best pathway towards becoming a member is to produce lots of great AI Alignment content, and to post it to LessWrong and participate in discussions there. The LessWrong/Alignment Forum admins monitor activity on both sites, and if someone consistently contributes to Alignment discussions on LessWrong that get promoted to the Alignment Forum, then it’s quite possible full membership will be offered.
I work professionally on AI Alignment. Shouldn’t I be a member?
---------------------------------------------------------------
Maybe but not definitely! The bar for membership is higher than working on AI Alignment professionally, even if you are doing really great work. Membership, which allows you to directly post and comment, is likely to be offered only after multiple existing Alignment Forum members are excited to see your work. Until then, a review step is required. You can still submit content to the Alignment Forum but it might take a few days for a decision to be made.
Another reason for the high bar for membership is that any member has the ability to promote content to the Alignment Forum, kind of like a curator. This requires significant trust and membership is restricted to those who have earned this level of trust among the Alignment Forum members.
How can non-members participate in the Forum?
---------------------------------------------
Non-members can participate in the Forum in two ways:
**1. Posting and commenting Alignment content to LessWrong**
Alignment content posted to LessWrong will be seen by many of the researchers present on the Alignment Forum. If they (or the Forum admins) think that particular content is a good fit for the Forum, it will be promoted to the Forum and become viewable there.
If your posts or comments are promoted to the Alignment Forum, you will be able to directly participate in the discussion of your content on the Forum.
**2. Submitting content on the Alignment Forum**
Non-members can now submit content directly on the Alignment Forum (and not just via LessWrong).
* If you post or comment, your submission will enter a review queue and a decision to accept or reject it from the Alignment Forum will be made within three days. If it is rejected, you will receive a minimum one-sentence explanation.
* In the meantime (and regardless of outcome), your post or comment will be published to [LessWrong](https://www.lesswrong.com/). There it can be immediately viewed and discussed by everyone, and edited by you. This allows you to get quick feedback, and allows site admins to use the reaction there to help make the decision about whether it is a good fit for the Alignment Forum. For example, if several Alignment Forum members are discussing your content on LessWrong, it is likely a good fit for the Forum and will be promoted.
How can I submit something I already wrote?
-------------------------------------------
If you have already written and published a post on LessWrong but would like to submit it for acceptance to the Alignment Forum, please contact us via Intercom (bottom right) or email us at team@lesswrong.com
Who runs the Alignment Forum?
-----------------------------
The Alignment Forum is maintained and run by the [LessWrong team](https://www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction) who also run the LessWrong website. An independent board composed of representatives of major Alignment research orgs (and independent members too) oversees major decisions concerning the Forum.
Can I use LaTex?
----------------
Yes! You can use LaTeX in posts and comments with Cmd+4 / Ctrl+4.
Also, if you go into your user settings and switch to the markdown editor, you can just copy-paste LaTeX into a post/comment and it will render when you submit with no further steps required.
I have a different question.
----------------------------
Please don’t hesitate to contact us via Intercom (bottom right of the screen) or email us at team@lesswrong.com. We’d love to answer your questions. |
45233250-9074-487a-b1dc-1f3f32ad0d3c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Is there an analysis of the common consideration that splitting an AI lab into two (e.g. the founding of Anthropic) speeds up the development of TAI and therefore increases AI x-risk?
I'm asking because I can think of arguments going both ways.
**Note:** this post is focused on the generic question "what to expect from an AI lab splitting into two" more than on the specifics of the OpenAI vs Anthropic case.
Here's the basic argument: after splitting, the two groups of individuals are now competing instead of cooperating, with two consequences:
* they will rush faster toward TAI, **speeding up TAI timelines** (while outside safety work isn't correspondingly sped up);
* by doing so, they will **differentially neglect their own safety work compared to their capabilities work**.
However, there are some possible considerations against this frame:
1. the orgs are **competing on safety as well as capabilities**, in anticipation of future regulations as well as potentially in order to attract (especially future) talent;
2. in general, there is a case to be made that splitting a company into two will **decrease the total productivity of its individuals** (think about the limit of all individuals working separately), though this will depend on many attributes of the company and the splitting;
3. intuitively, I would expect a competition mindset to **increase the speed of straightforward engineering (scaling), but decrease the speed of serendipity (e.g. discovering a new more powerful AI paradigm)**. So the relevant consequence on AI x-risk might depend crucially on whether reaching TAI is only a matter of scaling or if we are still at least one breakthrough away;
4. **the existence of a competing lab is inevitable** (so except for the earlier founding, the splitting will be good if the competing lab has a better safety-to-capabilities profile than the one that would have been created otherwise);
5. **the competition mindset results in an earlier end to the openness of the capabilities research**, slowing down the rest of the world.
I expect that there exists empirical literature that could help dig deeper into points 2 and 3.
Is there an existing more thorough analysis of this, or can you give other arguments in one way or the other? |
6502d133-809c-4d8b-a0cb-c719500460be | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Carl Zimmer on mind uploading
<http://www.scientificamerican.com/article.cfm?id=e-zimmer-can-you-live-forever>
I realize he Zimmer is "just a popular author" (a pretty good one IMO), so filing this under "cultural penetration of singularity memes" |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.