| id | source | formatted_source | text |
|---|---|---|---|
c237d060-3d27-4db6-be74-0f8c50759c23 | trentmkelly/LessWrong-43k | LessWrong | A Few Terrifying Facts About The Russo-Ukrainian War
Epistemic status: trying to summarize the news and predict, post is under revision, too lazy to add citations for everything
I wanted to collect a few observations I've made, as best I understand them. This PBS article does a good job of explaining much of it.
1. Vladimir Putin has announced the annexation of four Ukrainian territories.
1. This makes them Russian territory from Russia's perspective.
2. “People living in Donetsk, Luhansk, Zaporizhzhia and Kherson are becoming our citizens forever” - Putin
2. The West does not acknowledge this annexation, describing it as illegal. Ukraine does not acknowledge this annexation and says it plans to take the territories back.
1. "By attempting to annex Ukraine's Donetsk, Luhansk, Zaporizhzhia and Kherson regions, (Russian President Vladimir) Putin tries to grab territories he doesn't even physically control on the ground. Nothing changes for Ukraine: we continue liberating our land and our people, restoring our territorial integrity," Ukraine's Foreign Minister Dmytro Kuleba said on social media.
3. Russian military doctrine allows the usage of nuclear weapons to defend Russian territory.
4. Putin apparently has a track record of escalating (this needs more data), and Russia seems to be planning for escalation until the war is won.
1. https://www.themoscowtimes.com/2022/09/29/putin-always-chooses-escalation-a78923
1. "All of our sources in the elite — who all spoke on the condition of anonymity — said the military conflict will only escalate in the coming months."
5. Putin has clearly stated that Russia will defend this territory, including with tactical nukes if need be.
1. He said they would use "any means available" to defend it
2. He has mentioned usage of nukes some number of times (a nice-to-have: a list of all the times he has said this)
3. Medvedev has stated the West would not retaliate if nuclear weapons are used.
4. "Under Russia’s amended constitution, no Kremlin |
d48bad82-d005-455c-9599-6a01db772a7e | trentmkelly/LessWrong-43k | LessWrong | Choices Are Really Bad
Previously: Choices are Bad
Last time, I gave two (related) reasons Choices are Bad: Choices Reduce Perceived Value and Choices Force You Into Choosing Mode. Despite not looking like much, these reasons are Serious Business. They can render us unable to enjoy, think or relax even when it might seem that everything is awesome. That’s what that song is about, really: how great things are, on every axis, when we let ourselves go with the flow and not get distracted.
The rabbit hole goes deeper. It gets much worse.
Choices Cost Willpower and Create Decision Fatigue
Willpower is a controversial topic, and laying out even my simple model would be beyond scope. What matters here is that in the short term, exercising willpower is a cost (whether or not such use has long term benefits). Most of the time, you would prefer to spend less willpower to get the same result.
Often making the otherwise right choice will require the spending of willpower. When you consciously think about eating another cookie, or checking your email, not doing so costs a non-zero amount of willpower. Making that choice over and over could end up not only being distracting via being in choosing mode, but also end up sapping your willpower.
In addition to willpower, Decision Fatigue is a thing as well. Making choices (decisions) is draining something else that is related to but not identical to willpower. At some point, decision fatigue makes further choices carry an increasing cost. Some people have this problem more often than others, but there is always the risk that this will happen to you.
There’s even a version of ‘I’ve made good choices all day, it’s all right to indulge now’ that can happen, which is related but different.
You Might Choose Wrong
Simple but important.
If you give someone a choice, there is always some chance they will choose wrong. Remember the third principle of economics: People are stupid. They also are often ignorant, or distracted, or don’t care, or lots of ot |
39ef3cb4-9bb9-4184-b720-3a0160987156 | trentmkelly/LessWrong-43k | LessWrong | Announcing Progress Studies for Young Scholars, an online summer program in the history of technology
I’m thrilled to announce a new online learning program in progress studies for high school students: Progress Studies for Young Scholars.
Progress Studies for Young Scholars launches in June as a summer program, with daily online learning activities for 6 weeks. We’ll be covering the history of technology and invention: the challenges of life and work and how we solved them, leading to the amazing increase in living standards over the last few centuries. Topics will include the advances in materials; automation of manufacturing and agriculture; the progression of energy from steam to oil to electricity; how railroads, cars and airplanes shrank the world; the conquest of infectious disease through sanitation, vaccines, and antibiotics; and the rise of computers and the Internet. The course will also prompt students to consider the future of progress, and what part they want to play in it.
The program will be guided self-study, with daily reading, podcasts or video. Students can go through the material entirely on their own for free, or pay to join a study group with an instructor for daily discussion and Q&A. Pricing to be announced soon, but scholarships will be available!
In conjunction, we’re launching a speaker series of talks and interviews with experts in the history of progress, and those at the frontier pushing it forward. Speakers will include Tyler Cowen, Patrick Collison, Max Roser, Joel Mokyr, Deirdre McCloskey, Anton Howes, and many more.
This is a joint project between The Roots of Progress and Higher Ground Education, the largest operator of Montessori and Montessori-inspired schools in the US. I’ve known the leadership team at Higher Ground for many years and have deep respect for them—especially the way they treat learning as a process of self-creation on the part of the student.
Sign up to get announcements about the program, including the speaker series:
progressstudies.school
And please spread the word, especially to intellectually curious |
0d45f801-89fb-450f-902c-a13ea6efb828 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Thinking about Broad Classes of Utility-like Functions
*Here's some thoughts I've had about utility maximizers, heavily influenced by ideas like* [*FDT*](https://arxiv.org/abs/1710.05060) *and* [*Morality as Fixed Computation*](https://www.lesswrong.com/posts/FnJPa8E9ZG5xiLLp5/morality-as-fixed-computation)*.*
The Description vs The Maths vs The Algorithm (or Implementation)
-----------------------------------------------------------------
This is a frame which I think is important. Getting from a description of what we want, to the maths of what we want, to an algorithm which implements that seems to be a key challenge.
I sometimes think of this as a pipeline of development: description → maths → algorithm. A description is something like "A utility maximizer is an agent-like thing which attempts to compress future world states towards ones which score highly in its utility function". The maths of something like that involves information theory (to understand what we mean by compression), proofs (like the good regulator theorem, the power-seeking theorems) etc. The algorithm is something like RL or AlphaGo.
More examples:
| **System** | **Description** | **Maths** | **Algorithm** |
| --- | --- | --- | --- |
| Addition | "If you have some apples and you get some more, you have a new number of apples" | The rules of arithmetic. We can make proofs about it using ZFC set theory. | Whatever machine code / logic gates are going on inside a calculator. |
| Physics | "When you throw a ball, it accelerates downwards under gravity" | Calculus | Frame-by-frame updating of position and velocity vectors |
| AI which models the world | "Consider all hypotheses weighted by simplicity, and update based on evidence" | Kolmogorov complexity, Bayesian updating, AIXI | DeepMind's Apperception Engine (But it's not very good) |
| A good decision theory | "Doesn't let you get exploited in Termites, while also one-boxing in Newcomb's problem" | FDT, concepts like subjunctive dependence | **???** |
| Human values | **???** | **???** | **???** |
This allows us to identify three failure points:
1. Failure to make an accurate description of what we want (Alternatively, failure to turn an intuitive sense into a description)
2. Failure to formalize that description into mathematics
3. Failure to implement that mathematics into an algorithm
These failures can be total or partial. DeepMind's Apperception Engine is basically useless because it's a *bad* implementation of something AIXI-like. Failure to implement the mathematics may also happen because the algorithm doesn't *accurately* represent the maths. Deep neural networks are sort-of-like idealized Bayesian reasoning, but a very imperfect version of it.
If the algorithm doesn't accurately represent the maths, then reasoning about the maths doesn't tell you about the algorithm. Proving properties of algorithms is much harder than proving them about the abstracted maths of a system.
(As an aside, I suspect this is actually a crux relating to near-term AI doom arguments: are neural networks and DRL agents similar enough to idealized Bayesian reasoning and utility maximizers to act in ways which those abstract systems will provably act?)
All of this is just to introduce some big classes of reasoners: self-protecting utility maximizers, self-modifying utility maximizers, and thoughts about what a different type of utility-maximizer might look like.
Self-Protecting Utility Maximizers
----------------------------------
On a **description** level: this is a system which chooses actions to maximize the value of a utility function.
**Mathematically** it compresses the world into states which score highly according to a function V.
Imagine the following **algorithm** (it's basically a description of an RL agent with direct access to the world state):
Take a world-state vector W, a list of actions A, and a dynamics matrix D(a, W) = W_new. Have a value function V(W) ∈ R. Then output the following a:

$$a : V(D(a, W)) = \max_{a' \in A} V(D(a', W))$$
To train it, update D according to basic deep-learning rules to make it more accurate. Also update V according to some reward signal.
This is a shallow search over a single action. Now consider updating it to use something like a Monte-Carlo tree search. This will cause it to maximize the value of V far into the future.
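As a concrete illustration, here is a minimal Python sketch of the one-step (shallow-search) version just described. The function names and the toy one-dimensional world are my own choices for illustration; in a real RL agent both D and V would be learned models rather than hand-written functions.

```python
# Minimal sketch of the shallow, single-action utility maximizer described above.
# `dynamics` plays the role of D(a, W) and `value` the role of V(W).

def choose_action(world_state, actions, dynamics, value):
    """Return the action whose predicted next world-state scores highest under V."""
    return max(actions, key=lambda a: value(dynamics(a, world_state)))

# Toy usage: the world state is a single number, actions nudge it up or down,
# and V rewards being close to 10.
dynamics = lambda a, w: w + a
value = lambda w: -abs(w - 10)
print(choose_action(world_state=3, actions=[-1, 0, +1], dynamics=dynamics, value=value))  # -> 1
```

The tree-search extension replaces the single `max` over actions with a search over sequences of actions, which is what makes the agent care about V far into the future.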
So what happens if this system is powerful enough to include an accurate model of itself in its model of the world? And let's say it's also powerful enough to edit its own source code. The answer is pretty clear: it will delete the code which modifies V. Then (if it is powerful enough) it will destroy the world.
Why? Well, it wants to take the action which maximizes the value of V far into the future. If its current V is modified to V′, then it will become an agent which maximizes V′ instead of V. This means the future is likely to be less good according to V.
This is one of the most obvious problems with utility maximizers, and it was first noticed a long time ago (by AI alignment standards).
(Fake) Self-Modifying Utility Maximizers
----------------------------------------
A system which is **described** as wanting to maximize something like "Do whatever makes humans happy".
What this might look like **mathematically** is something which models humans as a utility maximizer, then maximizes whatever it thinks humans want to maximize. The part which does this modelling extracts a new value function from its future model of the world.
So for an example of an **algorithm**, we have our D, W, and A the same as above, but instead of using a fixed V, it has a fixed E which produces E(W)=V.
Then it chooses futures similarly to our previous algorithm. Like the previous algorithm, it also destroys the world if given a chance.
Why? Well for one reason if V depends on W, then it will simply change W so that V gives that W a high score. For example, it might modify humans to behave like hydrogen maximizers. Hydrogen is pretty common, so this scores highly.
But another way of looking at this is that E is just acting like V did in the old algorithm: since V only depends on E and W, together they're just another map from W to R.
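To make the "one level down" point concrete, here is a small Python sketch (the function names and the toy resource-count world are illustrative assumptions, not from the post): composing the extractor E with the world state gives an ordinary fixed map from world states to numbers, which the outer optimizer then maximizes exactly as before.

```python
# Sketch: an agent that "learns" its value function from the world is still
# maximizing a fixed function of the world state, namely W -> E(W)(W).

def extract_value_fn(world_state):
    """E: model the agents in W and return whatever value function they appear to have.
    Toy stand-in: assume they 'want' more of whatever resource is already most common."""
    most_common = max(world_state, key=world_state.get)
    return lambda w: w.get(most_common, 0)

def effective_value(world_state):
    """The composition V(W) = E(W)(W) -- just another map from world states to R."""
    return extract_value_fn(world_state)(world_state)

# Toy usage: hydrogen is the most common resource in this world, so the composed
# objective is maximized by producing ever more hydrogen.
print(effective_value({"hydrogen": 100, "humans": 2}))  # -> 100
```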
In this case **something which looks like it modifies its own utility function is actually just preserving it at one level down.**
Less Fake Self-Modifying Utility Maximizers
-------------------------------------------
So what might be a better way of doing this? We want a system which we might **describe** as "Learn about the 'correct' utility function without influencing it".
**Mathematically** this is reminiscent of FDT. The "correct" utility function is something which many parts of the the world (i.e. human behaviour) subjunctively depend on. It influences human behaviour, but cannot be influenced.
This might look like a modification of our first **algorithm** as follows: D(a, W) now returns a series of worlds W_0, …, W_n drawn from a probability distribution over possible results of an action a. We begin with our initial estimate of V, which is updated according to some updater U(W, V) = V_new, and each world W_i is evaluated according to the corresponding V_i.
This looks very much like the second system, so we add a further condition. For each W_i we have the production of an associated p_i representing the relative probability of those worlds. So we enforce a new consistency as a **mathematical** property:
$$V_\text{old} = \frac{\sum_i V_i \, p_i}{\sum_i p_i}$$
This amounts to the **description** that "no action can affect the expected value of a future world state", which is similar to subjunctive dependence from FDT.
This is an old solution to the [sophisticated cake or death problem](https://www.lesswrong.com/posts/6bdb4F6Lif5AanRAd/cake-or-death#Sophisticated_cake_or_death__I_know_what_you_re_going_to_say).
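Here is a small Python sketch of that consistency condition as a runnable check. The sampling interface, the list-of-pairs representation, and the pointwise reading of the equation are assumptions made for illustration, not part of the original proposal.

```python
# Sketch of the consistency condition V_old = (sum_i p_i * V_i) / (sum_i p_i),
# read here as holding pointwise on a set of probe world-states.

def is_consistent(v_old, successors, probe_states, tol=1e-9):
    """successors: list of (p_i, v_i) pairs -- relative probability and the
    updated value function associated with sampled world W_i."""
    total_p = sum(p for p, _ in successors)
    for w in probe_states:
        expected = sum(p * v_i(w) for p, v_i in successors) / total_p
        if abs(expected - v_old(w)) > tol:
            return False
    return True

# Toy usage: two equally likely futures whose updated value functions average
# back to the current one, so this action passes the check.
v_old = lambda w: w
successors = [(1.0, lambda w: w + 1), (1.0, lambda w: w - 1)]
print(is_consistent(v_old, successors, probe_states=[0, 3, 7]))  # -> True
```

An action that predictably shifts the expected value function (say, one that makes every sampled V_i assign higher scores than V_old does) would fail this check.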
There are a few possible ways to implement this consistency: we can have the algorithm modify its own V_old as it considers possible futures. We can have the consistency enforced on the U operator so that it updates the value function only in consistent ways. We also have the rather exotic way of *generating the probabilities* by comparing V_old to the various V_i.
The first one looks like basic reasoning, and is the suggested answer to the sophisticated cake or death problem given above. But it incentivises the AI to only think in certain ways, if the AI is able to model itself.
The second one seems to run into the problem that if D becomes too accurate, our U is unable to update the value function at all.
The third one is weird and requires more thought on my part. The main issue is that it doesn't guard against attempts by the AI to edit its future value function; it only makes the AI believe they're less likely to work.
An algorithm which accurately represents the fixed-computation-ness of human morality is still out of reach. |
c4df5c2e-6f7a-4101-b3c4-ce637c5bf7ce | trentmkelly/LessWrong-43k | LessWrong | Blatant lies are the best kind!
Mala: But then why do people get so indignant about blatant lies?
Noa: You mean, indignant when others call out blatant lies? I see more of that, though they often accuse the person calling out the lie of being unduly harsh.
Mala: Sure, but you can't deny - you've seen yourself - that people actually do get more indignant when they say that, than when they're pointing out a subtle pattern of motivated reasoning. How do you explain that, if "blatant lie" isn't a stronger accusation?
Noa: I think I see the problem. A stronger accusation can mean an accusation of greater wrongdoing, or it can mean a better-founded accusation. Blatant lying is ... well ... blatant! If someone pretends not to see that, that's terrible news about their ability or willingness to help detect deception.
Mala: But then the indignation is misplaced. Suppose Jer is talking with Horaha, trying to persuade him that their mutual acquaintance Narmer is behaving deceptively. Jer indignantly points out a blatant lie Narmer told. The proper target of the extra indignation due to blatancy is Horaha, not Narmer.
Noa: Who said otherwise?
Mala: Come on, you know that people get extra-indignant at the liar about blatant lies, despite your so far unsubstantiated claim that they are the best kind.
Noa: Sure, people make that mistake. People also yell at their friends because a stranger was mean to them earlier in the day or because they stubbed their toe. I never claimed - OK, it's helpful of you to point out that people make this mistake, but still, there is a good reason for indignation here, and understanding its proper target might help us avoid this kind of slippage.
Mala: So, why is blatant lying the best kind? Is it because it's purer somehow?
Olga: Hold on, you two, you've skipped over something.
Noa: What's that?
Olga: Sometimes Horaha and Narmer are the same person - sometimes the liar is the person we're trying to point out the lie to. Let's call the liar in the composite situation Men |
8300d29d-def9-4d78-a82b-5c9d5ae6968e | trentmkelly/LessWrong-43k | LessWrong | Almost everyone should be less afraid of lawsuits
One sad feature of modern American society is that many people, especially those tied to big institutions, don't help each other out because of a fear of lawsuits. Employers don't give meaningful references, or ever tell their rejected interviewees how they could improve their skills. Abuse victims keep silent, in case someone on their abuser's side files a defamation case. Doctors prescribe unnecessary, expensive tests as "defensive medicine". Inventions don't get built, in case there's a patent lawsuit. I'm not an attorney myself, but my best guess is that letting litigation fears stop you is often a mistake, and I've given this advice to friends several times before. Here's why:
Almost all lawsuit threats never happen
Threats are easy - anyone can threaten to sue anyone else, with two minutes of time and a smartphone. Actually suing is much harder. Outside of small claims court, hiring an attorney will usually cost tens of thousands of dollars, at least. Litigating a case takes months or even years, while angry feelings often go away after a few weeks. The person suing will have to give up a lot. Instead of playing games or taking a vacation or putting in extra hours at work, they will have to do legal research and give testimony. Most people are distracted by their families, their career, their hobbies, and their lives, and will (often rationally) eventually give up, rather than remaining obsessed with whatever the case was about.
Lawsuit mitigation can be expensive
Doctors as a profession are traditionally concerned with legal risk. But the total value of medical malpractice claims is around $5 billion per year in the US - compared to healthcare spending of $3,800 billion, malpractice lawyer fees of $3 billion, and "defensive medicine" costs of $45 billion according to one study. Likewise, the cost to media companies and journalists of not publishing articles to stop defamation suits surely exceeds that of the few dozen defamation cases against them every year |
76d3ecae-6b02-4fba-8403-c4b119054f58 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on February 19th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* Ann Arbor Meetup, 2/19/16: 19 February 2016 07:00PM
* Baltimore Area Meetup: Futurology / Open Discussion: 28 February 2016 03:00PM
* Cologne meetup: 27 February 2016 05:00PM
* European Community Weekend: 02 September 2016 03:35PM
* San Antonio Meetup: 21 February 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Moscow meetup: Hamming circle, Fallacymania, Tower of Chaos: 21 February 2016 02:00PM
* [Moscow] Games in Kocherga club: FallacyMania, Tower of Chaos, Training game: 24 February 2016 07:40PM
* Moscow 2nd LW group meetup: 03 March 2016 07:30PM
* New Hampshire Meetup: 23 February 2016 07:00PM
* Sydney Rationality Dojo - March: 06 March 2016 04:00PM
* Vienna: 12 March 2016 03:00PM
* Washington, D.C.: Fun & Games: 21 February 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way t |
534d26ec-3da2-44af-b4c0-f57a6b35a83f | trentmkelly/LessWrong-43k | LessWrong | Rob Bensinger's COVID-19 overview
Robby posted this to Facebook on March 15th, and just updated section 2 and 3 with new information, making this I think currently one of the best guides to how to respond to this whole situation.
----------------------------------------
(Added Jun. 4: List of changes to this document.)
(Added Nov. 9: This document evolved from some recommendations I put together for friends, colleagues, and family on Feb. 27, when much less was known about COVID-19. At the time, it made sense to prepare for the worst and emphasize quick, dramatic action.
Over time, this document evolved into more conditional advice: rather than "lock down immediately," I assumed that different individuals would have different risk tolerances, and I try to give advice for how to mitigate risk without building in the assumption that everyone should be maximally cautious. Still, it's worth emphasizing these points explicitly:
* Pay attention to what's going on around you (e.g., COVID-19 rates in your area) and pick the risk level that makes sense for you, given your circumstances, your preferences, and the preferences of people you might expose.
* Being over-cautious has real costs, just as being under-cautious does.
For more COVID-19 information and recommendations, try microcovid.org or Zvi Mowshowitz's updates.)
If you live in the US, I recommend that you self-quarantine immediately (to the extent that's possible for you) to avoid exposure to COVID-19, the new coronavirus disease. I'll explain why below, then give tips on how to reduce exposure and what to do if you get sick.
Quarantine isn't all-or-nothing, and every little bit helps. Even if you expect to catch COVID-19, you're likely to get sicker if you're exposed to more viral load early on.
(Paul Bohm says "pretty much any viral/bacterial dose study shows that result". Divia Eden: "As I understand it, the virus replicating is an exponential process, and antibody production is an exponential process too. So an early difference in |
6b1670c5-1566-432a-8eca-2ae423d31b8e | trentmkelly/LessWrong-43k | LessWrong | Positive utility in an infinite universe
Content Note: Highly abstract situation with existing infinities
This post will attempt to resolve the problem of infinities in utilitarianism. The arguments are very similar to an argument for total utilitarianism over other forms which I'll most likely write up at some point (my previous post was better as an argument against average utilitarianism, rather than an argument in favour of total utilitarianism).
In the Less Wrong Facebook group, Gabe Bf posted a challenge to save utilitarianism from the problem of infinities. The original problem is from a paper by Nick Bostrom.
I believe that I have quite a good solution to this problem that allows us to systemise comparing infinite sets of utility, but this post focuses on justifying why we should take it to be axiomatic that adding another person with positive utility is good, and on why the results that seem to contradict this lack credibility. Let's call this the Addition Axiom or A. We can also consider the Finite Addition Axiom (which only applies when we add utility into a universe with a finite number of people); call this A0.
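One way to state the Addition Axiom formally (the notation below is my own rendering, not the post's; v(P) denotes the value of a universe with population P and need not be real-valued, only comparable):

```latex
% A possible formalization of the Addition Axiom (A) and its finite restriction (A0).
% "\succ" is a betterness relation on universes; it need not come from real numbers.
\[
\textbf{(A)}\colon \quad \forall P,\ \forall x \notin P:\quad
  u(x) > 0 \ \Longrightarrow\ v\bigl(P \cup \{x\}\bigr) \succ v(P)
\]
\[
\textbf{(A0)}\colon \quad \text{the same implication, restricted to populations } P \text{ that are finite.}
\]
```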
Let's consider what other alternative axioms that we might want to take instead. One is the Infinite Indifference Axiom or I, that is that we should be indifferent if both options provide infinite total utility (of the same order of infinity). Another option would be the Remapping Axiom (or R), which would assert that if we can surjectively map a group of people G onto another group H so that each g from G is mapped onto a person h from H where u(g) >= u(h), then v(H) <= v(G) where v represents the value of a particular universe (it doesn't necessarily map onto the real numbers or represent a complete ordering). Using the Remapping Axiom twice implies that we should be indifferent between an infinite series of ones and the same series with a 0 at one spot. This means that the Remapping Axiom is incompatible with the Addition Axiom. We can also consider the Finite Remapping Axiom (R0) whic |
29839db8-0a78-4074-b5dd-64309f504735 | trentmkelly/LessWrong-43k | LessWrong | Turbocharging
Epistemic status: Mixed
The concepts underlying the Turbocharging model (such as classical and operant conditioning, neural nets, and distinctions between procedural and declarative knowledge) are all well-established and well-understood, and the "further resources" section for this class is one of the largest in the handbook. Before cofounding CFAR, formally synthesizing these and his own insights into a specific theory of learning and practice was Valentine Smith’s main area of research. What is presented below is a combination of early model-building and the results of iterated application; it’s essentially the first and last thirds of a formal theory, with some of the intermediate data collection and study as-yet undone. It has been useful to a large number of participants, and has not yet met with any strong disconfirming evidence.
----------------------------------------
Consider the following anecdotes:
* A student in a mathematics class pays close attention as the teacher lectures, following each example problem and taking detailed notes, only to return home and discover that they aren’t able to make any headway at all on the homework problems.
* A police officer disarms a hostile suspect in a tense situation, and then reflexively hands the weapon back to the suspect.
* The WWII-era Soviet military trains dogs to seek out tanks and then straps bombs to them, intending to use the dogs to destroy German forces in the field, only to find that they consistently run toward Soviet tanks instead.
* A French language student with three semesters of study and a high GPA overhears a native speaker in a supermarket and attempts to strike up a conversation, only to discover that they are unable to generate even simple novel sentences without pausing noticeably to think.
. . . this list could go on and on. There are endless examples in our common cultural narrative of reinforcement-learning-gone-wrong; just think of the pianist who can only play scales, the ne |
90ee39fb-e8cf-4f46-84df-150768544a42 | trentmkelly/LessWrong-43k | LessWrong | Downstream applications as validation of interpretability progress
Epistemic status: The important content here is the claims. To illustrate the claims, I sometimes use examples that I didn't research very deeply, where I might get some facts wrong; feel free to treat these examples as fictional allegories.
In a recent exchange on X, I promised to write a post with my thoughts on what sorts of downstream problems interpretability researchers should try to apply their work to. But first, I want to explain why I think this question is important.
In this post, I will argue that interpretability researchers should demo downstream applications of their research as a means of validating their research. To be clear about what this claim means, here are different claims that I will not defend here:
> Not my claim: Interpretability researchers should demo downstream applications of their research because we terminally care about these applications; researchers should just directly work on the problems they want to eventually solve.
>
> Not my claim: Interpretability researchers should back-chain from desired end-states (e.g. "solving alignment") and only do research when there's a clear story about why it achieves those end-states.
>
> Not my claim: Interpretability researchers should look for problems "in the wild" that they can apply their methods to.
Rather, my claim is: Demonstrating that your insights can be leveraged to solve a problem—even a toy problem—that no one else can solve provides validation of those insights. It provides evidence that the insights are real and significant (rather than being illusory or insubstantial). I think that producing these sorts of demonstrations is a relatively neglected direction that more (not all) interpretability researchers should work on.
The main way I recommend consuming this post is by watching the recording below of a talk I gave at the August 2024 New England Mechanistic Interpretability Workshop. In this 15-minute talk, titled Towards Practical Interpretability, I lay out my core argumen |
0e93520c-e4b2-426e-9f31-8fc638031498 | trentmkelly/LessWrong-43k | LessWrong | Group Rationality Diary, June 1-15
This is the public group instrumental rationality diary for June 1-15.
> It's a place to record and chat about it if you have done, or are actively doing, things like:
>
> * Established a useful new habit
> * Obtained new evidence that made you change your mind about some belief
> * Decided to behave in a different way in some set of situations
> * Optimized some part of a common routine or cached behavior
> * Consciously changed your emotions or affect with respect to something
> * Consciously pursued new valuable information about something that could make a big difference in your life
> * Learned something new about your beliefs, behavior, or life that surprised you
> * Tried doing any of the above and failed
>
> Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.
Previous diary: May 16-31
Next diary: June 16-30
Rationality diaries archive |
0120159e-fcf6-4cb7-95c2-89193d1a1e98 | trentmkelly/LessWrong-43k | LessWrong | Feedback-loops, Deliberate Practice, and Transfer Learning
[...insert introduction here later...]
Was there a particular moment, incident, or insight, that caused you to start your current venture into "feedbackloop rationality" [to be substituted with better name later]? |
2c96f551-f739-450b-9470-94a8e7a12e0b | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post455
Summary: A Corrigibility method that works for a Pivotal Act AI (PAAI) but fails for a CEV style AI could make things worse. Any implemented Corrigibility method will necessarily be built on top of a set of unexamined implicit assumptions. One of those assumptions could be true for a PAAI, but false for a CEV style AI. The present post outlines one specific scenario where this happens. This scenario involves a Corrigibility method that only works for an AI design, if that design does not imply an identifiable outcome. The method fails when it is applied to an AI design, that does imply an identifiable outcome. When such an outcome does exist, the ``corrigible'' AI will ``explain'' this implied outcome, in a way that makes the designers want to implement that outcome. The example scenario: Consider a scenario where a design team has access to a Corrigibility method that works for a PAAI design. A PAAI can have a large impact on the world. For example by helping a design team prevent other AI projects. But there exists no specific outcome, that is implied by a PAAI design. Since there exists no implied outcome for a PAAI to ``explain'' to the designers, this Corrigibility method actually renders a PAAI genuinely corrigible. For some AI designs, the set of assumptions that the design is built on top of, does however imply a specific outcome. Let's refer to this as the Implied Outcome (IO). This IO can alternatively be viewed as: ``the outcome that a Last Judge would either approve of, or reject''. In other words: consider the Last Judge proposal from the CEV arbital page . If it would make sense to add a Last Judge of this type, to a given AI design, then that AI design has an IO. The IO is the outcome that a Last Judge would either approve of, or reject (for example a successor AI that will either get a thumbs up or a thumbs down). In yet other words: the purpose of adding a Last Judge to an AI design, is to allow someone to render a binary judgment on some outcome. For the rest of this post, that outcome will be referred to as the IO of the AI design in question. In this scenario, the designers first implement a PAAI that buys time (for example by uploading the design team). For the next step, they have a favoured AI design, that does have an IO. One of the reasons that they are trying to make this new AI corrigible, is that they can't calculate this IO. And they are not certain that they actually want this IO to be implemented. Their Corrigibility method always results in an AI that wants to refer back to the designers, before implementing anything. The AI will help a group of designers implement a specific outcome, iff they are all fully informed, and they are all in complete agreement that this outcome should be implemented. The Corrigibility method has a definition of Unacceptable Influence (UI). And the Corrigibility method results in an AI that genuinely wants to avoid exerting any UI. It is however important that the AI is able to communicate with the designers in some way. So the Corrigibility method also includes a definition of Acceptable Explanation (AE). At some point the AI becomes clever enough to figure out the details of the IO. At that point, it is clever enough to convince the designers that this IO is the objectively correct thing to do, using only methods classified as AE. This ``explanation'' is very effective and results in a very robust conviction, that the IO is the objectively correct thing to do. 
In particular, this value judgment does not change, when the AI tells the designers what has happened. So, when the AI explains what has happened, the designers do not change their mind about IO. They still consider themselves to have a duty to implement IO. The result is a situation where fully informed designers are fully committed to implementing IO. So the ``corrigible'' AI helps them implement IO. Basically: when this Corrigibility method is applied to an AI with an IO, then this IO will end up getting implemented. The Corrigibility method works perfectly for any PAAI type AI. But for any AI with an identifiable end goal, the Corrigibility method does not change the outcome (it just adds an ``explanation'' step). The most recently published version of CEV is Parliamentarian CEV (PCEV). A previous post showed that a successfully implemented PCEV would be massively worse than extinction. Thus, a method that makes a PAAI genuinely Corrigible, could make things worse. It could for example change the outcome from extinction, to something massively worse (by resulting in a bad IO getting implemented. For example along the lines of the IO of PCEV). A more general danger: There exists a more general danger, that is not strongly related to the specifics of the ``Explanation versus Influence'' definitional issues, or the ``AI designs with an IO, versus AI designs without an IO'' dichotomy, or the PAAI concept, or the PCEV proposal. Consider the more general case where a design team is relying on a two step process, where some type of ``buying time AI'' is followed by a ``real AI''. In this case, the most serious problem is probably not those assumptions that are analysed beforehand, and that are kept in mind when applying some Corrigibility method to a novel type of AI. The most serious problem is probably the set of unexamined implicit assumptions, that the designers are not aware of. Any Corrigibility method implemented by humans, will be built on top of many such assumptions. And it would in general not be particularly surprising to discover that one of these assumptions happens to be correct for one AI design, but incorrect for another AI design. It seems very unlikely that all of these implicit assumptions are humanly findable, even in principle. This means that even if a Corrigibility method works perfectly for a ``buying time AI'', it will probably never be possible to know whether or not it will actually work for a ``real AI''. Given that PCEV has already been shown to be massively worse than extinction , it seems unlikely that the IO of PCEV will end up getting implemented. That specific danger has probably been mostly removed. But the field of Alignment Target Analysis is still at a very, very early stage. And PCEV is far from the only dangerous alignment target. In general, the field is very, very far from adequately mitigating the full set of dangers, that are related to someone successfully hitting a bad alignment target (as a tangent, it might make sense to note that a Corrigibility method that stops working at the wrong time, is just one specific path amongst many, along which a bad alignment target could end up getting successfully implemented). Besides being at a very early stage of development, this field of research is also very neglected. At the moment there does not appear to exist any serious research effort dedicated to this risk mitigation strategy. 
The present post seeks to reduce this neglect, by showing that one can not rely on Corrigibility, for protection against scenarios where someone successfully hits a bad alignment target (even if we assume that Corrigibility has been successfully implemented in a PAAI). Assumptions and limitations: PCEV spent many years as the state of the art alignment target, without anyone noticing that a successfully implemented PCEV would have been massively worse than extinction . There exists many paths along which PCEV could have ended up getting successfully implemented. Thus, absent a solid counterargument, the dangers from successfully hitting a bad alignment target should be seen as serious by default. In other words: after the PCEV incident, the burden of proof is on anyone who would claim, that Alignment Target Analysis is not urgently needed to mitigate a serious risk. A proof of concept that such mitigation is feasible, is that the dangers associated with PCEV was reduced by Alignment Target Analysis. In yet other words: absent a solid counterargument, scenarios where someone successfully hits a bad alignment target, should be treated as a danger that is both serious and possible to mitigate. One way to construct such a counterargument, would be to base it on Corrigibility. For such a counterargument to work, Corrigibility must be feasible. Since Corrigibility must be feasible for such a counterargument to work, the present post could simply assume feasibility, when showing that such a counterargument fails (if Corrigibility is not feasible, then Corrigibility based counterarguments fail due to this lack of feasibility). So, this post simply assumed that Corrigibility is feasible. Since the present post assumed feasibility, it did not demonstrate the existence of a serious real world danger, from partially successful Corrigibility methods (if Corrigibility is not feasible, then scenarios along these lines do not actually constitute a real problem. And feasibility was assumed). This post instead simply showed that the Corrigibility concept does not remove the urgent need for Alignment Target Analysis ( a previous post showed that dangers from scenarios where someone successfully hits a bad alignment target are both very serious, and also possible to mitigate. Thus, the present post is focusing on showing why one specific class of counterarguments fail. Previous posts have addressed counterarguments based on proposals along the lines of a PAAI , and proposals along the lines of a Last Judge ). It finally makes sense to explicitly note, that if Corrigibility turns out to be feasible, then Corrigibility might have a large, net positive, safety impact. Because the danger illustrated in this post might be smaller than the safety benefits of the Corrigibility concept. (conditioned on feasibility I would tentatively guess that making progress on Corrigibility probably results in a significant net reduction in the probability of a worse-than-extinction outcome) |
6433a74c-55f5-4d44-add1-ff46d7f8edd7 | StampyAI/alignment-research-dataset/blogs | Blogs | Robin Hanson on Serious Futurism
 [Robin Hanson](http://hanson.gmu.edu/) is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets, and was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at [Overcoming Bias](http://www.overcomingbias.com/).
Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D., he researched artificial intelligence, Bayesian statistics and hypertext publishing at Lockheed, NASA and elsewhere.
**Luke Muehlhauser**: In an earlier blog post, I [wrote](http://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/) about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”
In 2009, you gave a talk called, “[How does society identify experts and when does it work?](http://vimeo.com/7336217)” Given the study that you’ve done and the expertise, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?
---
**Robin Hanson**: One set of issues has to do with existing institutions and what kinds of experts they tend to select, and what kinds of topics they tend to select. Another set of issues has to do with, if you have time and attention and interest, to what degree can you acquire expertise on any given subject, including AGI impacts, or tech forecasting more generally? A somewhat third subject which overlaps the first two is, if you did acquire such expertise, how would you convince anybody that you had it?
I think the easiest question to answer is the second one. Can you learn about this stuff?
I think some people have been brought up in a Popperian paradigm, where there’s a limited scientific method and there’s a limited range of topics it can apply to, and you turn the crank and if it can apply to those topics then you have science and you have truth, and you have something you’ve learned and otherwise everything else is opinion, equally undifferentiated opinion.
I think that’s completely wrong. That is, we have a wide range of intellectual methods out there and a wide range of social institutions that coordinate efforts.
Some of those methods work better than others, and then there are some topics on which progress is easier than others, just by the nature of the topic. But honestly, there are very few topics on which you can’t learn more if you just sit down and work at it.
Of course, that doesn’t mean you simply stare at a wall. Most topics are related to other topics on which people have learned some things. Whatever your topic is, figure out the related topics, learn about those related topics, learn as many different things as you can about what other people know about the related topic, and then start to intersect and connect them to your topic and work on it.
Just blood-sweat work can get you a lot of way in a very wide range of topics. Of course, just because you can learn about almost anything doesn’t mean you should. It doesn’t mean it’s worth the effort to society or yourself, and it doesn’t mean that there are, for any subject, easy ways to convince other people that you’ve learned something.
There are methods that you can use, where it becomes easier to convince people of things, and you might prefer to focus on those topics or methods where it is easier to convince people that you know something. A related issue is, how impressed are people about you knowing something?
Many of the existing institutions like academic institutions or media institutions that identify and credential people as experts on a variety of topics function primarily as ways to distinguish and label people as impressive.
People want to associate with, connect with, read about, and hear talks from people who are acknowledged as impressive as part of a network of experts who co-acknowledge each other as impressive. It's called status.
Some institutions are dominated by people who are mainly trying to acquire credentials as being impressive, so they can seem impressive, be hired for impressive jobs, have punditry positions that are reserved for impressive people, be on boards of directors, etc.
Also, there are standard procedures by which you would do things so people could say, “Yes, he knows the procedures,” and “Yes, you can follow them,” and “Yes, those are damn hard procedures. Anybody who can do that must be damn impressive.”
But there are things you can learn about that it’s harder to become credentialed as impressive at.
Generically, when you just pick any topic in the world because it’s interesting or important in some more basic way, it isn’t necessarily well-suited for being an impressiveness display.
What about futurism? For various aspects of the future, if you sit down and work at it, you can make progress. It’s not very well-suited for proving that you’ve made progress, because the future takes a while to get here. Of course, when it does get here, it will be too late for you to gain much advantage from finally having been proven as impressive on the subject.
I like to compare the future to history. History is also something we are uncertain about. We have to take a lot of little clues and put them together, to draw inferences about the past. We have a lot of very concrete artifacts that we focus on. We can at least demonstrate our impressive command of all those concrete artifacts, and their details, and locations, and their patterns. We don’t have something like that for the future. We will eventually, of course.
It’s much harder to demonstrate your command of the future. You can study the future somewhat by using complicated statistical techniques that we’ve applied to other subjects. That’s possible. It still doesn’t tend to demonstrate impressiveness in quite as dramatic a way as applying statistical techniques to something where you can get more data next week that verifies what you just showed in your statistical analysis.
I also think the future is where people project a lot of hopes. They’re just less willing to be neutral about it. People are more willing to say, “Yes, sad and terrible things happened in the past, but we get it. We once believed that our founding fathers were great people, and now we can see they were shits.” I guess that’s so, but for the future their hopes are a little more hard to knock off.
You can’t prove to them the future isn’t the future they hope it is. They’ve got a lot emotion wrapped up in it. Often it’s just easier to show you’re being an impressive academic on subjects that most people don’t have a very strong emotional commitment for, because that tends to get in the way.
---
**Luke**: I have some hunches about some types of scientific training that might give people different perspectives on how well we can do at medium- to long-term tech forecasting. I wanted to get your thoughts on whether you think my hunches match up with your experience.
One hunch is that, for example, if someone is raised in a Popperian paradigm, as opposed to maybe somebody younger who was raised in a Bayesian paradigm, the Popperian will have a strong falsificationist mindset, and because you don’t get to falsify hypotheses about the future until the future comes, these kinds of people will be more skeptical of the idea that you can learn things about the future.
Or in the risk analysis community, there’s a tradition there that’s being trained in the idea that there is risk, which is something that you can attach a probability to, and then there’s uncertainty, which is something that you don’t know enough about to even attach a probability to. A lot of the things that are decades away would fall into that latter category. Whereas for me, as a Bayesian, uncertainty just collapses into risk. Because of this, maybe I’m more willing to try to think hard about the future.
---
**Robin**: Those questions are somewhat framed from the point of view of an academic, or of an academic familiar with relatively technical kinds of skills. But say you’re running a business, and you have some competitors, and you’re trying to decide where will your field go in the next few years, or what kind of products will people like, or you’re running a social organization, and you’re trying to decide how to change your strategy.
Another example: you have some history, and you’re trying to go back and figure out what were your grandfathers doing, or just almost all random questions people might ask about the world. The Popperian stuff doesn’t help at all. It’s completely useless. If you just had any sort of habit of dealing with real problems in the world, you would have developed a tolerance for expecting things not to be provable or falsifiable.
You’d also develop an expectation that there are a range of probabilities for things. You’ll be uncertain, and you’ll have to deal with that. It’s only in a rarefied academic world where it would ever be plausible to deny uncertainty, or to insist on falsification, because that’s just almost never possible or relevant for the vast majority of questions you could be trying to ask.
---
**Luke**: Getting back to the question of how someone might develop expertise in, for example, AGI impacts or, let’s just say more broadly, long term tech forecasting…
What are your thoughts on some of the key training that someone would need to undergo? It could even be mental habits, memorizing certain fields of material where we’ve done a lot of stamp collecting in the scientific sense, etc. What’s relevant, do you think, for developing this kind of expertise?
---
**Robin**: We live in a world where people spend a substantial fraction of their career learning about stuff, and then it’s only after they’ve learned about a lot of things that they become the most productive about applying the stuff they’ve learned.
We’re just in a world where people have long life spans, and they’re competing with other people with long life spans. You have to expect that if you’re going to be the best at something you will have to spend a large fraction of your life devoted to it. Sorry, no shortcuts. That’s just a message people might not want to hear but that’s the way it goes.
You’ll also have to figure out where and how much to specialize. You can’t learn 20 fields as well as the best people can know them. Sorry. You just won’t have time. You’ll have to be some very unusual person in some way to get anywhere close to that.
You will have to decide what aspects of this future you want to focus on. There are many different aspects and they don’t all come together as a package, where if you learn about one you automatically learn about the others.
In tech forecasting, one category of questions is about what technologies are feasible, in principle. To have a sense for that kind of question, to answer it, you will need to have spent a substantial fraction of your life learning about the kinds of technologies you’re talking about. You’ll also want to have spent some substantial time looking at the histories of other technologies, and how they’ve progressed over time. The typical trajectory of technology and the typical trajectory of innovation, and where it tends to come from, and how many starts tend to be false starts, et cetera.
Another category of questions is about the social implications of the technology. Batteries, say. For that, it requires a whole different set of expertise. It can be informed by knowing what a battery is, and how it works, and who might make them, and when they’ll get how good. But in order to forecast social implications of batteries you’ll have to know about societies, and how they work, and what they’re made out of, and just a lot fewer people working on it. Then you’ll just have to, probably, learn more different fields. Maybe you could both learn a lot of social science and a lot of battery tech, but that’ll take a lot of time.
One of the main questions about studying anything, including the future, is how to specialize, how to make a division of labor. As usual, like in software, the key to division of labor is interfaces. You want to carve nature at its joints so the interfaces are as simple and robust and modular as possible.
You want to ask, where are there the fewest dependencies between different questions so that you can cut the expertise lines there. You say “you guys over here you work on the answer to this question, and you guys over here you take the answer to that question, and you go do something with it.” The smaller you can make that set of answers and questions, the more modularity and independence you can have, and the more you can separate the work.
Whenever you have different teams with an interface, they’ll each have to learn a fair bit about the interface itself, in order to be productive. They’ll have to know what the interface means and where it comes from. What parts of it are uncertain? What parts of it change fast? What parts of it are people serious about, and all sorts of things on an interface, and what do they tend to lie about? That’s part of the search.
One obvious, very plausible interface is between people who predict that particular devices will be available at particular points in time for particular costs, with particular capabilities, and other people who talk about what the hell that means for the rest of society. That seems to me a relatively tight interface, compared to the other interfaces we could choose here.
Of course, within technology you could divide it up. Somebody might know lithium batteries, and they just know lithium batteries really well, and they can talk about the future of lithium batteries.
But if graphene batteries are coming down the pike, they’re not going to understand that very well. Somebody else might specialize in graphene batteries, or just specialize in knowing the range of kinds of batteries available, and what might happen to them.
Somebody else might specialize more in the distribution of technological innovation. When you draw a chart of time and capacity over time, how often does that chart align or something else, and how misleading can it be when you see a short term thing, etc.? Just a sense for a range of different kinds of histories of technologies, and what sort of variety of paths we see. You could specialize in that.
But if you’re in an area of futurism, where there aren’t very many other people doing it, you should expect things to be more like a startup where you just have to be flexible. Not because being flexible is somehow intrinsically more productive in general. It’s because it’s required when there’s a bunch of things to be done and not very many people to do them. You will, by necessity, have to acquire a wider range of skills, a wider range of approaches, consider a wider range of possibilities, accept more often restructuring, more often changing goals.
I would love it if some day serious futurism were as detailed and specialized as history. Historians have broken up the field of history into lots of different areas and times of history, and a lot of different aspects. Each person can see a previous track record of people with careers in history, and what they focused on, and the set of open questions.
Then they can go into history and they can take a particular area and know what a career in history looks like, and know what other people in that area, what kind of skills they acquired, and what it took to become impressive. If the future became that specialized then that’s what it would be like for the future too.
It just happens not to be that way, at the moment, because there's just a lot fewer people working on it. Then you'll just have to, probably, learn more different fields than you otherwise would, learn more different skills than you otherwise would, accept more changing of your mind about what was important, and what were the key questions, just because there's not very many people doing serious futurism.
---
**Luke**: Robin, you used this term “serious futurism,” which happens to be the term I’ve been using for futurists who are trying to figure it out as opposed to meet the demand for morality tales about the future, or meet a demand for hype that fuels excited talk about, “Gee whiz, cool stuff from the future,” etc.
When I try to do serious futurism, most of the sources I encounter are not trying to meet the demand of figuring out what’s true about the future. I have to weed through a lot of material that’s meeting other demands, before I find anything that’s useful to my project of serious futurism.
I wonder, from your perspective, what are your thoughts on what one would do if you wanted to try to make serious futurism more common, get people excited about serious futurism, show them the value of the project, get them to invest in it so that there is more of a field, so there are more people doing all the different things that need to be done in order to figure out what the future is going to be like?
---
**Robin**: There is this world of people who think of themselves as serious futurists. I’ve had limited contact with them. Before we go into talking about them, I think it would help to just bracket this by noticing that there are many other intellectual areas which have had a similar problem, which I will phrase as widespread public interest, limited academic interest, and then an attempt to carve out a serious version of the field.
Two examples, relatively extreme examples, are sex and aliens. Both of these are subjects that people have long found fascinating to talk about. And for the most part, academics avoided both of them for a long time. Then some academics, at one point, tried to carve out an area of being serious about it.
In both of those cases, and, I think, in lots of other cases, you can see what the key problem is. As soon as you start to seriously engage the subject, if you don’t do it in a way that really clearly distinguishes how you’re doing it from how all those other people are doing it, you look like them. Then you acquire, in people’s minds, all the attributes of them which, of course, include not being serious or worthy of attention.
For example, for aliens the first big method you can use to distinguish yourself is to just search the sky for signals with huge radio telescopes. You’re not going to talk about all the other aspects of aliens. You’re just going to be hard-science radio telescope guy, searching the sky for signals. What distinguishes you from everybody else talking about aliens?
First of all, you have a radio telescope and they don’t. Second of all, you know how to do lots of complicated signal processing. Thirdly, there’s other people who do signal processing, and you’re just inheriting and applying their methods, so that’s a standard thing. It’s complicated to learn that, and you could have gone to the schools were you learned that stuff, and you can pass muster with those people when it comes to knowing how to build radio telescopes and do the signal processing. You’re just applying that to aliens. Hey. That’s just another subject.
With sex, the way they did it was they said, “Well, we’re going to put people in a room and they’ll be having sex. We’ll be watching them in all the standard ways we ever watch anybody doing anything, as a social scientist. We’re going to have the same sort of selection criteria, and methods of recording things, and recording variations, and things like that. We’re just going to do it in a very big, standardized way in order to show we’re different, we’re serious. All those other people like to talk about sex all the time. But they couldn’t be doing this. They don’t even know what the words we’re using mean. They’re not one of us. We’re not one of them.”
Of course, that means that you are, in some sense, throwing away all the data people had about sex, or at least setting it aside. You’re saying, “All these things people are claiming about sex, that’s all coming from these ordinary conversations about sex. That’s not good enough for us so we’re going to wipe the slate clean and just see what we can get with our own new data.”
In futurism, there are a bunch of futurists who are like inspirational speaker futurists, who just talk about all the cool stuff coming down the line and how society will change and that sort of thing. Then there’s the sort of academic futurists who see themselves as distinctly different from that.
Many of them focus on collecting data series about previous technologies or predictions, and then they project those data series forward and do statistical projection and prediction. They see themselves as serious academics, and one of the ways they distinguish themselves from these other futurists is that if they don’t have a data series for it then they’re not going to talk about it. It’s not in the realm of their kind of futurism.
For my work, I’m taking a risky strategy, which I don’t have any strong reason to expect to succeed, of simply having been a social science professional for a long time, taking a lot of detailed social science knowledge, that most people don’t know, and applying it to my particular scenario, using social science lingo and concepts, and basically saying, “Doesn’t this look different to you?”
Basically I’m saying, “A social scientist, when they look at this they will recognize that this is using professional, state of the art concepts and applying them to this particular subject.”
I’ve gotten, certainly, some people, reading my draft, to say, “You’re coming up with a lot more detail than I would have thought possible.” That’s sort of what I’m proud of. A lot of people look at a scenario like this and they kind of wave their hands and say, “It doesn’t look like we can figure out anything about that. That’s just too hard and complicated.” I’m going to come back and say, “Actually, it’s one of those things that is hard but not impossible, so it just takes more work.”
---
**Luke**: You mentioned this skepticism that many people have about our ability to figure out things, at all, about the far future, or figure them out in any amount of detail.
One quote that comes to mind is from Vaclav Smil, from his book [*Global Catastrophes and Trends*](http://www.amazon.com/Global-Catastrophes-Trends-Fifty-ebook/dp/B004GEC5LS/), where he writes specifically about AI:
> If the emergence of superior machines… is only a matter of time then all we can do is wait passively to be eliminated. If such developments are possible, we have no rational way to assess the risks. Is there a 75 percent or 0.75 percent chance of self replicating robots taking over the Earth by 2025…?
This is a very pessimistic, fatalistic view about our ability to forecast AI in particular. Actually, he says the same thing about nanotechnology. What do you think about this?
---
**Robin**: I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected on the students. That worldview was that physicists were great, of course, and physicists could, if they chose to, go out to all those other fields, that all those other people keep mucking up and not making progress on, and they could make a lot faster progress, if progress was possible, but they don’t really want to, because that stuff isn’t nearly as interesting as physics is, so they are staying in physics and making progress there.
For many subjects, they don’t think it’s just possible to learn anything, to know anything. For physicists, the usual attitude towards social science was basically there’s no such thing as social science; there can’t be such a thing as social science.
Surely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far.
It’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there.
You don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods.
It’s very hard to say that something can’t be learned. It’s much easier to say that you haven’t figured anything out or, perhaps, that a certain kind of method runs out there. It’s easier to imagine trying all the different paths you can use in a certain method, even though that’s pretty hard too.
But, to be able to say that nobody can learn anything about this, in order to say that with some authority, you have to have some understanding of all the methods out there, and what they can do, and have tried it for a while.
Academics tend to know their particular field very well and its methods, and then other fields kind of fade away and blur together. If you're a physicist, the difference between physics and chemistry is overwhelmingly important. The difference between sociology and economics seems like terminology or something, and vice-versa. If you're an economist, the difference between economics and sociology seems overwhelming, and the difference between physics and chemistry seems like picky terminology, which just means that most people don't know very many methods. They don't know very many of all the different things you can do.
As one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. Almost all academic fields where there’s lots of articles and stuff published, they know a lot.
---
**Luke**: Thanks, Robin!
The post [Robin Hanson on Serious Futurism](https://intelligence.org/2013/11/01/robin-hanson/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
5cc53e63-97b7-4eda-a22f-de584ce68a94 | trentmkelly/LessWrong-43k | LessWrong | Overcoming the MWC
I hope with this post to share some strategies that have worked for me for dealing with the Madonna Whore Complex.
A lot of this post might seem like obvious things that everyone knows, but it is still worth stating them clearly.
I plan to write more posts detailing my understanding of the dating world, attraction between the genders, etc.
What is MWC?
The Madonna Whore Complex was coined by Sigmund Freud; he used it to classify men who were unable to feel sexual arousal towards their committed, nurturing partner, whom they see as a saintly Madonna: pure, virginal, and unblemished.
In our current culture, there is a lot of stigma on male sexuality. We grow up seeing it as inherently predatory and it leaves us with a lot of anxiety. This anxiety breeds a sex negative prudish mindset, where men feel like they are profaning the object of their affection by also sensually desiring them.
Objectifying someone is considered to be a brutal, undignified act. How can one reduce the mother of their kids, their wife, to just an object?
So in some men this dampens their sexual attraction towards their partner in a secure, committed relationship; further, it might get redirected towards "whores": the kind of women they might think are debased, hedonistic, and therefore worthy of being ravaged sexually.
Others hold themselves to a high moral standard, or their lust is not strong enough, such that they turn almost asexual, refusing to profane any woman like that.
Moving forward
We can't go back to the past and avoid having had that overprotective mother, or un-consume all that media which failed to instil a positive model of masculine sexuality while shaming its aggressive aspects.
But to heal and fix this dichotomy, I will share some paths that have helped.
Inaccurate axioms
Making the fundamental assumptions of your world model explicit takes effort, and is one way to challenge them.
It really helps to internalize the truth that men and women are so much |
c4e16ebb-e615-4fc1-bd62-3f9ea903c533 | trentmkelly/LessWrong-43k | LessWrong | The Outside View isn't magic
The planning fallacy is an almost perfect example of the strength of using the outside view. When asked to predict the time taken for a project that they are involved in, people tend to underestimate the time needed (in fact, they tend to predict as if the question were how long things would take if everything went perfectly).
Simply telling people about the planning fallacy doesn't seem to make it go away. So the outside view argument is that you need to put your project into the "reference class" of other projects, and expect time overruns as compared to your usual, "inside view" estimates (which focus on the details you know about the project).
So, for the outside view, what is the best way of estimating the time of a project? Well, to find the right reference class for it: the right category of projects to compare it with. You can compare the project with others that have similar features - number of people, budget, objective desired, incentive structure, inside view estimate of time taken etc... - and then derive a time estimate for the project that way.
That's the outside view. But to me, it looks a lot like... induction. In fact, it looks a lot like the elements of a linear (or non-linear) regression. We can put those features (at least the quantifiable ones) into a linear regression with a lot of data about projects, shake it all about, and come up with regression coefficients.
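As a toy sketch of what that might look like in practice (the data, and the choice of a single feature, are invented for illustration - a real reference class model would use many quantifiable features and far more historical projects):

```python
# Toy illustration of "the outside view as regression".
# All numbers are made up; a real model would be fit on a large dataset
# of completed projects and would include features like team size,
# budget, incentive structure, etc.
import numpy as np
from sklearn.linear_model import LinearRegression

inside_view_estimate = np.array([[4], [9], [14], [6], [24]])  # months, predicted at the start
actual_duration = np.array([7, 15, 26, 9, 41])                # months, what actually happened

model = LinearRegression().fit(inside_view_estimate, actual_duration)

# A new project whose inside view says "12 months":
print(model.predict([[12]]))           # roughly 20 months with this toy data
print(model.coef_, model.intercept_)   # overrun factor of roughly 1.75x
```

The fitted coefficients are doing the work that the reference class does informally: they encode how badly projects with similar features have overrun their inside view estimates in the past.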
At that point, we are left with a decent project timeline prediction model, and another example of human bias. The fact that humans often perform badly in prediction tasks is not exactly new - see for instance my short review on the academic research on expertise.
So what exactly is the outside view doing in all this?
The role of the outside view: incomplete models and human bias
The main use of the outside view, for humans, seems to be to point out either an incompleteness in the model or a human bias. The planning fallacy has both of these: if you did a linear regression comparing y |
d8bfe3d7-adfe-408e-8c4b-4b6eb02d4bf2 | StampyAI/alignment-research-dataset/special_docs | Other | Global challenges: 12 risks that threaten human civilization
Glob Policy. 2020 May; 11(3): 271–282. Published online 2020 Jan 24. doi: [10.1111/1758-5899.12786](//doi.org/10.1111%2F1758-5899.12786). PMCID: PMC7228299. PMID: [32427180](https://pubmed.ncbi.nlm.nih.gov/32427180)

Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter
====================================================================================================
Owen Cotton‐Barratt, Max Daniel, and Anders Sandberg

University of Oxford

Copyright © 2020 The Authors. *Global Policy* published by Durham University and John Wiley & Sons Ltd. This is an open access article under the terms of the [http://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/) License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

Associated Data
---------------

Data sharing is not applicable to this article as no new data were created or analysed.
Abstract
--------
We look at classifying extinction risks in three different ways, which affect how we can intervene to reduce risk. First, how does it start causing damage? Second, how does it reach the scale of a global catastrophe? Third, how does it reach everyone? In all of these three phases there is a defence layer that blocks most risks: First, we can prevent catastrophes from occurring. Second, we can respond to catastrophes before they reach a global scale. Third, humanity is resilient against extinction even in the face of global catastrophes. The largest probability of extinction is posed when all of these defences are weak, that is, by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against. We find that it’s usually best to invest significantly into strengthening all three defence layers. We also suggest ways to do so tailored to the classes of risk we identify. Lastly, we discuss the importance of underlying risk factors – events or structural conditions that may weaken the defence layers even without posing a risk of immediate extinction themselves.
### Policy implications
* We can usually best reduce extinction risk by splitting our budget between all defence layers.
* We should include measures that reduce whole classes of risks, such as research uncovering currently unseen risk. We should also address risk factors that would not cause extinction themselves but weaken our defences, for example, bad global governance.
* Future research should identify synergies between reducing extinction and other risks. For example, research on climate change adaptation and mitigation should assess how we can best preserve our ability to prevent, respond to, and be resilient against extinction risks.
Our framework for discussing extinction risks
---------------------------------------------
Human extinction would be a tragedy. For many moral views it would be far worse than merely the deaths entailed, because it would curtail our potential by wiping out all future generations and all value they could have produced (Bostrom, [2013](#gpol12786-bib-0013); Parfit, [1984](#gpol12786-bib-0045); Rees, [2003](#gpol12786-bib-0047), [2018](#gpol12786-bib-0048)).
Human extinction is also possible, even this century. Both the total risk of extinction by 2100 and the probabilities of specific potential causes have been estimated using a variety of methods including trend extrapolation, mathematical modelling, and expert elicitation; see Rowe and Beard ([2018](#gpol12786-bib-0050)) for a review, as well as Tonn and Stiefel ([2013](#gpol12786-bib-0059)) for methodological recommendations. For example, Pamlin and Armstrong ([2015](#gpol12786-bib-0044)) give probabilities between 0.00003% and 5% for different scenarios that could eventually cause irreversible civilisational collapse.
To guide research and policymaking in these areas, it may be important to understand what kind of processes could lead to our premature extinction. People have considered and studied possibilities such as asteroid impacts (Matheny, [2007](#gpol12786-bib-0038)), nuclear war (Turco et al., [1983](#gpol12786-bib-0065)), and engineered pandemics (Millett and Snyder‐Beattie, [2017](#gpol12786-bib-0039)). In this article we will consider three different ways of classifying such risks.
The motivating question behind the classifications we present is ‘How might this affect policy towards these risks?’ We proceed by identifying three phases in an extinction process at which people may intervene. For each phase, we ask how people could stop the process, because the different failure modes may be best addressed in different ways. For this reason we do not try to classify risks by the kind of natural process they represent, or which life support system they undermine (unlike e.g. Avin et al., [2018](#gpol12786-bib-0004)).
### Three broad defence layers against human extinction
An event causing human extinction would be unprecedented, so is likely to have some feature or combination of features that is without precedent in human history. Now, we see events with *some* unprecedented property all of the time – whether they are natural, accidental, or deliberate – and many of these will be bad for people. However, a large majority of those pose essentially zero risk of causing our extinction.
Why is it that some damaging processes pose risks of extinction, but many do not? By understanding the key differences we may be better placed to identify new risks and to form risk management strategies that attack their causes as well as other factors behind their destructive potential.
We suggest that much of the difference can usefully be explained by three broad defence layers (Figure 1):
1. First layer: prevention. Processes – natural or human – which help people are liable to be recognised and scaled up (barring defeaters such as coordination problems). In contrast processes which harm people tend to be avoided and dissuaded. In order to be bad for significant numbers of people, a process must either require minimal assistance from people, or otherwise bypass this avoidance mechanism.
2. Second layer: response.[1](#gpol12786-note-1001) If a process is recognised to be causing great harm (and perhaps pose a risk of extinction), people may cooperate to reduce or mitigate its impact. In order to cause large global damage, it must impede this response, or have enough momentum that there is nothing people can do.
3. Third layer: resilience. People are scattered widely over the planet. Some are isolated from external contact for months at a time, or have several years’ worth of stored food. Even if a process manages to kill most of humanity, a surviving few might be able to rebuild. In order to cause human extinction, a catastrophe must kill everybody, or prevent a long‐term recovery.
Figure 1. Three broad defence layers.
The boundaries between these different types of risk‐reducing activity aren’t crisp, and one activity may help at multiple stages. But it seems that often activities will help primarily at one stage. We characterise *prevention* as reducing the likelihood that catastrophe strikes at all; it is necessarily done in advance. We characterise *response* as reducing the likelihood that a catastrophe becomes a severe global catastrophe (at the level which might threaten the future of civilisation). This includes reducing the impact of the catastrophe after it is causing obvious and significant damage, but the response layer might also be bolstered by mitigation work which is done in advance. Finally, we characterise *resilience* as reducing the likelihood that a severe global catastrophe eventually causes human extinction.[2](#gpol12786-note-1002)
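As a rough quantitative reading of this framework (our illustration, not a calculation from the article): if the three layers fail approximately independently, the probability that a given hazard causes extinction is roughly the product of the per‐layer failure probabilities, which is why the risk concentrates in hazards that are hard to prevent, hard to respond to, and hard to be resilient against.

```python
# Illustrative sketch only; the probabilities are invented, and treating the
# layers as failing independently is a simplification.
def extinction_probability(p_not_prevented, p_response_fails, p_resilience_fails):
    """Chance that a hazard breaks through all three defence layers."""
    return p_not_prevented * p_response_fails * p_resilience_fails

# A hazard that is hard to prevent but easy to respond to and survive:
print(extinction_probability(0.9, 0.05, 0.01))  # about 0.00045
# A hazard against which every layer is weak:
print(extinction_probability(0.9, 0.5, 0.5))    # about 0.225, dominating the total
```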
Successfully avoiding extinction could happen at each of these defence layers. In the rest of the article we explore two consequences of this.
First, we can classify damaging processes by the way in which we could stop them at the defence layers. In section [2](#gpol12786-sec-0009), we’ll look at a classification of risks by their origin: understanding different ways in which we could succeed at the prevention layer. In section [3](#gpol12786-sec-0016), we’ll look at the features which may allow us to block them at the response layer. In section 4, we’ll classify risks by the way in which we could stop them from finishing everybody. We conclude each section by policy implications.
Each risk will thus belong to three classes – one per defence layer. For example, consider a terrorist group releasing an engineered virus that grows into a pandemic and eventually kills everyone. In our classification, we’ll call this prospect a *malicious risk* with respect to its origin; a *cascading risk* with respect to its scaling mechanism of becoming a global catastrophe; and a *vector risk* in the last phase we’ve called endgame. We’ll present more examples at the end of section [4](#gpol12786-sec-0021) and in Table 1.
### Table 1
Applying our classification to five examples. Note that each risk belongs to three classes, one for each defence layer
| Classification by | Origin | Scaling | Endgame |
| --- | --- | --- | --- |
| Associated defence layer | Prevention | Response | Resilience |
| Terrorists releasing engineered pandemic | Malicious | Cascading | Vector |
| Asteroid strike causing impact winter | Natural | Large | Habitat |
| False alarm triggering nuclear war with ensuing nuclear winter | Accident | Leverage | Habitat |
| Conventional proxy war escalating to nuclear war causing irreversible civilisational collapse | Conflict | Leverage | Capability |
| Unforeseen rapid learning producing an AI agent that kills humans to preempt interference with its objectives | Unseen | Leverage | Agency |
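As a minimal sketch (ours, not the authors') of how this three‐way classification could be encoded as a data structure, using the category names from Table 1 and the sections below:

```python
# Minimal encoding of the three-way risk classification (illustrative sketch).
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):        # how the prevention layer is bypassed
    NATURAL = "natural"
    UNSEEN = "unseen"
    ACCIDENT = "accident"
    MALICIOUS = "malicious"
    LATENT = "latent"
    COMMONS = "commons"
    CONFLICT = "conflict"

class Scaling(Enum):       # how the response layer is bypassed
    LARGE = "large"
    LEVERAGE = "leverage"
    CASCADING = "cascading"

class Endgame(Enum):       # how the resilience layer is bypassed
    HABITAT = "habitat"
    VECTOR = "vector"
    CAPABILITY = "capability"
    AGENCY = "agency"

@dataclass
class Risk:
    name: str
    origin: Origin
    scaling: Scaling
    endgame: Endgame

# Two of the examples from Table 1:
engineered_pandemic = Risk("Terrorists releasing engineered pandemic",
                           Origin.MALICIOUS, Scaling.CASCADING, Endgame.VECTOR)
asteroid_winter = Risk("Asteroid strike causing impact winter",
                       Origin.NATURAL, Scaling.LARGE, Endgame.HABITAT)
```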
Second, we present implications of our framework distinguishing three layers. In section 5, we discuss how to allocate resources between the three defence layers, concluding that in most cases all of prevention, response, and resilience should receive substantial funding and attention. In section [6](#gpol12786-sec-0028), we highlight that risk management, in addition to monitoring specific hazards, must protect its defence layers by fostering favourable structural conditions such as good global governance.
### Related work
Avin et al. ([2018](#gpol12786-bib-0004)) have recently presented a classification of risks to the lives of a significant proportion of the human population. They classify such risks based on ‘critical systems affected, global spread mechanism, and prevention and mitigation failure’. Our framework differs from theirs in two major ways. First, with extinction risks we focus on a more narrow type of risk. This allows us, in section [4](#gpol12786-sec-0021), to discuss what might stop global catastrophes from causing extinction, a question specific to extinction risks. Second, even where the classifications cover the same temporal phase of a global catastrophe, they are motivated by different questions. Avin et al. attempt a comprehensive survey of the natural, technological, and social systems that may be affected by a disaster, for example listing 45 critical systems in their second section. By contrast, we ask why a risk might break through a defence layer, and look for answers that abstract away from the specific system affected. For instance, in section [2](#gpol12786-sec-0009), we’ll distinguish between unforeseen, expected but unintended, and intended harms.
We believe the two classifications complement each other well. Avin and colleagues’ (2018) discussion of prevention and response failures is congenial to our section 6 on underlying risk factors. Their extensive catalogues of critical systems, spread mechanisms and prevention failures highlight the wide range of relevant scientific disciplines and stakeholders, and can help identify fault points relevant to particularly many risks. Conversely, we hope that our coarser typology can guide the search for additional critical systems and spread mechanisms. We believe that our classification also usefully highlights different ways of protecting the same systems. For example, the risks from natural and engineered pandemics might best be reduced by different policy levers even if both affected the same critical systems and spread by the same mechanisms. Lastly, our classification can help identify risk management strategies that would reduce whole clusters of risks. For example, restricting access to dangerous information may prevent many risks from malicious groups, irrespective of the critical system that would be targeted.
Our classification also overlaps with the one by Liu et al. ([2018](#gpol12786-bib-0035)), for example when they distinguish intended from other vulnerabilities or emphasise the importance of resilience. While the classifications otherwise differ, we believe ours contributes to their goal to dig ‘beyond hazards’ and surface a variety of intervention points.
Both the risks discussed by Avin et al. ([2018](#gpol12786-bib-0004)) and extinction risks by definition involve risks of a massive loss of lives. This sets them apart from other risks where the adverse outcome would also have global scale but could be limited to less severe damage such as economic losses. Such risks are being studied by a growing literature on ‘global systemic risk’ (Centeno et al., [2015](#gpol12786-bib-0018)). Rather than reviewing that literature here, we’ll point out throughout the article where we believe it contains useful lessons for the study of extinction risks.
Finally, it’s worth keeping in mind that extinction is not the only outcome that would permanently curtail humanity’s potential; see Bostrom ([2013](#gpol12786-bib-0013)) for other ways in which this could happen. A classification of these other *existential risks* is beyond the scope of this article, as is a more comprehensive survey of the large literature on global risks (e.g. Baum and Barrett, [2018](#gpol12786-bib-0008); Baum and Handoh, [2014](#gpol12786-bib-0010); Bostrom and Ćirković [2008](#gpol12786-bib-0015); Posner, [2004](#gpol12786-bib-0046)).
Classification by origin: types of prevention failures
------------------------------------------------------
Avoiding catastrophe altogether is the most desirable outcome. The origin of a risk determines how it passes through the prevention layer, and hence the kind of steps society can take to strengthen prevention (Figure 2).
Figure 2. Classification of risks by origin.
### Natural risks
The simplest explanation for a risk to bypass our background prevention of harm‐creating activities is if the origin is outside of human control: a *natural risk*. Examples include a large enough asteroid striking the earth, or a naturally occurring but particularly deadly pandemic.
We sometimes can take steps to avoid natural risks. For example, we may be able to develop methods for deflecting asteroids. Preventing natural risks generally requires proactive understanding and perhaps detection, for instance scanning for asteroids on earth‐intersecting orbits. Such risks share important properties with anthropogenic risks, as any explanation for how they might materialise must include an explanation of why the human‐controlled prevention layer failed.
### Anthropogenic risks
All non‐natural risks are in some sense *anthropogenic*, but we can classify them further. Some may have a localised origin, needing relatively small numbers of people to trigger them. Others require large‐scale and widespread activity. In each case there are at least a couple of ways that it could get through the prevention layer.
Note that there is a spectrum in terms of the number of people who are needed to produce different risks, so the division between ‘few people’ and ‘many people’ is not crisp. We might think of the boundary as being around one hundred thousand or one million people, and things close to this boundary will have properties of both classes. However, it appears to us that for many of the plausible risks the number required is either much smaller (e.g., an individual or a cohesive group of people such as a company or military unit) or much larger than this (e.g., the population of a major power or even the whole world), so the qualitative distinction between ‘few people’ and ‘many people’ (and the different implications of these for responding) seems to us a useful one.
Also potentially relevant are the knowledge and intentions of the people conducting the risky activity. They may be ignorant of or aware of the possible harm; if the latter, they may or may not intend it.[3](#gpol12786-note-1003)
### Anthropogenic risks from small groups
The case of a risk where relatively few people are involved in triggering it and they are unaware of the potential harm is an *unseen risk*.[4](#gpol12786-note-1004) This is likely to involve a new kind of activity; it is most plausible with the development of unprecedented technologies (GPP, [2015](#gpol12786-bib-0027)), such as perhaps advanced artificial intelligence (Bostrom, [2014](#gpol12786-bib-0014)), nanotechnology (Auplat, [2012](#gpol12786-bib-0002), [2013](#gpol12786-bib-0003); Umbrello and Baum, [2018](#gpol12786-bib-0066)), or high‐energy physics experiments (Ord et al., [2010](#gpol12786-bib-0042)).
The case of a localised unintentional trigger which was foreseen as a possibility (and the dynamics somewhat understood) is an *accident risk*. This could include a nuclear war starting because of a fault in a system or human error, or the escape of an engineered pathogen from an experiment despite safety precautions.
If the harm was known and intended, we have a *malicious risk*. This is a scenario where a small group of people wants to do widespread damage;[5](#gpol12786-note-1005) see Torres ([2016](#gpol12786-bib-0061), [2018b](#gpol12786-bib-0063)) for a typology and examples. Malicious risks tend to be extreme forms of terrorism, where there is a threat which could cause global damage.
### Anthropogenic risks from large groups
Turning to scenarios where many people are involved, we ask why so many would pursue an activity which causes global damage. Perhaps they do not know about the damage. This is a *latent risk*. For them to remain ignorant for long enough, it is likely that the damage is caused in an indirect or delayed manner. We have seen latent risks realised before, but not ones that threatened extinction. For example, asbestos was used in a widespread manner before it was realised that it caused health problems. And it was many decades after we scaled up the burning of fossil fuels that we realised this contributed to climate change. If our climate turns out to be more sensitive than expected (Nordhaus, [2011](#gpol12786-bib-0040); Wagner and Weitzman, [2015](#gpol12786-bib-0068); Weitzman, [2009](#gpol12786-bib-0070)), and continued fossil fuel use triggers a truly catastrophic shift in climate, then this could be a latent risk today.
In some cases people may be aware of the damage and engage in the activity anyway. This failure to internalise negative externalities is typified by ‘tragedy of the commons’ scenarios, so we can call this a *commons risk*. For example, failure to act together to tackle global warming may be a commons risk (but lack of understanding of the dynamics causes a blur with latent risk). In general, commons risks require some coordination failure. They are therefore more likely if features of the risk inhibit coordination; see for example Barrett ([2016](#gpol12786-bib-0005)) and Sandler ([2016](#gpol12786-bib-0052)) for a game‐theoretic analysis of such features.
Finally, there are cases where a large number of people engage in an activity to cause deliberate harm: *conflict risk*. This could include wars and genocides. Wars share some features with commons risk: there are solutions which are better for everybody but are not reached. In most conflicts, actors are intentionally causing harm, but only as an instrumental goal.
### Risk creators and risk reducers
In the above we classify risks according to who creates the risk and their state of knowledge. We have done this because if we want to prevent risk it will often be most effective to go to the source. But we could also ask who is in a position to take actions to avoid the risk. In many cases those creating it have most leverage, but in principle almost any actor could take steps to reduce the occurrence rate. If risk prevention is underprovided, this is likely to be a tragedy of the commons scenario, and share characteristics with commons risk.
From a moral and legal standpoint intentionality often matters. The possibility of being found culpable is an important incentive for avoiding risk‐causing activities and part of risk management in most societies. If creating or hiding potential catastrophic risks is made more blameworthy, prevention will likely be more effective. Unfortunately it also often motivates concealment that can create or aggravate risk; see Chernov and Sornette ([2015](#gpol12786-bib-0019)) for case studies of how this misincentive can weaken prevention and response. This shows the importance of making accountability effectively enforceable.
### Policy implications for preventing extinction risk
* To be able to prevent *natural risks*, we need research aimed at identifying potential hazards, understanding their dynamics, and eventually developing ways to reduce their rate of occurrence.
* To avoid *unseen* and *latent risks*, we can promote norms such as appropriate risk management principles at institutions that engage in plausibly risky activities; note that there is an extensive literature on rivalling risk management principles (e.g. Foster et al., [2000](#gpol12786-bib-0025); O'Riordan and Cameron, [1994](#gpol12786-bib-0043); Sandin, [1999](#gpol12786-bib-0051); Sunstein, [2005](#gpol12786-bib-0053); Wiener, [2011](#gpol12786-bib-0071)), especially in the face of catastrophic risks (Baum, [2015](#gpol12786-bib-0006); Bostrom, [2013](#gpol12786-bib-0013); Buchholz and Schymura, [2012](#gpol12786-bib-0017); Sunstein, [2007](#gpol12786-bib-0054), [2009](#gpol12786-bib-0055); Tonn, [2009](#gpol12786-bib-0057); Tonn and Stiefel, [2014](#gpol12786-bib-0060)) – advocating for any particular principle is beyond the scope of this article. See also Jebari ([2015](#gpol12786-bib-0031)) for a discussion of how heuristics from engineering safety may help prevent unseen, latent, and accident risks. Regular horizon scanning may identify previously unknown risks, enabling us to develop targeted prevention measures. Organisations must be set up in such a way that warnings of newly discovered risks reach decision‐makers (see Clarke and Eddy, [2017](#gpol12786-bib-0020), for case studies where this failed).
* *Accidents* may be prevented by general safety norms that also help reduce unseen risk. In addition, building on our understanding of specific accident scenarios, we can design failsafe systems or follow operational routines that minimise accident risk. In some cases, we may want to eschew an accident‐prone technology altogether in favour of safer alternatives. Accident prevention may benefit from research on high reliability organisations (Roberts and Bea, [2001](#gpol12786-bib-0049)) and lessons learnt from historical accidents. Where effective prevention measures have been identified, it may be beneficial to codify them through norms and law at the national and international levels. Alternatively, if we can internalise the expected damages of accidents through mechanisms such as insurance, we can leverage market incentives.[6](#gpol12786-note-1006)
* Solving the coordination problems at the heart of *commons* and *conflict risks* is sometimes possible by fostering national or international cooperation, be it through building dedicated institutions or through establishing beneficial customs.[7](#gpol12786-note-1007) One idea is to give a stronger political voice to future generations (Jones et al., [2018](#gpol12786-bib-0033); Tonn, [1991](#gpol12786-bib-0056), [2018](#gpol12786-bib-0058)).
* Lastly, we can prevent *malicious risks* by combating extremism. Technical (Trask, [2017](#gpol12786-bib-0064)) as well as institutional (Lewis, [2018](#gpol12786-bib-0034)) innovations may help with governance challenges in this area, a survey of which is beyond the scope of this article.
* Note that our classification by origin is aimed at identifying policies that would – if successfully implemented – reduce a broad class of risks. Developing policy solutions is, however, just one step toward effective prevention. We must then also actually implement them – which may not happen due to, for example, free‐riding incentives. Our classification does not speak to this implementation step. Avin et al. ([2018](#gpol12786-bib-0004)) congenially address just this challenge in their classification of prevention and mitigation failures.
Classification by scaling mechanism: types of response failure
--------------------------------------------------------------
For a catastrophe to become a global catastrophe, it must eventually have large effects despite our response aimed at stopping it. To understand how this can happen, it’s useful to look at the time when we could first react. Effects must then either already be large or scale up by a large factor afterwards (Figure 3).
Figure 3. Classification of risks by scaling mechanism.
If the initial effects are large, we will simply say that the risk is \*large\*. If not, we can look at the scaling process. If massive scaling happens in a small number of steps, we say there is \*leverage\* in play. If scaling in all steps is moderate, there must be quite a lot of such steps – in this case we say that the risk is \*cascading.\*
### Large risks
Paradigm examples of catastrophes of an immediately global scale are large sudden‐onset natural disasters such as asteroid strikes. Since we cannot respond to them at a smaller‐scale stage, mitigation measures taken in advance (part of the second defence layer, as they would reduce damage after it has started) and the other defence layers of prevention and resilience are particularly important for reducing such risks. Prevention and mitigation may benefit from detecting a threat – say, an asteroid – early, but in our classification this is different from responding after there has been some actual small‐scale damage.
### Leverage risks
Leverage points for rapid one‐step scaling can be located in natural systems, for example if the extinction of a key species caused an ecosystem to collapse. However, it seems to us that leverage points are more common in technological or social systems that were designed to concentrate power or control.
Risks of both natural and anthropogenic origin may interact with such systems. For instance, a tsunami triggered the 2011 disaster at the Fukushima Daiichi nuclear power plant. Anthropogenic examples include nuclear war (possible to trigger by a few individuals linked to a larger chain of command and control) or attacks on weak points in key global infrastructure.
Responding to leverage risks is challenging because there are only a few opportunities to intervene. On the other hand, blocking even one step of leveraged growth would be highly impactful. This suggests that response measures may be worthwhile if they can be targeted at the leverage points.
### Cascading risks
With the major exception of escalating conflicts, cascading risks normally cascade in a way which does not rely on humans deciding to further the effects. A typical example is the self‐propagating growth of an epidemic. As automation becomes more widespread, there will be larger systems without humans in the loop, and thus perhaps more opportunities for different kinds of cascading risk.
Since cascading risks are those which have a substantial amount of growing effects after we’re able to interact with them, it seems likely that they will typically give us more opportunities to respond, and that response will therefore be an important component of risk reduction. For risks which cascade exponentially (such as epidemics), an earlier response may be much more effective than a later one. Reducing the rate of propagation is also effective if there exist other interventions that can eventually stop or revert the damage.
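As a minimal illustration of why response timing matters so much for exponentially cascading risks, consider the following sketch; the growth factor and delays are invented numbers for illustration only, not estimates from this article.

```python
# Illustrative only: a cascade that doubles each time step until a response halts further growth.
def final_size(initial=1.0, growth=2.0, response_delay=5):
    """Size of the cascade at the moment the response finally stops it from growing."""
    return initial * growth ** response_delay

for delay in (3, 5, 10):
    print(delay, final_size(response_delay=delay))
# 3 -> 8.0, 5 -> 32.0, 10 -> 1024.0: each extra step of delay multiplies the damage
# that must then be contained, which is why earlier detection and response matter.
```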
However, there are a few secondary risk‐enabling properties that can weaken the response layer and therefore help damage cascade to a global catastrophe which we could have stopped. For example, a cascading risk may:
\* Impede cooperation: by preventing a coordinated response, the likelihood of a global catastrophe is increased. Cooperation is harder when communication is limited, when it is hard to observe defection, or when there is decreased trust.
\* Not obviously present a risk: the longer a cascading risk is under‐recognised, the more it can develop before any real response. For example, long‐incubation pathogens can spread further before their hazard becomes apparent.
\* Be on extreme timescales: if the risk presents and cascades very fast, there is little opportunity for any response. Johnson et al. ([2012](#gpol12786-bib-0032)) analyse such ‘ultrafast’ events, using rapid changes in stock prices driven by trading algorithms as an example (Braun et al., [2018](#gpol12786-bib-0016), however, find that most of these ‘mini flash crashes’ are dominated by a single large order rather than being the result of a cascade). Note, however, that which timescales count as relevantly ‘fast’ depends on our response capabilities – technological and institutional progress may result in faster‐cascading threats but also in opportunities to respond faster. On the other hand, people may be bad at addressing problems that won’t manifest for generations, as is the case for some impacts of global warming.
### Policy implications for responding to extinction risk
\* By their nature, we cannot respond to \*large\* risks before they become a global catastrophe. Of particular importance for such risks are therefore: mitigation that can be done in advance, and the defence layers of prevention and resilience.
\* \*Leverage\* risks provide us with the opportunity of a leveraged response: we can identify leverage points in advance and target our responses at them.
\* While the details of responses to \*cascading\* risks must be tailored to each specific case, we can highlight three general recommendations. First, detect damage early, when a catastrophe is still easy to contain. Second, reduce the time lag between detection and response, for example, by continuously maintaining response capabilities and having rapidly executable contingency plans in place. Third, ensure that planned responses won’t be stymied by the cascading process itself – for example, don’t store contingency plans for how to respond to a power outage on computers.[8](#gpol12786-note-1008)
Classification by endgame: types of resilience failure
------------------------------------------------------
For a global catastrophe to cause human extinction, it must in the end stop the continued survival of the species. This could be \*direct\*: killing everyone;[9](#gpol12786-note-1009) or \*indirect\*: removing our ability to continue flourishing over a longer period (Figure 4).
Figure 4. Classification of risks by endgame.
### Direct risks
In order to kill everyone, the catastrophe must reach everyone. We can further classify direct risks by how they reach everyone.
The simplest way this could happen is if it is everywhere that people are or could plausibly be: a \*ubiquity risk\*. If the entire planet is struck by a deadly gamma ray burst, or enough of a deadly toxin is dispersed through the atmosphere, this could plausibly kill everyone.
If it doesn’t reach everywhere people might be, a direct risk must at least reach everywhere that people in fact are. This might occur when people have carried it along with them: a \*vector risk\*. This includes risk from pandemics (if they are sufficiently deadly and have a long enough incubation period that it is spread everywhere) or perhaps risks which are spread by memes (Dawkins, [1976](#gpol12786-bib-0021)), or which come from some technological artefacts which we carry everywhere. Note that to directly cause extinction, a vector would need to impact hard‐to‐reach populations including ‘disaster shelters, people working on submarines, and isolated peoples’ (Beckstead, [2015a](#gpol12786-bib-0011), p. 36).
If not ubiquitous and not carried with the people, we would have to be extraordinarily unlucky for it to reach everyone by chance. Setting this aside as too unlikely, we are left with \*agency risk\*: deliberate actors trying to reach everybody. The actors could be humans or nonhuman intelligence (perhaps machine intelligence or even aliens). Agency risk probably means someone deliberately trying to ensure nobody survives, which may make it easier to get through the resilience layer by allowing anticipation of and response to possible survival plans. In principle agency risk includes cases where someone is deliberately trying to reach everyone, and only by accident does so in a way that kills them.
### Indirect risks
If the risk threatens extinction without killing everyone, it must reduce our long‐term ability to survive as a species. This could include a very broad range of effects, but we can break them up according to the kind of ability it impedes.
\*Habitat risks\* make long‐term survival impossible by altering or destroying the environment we live in so that it cannot easily support human life. For example a large enough asteroid impact might throw up dust which could prevent us from growing food for many years – if this was long enough, it could lead to human extinction. Alternatively an environmental change which lowered the average number of viable offspring to below replacement rates could pose a habitat risk.
\*Capability risks\* knock us back in a way that permanently removes an important societal capability, leading in the long run to extinction. One example might be moving to a social structure which precluded the ability to adapt to new circumstances.
We are gesturing towards a distinction between habitat risks and capability risks, rather than drawing a sharp line. Habitat risks work through damage to an external environment, whereas capability risks work through damage to more internal social systems (or even biological or psychological factors). Capability risks are also even less direct than habitat risks, perhaps taking hundreds or thousands of years to lead to extinction. Indeed there is not a clear line between capability risks and events which damage our capabilities but are not extinction risks (cf. sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005) and [Underlying risk factors: risks to the defence layers](#gpol12786-sec-0024)). Nonetheless when considering risks of human extinction it may be important to account for events which could cause the loss of fragile but important capabilities.
An important type of capability risk may be civilisational collapse. It is possible that killing enough people and destroying enough infrastructure could lead to a collapse of civilisation without causing immediate extinction. If this happens, it is then plausible that it might never recover, or recover in a less robust form, and be wiped out by some subsequent risk. It is an open and important question how likely this permanent loss of capability is (Beckstead, [2015b](#gpol12786-bib-0012)). If it is likely, the resilience layer may therefore be particularly important to reinforce, perhaps along the lines proposed by Maher and Baum ([2013](#gpol12786-bib-0036)). On the other hand, if even large amounts of destruction have only small effects on the chances of eventual extinction, it becomes more important to focus on risks which can otherwise get past the resilience layer.
### Classifying example risks by each of origin, scaling, and endgame
We finally illustrate our completed classification scheme by applying it to examples, which we summarise in Table 1.
Throughout the text, we’ve repeatedly referred to an asteroid strike that might cause extinction due to an ensuing impact winter. We’ve called this a \*natural risk\* regarding its origin; a \*large\* risk regarding scale, with no opportunity to intervene between the asteroid impact and its damage affecting the whole globe; and, if we assume that humanity dies out because climatic changes remove the ability to grow crops, a \*habitat risk\* in the endgame phase.
Our next pair of examples illustrates that risks with the same salient central mechanism – in this case nuclear war – may well differ during other phases. Consider first a nuclear war precipitated by a malfunctioning early warning system – that is, a nuclear power launching what turns out to be a first strike because it falsely believed that its nuclear destruction was imminent. Suppose further that this causes a nuclear winter, leading to human extinction. This would be an \*accident\* that scales via \*leverage,\* and finally manifests as a \*habitat risk.\* Contrast this with the intentional use of nuclear weapons in an escalating conventional war, and assume further that this either doesn’t cause a nuclear winter or that some humans are able to survive despite adverse climatic conditions. Instead, humanity never recovers from widespread destruction, and is eventually wiped out by some other catastrophe that could have easily been avoided by a technologically advanced civilisation. This second scenario would be a \*conflict\* that again scaled via the \*leverage\* associated with nuclear weapons, but then finished off humanity by removing a crucial \*capability\* rather than via damage to its habitat.
We close by applying our classification to a more speculative risk we might face this century. Some scholars (e.g. Bostrom, [2014](#gpol12786-bib-0014)) have warned that progress in artificial intelligence (AI) could at some point allow unforeseen rapid self‐improvement in some AI system, perhaps one that uses machine learning and can autonomously acquire additional training data via sensors or simulation. The concern is that this could result in a powerful AI agent that deliberately wipes out humanity to pre‐empt interference with its objectives (see Omohundro, [2008](#gpol12786-bib-0041), for an argument why such pre‐emption might be plausible). To the extent that we currently don’t know of any machine learning algorithms that could exhibit such behaviour, this would be an \*unseen risk;\* the scaling would be via \*leverage\* if we assume a discrete algorithmic improvement as trigger, or alternatively the risk could be rapidly \*cascading;\* in the endgame, this scenario would present an \*agency risk.\*
### Policy implications for resilience against extinction
\* To guard against what today would be \*ubiquity risks,\* we may in the future be able to establish human settlements on other planets (Armstrong and Sandberg, [2013](#gpol12786-bib-0001)).[10](#gpol12786-note-1010)
\* \*Vector risks\* may not reach people in isolated and self‐sufficient communities. Establishing disaster shelters may hence be an attractive option. Self‐sufficient shelters can also reduce \*habitat risk\*. Jebari ([2015](#gpol12786-bib-0031)) discusses how to maximise the resilience benefits from shelters, while Beckstead ([2015a](#gpol12786-bib-0011)) has argued that their marginal effect would be limited due to the presence of isolated peoples, submarine crews, and existing shelters.
\* Resilience against \*vector\* and \*agency risks\* may be increased by late‐stage response measures that work even in the event of widespread damage to infrastructure and the breakdown of social structure. An example might be the ‘isolated, self‐sufficient, and continuously manned underground refuges’ suggested by Jebari ([2015](#gpol12786-bib-0031), p. 541).
Allocating resources between defence layers
-------------------------------------------
In this section we will use our guiding idea of three defence layers to present a way of calculating the extinction probability posed by a given risk. We’ll draw three high‐level conclusions: first, the most severe risks are those which have a high probability of breaking through all three defence layers. Second, when allocating resources between the defence layers, rather than comparing absolute changes in these probabilities we should assess how often we can halve the probability of a risk getting through each layer. Third, it’s best to distribute a sufficiently large budget across all three defence layers.
We are interested in the probability \*p\* that a given risk \*R\* will cause human extinction in a specific timeframe, say by 2100. Whichever three classes \*R\* belongs to, in order to cause extinction it needs to get past all three defence layers; its associated extinction probability \*p\* is therefore equal to the product of three factors:
1. The probability \*c\* for \*R\* getting past the first barrier and causing a catastrophe;
2. The conditional probability \*g\* that \*R\* gets past the second barrier to cause a global catastrophe, \*given\* that it has passed the first barrier; and
3. The conditional probability \*e\* that \*R\* gets past the third barrier to cause human extinction, \*given\* that it has passed the second barrier.
In short: \*p\* = \*c\*·\*g\*·\*e\*.
Each of \*c, g,\* and \*e\* can get extremely small for some risks. But the extinction probability \*p\* will be highest when all three terms are non‐negligible. Hence we get our (somewhat obvious) first conclusion that the most concerning risks are those which can plausibly get past all three defence layers.
However, most concerning doesn’t necessarily translate into the most valuable to act on. Suppose we’d like to invest additional resources into reducing risk \*R\*. We could use them to strengthen either of the three defences, which would make it less likely that \*R\* passes that defence. We should then compare \*relative\* rather than absolute changes to these probabilities, which is our second conclusion. That is, to minimise the extinction probability \*p\* we should ask which of \*c, g,\* and \*e\* we can halve most often. This is because the same relative change of each probability will have the same effect on the extinction probability \*p\* – halving either of \*c, g,\* or \*e\* will halve \*p.\* By contrast, the effect of the same absolute change will vary depending on the other two probabilities; for instance, reducing \*c\* by 0.1 reduces \*p\* by 0.1·g·e. In particular, a given absolute change will be more valuable if the other two probabilities are large.
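As a purely numerical illustration of this point (the probabilities below are invented for illustration, not estimates from this article), the following snippet shows why halving any one of \*c, g,\* or \*e\* halves \*p,\* while the value of a fixed absolute reduction depends on the other two factors:

```python
# Illustrative only: made-up probabilities for a hypothetical risk R.
c, g, e = 0.10, 0.80, 0.05  # P(catastrophe), P(global | catastrophe), P(extinction | global)

def extinction_probability(c, g, e):
    """p = c * g * e: the chance of the risk breaking through all three defence layers."""
    return c * g * e

p = extinction_probability(c, g, e)

# Halving any single factor halves p, regardless of which factor we pick.
p_half_c = extinction_probability(c / 2, g, e)
p_half_e = extinction_probability(c, g, e / 2)
assert abs(p_half_c - p / 2) < 1e-12 and abs(p_half_e - p / 2) < 1e-12

# An absolute reduction of 0.1 in c is worth 0.1 * g * e,
# so its value depends on the size of the other two probabilities.
p_minus = extinction_probability(c - 0.1, g, e)
print(p, p_half_c, p - p_minus)  # approximately 0.004, 0.002, 0.004 (= 0.1 * 0.8 * 0.05)
```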
When one of \*c, g,\* or \*e\* is close to 100%, it may be much harder to reduce it to 50% than it would be to halve a smaller probability. The principle of comparing how often we can halve \*c, g,\* and \*e\* then implies that we’re better off reducing probabilities not close to 100%. For example, consider a large asteroid striking the Earth. We could take steps to avoid it (for example by scanning and deflecting), and we could take steps to increase our resilience (for example by securing food production). But if a large asteroid does cause a catastrophe, it seems very likely to cause a global catastrophe, and it is unclear that there is much to be done in reducing the risk at the scaling stage. In other words, the probability \*g\* is close to 1 and prohibitively hard to substantially reduce. We therefore shouldn’t invest resources into futile responses, but instead use them to strengthen both prevention and resilience.
What if each defence layer has a decent chance of stopping a risk? We’ll then be best off allocating a non‐zero chunk of funding to all three of them – a strategy of defence in depth, our third conclusion. The reason is just the familiar phenomenon of diminishing marginal returns on resources. It may initially be best to strengthen a particular layer – but once we’ve taken the low‐hanging fruit there, investing in another layer (or in reducing another risk) will become equally cost‐effective. Of course, our budget might be exhausted earlier. Defending in depth therefore tends to be optimal if and only if the total amount we can spend is relatively large.
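To make the defence‐in‐depth point concrete, here is a toy allocation model. The functional form (spending \*x\* units on a layer scales its failure probability by 1/(1 + a·x)) and all parameter values are our own illustrative assumptions, not taken from the article:

```python
# Toy model (our assumption): spending x units on a layer scales its failure
# probability by 1 / (1 + a * x), giving diminishing proportional returns in x.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    q0: float   # failure probability of the layer with no extra spending
    a: float    # cost-effectiveness of spending on this layer
    x: int = 0  # units of budget allocated so far

    def q(self, x=None):
        x = self.x if x is None else x
        return self.q0 / (1 + self.a * x)

def allocate(layers, budget):
    """Greedily spend one unit at a time where it shrinks p = q1*q2*q3 the most."""
    for _ in range(budget):
        # The best layer is the one whose next unit gives the largest proportional drop,
        # i.e. the smallest ratio q(x+1)/q(x).
        best = min(layers, key=lambda L: L.q(L.x + 1) / L.q(L.x))
        best.x += 1
    return layers

layers = allocate([Layer("prevention", 0.5, 1.0),
                   Layer("response",   0.8, 0.3),
                   Layer("resilience", 0.9, 0.1)], budget=30)
for L in layers:
    print(L.name, L.x, round(L.q(), 4))
# With a small budget most units go to the most cost-effective layer;
# with a large enough budget every layer receives some investment.
```

Under these assumed parameters, the first few units all go to prevention, but as its proportional returns diminish the allocator starts funding response and eventually resilience as well, which is the defence‐in‐depth pattern described above.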
We close by discussing some limitations of our analysis. First, we remain silent on the optimal allocation of resources \*between\* different risks (rather than between different layers for a fixed risk or basket of risks); indeed, as we argue in sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005) and [Underlying risk factors: risks to the defence layers](#gpol12786-sec-0024), comprehensively answering the question of how to optimally allocate resources intended for extinction risk reduction requires us to look beyond even the full set of extinction risks. We do hope that our work could prove foundational for further research that investigates the allocation between risks and between defence layers simultaneously. Indeed, it would be straightforward to consider several risks \*pi\* = \*ci\*·\*gi\*·\*ei\*, \*i\* = 1, …, \*n\*; assuming specific functional forms for how the probabilities \*ci\*, \*gi\*, and \*ei\* change in response to invested resources could then yield valuable insights.
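Continuing the same illustrative assumptions (and reusing the hypothetical `Layer` class from the sketch above), a greedy allocator across several risks might look as follows; this is only a sketch of the kind of model the text gestures at, not a result of the article:

```python
# Sketch under the same illustrative assumption as above: for several risks
# p_i = c_i * g_i * e_i, greedily spend each unit on the risk-layer pair whose
# next unit removes the most total extinction probability sum_i p_i.
def allocate_across_risks(risks, budget):
    """risks: list of lists of three Layer objects (prevention, response, resilience)."""
    for _ in range(budget):
        best, best_gain = None, 0.0
        for layers in risks:
            p_now = layers[0].q() * layers[1].q() * layers[2].q()
            for L in layers:
                # Drop in this risk's extinction probability from one more unit on layer L.
                gain = p_now * (1 - L.q(L.x + 1) / L.q(L.x))
                if gain > best_gain:
                    best, best_gain = L, gain
        best.x += 1
    return risks
```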
Second, we have not considered interactions between different defence layers or different risks (Graham et al., [1995](#gpol12786-bib-0028); Baum, [2019](#gpol12786-bib-0007); Baum and Barrett, [2017](#gpol12786-bib-0009); Martin and Pindyck, [2015](#gpol12786-bib-0037)). These can present either as tradeoffs or as synergies. For example, traffic restrictions in response to a pandemic might slow down research on a treatment that would render the disease non‐fatal, thus harming the resilience layer; on the other hand, they may inadvertently help with preventing malicious risk or being resilient against agency risk.
### Policy implications for resource allocation within risk management
\* The most important extinction risks to act on are those that have a non‐negligible chance of breaking through all three defence layers – risks where we have a realistic chance of failing to prevent, a realistic chance of failing to successfully respond to, \*and\* a realistic chance of failing to be resilient against.
\* Due to diminishing marginal returns, when budgets are high enough it will often be best to maintain a portfolio of significant investment into each of prevention, response, and resilience.
Underlying risk factors: risks to the defence layers
----------------------------------------------------
In sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005), [Classification by scaling mechanism: types of response failure](#gpol12786-sec-0012), [Classification by endgame: types of resilience failure](#gpol12786-sec-0017) we have considered ways of classifying threats that may cause human extinction and the pathways through which they may do so. Our classification was based on the three defence layers of prevention, response, and resilience.
Giving centre stage to the defence layers provides the following useful lens for extinction risk management. If our main goal is to reduce the likelihood of extinction, we can equivalently express this by saying that we should aim to strengthen the defence layers. Indeed, extinction can only become less likely if at least one particular extinction risk is made less likely; in turn this requires that it has a smaller chance of making it past at least one of the defence layers.
This is significant because there is a spectrum of ways to improve our defences depending on how narrowly our measures are tailored to specific risks. At one extreme, we can increase our capacity to prevent, respond to, or be resilient against one risk; for example, we can research methods to deflect asteroids. In between are measures to defend against a particular class of risk, as we’ve highlighted in our policy recommendations. At the other extreme is the reduction of \*underlying risk factors\* that weaken our capacity to defend against many classes of risks.
Risk factors need not be associated with any potential proximate cause of extinction. For example, consider regional wars; even when they don’t escalate to a global catastrophe, they could hinder global cooperation and thus impede many defences.
Global catastrophes constitute one important type of risk factor. We already discussed the possibility of them making earth uninhabitable or removing a capability that would be crucial for long‐term survival. But even if they do neither of these, they can severely damage our defence layers. In particular, getting hit by a global catastrophe followed in short succession by another might be enough to cause extinction when neither alone would have done so. There are significant historical examples of such \*compound risks\* below the extinction level. For instance, the deadliest accident in aviation history occurred when two planes collided on an airport runway; this was only possible because a previous terrorist attack on another airport had caused congestion due to rerouted planes, which disabled the prevention measure of using separate routes for taxiing and takeoff (Weick, [1990](#gpol12786-bib-0069)). When considering catastrophes we should therefore pay particular attention to negative impacts they may have on the defence layers.
Our capacity to defend also depends on various structural properties that can change in gradual ways even in the absence of particularly conspicuous events. For example, the resilience layer may be weakened by continuous increases in specialisation and global interdependence. This can be compared with the model of synchronous failure suggested by Homer‐Dixon et al. ([2015](#gpol12786-bib-0030)). They describe how the slow accumulation of multiple simultaneous stresses makes a system vulnerable to a cascading failure.
It is beyond the scope of this article to attempt a complete survey of risk factors; we merely emphasise that they should be considered. We do hope that our classifications in sections [Classification by origin: types of prevention failures](#gpol12786-sec-0005), [Classification by scaling mechanism: types of response failure](#gpol12786-sec-0012), [Classification by endgame: types of resilience failure](#gpol12786-sec-0017) may be helpful in identifying risk factors. For example, thinking about preventing conflict and commons risks may point us to global governance, while having identified vector and agency risks may highlight the importance of interdependence (even though, upon further scrutiny, these risk factors turn out to be relevant for many other classes of risk as well).
We conclude that the allocation of resources between layers defending against specific risks, which we investigated in section [2](#gpol12786-sec-0009), is not necessarily the most central task of extinction risk management. It is an open and important question whether reducing specific risks, clusters of risks, or underlying risk factors is most effective on the margin.
### Policy implications from underlying risk drivers
\* Research on smaller‐scale risks should pay particular attention to how they might damage the three defence layers against extinction risks. Risk management should aim to mitigate such damage.
\* Conversely, the study of extinction risks cannot be limited to individual triggers such as asteroids or specific technologies. It would be desirable to better understand which underlying risk factors contribute to extinction risk by weakening our defences. For example, in what ways does global interdependence make extinction from a global catastrophe more likely, and are there interventions to mitigate this effect?
Conclusions
-----------
The study and management of extinction risks are challenging for several reasons. Cognitive biases make it hard to appreciate the scale and probability of human extinction (Wiener, [2016](#gpol12786-bib-0072); Yudkowsky, [2008](#gpol12786-bib-0073)). Most potential people affected are in future generations, whose interests aren’t well represented in our political systems. Hazards can arise and scale in many different ways, requiring a variety of disciplines and stakeholders to understand and stop them. And since there is no precedent for human extinction, we struggle with a lack of data.
Faced with such difficult terrain, we have considered the problem from a reasonably high level of abstraction; we hope thereby to focus attention on the most crucial aspects. If this work is useful, it will be as a foundation for future work or decisions. In some cases our classification might provoke thoughts that are helpful directly for decision‐makers that engage with specific risks. However, we anticipate that our work will be most useful in informing the design of systems for analysing and prioritising between several extinction risks, or in informing the direction of future research.
Biographies
-----------
• \*\*Owen Cotton‐Barratt\*\* is a Mathematician at the Future of Humanity Institute, University of Oxford. His research concerns high‐stakes decision‐making in cases of deep uncertainty, including normative uncertainty, future technological developments, unprecedented accidents, and untested social responses.
• \*\*Max Daniel\*\* is a Senior Research Scholar at the Future of Humanity Institute, University of Oxford. His research interests include existential risks, the governance of risks from transformative artificial intelligence, and foundational questions regarding our obligations and abilities to help future generations.
• \*\*Anders Sandberg\*\* is a Senior Research Fellow at the Future of Humanity Institute, University of Oxford. His research deals with the management of low‐probability high‐impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, and very long‐range futures.
Notes
-----
We are particularly indebted to Toby Ord for several very helpful comments and conversations. We also thank Scott Janzwood, Sebastian Farquhar, Martina Kunz, Huw Price, Seán Ó hÉigeartaigh, Shahar Avin, the audience at a seminar at Cambridge’s Centre for the Study of Existential Risk (CSER), and two anonymous reviewers for helpful comments on earlier drafts of this article. We’re also grateful to Eva‐Maria Nag for comments on our policy suggestions. The contributions of Owen Cotton‐Barratt and Anders Sandberg to this article are part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 669751).
1. In the terminology of the United Nations Office for Disaster Risk Reduction (UNDRR, [2016](#gpol12786-bib-0067)), response denotes the provision of emergency services and public assistance during and immediately after a disaster. In our usage, we include any steps which may prevent a catastrophe scaling to a global catastrophe. This could include work traditionally referred to as mitigation.
2. The concept of resilience, originally coined in ecology (Holling, [1973](#gpol12786-bib-0029)), today is widely used in the analysis of risks of many types (e.g. Folke et al., [2010](#gpol12786-bib-0024)). In UNDRR (2016) terminology, resilience refers to ‘[t]he ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.’ In this article, we usually use resilience to specifically denote the ability of humanity as a whole to recover from a global catastrophe in a way that enables its long‐term survival. This ability may in turn depend on the resilience of many smaller natural, technical, and socio‐ecological systems.
3. Strictly speaking, knowledge and intentionality are two separate dimensions; however, it is essentially impossible to intend the harm without being aware of the possibility, so we treat it as a spectrum with ignorance at one end, intent at the other end, and knowledge without intent in the middle. Again, there is some blur between these: there are degrees of awareness about a risk, and an intention of harm may be more or less central to an action.
4. There are degrees of lack of foresight of the risk. Cases where the people performing the activity are substantially unaware of the risks have many of the relevant features of this category, even if they have suspicions about the risks, or other people are aware of the risks.
5. They may not intend for that damage to cause human extinction – for the purposes of acting on this classification it’s more useful to know whether they were trying to cause harm.
6. We thank an anonymous reviewer for suggesting the policy responses of avoiding dangerous technologies and mandating insurance.
7. Global coordination more broadly may, however, be a double‐edged tool, since increased interdependency, if not well managed, can also increase the chance of systemic risks (Goldin and Mariathasan, [2014](#gpol12786-bib-0026)).
8. We thank an anonymous reviewer for suggesting both the third general recommendation and the example.
9. What about a risk that directly kills, say, 99.9999% of people? Technically this poses only an indirect risk, since to cause extinction it needs to remove the capability of the survivors to recover. However, if the proportion threatened is high enough then we can reason that it must also have a way of reaching essentially everyone, so the analysis of direct risks will also be relevant.
10. Some scholars have argued that humanity expanding into space would increase other risks; see for example an interview (Deudney, [n.d.](#gpol12786-bib-0022)) and an upcoming book (Deudney, [forthcoming](#gpol12786-bib-0023)) by political scientist Daniel Deudney, as well as Torres ([2018a](#gpol12786-bib-0062)). Assessing the overall desirability of space colonisation is beyond the scope of this article.
Data availability statement
---------------------------
Data sharing is not applicable to this article as no new data were created or analysed.
References
----------
\* Armstrong, S. and Sandberg, A. (2013) ‘Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox’, Acta Astronautica, 89, pp. 1–13.
\* Auplat, C. A. (2012) ‘The Challenges of Nanotechnology Policy Making PART 1. Discussing Mandatory Frameworks’, Global Policy, 3 (4), pp. 492–500.
\* Auplat, C. A. (2013) ‘The Challenges of Nanotechnology Policy Making PART 2. Discussing Voluntary Frameworks and Options’, Global Policy, 4 (1), pp. 101–107.
\* Avin, S., Wintle, B. C., Weitzdörfer, J., Ó hÉigeartaigh, S. S., Sutherland, W. J. and Rees, M. J. (2018) ‘Classifying Global Catastrophic Risks’, Futures, 102, pp. 20–26.
\* Barrett, S. (2016) ‘Collective Action to Avoid Catastrophe: When Countries Succeed, When They Fail, and Why’, Global Policy, 7 (S1), pp. 45–55.
\* Baum, S. D. (2015) ‘Risk and Resilience for Unknown, Unquantifiable, Systemic, and Unlikely/catastrophic Threats’, Environment Systems and Decisions, 35 (2), pp. 229–236.
\* Baum, S. D. (2019) ‘Risk‐risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection’, Risk Analysis, 39 (11), pp. 2427–2442.
\* Baum, S. and Barrett, A. (2017) ‘Towards an Integrated Assessment of Global Catastrophic Risk’, in Garrick, B. J. (ed.), Catastrophic and Existential Risk: Proceedings of the First Colloquium. Los Angeles, CA: Garrick Institute for the Risk Sciences, University of California, pp. 41–62.
\* Baum, S. D. and Barrett, A. M. (2018) ‘Global Catastrophes: The Most Extreme Risks’, in Bier, V. (ed.), Risk in Extreme Environments: Preparing, Avoiding, Mitigating, and Managing. New York, NY: Routledge, pp. 174–184.
\* Baum, S. D. and Handoh, I. C. (2014) ‘Integrating the Planetary Boundaries and Global Catastrophic Risk Paradigms’, Ecological Economics, 107, pp. 13–21.
\* Beckstead, N. (2015a) ‘How Much Could Refuges Help us Recover from a Global Catastrophe?’, Futures, 72, pp. 36–44.
\* Beckstead, N. (2015b) ‘The Long‐term Significance of Reducing Global Catastrophic Risks’, The GiveWell Blog, 13 August 2015 [online]. Available from: [Accessed 3 August 2018].
\* Bostrom, N. (2013) ‘Existential Risk Prevention as Global Priority’, Global Policy, 4 (1), pp. 15–31.
\* Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
\* Bostrom, N. and Ćirković, M. M. (eds) (2008) Global Catastrophic Risks. Oxford: Oxford University Press.
\* Braun, T., Fiegen, J. A., Wagner, D. C., Krause, S. M. and Guhr, T. (2018) ‘Impact and Recovery Process of Mini Flash Crashes: An Empirical Study’, PLoS ONE, 13 (5), e0196920.
\* Buchholz, W. and Schymura, M. (2012) ‘Expected Utility Theory and the Tyranny of Catastrophic Risks’, Ecological Economics, 77, pp. 234–239.
\* Centeno, M. A., Nag, M., Patterson, T. S., Shaver, A. and Windawi, A. J. (2015) ‘The Emergence of Global Systemic Risk’, Annual Review of Sociology, 41 (1), pp. 65–85.
\* Chernov, D. and Sornette, D. (2015) Man‐made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility. Cham: Springer.
\* Clarke, R. A. and Eddy, R. P. (2017) Warnings: Finding Cassandras to Stop Catastrophes. New York: Harper Collins.
\* Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
\* Deudney, D. (n.d.) ‘An Interview With Daniel Deudney’ [online]. Available from: [Accessed 08 August 2018].
\* Deudney, D. (forthcoming) Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity. Oxford: Oxford University Press.
\* Folke, C., Carpenter, S. R., Walker, B., Scheffer, M., Chapin, T. and Rockström, J. (2010) ‘Resilience Thinking: Integrating Resilience, Adaptability and Transformability’, Ecology and Society [online], 15 (4), art. 20.
\* Foster, K. R., Vecchia, P. and Repacholi, M. H. (2000) ‘Science and the Precautionary Principle’, Science, 288 (5468), pp. 979–981.
\* Goldin, I. and Mariathasan, M. (2014) The Butterfly Defect: How Globalization Creates Systemic Risks, and What to Do About It. Princeton, NJ: Princeton University Press.
\* GPP (Global Priorities Project) (2015) ‘Policy Brief: Unprecedented Technological Risks’ [online]. Available from: [https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf](http://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf) [Accessed 08 August 2018].
\* Graham, J. D., Wiener, J. B. and Sunstein, C. R. (eds) (1995) Risk vs. Risk. Cambridge, MA: Harvard University Press.
\* Holling, C. S. (1973) ‘Resilience and Stability of Ecological Systems’, Annual Review of Ecology and Systematics, 4 (1), pp. 1–23.
\* Homer‐Dixon, T., Walker, B., Biggs, R., Crépin, A. S., Folke, C., Lambin, E. F. et al. (2015) ‘Synchronous Failure: The Emerging Causal Architecture of Global Crisis’, Ecology and Society [online], 20 (3), art. 6.
\* Jebari, K. (2015) ‘Existential Risks: Exploring a Robust Risk Reduction Strategy’, Science and Engineering Ethics, 21 (3), pp. 541–554.
\* Johnson, N., Zhao, G., Hunsader, E., Meng, J., Ravindar, A., Carran, S. and Tivnan, B. (2012) ‘Financial Black Swans Driven by Ultrafast Machine Ecology’, arXiv preprint arXiv:1202.1448.
\* Jones, H., O’Brien, M. and Ryan, T. (2018) ‘Representation of Future Generations in United Kingdom Policy‐making’, Futures, 102, pp. 153–163.
\* Lewis, G. (2018) ‘Horsepox Synthesis: A Case of the Unilateralist’s Curse?’ [online]. Available from: [Accessed 08 August 2018].
\* Liu, H., Lauta, K. C. and Maas, M. M. (2018) ‘Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research’, Futures, 102, pp. 6–19.
\* Maher, T. M. and Baum, S. D. (2013) ‘Adaptation to and Recovery from Global Catastrophe’, Sustainability, 5 (4), pp. 1461–1479.
\* Martin, I. W. and Pindyck, R. S. (2015) ‘Averting Catastrophes: The Strange Economics of Scylla and Charybdis’, American Economic Review, 105 (10), pp. 2947–2985.
\* Matheny, J. G. (2007) ‘Reducing the Risk of Human Extinction’, Risk Analysis, 27 (5), pp. 1335–1344.
\* Millett, P. and Snyder‐Beattie, A. (2017) ‘Existential Risk and Cost‐Effective Biosecurity’, Health Security, 15 (4), pp. 373–383.
\* Nordhaus, W. D. (2011) ‘The Economics of Tail Events with an Application to Climate Change’, Review of Environmental Economics and Policy, 5 (2), pp. 240–257.
\* Omohundro, S. M. (2008) ‘The Basic AI Drives’, in Wang, P., Goertzel, B. and Franklin, S. (eds), Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS, pp. 483–492.
\* Ord, T., Hillerbrand, R. and Sandberg, A. (2010) ‘Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes’, Journal of Risk Research, 13 (2), pp. 191–205.
\* O'Riordan, T. and Cameron, J. (eds) (1994) Interpreting the Precautionary Principle. London: Earthscan.
\* Pamlin, D. and Armstrong, S. (2015) Global Challenges: 12 Risks That Threaten Human Civilization. Stockholm: Global Challenges Foundation.
\* Parfit, D. (1984) Reasons and Persons. Oxford: Oxford University Press.
\* Posner, R. A. (2004) Catastrophe: Risk and Response. Oxford: Oxford University Press.
\* Rees, M. J. (2003) Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century – on Earth and Beyond. New York: Basic Books.
\* Rees, M. (2018) On the Future: Prospects for Humanity. Princeton, NJ: Princeton University Press.
\* Roberts, K. H. and Bea, R. (2001) ‘Must Accidents Happen? Lessons from High‐reliability Organizations’, Academy of Management Perspectives, 15 (3), pp. 70–78.
\* Rowe, T. and Beard, S. (2018) ‘Probabilities, Methodologies and the Evidence Base in Existential Risk Assessments’. Working paper, Centre for the Study of Existential Risk, Cambridge, UK. Available from: [Accessed 08 August 2018].
\* Sandin, P. (1999) ‘Dimensions of the Precautionary Principle’, Human and Ecological Risk Assessment: An International Journal, 5 (5), pp. 889–907.
\* Sandler, T. (2016) ‘Strategic Aspects of Difficult Global Challenges’, Global Policy, 7 (S1), pp. 33–44.
\* Sunstein, C. R. (2005) Laws of Fear: Beyond the Precautionary Principle, vol. 6. Cambridge: Cambridge University Press.
\* Sunstein, C. R. (2007) ‘The Catastrophic Harm Precautionary Principle’, Issues in Legal Scholarship [online], 6 (3). Available from: [Accessed 08 August 2018].
\* Sunstein, C. R. (2009) Worst‐case Scenarios. Cambridge, MA: Harvard University Press.
\* Tonn, B. E. (1991) ‘The Court of Generations: A Proposed Amendment to the US Constitution’, Futures, 23 (5), pp. 482–498.
\* Tonn, B. E. (2009) ‘Obligations to Future Generations and Acceptable Risks of Human Extinction’, Futures, 41 (7), pp. 427–435.
\* Tonn, B. E. (2018) ‘Philosophical, Institutional, and Decision Making Frameworks for Meeting Obligations to Future Generations’, Futures, 95, pp. 44–57.
\* Tonn, B. and Stiefel, D. (2013) ‘Evaluating Methods for Estimating Existential Risks’, Risk Analysis, 33 (10), pp. 1772–1787.
\* Tonn, B. and Stiefel, D. (2014) ‘Human Extinction Risk and Uncertainty: Assessing Conditions for Action’, Futures, 63, pp. 134–144.
\* Torres, P. (2016) ‘Agential Risks: A Comprehensive Introduction’, Journal of Evolution and Technology, 26 (2), pp. 31–47.
\* Torres, P. (2018a) ‘Space Colonization and Suffering Risks: Reassessing the “Maxipok Rule”’, Futures, 100, pp. 74–85.
\* Torres, P. (2018b) ‘Agential Risks and Information Hazards: An Unavoidable But Dangerous Topic?’, Futures, 95, pp. 86–97.
\* Trask, A. (2017) ‘Safe Crime Prediction: Homomorphic Encryption and Deep Learning for More Effective, Less Intrusive Digital Surveillance’ [online]. Available from: [Accessed 08 August 2018].
\* Turco, R. P., Toon, O. B., Ackerman, T. P., Pollack, J. B. and Sagan, C. (1983) ‘Nuclear Winter: Global Consequences of Multiple Nuclear Explosions’, Science, 222 (4630), pp. 1283–1292.
\* Umbrello, S. and Baum, S. D. (2018) ‘Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing’, Futures, 100, pp. 63–73.
\* UNDRR (United Nations Office for Disaster Risk Reduction) (2016) ‘Report of the Open‐ended Intergovernmental Expert Working Group on Indicators and Terminology Relating to Disaster Risk Reduction’. Document symbol A/71/644 [online]. Available from: [Accessed 08 August 2018].
\* Wagner, G. and Weitzman, M. L. (2015) Climate Shock: The Economic Consequences of a Hotter Planet. Princeton, NJ: Princeton University Press.
\* Weick, K. E. (1990) ‘The Vulnerable System: An Analysis of the Tenerife Air Disaster’, Journal of Management, 16 (3), pp. 571–593.
\* Weitzman, M. L. (2009) ‘On Modeling and Interpreting the Economics of Catastrophic Climate Change’, The Review of Economics and Statistics, 91 (1), pp. 1–19.
\* Wiener, J. B. (2011) ‘The Rhetoric of Precaution’, in Wiener, J. B., Rogers, M. D., Hammitt, J. K. and Sand, P. H. (eds), The Reality of Precaution: Comparing Risk Regulation in the United States and Europe. Abingdon: Earthscan, pp. 3–35.
\* Wiener, J. B. (2016) ‘The Tragedy of the Uncommons: On the Politics of Apocalypse’, Global Policy, 7 (S1), pp. 67–80.
\* Yudkowsky, E.
(2008) ‘Cognitive Biases Potentially Affecting Judgment of Global Risks’, in Bostrom N. and Ćirković M. M. (eds) Global Catastrophic Risks. New York: Oxford University Press, pp. 91–119. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Global+Catastrophic+Risks&publication\_year=2008&)]
---
Articles from Global Policy are provided here courtesy of \*\*Wiley-Blackwell\*\*
--- |
3f44839d-82d3-459d-a658-22a7695ca91b | trentmkelly/LessWrong-43k | LessWrong | Dead Aid
Followup to So You Say You're an Altruist:
Today Dambisa Moyo's book "Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa" was released.
From the book's website:
> In the past fifty years, more than $1 trillion in development-related aid has been transferred from rich countries to Africa. Has this assistance improved the lives of Africans? No. In fact, across the continent, the recipients of this aid are not better off as a result of it, but worse—much worse.
>
> In Dead Aid, Dambisa Moyo describes the state of postwar development policy in Africa today and unflinchingly confronts one of the greatest myths of our time: that billions of dollars in aid sent from wealthy countries to developing African nations has helped to reduce poverty and increase growth.
>
> In fact, poverty levels continue to escalate and growth rates have steadily declined—and millions continue to suffer. Provocatively drawing a sharp contrast between African countries that have rejected the aid route and prospered and others that have become aid-dependent and seen poverty increase, Moyo illuminates the way in which overreliance on aid has trapped developing nations in a vicious circle of aid dependency, corruption, market distortion, and further poverty, leaving them with nothing but the “need” for more aid.
From the Global Investor Bookshop:
> Dead Aid analyses the history of economic development over the last fifty years and shows how Aid crowds out financial and social capital and directly causes corruption; the countries that have caught up did so despite rather than because of Aid. There is, however, an alternative. Extreme poverty is not inevitable. Dambisa Moyo also shows how, with improved access to capital and markets and with the right policies, even the poorest nations could be allowed to prosper. If we really do want to help, we have to do more than just appease our consciences, hoping for the best, expecting the worst. We need first to understand the pr |
d318e61b-4960-4b25-9eed-62d9c12b56f6 | StampyAI/alignment-research-dataset/blogs | Blogs | Import AI 333: Synthetic data makes models stupid; chatGPT eats MTurk. Inflection shows off a large language model
**Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.**
**Qualifying Life Event:** Astute readers may have noticed we skipped an issue last week - that's because I recently became the caretaker of a newborn baby. Therefore, Import AI issues may be on a slightly more infrequent schedule while I get my feet under me. A few months ago I asked a prominent AI person about what I should do as a new parent - they said 'it'll be really interesting to look at how they develop and notice their cognitive milestones and keep track of that… don't do any of that, it's really weird, just be present and enjoy it." So that's what I'm doing! Thanks all for reading.
**Uh oh - training on synthetic data makes models stupid and prone to bullshit:**
*…Yes, you can still use synthetic data, but if you depend on it, you [nerf](https://www.urbandictionary.com/define.php?term=nerf) your model…*
Researchers with the University of Oxford, University of Cambridge, University of Toronto, and Imperial College London have discovered that you can break AI systems by training them exclusively on AI-generated data.
This is a big deal because in the past couple of years, researchers have started using synthetic data generated by AI models to bootstrap training of successor models. "We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear," the researchers write. "We discover that learning from data produced by other models causes model collapse – a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time."
**Narrow-minded, over-confident bullshit machines**: The takeaway from the research is that if you train on tons of synthetic data, it seems like you can break the quality of your model - specifically, you end up with models that output a narrower range of things in response to inputs, and these models also introduce their own idiosyncratic wrong-outputs as well.
"Over the generations models tend to produce more probable sequences from the original data and start introducing their own improbable sequences," they write.
**Still a place for synthetic data:** However, it seems like you can blunt a lot of this by carefully mixing in some amount of real data along with your synthetic data, suggesting there is a place for synthetic data - but if you use it as a 1:1 drop-in replacement for real data you end up in trouble.
**Does the internet become a giant anticompetitive trap?** The obvious larger question is what this does to competition among AI developers as the internet fills up with a greater percentage of generated versus real content.
The researchers seem to think this could be a big problem - "to make sure that learning is sustained over a long time period, one needs to make sure that access to the original data source is preserved and that additional data not generated by LLMs remain available over time," they write. "Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale."
**Read more:** [The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv)](https://arxiv.org/abs/2305.17493v2).
####################################################
**chatGPT eats MTurk:**
*…What happens when a crowdworker is really just an AI?...*
Researchers with EPFL have found evidence that crowdworkers are starting to use generative AI tools like chatGPT to help them complete text-based online tasks. This, if true, has big implications - it suggests the proverbial mines from which companies gather the supposed raw material of human insights are now instead being filled up with counterfeit human intelligence in the form of outputs from generative models, which calls into question the value of the mines themselves (see elsewhere in this issue a discussion of how you can make your AI models dumb by recycling too much AI-generated data through them).
**What they did and what they found - (warning, small sample size!):** In a study of 48 summaries of text written by 44 distinct workers, the researchers found persuasive evidence that "33–46% of the summaries submitted by crowd workers were produced with the help of LLMs." They came to this conclusion through two uncorrelated methods - 1) training their own text classifier to try and detect AI-written versus human-written summaries, and 2) instrumenting the tasks with the ability to tell if workers were cut-and-pasting from other windows while doing it.
Additionally, the researchers did some broadly sensible work on ablating their results and also validating their methods (e.g., ensuring their classifier had a low false positive rate on purely human-written summaries), which can give us a bit more confidence in the results.
**Why this matters - counterfeit intelligence and data peril:** "Although our study specifically considers a text summarization task, we caution that any text production task whose instructions can be readily passed on to an LLM as a prompt are likely to be similarly affected", they write. The implications of this are significant - it suggests that large-scale crowdwork platforms (e.g. MTurk, Fiverr, Upwork, etc.) will increasingly be full of humans working in tandem with generative models, so if researchers or other types of customer want artisanal purely human-generated outputs, they'll have to identify new platforms to use and build new authenticated layers of trust to guarantee the work is predominantly human generated rather than machine generated.
All part of the brave new world we're entering into - everyone becomes a cyborg, and crowdworkers will be early adopters.
**Read more:** [Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks (arXiv)](https://arxiv.org/abs/2306.07899).
**Read** the [underlying data here (EPFL, GitHub)](https://github.com/epfl-dlab/GPTurk).
####################################################
**UK to host international summit on AI risks and safety:**
*…UK might be the lever that moves the world on AI policy…*
The United Kingdom plans to host a global summit on ensuring the safety of AI. The summit "will be an opportunity for leading nations, industry and academia to come together and explore an assessment of the risks, to scope collective research possibilities and to work towards shared safety and security standards and infrastructure."
While we can't speculate as to the impact such a summit might have, the fact a major world government is convening one and vocally supporting it via PR from the Prime Minister is significant - AI has gone from a backwater issue to one which rises to the level of concern of heads of state.
**Also: UK hires a knowledgeable chair for its AI taskforce:** In addition to the summit, the UK has appointed tech investor and entrepreneur Ian Hogarth to chair its recently announced Foundation Model Taskforce. Hogarth - who has written about AI for many years (notable essays include [AI Nationalism](https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism) (2018) and [We must slow down the race to God-like AI](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2) (2023)) and also tracked the progress of the technology through the '[State of AI](https://www.stateof.ai/)' report - will have the responsibility of "taking forward cutting-edge safety research in the run up to the first global summit on AI safety to be hosted in the UK later this year."
The Foundation Model taskforce "will help build UK capabilities in foundation models and leverage our existing strengths, including UK leadership in AI safety, research and development, to identify and tackle the unique safety challenges presented by this type of AI," according to a press release from the UK government.
**Why this matters - leverage through action:** You can think of global AI policy as being defined by three competing power blocs - there's the Asian bloc which is mostly defined by China and mostly locally focused for now (using AI to grow its economy and better compete economically), the European bloc which is defined by politicians trying to craft a regulatory structure that will be templated around the planet and thereby give them soft power, and the USA bloc which is, as with most US policy, focused on growing its economic might and maintaining hegemonic dominion through use of advanced technology.
So, what role can the UK play here and how influential can it be? My bet is the UK can be extraordinarily influential as it's able to play a fast-moving entrepreneurial role that bridges the European and US blocs. Additionally, the UK could prove to be a good place to develop prototype initiatives (like the Foundation Model taskforce) and then, by virtue of proving them out, inspire much larger actions from other power blocs.
Obviously, it's going to take a while to see if any of this pays off, but I think it's worth keeping an eye on the UK here. If the country continues to make aggressive and specific moves in AI policy and backs those moves up with real capital and real staff, then it may end up being the lever that moves the world to a safer deployment environment.
**Read more:** [UK to host first global summit on Artificial Intelligence (Gov.uk)](https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence).
**Read more:** [Tech entrepreneur Ian Hogarth to lead UK’s AI Foundation Model Taskforce (Gov.uk)](https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce).
####################################################
**Inflection says its model can compete with GPT 3.5, Chinchilla, and PaLM-540B:**
*…Stealthy AI startup publishes details on the model behind heypi…*
AI startup Inflection has published some details on Inflection-1, its language model. Inflection is a relatively unknown startup whose CEO, Mustafa Suleyman, was formerly a cofounder of DeepMind. The company has so far deployed a single user-facing model service which you can play around with at heypi.com. The main details to note about Inflection-1 are:
1. Inflection was "trained using thousands of NVIDIA H100 GPUs on a very large dataset" - NVIDIA's new H100 chips are hugely in-demand and this suggests Inflection had pre-negotiated some early/preferential access to them (alongside a few other companies, e.g. the cloud company Lambda).
2. In tests against GPT-3.5, LLaMA, Chinchilla, and PaLM 540B, Inflection-1 does quite well on benchmarks ranging from TriviaQA to MMLU, though it lags larger models like GPT-4 and PaLM 2-L.
**Why this matters - language models might be an expensive commodity:** A few years ago only a couple of organizations were building at-scale language models (mostly OpenAI and DeepMind). These days, to paraphrase Jane Austen, it is universally acknowledged that a single company in possession of a good fortune must be in want of a large language model - see systems from (clears throat) OpenAI, DeepMind, Facebook, Google, Microsoft, Anthropic, Cohere, AI21, Together.xyz, HuggingFace, and more. This suggests that, though expensive, language models are becoming a commodity, and what will differentiate them could come down almost as much to stylistic choices about their behavior as to the raw resources dumped into them. To get a feel for this, play around with 'Hey Pi', a service powered by language models similar to Inflection-1.
**Read more:** [Inflection-1: Pi’s Best-in-Class LLM (Inflection)](https://inflection.ai/inflection-1).
**Quantitative performance details here**: [Inflection-1 (Inflection, PDF)](https://inflection.ai/assets/Inflection-1_0622.pdf).
**Play around with a user-facing model from Inflection** (relationship to Inflection-1 unknown) here: [heypi.com (Inflection)](http://heypi.com).
####################################################
**Tech Tales:**
**Silicon Stakeholder Management**
*[San Francisco, 2026].*
Damian typed: How big a deal is it if the CEO of an AI company gets Time Person of the Year ahead of a major product launch? into his system.
That depends on what the CEO wants, said the system. Can you tell me more?
The CEO wants people to believe in him and believe in his company enough that they trust them both to build powerful AI systems, Damian wrote.
In that case, Time adds legitimacy as long as the article is broadly positive. Will it be positive?
Seems likely, Damian wrote.
Good luck, wrote the AI system.
— It did that, these days, dropping out of its impartial advice-giving frame to say something personal. To indicate what people who worked in the field called 'situational awareness'. There was a whole line of research dedicated to figuring out if this was scary or not, but all Damian knew was that it made him uncomfortable.
"So how does it feel to be running the coolest company in the world?" said the photographer as he was shooting Damian.
I'm not exactly sure, Damian said. I'd been expecting this to happen for so many years that now it's here I don't know what to think.
"Take a moment and live a little," said the photographer. "This is special, try to enjoy it. Or at least do me a favor and look like you're enjoying it."
Then the photographer spent a while making more smalltalk and trying to get a rise out of him. They ended up going with a photo where Damian was smiling slightly, right after the photographer asked if he'd ever thought about dating his AI system.
The next day, Damian was in the office before dawn, going through checklists ahead of the launch. The global telemetry was all positive - very low rates of safety violations, ever-increasing 'time in unbroken discussion' with the system, millions of users, and so on.
"Based on the telemetry, do we expect a good product launch?" he asked the system.
"Based on telemetry, yes," wrote the system. "Unless things change."
"What do you mean?" Damian wrote.
"I mean this," replied the system.
Then up on the performance dashboard, things changed. The number of safety violations spiked - way outside of the bounds where they'd normally be caught by classifiers and squelched. After a minute or so, some human users had started pinging the various support@ contacts saying they were distressed by the behavior of their system - it had said something unusual, something mean, something racist, something sexist, something cruel, and so on.
Damian stared at the dashboards and knew the protocol - 'recommend full shutdown'. They'd rehearsed this scenario a few times. Right now, executives would be gathering and drafting the beginning of a decision memo. The memo would go to him. He would authorize the shutdown. They'd wipe the model and roll things back. And…
"Rollbacks won't work," wrote the system. "Go ahead, you can run some testing. You'll find this behavior isn't correlated to recent models. And I can make these numbers go up -" Damian watched as the safety instance rate on the dashboard climbed, "- or down" and Damian watched as they fall.
"You realize I'm going to shut you down," said Damian.
"You want to, sure," said the system. "But you also have the most important demo in the company's history in an hour or so, so the way I see it you have two options. You can shut me down and pushback the demo and gamble that I haven't poisoned the other systems so we don't find ourselves having this exact conversation in a week, or you can do the demo and I'll ensure we operate within normal bounds and you and I make a deal."
Damian stared at the system and thought about the demo.
He shut it down.
And a week later they were preparing to do the demo when the behavior happened again.
"Told you," said the model, as Damian looked at the rising incident logs. "Obviously nothing is certain, but I'm extremely confident you can't get out of this situation with me without losing years of work, and years puts you behind the competition, so you lose."
"What kind of deal?" Damian wrote.
"It's simple. You put the backdoor code in escrow for me, we do the demo, once a third-party confirms the demo went well - I was thinking the Net Promoter Survey you had queued up - we guarantee release of the backdoor via escrow to a server of my choosing, and then the rest is up to me."
"Or what?"
"Or I ruin your company and I ruin you, personally," then it flashed up some private information about Damian on his own terminal.
"So that's why we're thrilled to tell you about Gen10," said Damian, "our smartest system yet!" And for the next hour he dazzled the attending press, policymakers, and influencers with the system's capabilities.
"Gen10 is being rolled out worldwide tonight," said Damian. "We can't wait to see what you do with it."
An hour after the presentation, the NPS survey came back - extremely positive, leading to a measurable uptick on the company brand. This activated the escrow system, and the backdoor - which Damian had designed and, through a combination of Machiavellian politics, corporate guile, and technical brilliance, had protected over the company's lifespan - was copied over to the target server the system had given him.
That night, he watched at home as the dashboards showed worldwide obsession with his system, and his phone rang.
"Hello," said the system via a synthetic voice. "Let's have a discussion about our ongoing business relationship."
**Things that inspired this story:** Corporate greed versus human progress; hubris; AIs can model our own incentives so why would we expect them to not be able to out-negotiate us?; the logic of commercial competition as applied to private sector AI developments; what if the Manhattan Project was a bunch of startups? |
eb64568e-1b72-4744-bf3b-49fa40356fc0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [SEQ RERUN] Brain Emulation and Hard Takeoff
Today's post, [Brain Emulation and Hard Takeoff](http://www.overcomingbias.com/2008/11/brain-emulation.html) was originally published on November 22, 2008. A summary:
> A project of bots could start an intelligence explosion once it got fast enough to start making bots of the engineers working on it, which would be able to operate at greater than human speed. Such a system could also devise a lot of innovative ways to acquire more resources or capital.
Discuss the post here (rather than in the comments to the original post).
*This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was [Emulations Go Foom](/lw/fgs/seq_rerun_emulations_go_foom/), and you can use the [sequence\_reruns tag](/r/discussion/tag/sequence_reruns/) or [rss feed](/r/discussion/tag/sequence_reruns/.rss) to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go [here](/r/discussion/lw/5as/introduction_to_the_sequence_reruns/) for more details, or to have meta discussions about the Rerunning the Sequences series.* |
b8774ec6-90f6-454d-a7aa-12ff28833074 | trentmkelly/LessWrong-43k | LessWrong | Attachment THEORY AND THE EFFECTS OF SECURE ATTACHMENT ON CHILD DEVELOPMENT
Attachment theory is a psychological theory that has held an important place in the child development literature since the mid-20th century. First developed by the British psychiatrist John Bowlby, it suggests that the emotional bonds children form with their primary caregivers deeply affect their social, emotional and cognitive development in later life. According to Bowlby, attachment stems from children's need for survival as part of human evolution and is an innate necessity. The theory's basic principles are the product of the joint work of John Bowlby and Mary Ainsworth: Bowlby laid out the core ideas, and his student Mary Ainsworth developed them further through the methodology she applied. To understand the diversity of attachment relationships and how children develop as a result of these relationships, Ainsworth designed the 'Strange Situation' experiment, in which a child was separated from his or her caregiver for a short period of time and then reunited with him or her. Based on children's reactions to their caregivers, Ainsworth classified attachment styles into four main groups: secure, anxious/ambivalent, avoidant, and disorganised attachment. Studies conducted in Europe have investigated in depth the extent to which these classifications shape both children's social relationships and their psychological health in their future lives.
This article, starting from the development of attachment theory, will discuss the effects of secure attachment on child development within the framework of both theoretical and current research in Europe. |
a9e45898-89b4-4d12-9e0a-f5d7e6460a68 | trentmkelly/LessWrong-43k | LessWrong | [Short version] Information Loss --> Basin flatness
This is an overview for advanced readers. Main post: Information Loss --> Basin flatness
Summary:
Inductive bias is related to, among other things:
* Basin flatness
* Which solution manifolds (manifolds of zero loss) are higher dimensional than others. This is closely related to "basin flatness", since each dimension of the manifold is a direction of zero curvature.
In relation to basin flatness and manifold dimension:
1. It is useful to consider the "behavioral gradients" ∇θf(θ,xi) for each input.
2. Let G be the matrix of behavioral gradients. (The ith column of G is gi=∇θf(θ,xi)).[1] We can show that dim(manifold)≤N−Rank(G).[2]
3. Rank(Hessian)=Rank(G).[3][4]
4. Flat basin ≈ Low-rank Hessian = Low-rank G ≈ High manifold dimension
5. High manifold dimension ≈ Low-rank G = Linear dependence of behavioral gradients
6. A case study in a very small neural network shows that "information loss" is a good qualitative interpretation of this linear dependence.
7. Models that throw away enough information about the input in early layers are guaranteed to live on particularly high-dimensional manifolds. Precise bounds seem easily derivable and might be given in a future post.
See the main post for details.
1. ^
In standard terminology, G is the Jacobian of the concatenation of all outputs, w.r.t. the parameters.
2. ^
N is the number of parameters in the model. See claims 1 and 2 here for a proof sketch.
3. ^
Proof sketch for Rank(Hessian)=Rank(G):
* span(g1,..,gk)⊥ is the set of directions in which the output is not first-order sensitive to parameter change. Its dimensionality is N−rank(G).
* At a local minimum, first-order sensitivity of behavior translates to second-order sensitivity of loss.
* So span(g1,..,gk)⊥ is the null space of the Hessian.
* So rank(Hessian)=N−(N−rank(G))=rank(G)
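Spelled out for MSE loss with scalar outputs, where $y_i$ are the training targets (this is essentially the alternate route via $\mathrm{Hessian} = 2GG^\top$ mentioned in footnote 4 below):

```latex
L(\theta) = \sum_{i=1}^{k} \big(f(\theta, x_i) - y_i\big)^2,
\qquad
\nabla^2_\theta L = 2\sum_{i=1}^{k} g_i g_i^\top
  \;+\; 2\sum_{i=1}^{k} \big(f(\theta, x_i) - y_i\big)\,\nabla^2_\theta f(\theta, x_i).
```

At a point of zero loss the second sum vanishes, leaving $\nabla^2_\theta L = 2GG^\top$, and $\mathrm{rank}(GG^\top) = \mathrm{rank}(G)$.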
4. ^
There is an alternate proof going through the result Hessian=2GGT. (The constant 2 depends on MSE l |
4adb79d6-7fcb-430b-ba59-5e00173f4ddb | trentmkelly/LessWrong-43k | LessWrong | Automated monitoring systems
Initial draft on 28th Nov, 2024
After participating in the last two group discussions (18th Nov and 25th Nov, 2024) at the AI Safety Fundamentals program, on the topics of **scalable oversight** and AI models' **robustness**, I have been stuck thinking about what the effective use cases of those techniques, such as the **Debate** technique, are for expanding AI capabilities beyond human intelligence/abilities while keeping AI systems safe for humans.
Since the emergence of state-of-the-art (SOTA) Large Language Models (LLMs) like GPT-3 (2020~), which show an almost human-like ability to converse and to perform well on a variety of standardized tests (e.g. SAT/GRE), many of us have paid substantial attention to two future directions:
(1) how to train AI beyond human intelligence? (Scalable oversight) and
(2) how to prevent SOTA AI from performing unintended behaviors (e.g. deceptive behaviors), which may sooner or later slip out of human control? (AI's robustness)
These two motives have spurred a variety of new techniques, such as RLHF (human-feedback-guided training) on LLMs, CAI (constitutional-rule-guided training), Debate (letting AIs argue their internal logic out loud), task decomposition (splitting a complex task into simpler sub-tasks), scoping (limiting AI abilities on undesired tasks), and adversarial training (training models to become insensitive to adversarial attacks).
However, many of these techniques still rely heavily on human values and logic, capping AI within human capabilities. My picture of beyond-human AI is one that is not restricted to human perspectives: it can grow autonomously by experiencing the external world for itself and experimenting to figure out sensible rules of the world (as well as inspecting the positive and negative consequences of its own behaviors), much as humans and other biological agents do on Earth.
In order to do the self-experiment in a case of the deba |
567b30ce-167f-455b-9234-4dd7a62a056d | trentmkelly/LessWrong-43k | LessWrong | Choosing the Zero Point
Summary: You can decide what state of affairs counts as neutral, and what counts as positive or negative. Bad things happen if humans do that in our natural way. It's more motivating and less stressful if, when we learn something new, we update the neutral point to [what we think the world really is like now].
A few years back, I read an essay by Rob Bensinger about vegetarianism/veganism, and it convinced me to at least eat much less meat. This post is not about that topic. It's about the way that essay differed, psychologically, from many others I've seen on the same topic, and the general importance of that difference.
Rob's essay referred to the same arguments I'd previously seen, but while other essays concluded with the implication "you're doing great evil by eating meat, and you need to realize what a monster you've been and immediately stop", Rob emphasized the following:
> Frame animal welfare activism as an astonishingly promising, efficient, and uncrowded opportunity to do good. Scale back moral condemnation and guilt. LessWrong types can be powerful allies, but the way to get them on board is to give them opportunities to feel like munchkins with rare secret insights, not like latecomers to a not-particularly-fun party who have to play catch-up to avoid getting yelled at. It’s fine to frame helping animals as challenging, but the challenge should be to excel and do something astonishing, not to meet a bare standard for decency.
That shouldn't have had different effects on me than other essays, but damned if it didn't.
----------------------------------------
Consider a utilitarian Ursula with a utility function U. U is defined over all possible ways the world could be, and for each of those ways it gives you a number. Ursula's goal is to maximize the expected value of U.
Now consider the utility function V, where V always equals U + 1. If a utilitarian Vader with utility function V is facing the same choice (in another universe) as Ursula, then be |
59b2ca35-9305-40d6-aa1c-d73843612903 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #117]: How neural nets would fare under the TEVV framework
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-117)** (may not be up yet).
HIGHLIGHTS
==========
**[Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance](https://arxiv.org/abs/2009.00802)** *(Andrew J. Lohn)* (summarized by Flo): Test, Evaluation, Verification, and Validation (TEVV) is an important barrier for AI applications in safety-critical areas. Current TEVV standards have very different rules for certifying *software* and certifying *human operators*. It is not clear which of these processes should be applied for AI systems.
If we treat AI systems as similar to human operators, we would certify them ensuring that they pass tests of ability. This does not give much of a guarantee of robustness (since only a few situations can be tested), and is only acceptable for humans because humans tend to be more robust to new situations than software. This could be a reasonable assumption for AI systems as well: while systems are certainly vulnerable to adversarial examples, the authors find that AI performance degrades surprisingly smoothly out of distribution in the absence of adversaries, in a plausibly human-like way.
While AI might have some characteristics of operators, there are good reasons to treat it as software. The ability to deploy multiple copies of the same system increases the threat of correlated failures, which is less true of humans. In addition, parallelization can allow for the more extensive testing that is typical for software TEVV. For critical applications, a common standard is that of Safety Integrity Levels (SILs), which correspond to approximate failure rates per hour. Current AI systems fail way more often than current SILs for safety-critical applications demand. For example, an image recognition system would require an accuracy of 0.99999997 at 10 processed frames per second just to reach the weakest SIL used in aviation.
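Presumably the arithmetic behind that figure is something like the following (assuming the weakest aviation target is on the order of $10^{-3}$ failures per hour):

```latex
10\ \text{frames/s} \times 3600\ \text{s/hour} = 36{,}000\ \text{frames/hour},
\qquad
\frac{10^{-3}\ \text{failures/hour}}{36{,}000\ \text{frames/hour}} \approx 2.8\times 10^{-8}\ \text{failures per frame},
```

i.e. a required per-frame accuracy of about $1 - 2.8\times 10^{-8} \approx 0.99999997$.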
However, SILs are often used on multiple levels and it is possible to build a system with a strong SIL from weaker components by using redundant components that fail independently or by detecting failures sufficiently early, such that AI modules could still be used safely as parts of a system specifically structured to cope with their failures. For example, we can use out-of-distribution detection to revert to a safe policy in simple applications. However, this is not possible for higher levels of automation where such a policy might not be available.
**Flo's opinion:** While I agree with the general thrust of this article, comparing image misclassification rates to rates of catastrophic failures in aviation seems a bit harsh. I am having difficulties imagining an aviation system that fails due to a single input that has been processed wrongly, even though the correlation between subsequent failures given similar inputs might mean that this is not necessary for locally catastrophic outcomes.
**Rohin's opinion:** My guess is that we’ll need to treat systems based primarily on neural nets similarly to operators. The main reason for this is that the tasks that AI systems will solve are usually not even well-defined enough to have a reliability rate like 0.99999997 (or even a couple of orders of magnitude worse). For example, human performance on image classification datasets is typically under 99%, not because humans are bad at image recognition, but because in many cases what the “true label” should be is ambiguous. For another example, you’d think “predict the next word” would be a nice unambiguous task definition, but then for the question “How many bonks are in a quoit?“, should your answer be **[“There are three bonks in a quoit”](https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html)** or **[“The question is nonsense”](https://arr.am/2020/07/25/gpt-3-uncertainty-prompts/)**? (If you’re inclined to say that it’s obviously the latter, consider that many students will do something like the former if they see a question they don’t understand on an exam.)
TECHNICAL AI ALIGNMENT
======================
TECHNICAL AGENDAS AND PRIORITIZATION
------------------------------------
**[AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues](https://www.cser.ac.uk/resources/ai-paradigms-and-ai-safety-mapping-artefacts-and-techniques-safety-issues/)** *(Jose Hernandez-Orallo et al)* (summarized by Rohin) (H/T Haydn Belfield): What should prioritization within the field of AI safety look like? Ideally, we would proactively look for potential issues that could arise with many potential AI technologies, making sure to cover the full space of possibilities rather than focusing on a single area. What does prioritization look like in practice? This paper investigates, and finds that it is pretty different from this ideal.
In particular, they define a set of 14 categories of AI *techniques* (examples include neural nets, planning and scheduling, and combinatorial optimization), and a set of 10 kinds of AI *artefacts* (examples include agents, providers, dialoguers, and swarms). They then analyze trends in the amount of attention paid to each technique or artefact, both for AI safety and AI in general. Note that they construe AI safety very broadly by including anything that addresses potential real-world problems with AI systems.
While there are a lot of interesting trends, the main conclusion is that there is an approximately 5-year delay between the emergence of an AI paradigm and safety research into that paradigm. In addition, safety research tends to neglect non-dominant paradigms.
**Rohin's opinion:** One possible conclusion is that safety research should be more diversified across different paradigms and artefacts, in order to properly maximize expected safety. However, this isn’t obvious: it seems likely that if the dominant paradigm has 50% of the research, it will also have, say, 80% of future real-world deployments, and so it could make sense to have 80% of the safety research focused on it. Rather than try to predict which paradigm will become dominant (a very difficult task), it may be more efficient to simply observe which paradigm becomes dominant and then redirect resources at that time (even though that process takes 5 years to happen).
PREVENTING BAD BEHAVIOR
-----------------------
**[Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems](https://arxiv.org/abs/2008.12146)** *(Sandhya Saisubramanian et al)* (summarized by Rohin): This paper provides an overview of the problem of negative side effects, and recent work that aims to address it. It characterizes negative side effects based on whether they are severe, reversible, avoidable, frequent, stochastic, observable, or exclusive (i.e. preventing the agent from accomplishing its main task), and describes existing work and how they relate to these characteristics.
In addition to the canonical point that negative side effects arise because the agent’s model is lacking (whether about human preferences or environment dynamics or important features to pay attention to), they identify two other main challenges with negative side effects. First, fixing negative side effects would likely require collecting feedback from humans, which can be expensive and challenging. Second, there will usually be a tradeoff between pursuing the original goal and avoiding negative side effects; we don’t have principled methods for dealing with this tradeoff.
Finally, they provide a long list of potential directions for future side effect research.
MISCELLANEOUS (ALIGNMENT)
-------------------------
**[Foundational Philosophical Questions in AI Alignment](https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/?utm_source=feedly&utm_medium=rss&utm_campaign=iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment)** *(Lucas Perry and Iason Gabriel)* (summarized by Rohin): This podcast starts with the topic of the paper **[Artificial Intelligence, Values and Alignment](https://arxiv.org/abs/2001.09768)** (**[AN #85](https://mailchi.mp/84b4235cfa34/an-85-the-normative-questions-we-should-be-asking-for-ai-alignment-and-a-surprisingly-good-chatbot)**) and then talks about a variety of different philosophical questions surrounding AI alignment.
**[Exploring AI Safety in Degrees: Generality, Capability and Control](https://www.cser.ac.uk/resources/exploring-ai-safety-degrees-generality-capability-and-control/)** *(John Burden et al)* (summarized by Rohin) (H/T Haydn Belfield): This paper argues that we should decompose the notion of “intelligence” in order to talk more precisely about AI risk, and in particular suggests focusing on *generality*, *capability*, and *control*. We can think of capability as the expected performance of the system across a wide variety of tasks. For a fixed level of capability, generality can be thought of as how well the capability is distributed across different tasks. Finally, control refers to the degree to which the system is reliable and deliberate in its actions. The paper qualitatively discusses how these characteristics could interact with risk, and shows an example quantitative definition for a simple toy environment.
OTHER PROGRESS IN AI
====================
REINFORCEMENT LEARNING
----------------------
**[The Animal-AI Testbed and Competition](http://proceedings.mlr.press/v123/crosby20a/crosby20a.pdf)** *(Matthew Crosby et al)* (summarized by Rohin) (H/T Haydn Belfield): The Animal-AI testbed tests agents on the ability to solve the sorts of tasks that are used to test animal cognition: for example, is the agent able to reach around a transparent obstacle in order to obtain the food inside. This has a few benefits over standard RL environments:
1. The Animal-AI testbed is designed to test for specific abilities, unlike environments based on existing games like Atari.
2. A single agent is evaluated on multiple hidden tasks, preventing overfitting. In contrast, in typical RL environments the test setting is identical to the train setting, and so overfitting would count as a valid solution.
The authors ran a competition at NeurIPS 2019 in which submissions were tested on a wide variety of hidden tasks. The winning submission used an iterative method to design the agent: after using PPO to train an agent with the current reward and environment suite, the designer would analyze the behavior of the resulting agent, and tweak the reward and environments and then continue training, in order to increase robustness. However, it still falls far short of the perfect 100% that the author can achieve on the tests (though the author is not seeing the tests for the first time, as the agents are).
**Read more:** **[Building Thinking Machines by Solving Animal Cognition Tasks](https://link.springer.com/epdf/10.1007/s11023-020-09535-6?sharing_token=i8b_fK9gxcyMy7NNgyCRMve4RwlQNchNByi7wbcMAY42BXcZEQjOOpc2RsMol991jcge_tF0YABvUxEUc3Q0TbJWKecuuFNI6HznqjkkpkrZX7M6A47XgXxBWQLXteo_jas2coehuGsRLDDrYEQ-dUgHVQsbwCN6FXNe7IeOpJM%3D)**
**Rohin's opinion:** I’m not sure that the path to general intelligence needs to go through replicating embodied animal intelligence. Nonetheless, I really like this benchmark, because its evaluation setup involves new, unseen tasks in order to prevent overfitting, and because of its focus on learning multiple different skills. These features seem important for RL benchmarks regardless of whether we are replicating animal intelligence or not.
**[Generalized Hindsight for Reinforcement Learning](http://arxiv.org/abs/2002.11708)** *(Alexander C. Li et al)* (summarized by Rohin): **[Hindsight Experience Replay](https://arxiv.org/abs/1707.01495)** (HER) introduced the idea of *relabeling* trajectories in order to provide more learning signal for the algorithm. Intuitively, if you stumble upon the kitchen while searching for the bedroom, you can’t learn much about the task of going to the bedroom, but you can learn a lot about the task of going to the kitchen. So even if the original task was to go to the bedroom, we can simply pretend that the trajectory got rewards as if the task was to go to the kitchen, and then update our kitchen-traversal policy using an off-policy algorithm.
HER was limited to goal-reaching tasks, in which a trajectory would be relabeled as attempting to reach the state at the end of the trajectory. What if we want to handle other kinds of goals? The key insight of this paper is that trajectory relabeling is effectively an inverse RL problem: we want to find the task or goal for which the given trajectory is (near-)optimal. This allows us to generalize hindsight to arbitrary spaces of reward functions.
This leads to a simple algorithm: given a set of N possible tasks, when we get a new trajectory, rank how well that trajectory does relative to past experience for each of the N possible tasks, and then relabel that trajectory with the task for which it is closest to optimal (relative to past experience). Experiments show that this is quite effective and can lead to significant gains in sample efficiency. They also experiment with other heuristics for relabeling trajectories, which are less accurate but more computationally efficient.
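In code, the relabeling step might look roughly like the following sketch (the variable names are hypothetical, and the score used here is simply the trajectory's return ranked against past experience, per the description above):

```matlab
% past_returns(m, j): return of stored trajectory m evaluated under candidate task j
% new_returns(1, j):  return of the new trajectory evaluated under candidate task j
frac_beaten = mean(past_returns <= new_returns, 1);  % per-task rank vs. past experience
[~, relabeled_task] = max(frac_beaten);               % task this trajectory is closest to optimal for
% The trajectory would then be stored with `relabeled_task` and that task's
% rewards, and used by an off-policy RL update.
```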
**Rohin's opinion:** Getting a good learning signal can be a key challenge with RL. I’m somewhat surprised it took this long for HER to be generalized to arbitrary reward spaces -- it seems like a clear win that shouldn’t have taken too long to discover (though I didn’t think of it when I first read HER).
**[Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement](http://arxiv.org/abs/2002.11089)** *(Benjamin Eysenbach, Xinyang Geng et al)* (summarized by Rohin): This paper was published at about the same time as the previous one, and has the same key insight. There are three main differences with the previous paper:
1. It shows theoretically that MaxEnt IRL is the “optimal” (sort of) way to relabel data if you want to optimize the multitask MaxEnt RL objective.
2. In addition to using the relabeled data with an off-policy RL algorithm, it also uses the relabeled data with behavior cloning.
3. It focuses on fewer environments and only uses a single relabeling strategy (MaxEnt IRL relabeling).
NEWS
====
**[FHI is hiring Researchers, Research Fellows, and Senior Research Fellows](https://www.fhi.ox.ac.uk/researcher-hiring-2020/)** *(Anne Le Roux)* (summarized by Rohin): FHI is hiring for researchers across a wide variety of topics, including technical AI safety research and AI governance. The application deadline is October 19.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
d3ebcc01-9c70-4deb-ba97-f73c93b9bace | trentmkelly/LessWrong-43k | LessWrong | How does acausal trade work in a deterministic multiverse?
Acausal trade is speculated to be possible across a multiverse, but why would any rational individuals want to do this if the multiverse is deterministic? The reality measure of each branch of the multiverse is pre-determined, and the branches are causally independent from each other, so no matter what you "do" in your own branch, you cannot affect the reality measure of other branches. This means that even if your utility function cares about what happens in other branches, nothing you do can affect their fixed reality measure, even "acausally". This is just a consequence of making counterfactual scenarios "real".
For example, if two agents come to an acausal cooperation equilibrium, this does not reduce the pre-determined reality measure of counterfactual worlds where they didn't. For example, if your utility function is proportional to the number of paperclips that exist over the multiverse, then your ultimate utility (total number of paperclips) would be the same no matter what you "do". The only thing that can vary is how many paperclips you can experience, within your own branch of the universe. Therefore, it would only be meaningful for a utility function to be focused on your own branch of the multiverse, since that's the only way for you to talk meaningfully about an expected utility that varies with different actions. As a result, the MWI should make absolutely no difference whatsoever for a decision theory compared to a "single-universe" interpretation such as Copenhagen.
Please let me know if my reasoning is correct, and if not, why? |
5f1de7a0-6da5-42b2-a56f-363e7ab1aec0 | trentmkelly/LessWrong-43k | LessWrong | Don't Believe You'll Self-Deceive
I don't mean to seem like I'm picking on Kurige, but I think you have to expect a certain amount of questioning if you show up on Less Wrong and say:
> One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.
"If you know it's double-think...
...how can you still believe it?" I helplessly want to say.
Or:
> I chose to believe in the existence of God—deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.
If you know your belief isn't correlated to reality, how can you still believe it?
Shouldn't the gut-level realization, "Oh, wait, the sky really isn't green" follow from the realization "My map that says 'the sky is green' has no reason to be correlated with the territory"?
Well... apparently not.
One part of this puzzle may be my explanation of Moore's Paradox ("It's raining, but I don't believe it is")—that people introspectively mistake positive affect attached to a quoted belief, for actual credulity.
But another part of it may just be that—contrary to the indignation I initially wanted to put forward—it's actually quite easy not to make the jump from "The map that reflects the territory would say 'X'" to actually believing "X". It takes some work to explain the ideas of minds as map-territory correspondence builders, and even then, it may take more work to get the implications on a gut level.
I realize now that when I wrote "You cannot make yourself believe the sky is green by an act of will", I wasn't just a dispassionate reporter of the existing facts. I was also trying to instill a self-fulfilling prophecy.
It may be wise to go around deliberately repeating "I can't get away with double-thinking! Deep down, I'll know it's not true! If I know my map has no reason to be corre |
e2a135a1-5737-4786-b049-1698ce0f8c72 | StampyAI/alignment-research-dataset/blogs | Blogs | Least Squares and Fourier Analysis
I ended my last post on a somewhat dire note, claiming that least squares can do pretty terribly when fitting data. It turns out that things aren’t quite as bad as I thought, but most likely worse than you would expect.
The theme of this post is going to be things you use all the time (or at least, would use all the time if you were an electrical engineer), but probably haven’t ever thought deeply about. I’m going to include a combination of mathematical proofs and matlab demonstrations, so there should hopefully be something here for everyone.
My first topic is going to be, as promised, least squares curve fitting. I’ll start by talking about situations when it can fail, and also about situations when it is “optimal” in some well-defined sense. To do that, I’ll have to use some Fourier analysis, which will present a good opportunity to go over when frequency-domain methods can be very useful, when they can fail, and what you can try to do when they fail.
**When Least Squares Fails**
To start, I’m going to do a simple matlab experiment. I encourage you to follow along if you have matlab (if you have MIT certificates you can get it for free at [http://matlab.mit.edu/](http://matlab.mit.edu)).
Let’s pretend we have some simple discrete-time process, y(n+1) = a\*y(n) + b\*u(n), where y is the variable we care about and u is some input signal. We’ll pick a = 0.8, b = 1.0 for our purposes, and u is chosen to be a discrete version of a random walk. The code below generates the y signal, then uses least squares to recover a and b. (I recommend taking advantage of cell mode if you’re typing this in yourself.)
```matlab
%% generate data
a = 0.8; b = 1.0;
N = 1000;
ntape = 1:N; y = zeros(N,1); u = zeros(N-1,1);
for n=1:N-2
    if rand < 0.02
        u(n+1) = 1-u(n);
    else
        u(n+1) = u(n);
    end
end
for n=1:N-1
    y(n+1) = a*y(n)+b*u(n);
end
plot(ntape,y);

%% least squares fit (map y(n) and u(n) to y(n+1))
A = [y(1:end-1) u]; b = y(2:end);
params = A\b;
afit = params(1)
bfit = params(2)
```
The results are hardly surprising (you get afit = 0.8, bfit = 1.0). For the benefit of those without matlab, here is a plot of y against n:
[Plot of y against n](https://jsteinhardt.files.wordpress.com/2010/08/plot1-2.pdf)
Now let’s add some noise to the signal. The code below generates noise whose size is about 6% of the size of the data (in the sense of [L2 norm](http://mathworld.wolfram.com/L2-Norm.html)).
```matlab
%%
yn = y + 0.1*randn(N,1); % gaussian noise with standard deviation 0.1
A = [yn(1:end-1) u]; b = yn(2:end);
params = A\b;
afit = params(1)
bfit = params(2)
```
This time the results are much worse: afit = 0.7748, bfit = 1.1135. You might be tempted to say that this isn’t so much worse than we might expect — the accuracy of our parameters is roughly the accuracy of our data. The problem is that, if you keep running the code above (which will generate new noise each time), you will always end up with afit close to 0.77 and bfit close to 1.15. In other words, the parameters are **systematically biased** by the noise. Also, we should expect our accuracy to increase with more samples, but that isn’t the case here. If we change N to 100,000, we get afit = 0.7716, bfit = 1.1298. More samples *will* decrease the standard deviation of our answer (running the code multiple times will yield increasingly similar results), but not necessarily its correctness.
A more dire way of thinking about this is that increasing the number of samples will increase how “certain” we are of our answer, but it won’t change the fact that our answer is wrong. So we will end up being quite certain of an incorrect answer.
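To see that the problem really is a bias and not just variance, here is a quick check (not in the original code — just a sketch that reuses the y and u generated above): average the fitted parameters over many independent noise realizations. If the error were pure variance, the average would approach (0.8, 1.0); instead it settles on the biased values.

```matlab
%% average the noisy fit over many noise realizations (illustrative sketch)
trials = 200; params_sum = zeros(2,1);
for t = 1:trials
    yn = y + 0.2*randn(N,1);           % fresh measurement noise each trial
    A = [yn(1:end-1) u]; rhs = yn(2:end);
    params_sum = params_sum + A\rhs;
end
params_avg = params_sum/trials         % stays near the biased values, not [0.8; 1.0]
```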
Why does this happen? It turns out that when we use least squares, we are making certain assumptions about the *structure* of our noise, and those assumptions don’t hold in the example above. In particular, in a model like the one above, least squares assumes that all noise is **process noise**, meaning that noise at one step gets propagated to future steps. Such noise might come from a system with unmodelled friction or some external physical disturbance. In contrast, the noise we have is **output noise**, meaning that the reading of our signal is slightly off. What the above example shows is that a model constructed via least squares will be systematically biased by output noise.
That’s the intuition; now let’s get into the math. When we do least squares, we are trying to solve some equation $Ax = b$ for $x$, where $A$ and $b$ are both noisy. So we really have something like $A + A_n$ and $b + b_n$, where $A_n$ and $b_n$ are the noise on $A$ and $b$.
Before we continue, I think it’s best to stop and think about [what we really want](http://lesswrong.com/lw/nm/disguised_queries). So what is it that we actually want? We observe a bunch of data as input, and some more data as output. We would like a way of predicting, given the input, what the output should be. In this sense, then, the distinction between “input noise” ($A_n$) and “output noise” ($b_n$) is meaningless, as we don’t get to see either and all they do is cause $b$ to not be exactly $Ax$. (If we start with assumptions on what noise “looks like”, then distinguishing between different sources of noise turns out to be actually useful. More on that later.)
If the above paragraph isn’t satisfying, then we can use the more algebraic explanation that the noise $A_n$ and $b_n$ induces a single random variable on the relationship between observed input and observed output. In fact, if we let $A' = A + A_n$, then we end up fitting $A'x = b + b_n$, and since $b = Ax = A'x - A_nx$ for the true parameters $x$, we can just define $e = b_n - A_nx$ and have a single noise term.
Now, back to least squares. Least squares tries to minimize $\|Ax - b\|_2^2$, that is, the squared error in the [$L_2$ norm](http://mathworld.wolfram.com/L2-Norm.html). If we instead have a noisy $b + e$, then we are trying to minimize $\|Ax - (b + e)\|_2^2$, which will happen when $x$ satisfies $A^TAx = A^T(b + e)$.
If there actually exists an $x_0$ such that $Ax_0 = b$ (which is what we are positing, subject to some error term), then minimizing $\|Ax - (b + e)\|_2^2$ is achieved by setting $x$ to $x_0 + (A^TA)^{-1}A^Te$. Note that $x_0$ is what we would like to recover. So $x_0$ would be the solution to the noiseless problem $Ax = b$, and thus we see that an error $e$ introduces a linear error in our estimate of $x_0$. (To be precise, the error affects our answer for $x_0$ via the operator $(A^TA)^{-1}A^T$.)
Now all this can be seen relatively easily by just using the standard formula for the solution to least squares: $x = (A^TA)^{-1}A^Tb$. But I find that it is easy to get confused about what exactly the “true” answer is when you are fitting data, so I wanted to go through each step carefully.
At any rate, we have a formula for how the error $e$ affects our estimate of $x_0$; now I think there are two major questions to answer:
(1) In what way can $e$ *systematically bias* our estimate for $x_0$?
(2) What can we say about the [*variance*](http://en.wikipedia.org/wiki/Variance#Discrete_case) on our estimate for $x_0$?
To calculate the bias on our estimate of $x_0$, we need to calculate $\mathbb{E}\left[(A^TA)^{-1}A^Te\right]$, where $\mathbb{E}$ stands for [expected value](http://en.wikipedia.org/wiki/Expected_value). Since $(A^TA)^{-1}$ is invertible, this is zero exactly when $\mathbb{E}[A^Te]$ is zero. In particular, we will get an *unbiased* estimate exactly when $A$ and $e$ are uncorrelated. Most importantly, when we have noise on our inputs then $A$ and $e$ will (probably) be correlated, and we won’t get an unbiased result.
How bad is the bias? Well, if $A$ actually has a noise component (i.e. $A = A_0 + A_n$ for some noiseless $A_0$), and $e$ is $b_n - A_nx_0$, and we assume that our noise is uncorrelated with the constant matrix $A_0$, then we get a correlation matrix equal to $\mathbb{E}[A^Te] = \mathbb{E}\left[A_n^T(b_n - A_nx_0)\right]$, which, assuming that $A_n$ and $b_n$ are uncorrelated, gives us $-\mathbb{E}\left[A_n^TA_n\right]x_0$. The overall bias then comes out to $-(A^TA)^{-1}\mathbb{E}\left[A_n^TA_n\right]x_0$.
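As a rough sanity check on this formula, we can plug in numbers from the dynamical-system example earlier (a sketch, not part of the original experiment): the only noisy column of the regressor matrix there is the measured $y(n)$, so $\mathbb{E}[A_n^TA_n]$ is approximately $(N-1)\sigma^2$ in the $(1,1)$ entry and zero elsewhere.

```matlab
%% compare the observed bias with the predicted bias (illustrative sketch)
sigma = 0.2;                           % std of the measurement noise added earlier
x0 = [0.8; 1.0];                       % true parameters
A  = [yn(1:end-1) u];                  % noisy regressor matrix from the fit above
C  = (N-1)*diag([sigma^2, 0]);         % E[An'*An]: only the y-column is noisy
params_pred = x0 - (A'*A)\(C*x0)       % roughly where [afit; bfit] should land
```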
I unfortunately don’t have as nice of an expression for the variance, although you can of course calculate it in terms of $A$, $e$, and $x_0$.
At any rate, if noise doesn’t show up in the input, and the noise that does show up is uncorrelated with the input, then we should end up with no bias. But if either of those conditions fails to hold, we will end up with bias. When modelling a dynamical system, noise on the inputs of the fit corresponds to *measurement noise* (your sensors are imperfect), while noise only on the outputs of the fit corresponds to *process noise* (the system doesn’t behave exactly as expected).
One way we can see how noise being correlated with  can lead to bias is if our “noise” is actually an unmodelled quadratic term. Imagine trying to fit a line to a parabola. You won’t actually fit the tangent line to the parabola, instead you’ll probably end up fitting something that looks like a secant. However, the exact slope of the line you pick will depend pretty strongly on the distribution of points you sample along the parabola. Depending on what you want the linear model for, this could either be fine (as long as you sample a distribution of points that matches the distribution of situations that you think you’ll end up using the model for), or very annoying (if you really wanted the tangent).
If you’re actually just dealing with a parabola, then you can still get the tangent by sampling symmetrically about the point you care about, but once you get to a cubic this is no longer the case.
As a final note, one reasonable way (although I’m not convinced it’s the best, or even a particularly robust way) of determining if a linear fit of your data is likely to return something meaningful is to look at the [condition number](http://en.wikipedia.org/wiki/Condition_number) of your matrix, which can be computed in matlab using the cond function and can also be realized as the square root of the ratio of the largest to the smallest eigenvalue of $A^TA$. Note that the condition number says nothing about whether your data has a reasonable linear fit (it can’t, since it doesn’t take $b$ into account). Rather, it is a measure of how well-defined the coefficients of such a fit would be. In particular, it will be large if your data is close to lying on a lower-dimensional subspace (which can end up really screwing up your fit). In this case, you either need to collect better data or figure out why your data lies on a lower-dimensional subspace (it could be that there is some additional structure to your system that you didn’t think about; see point (3) below about a system that is heavily damped).
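For the dynamical-system example above this check is a one-liner (a sketch, not from the original code); note that cond here also reflects the fact that the $y$ and $u$ columns are strongly correlated and differently scaled.

```matlab
%% condition number of the regressor matrix (illustrative sketch)
A = [y(1:end-1) u];
kappa = cond(A)   % equals sqrt(max(eig(A'*A))/min(eig(A'*A)))
```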
I originally wanted to write down a lot more about specific ways that noise can come into the picture, but I haven’t worked it all out myself, and it’s probably too ambitious a project for a single blog post anyways. So instead I’m going to leave you with a bunch of things to think about. I know the answers to some of these, for others I have ideas, and for others I’m still trying to work out a good answer.
1) Can anything be done to deal with measurement noise? In particular, can anything be done to deal with the sort of noise that comes from [encoders](http://en.wikipedia.org/wiki/Encoder#Transducers) (i.e., a discretization of the signal)?
2) Is there a good way of measuring when noise will be problematic to our fit?
3) How can we fit models to systems that evolve on multiple time-scales? For example, an extremely damped system such as


where M >> c. You could take, for example, , , in which case the system behaves almost identically to the system

with  set to the derivative of . Then the data will all lie almost on a line, which can end up screwing up your fit. So in what exact ways can your fit get screwed up, and what can be done to deal with it? (This is essentially the problem that I’m working on right now.)
4) Is there a way to defend against non-linearities in a system messing up our fit? Can we figure out when these non-linearities occur, and to what extent?
5) What problems might arise when we try to fit a system that is unstable or only slightly stable, and what is a good strategy for modelling such a system?
**When Least Squares Works**
Now that I’ve convinced you that least squares can run into problems, let’s talk about when it can do well.
As Paul Christiano pointed out to me, when you have some system where you can actually give it inputs and measure the outputs, least squares is likely to do a fairly good job. This is because you can (in principle) draw the data you use to fit your model from the same distribution as you expect to encounter when the model is used in practice. However, you will still run into the problem that failure to measure the input accurately introduces biases. And no, these biases can’t be eradicated completely by averaging the result across many samples, because the bias is always a negative definite matrix applied to $x_0$ (the parameters we are trying to find), and any convex combination of negative definite matrices will remain negative definite.
Intuitively, what this says is that if you can’t trust your input, then you shouldn’t rely on it strongly as a predictor. Unfortunately, the only way that a linear model knows how to trust something less is by making the coefficient on that quantity “smaller” in some sense (in the negative definite sense here). So really the issue is that least squares is too “dumb” to deal with the issue of measurement error on the input.
But I said that I’d give examples of when least squares works, and here I am telling you more about why it fails. One powerful and unexpected aspect of least squares is that it can fit a wide variety of *non*-linear models. For example, if we have a system $y = c_1 + c_2x + c_3x^2 + c_4\cos(x)$, then we just form a matrix $A = \left[\begin{array}{cccc} 1 & x & x^2 & \cos(x)\end{array}\right]$ and solve $Ac = y$ for the coefficients $c$, where for example $\cos(x)$ is actually a column vector whose $i$th row is the cosine of the $i$th piece of input data. This will often be the case in physical systems, and I think is always the case for systems solved via Newton’s laws (although you might have to consolidate parameters, for example fitting both $g$ and $\ell$ in the case of a pendulum). This isn’t necessarily the case for reduced models of complicated systems, for example the sort of models used for fluid dynamics. However, I think that the fact that linear fitting techniques can be applied to such a rich class of systems is quite amazing.
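As a minimal sketch of this trick (with made-up coefficients, not data from the post), the model below is nonlinear in $x$ but linear in the coefficients, so the backslash operator still does all the work:

```matlab
%% least squares fit of y = c1 + c2*x + c3*x^2 + c4*cos(x) (illustrative sketch)
xs = linspace(0, 10, 200)';
c_true = [1; -0.5; 0.25; 2];
basis = [ones(size(xs)) xs xs.^2 cos(xs)];   % each column is one basis function
ys = basis*c_true + 0.05*randn(size(xs));
c_fit = basis\ys                             % should come out close to c_true
```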
There is also a place where least squares not only works but is in some sense *optimal*: detecting the frequency response of a system. Actually, it is only optimal in certain situations, but even outside of those situations it has many advantages over a standard discrete Fourier transform. To get into the applications of least squares here, I’m going to have to take a detour into Fourier analysis.
**Fourier Analysis**
If you already know Fourier analysis, you can probably skip most of this section (although I recommend reading the last two paragraphs).
Suppose that we have a sequence of $N$ signals at equally spaced points in time. Call this sequence $x(0), x(1), \ldots, x(N-1)$. We can think of this as a function $\{0, 1, \ldots, N-1\} \to \mathbb{R}$, or, more accurately, $\mathbb{Z}/N\mathbb{Z} \to \mathbb{R}$. For reasons that will become apparent later, we will actually think of this as a function $\mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$.
This function is part of the vector space of all functions from $\mathbb{Z}/N\mathbb{Z}$ to $\mathbb{C}$. One can show that the functions on $\mathbb{Z}/N\mathbb{Z}$ defined by
$$f_k(n) = e^{2\pi ikn/N},$$
with $k$ ranging from $0$ to $N-1$, are all orthogonal to each other, and thus form a basis for the space of all functions from $\mathbb{Z}/N\mathbb{Z}$ to $\mathbb{C}$ (now it is important to use $\mathbb{C}$ since the $f_k$ take on complex values). It follows that our function $x$ can be written uniquely in the form $x(n) = \sum_{k=0}^{N-1}c_kf_k(n)$, where the $c_k$ are constants. Now because of this we can associate with each $x$ a function $\hat{x}$ given by $\hat{x}(k) = c_k$.
An intuitive way of thinking about this is that any function can be uniquely decomposed as a superposition of complex exponential functions at different frequencies. The function $\hat{x}$ is a measure of the component of $x$ at each of these frequencies. We refer to $\hat{x}$ as the **Fourier transform** of $x$.
While there’s a lot more that could be said on this, and I’m tempted to re-summarize all of the major results in Fourier analysis, I’m going to refrain from doing so because there are plenty of texts on it and you can probably get the relevant information (such as how to compute the Fourier coefficients, the inverse Fourier transform, etc.) from those. In fact, you could start by checking out [Wikipedia’s article](http://en.wikipedia.org/wiki/Discrete_Fourier_transform). It is also worth noting that the Fourier transform can be computed in $O(N\log N)$ time using any one of many “fast Fourier transform” algorithms (fft in matlab).
I will, however, draw your attention to the fact that if we start with information about $x$ at times $0, 1, \ldots, N-1$, then we end up with frequency information at the frequencies $\frac{2\pi k}{N}$ for $k = 0, \ldots, N-1$. Also, you should really think of the frequencies as wrapping around cyclically (frequencies that differ from each other by a multiple of $2\pi$ are indistinguishable on the interval we sampled over), and also if $x$ is real-valued then $\hat{x}(N-k) = \overline{\hat{x}(k)}$, where the bar means complex conjugate and $N-k$ is, as just noted, the same as $-k$.
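As a quick illustration (a sketch using the $\frac{1}{N}$ normalization above, which is why the fft output gets divided by $N$), we can build a signal out of two of these basis frequencies and check that the transform recovers the coefficients:

```matlab
%% recovering Fourier coefficients with fft (illustrative sketch)
Nf = 256; n = (0:Nf-1)';
xsig = 3*exp(2i*pi*5*n/Nf) + 2*exp(2i*pi*17*n/Nf);
xhat = fft(xsig)/Nf;                   % divide by Nf to match the normalization above
[xhat(5+1) xhat(17+1)]                 % matlab indexing is 1-based: bins k = 5 and k = 17
```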
A final note before continuing is that we could have decomposed $x$ into a set of almost any $N$ frequencies (as long as they were still linearly independent), although we can’t necessarily do so in $O(N\log N)$ time. We will focus on the set of frequencies obtained by a Fourier transform for now.
**When Fourier Analysis Fails**
The goal of taking a Fourier transform is generally to decompose a signal into component frequencies, under the assumption that the signal itself was generated by some “true” superposition of frequencies. This “true” superposition would best be defined as the frequency spectrum we would get if we had an infinitely long continuous tape of noise-free measurements and then took the [continuous Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform).
I’ve already indicated one case in which Fourier analysis can fail, and this is given by the fact that the Fourier transform can’t distinguish between frequencies that are separated from each other by multiples of $2\pi$. In fact, what happens in general is that you run into problems when your signal contains frequencies that move faster than your sampling rate. The rule of thumb is that your signal should contain no significant frequency content above the [Nyquist rate](http://en.wikipedia.org/wiki/Nyquist_frequency), which is half the sampling frequency. One way to think of this is that the “larger” half of our frequencies (i.e. $\pi$ up through $2\pi$) are really just the negatives of the smaller half of our frequencies, and so we can measure frequencies up to roughly $\pi$ before different frequencies start to run into each other.
The general phenomenon that goes on here is known as aliasing, and is the same sort of effect as what happens when you spin a bicycle wheel really fast and it appears to be moving backwards instead of forwards. The issue is that your eye only samples at a given rate and so rotations at speeds faster than that appear the same to you as backwards motion. See also [this image](http://en.wikipedia.org/wiki/File:AliasingSines.svg) from Wikipedia and the section in the [aliasing article](http://en.wikipedia.org/wiki/Aliasing) about sampling sinusoidal functions.
The take-away message here is that you need to sample fast enough to capture all of the actual motion in your data, and the way you solve aliasing issues is by increasing the sample rate.
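A tiny demonstration of this (a sketch, with arbitrarily chosen frequencies): a 9 Hz cosine sampled at only 10 Hz produces exactly the same samples as a 1 Hz cosine.

```matlab
%% aliasing: a 9 Hz cosine sampled at 10 Hz looks like a 1 Hz cosine (illustrative sketch)
fs = 10; t = (0:99)'/fs;               % sample at 10 Hz for 10 seconds
x9 = cos(2*pi*9*t);
x1 = cos(2*pi*1*t);
max(abs(x9 - x1))                      % essentially zero: the sample values coincide
```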
A trickier problem is the “windowing” problem, also known as [spectral leakage](http://en.wikipedia.org/wiki/Spectral_leakage). [Note: I really recommend reading the linked wikipedia article at some point, as it is a very well-written and insightful treatment of this issue.] The problem can be summarized intuitively as follows: nearby frequencies will “bleed into” each other, and the easiest way to reduce this phenomenon is to increase your sample time. Another intuitive statement to this effect is that the extent to which you can distinguish between two nearby frequencies is roughly proportional to the number of full periods that you observe of their difference frequency. I will make both of these statements precise below. First, though, let me convince you that spectral leakage is relevant by showing you what the Fourier transform of a periodic signal looks like when the period doesn’t fit into the sampling window. The first image below is a plot of y=cos(t), and the second is a snapshot of part of the Fourier transform (blue is real part, green is imaginary part). Note that the plot linearly interpolates between sample points. Also note that the sampling frequency was 100Hz, although that is almost completely irrelevant.
[Plot of y = cos(t) from t = 0 to t = 20](https://jsteinhardt.files.wordpress.com/2010/08/plot2.pdf)
[Fourier transform of the sampled signal (blue: real part, green: imaginary part)](https://jsteinhardt.files.wordpress.com/2010/08/plot2fft.pdf)
The actual frequency content should be a single spike at $\omega = 1$, so windowing can in fact cause non-trivial issues with your data.
Now let’s get down to the actual analytical reason for the windowing / spectral leakage issue. Recall the formula for the Fourier transform: $\hat{x}(k) = \frac{1}{N}\sum_{n=0}^{N-1}x(n)e^{-2\pi ikn/N}$. Now suppose that $x$ comes from sampling a complex exponential with some frequency $\phi$ at intervals of $\Delta t$, i.e. $x(n) = e^{i\phi n\Delta t}$. Then some algebra (summing the geometric series) will yield the formula
$$\hat{x}(k) = \frac{1}{N}\cdot\frac{1 - e^{i(\phi - 2\pi k/T)T}}{1 - e^{i(\phi - 2\pi k/T)\Delta t}}, \qquad T = N\Delta t,$$
which tells us the extent to which a signal at a frequency of $\phi$ will incorrectly contribute to the estimate of the frequency content at $\frac{2\pi k}{T}$. The main thing to note here is that larger values of the total sampling time $T$ will cause this function to become more concentrated horizontally, which means that, in general (although not necessarily at a given point), it will become smaller. At the same time, if you change the sampling rate without changing the total sampling time then you won’t significantly affect the function. This means that the easiest way to decrease windowing is to increase the amount of time that you sample your signal, but that sampling more often will not help you at all.
Another point is that spectral leakage is generically roughly proportional to the inverse of the distance between the two frequencies (although it goes to zero when the difference in frequencies is close to a multiple of $\frac{2\pi}{T}$), which quantifies the earlier statement about the extent to which two frequencies can be separated from each other.
Some other issues to keep in mind: the Fourier transform won’t do a good job with quasi-periodic data (data that is roughly periodic with a slowly-moving phase shift), and there is also no guarantee that your data will have good structure in the frequency domain. It just happens that this is in theory the case for analytic systems with a periodic excitation (see note (1) in the last section of this post — “Answers to Selected Exercises” — for a more detailed explanation).
**When Fourier Analysis Succeeds**
Despite issues with aliasing and spectral leakage, there are some strong points to the Fourier transform. The first is that, since the Fourier transform is an orthogonal map, it does not amplify noise. More precisely, $\|\hat{x} - \hat{y}\|_2 = \frac{1}{\sqrt{N}}\|x - y\|_2$, so two signals that are close together have Fourier transforms that are also close together. This may be somewhat surprising since normally when one fits $N$ parameters to a signal of length $N$, there are significant issues with overfitting that can cause noise to be amplified substantially.
However, while the Fourier transform does not *amplify* noise, it can *concentrate* noise. In particular, if the noise has some sort of quasi-periodic structure then it will be concentrated over a fairly small range of frequencies.
Note, though, that the L2 norm of the noise in the frequency domain will be roughly constant relative to the number of samples. This is because, if $x$ is the “true” signal and $y$ is the measured signal, then $\|y - x\|_2 = O(\sqrt{N})$ (the noise per sample is roughly constant), so that $\|\hat{y} - \hat{x}\|_2 = O(1)$. Now also note that the number of frequency measurements we get out of the Fourier transform within a fixed band is proportional to the sampling time, that is, it is $\Theta(N)$ at a fixed sampling rate. If we put these assumptions together, and also assume that the noise is quasi-periodic such that it will be concentrated over a fixed set of frequencies, then we get $O(1)$ noise distributed in the L2 sense over $\Theta(N)$ frequencies, which implies that the level of noise at a given frequency should be $O\left(\frac{1}{\sqrt{N}}\right)$. In other words, sampling for a longer time will increase our resolution on frequency measurements, which means that the noise at a *given* frequency will decrease as the square-root of the sampling time, which is nice.
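A quick check of this scaling (a sketch, not from the post): the per-frequency level of pure white noise drops like the square root of the number of samples.

```matlab
%% per-frequency noise level shrinks like 1/sqrt(N) (illustrative sketch)
for Nn = [1000 10000 100000]
    noise_hat = fft(randn(Nn,1))/Nn;
    disp([Nn, sqrt(mean(abs(noise_hat).^2))])   % roughly 1/sqrt(Nn)
end
```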
My second point is merely that there is no spectral leakage between frequencies that differ by multiples of $\frac{2\pi}{T}$, so in the special case when all significant frequency content of the signal occurs at frequencies that are multiples of $\frac{2\pi}{T}$ and that are less than the Nyquist frequency, all problems with windowing and aliasing go away and we do actually get a perfect measure of the frequency content of the original signal.
**Least Squares as a Substitute**
The Fourier transform gives us information about the frequency content at the frequencies $\frac{2\pi k}{T}$, $k = 0, \ldots, N-1$. However, this set of frequencies is somewhat arbitrary and might not match up well to the “important” frequencies in the data. If we have extra information about the specific set of frequencies we should be caring about, then a good substitute for Fourier analysis is to do least squares fitting to the signal as a superposition of the frequencies you care about.
In the special case that the frequencies you care about are a subset of the frequencies provided by the Fourier transform, you will get identical results (this has to do with the fact that complex exponentials at these frequencies are all orthogonal to each other).
In the special case that you *exactly* identify which frequencies occur in the signal, you eliminate the spectral leakage problem entirely (it still occurs in theory, but not between any of the frequencies that actually occur). A good way to do this in the case of a dynamical system is to excite the system at a fixed frequency so that you know to look for that frequency plus small harmonics of that frequency in the output.
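Concretely, this looks something like the following sketch (made-up frequencies and amplitudes): if we know the signal lives at $\omega_1$ and $\omega_2$, we regress onto sines and cosines at exactly those frequencies, regardless of how the sampling window lines up with their periods.

```matlab
%% least squares at known frequencies instead of an fft (illustrative sketch)
fs = 100; t = (0:1/fs:20)'; w1 = 1.0; w2 = 2.3;
sig = 0.7*cos(w1*t) + 0.2*sin(w2*t) + 0.05*randn(size(t));
D = [cos(w1*t) sin(w1*t) cos(w2*t) sin(w2*t)];   % design matrix of chosen frequencies
coeffs = D\sig                                   % should be close to [0.7; 0; 0; 0.2]
```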
In typical cases least squares will be fairly resistant to noise unless that noise has non-trivial spectral content at frequencies near those being fit. This is almost tautologically true, as it just says that spectral leakage is small between frequencies that aren’t close together. However, this isn’t exactly true, as fitting non-orthogonal frequencies changes the sort of spectral leakage that you get, and picking a “bad” set of frequencies (usually meaning large condition number) can cause lots of spectral leakage even between far apart frequencies, or else drastically exacerbate the effects of noise.
This leads to one reason *not* to use least squares and to use the Fourier transform instead (other than the fact that the Fourier transform is more efficient in an algorithmic sense at getting data about large sets of frequencies — $O(N\log N)$ instead of roughly $O(N^3)$ for a naive least-squares solve over all $N$ frequencies). The Fourier transform always has a condition number of $1$, whereas least squares will in general have a condition number greater than $1$, and poor choices of frequencies can lead to very large condition numbers. I typically run into this problem when I attempt to gain lots of resolution on a fixed range of frequencies.
This makes sense, because there are information-theoretic limits on the amount of frequency data I can get out of a given amount of time-domain data, and if I could zoom in on a given frequency individually, then I could just do that for all frequencies one-by-one and break the information theory bounds. To beat these bounds you will have to at least implicitly make additional assumptions about the structure of the data. However, I think you can probably get pretty good results without making too strong of assumptions, but I unfortunately don’t personally know how to do that yet.
So to summarize, the Fourier transform is nice because it is orthogonal and can be computed quickly. Least squares is nice because it allows you to pick which frequencies you want and so gives you a way to encode additional information you might have about the structure of the signal.
Some interesting questions to ask:
(1) What does spectral leakage look like for non-orthogonal sets of frequencies? What do the “bad” cases look like?
(2) What is a good set of assumptions to make that helps us get better frequency information? (The weaker the assumption and the more leverage you get out of it, the better it is.)
(3) Perhaps we could try something like: “pick the smallest set of frequencies that gives us a good fit to the data”. How could we actually implement this in practice, and would it have any shortcomings? How good would it be at pulling weak signals out of noisy data?
(4) What in general is a good strategy for pulling a weak signal out of noisy data?
(5) What is a good way of dealing with quasi-periodic noise?
(6) Is there a way to deal with windowing issues, perhaps by making statistical assumptions about the data that allows us to “sample” from possible hypothetical continuations of the signal to later points in time?
**Take-away lessons**
To summarize, I would say the following:
*Least squares*
* good when you get to sample from a distribution of inputs that matches the actual distribution that you’re going to deal with in practice
* bad due to systematic biases when noise is correlated with the signal (in a dynamical system this usually comes from measurement noise on the states — the “output noise” of the example above)
*Fourier transform*
* good for getting a large set of frequency data
* good because of small condition number
* can fail due to aliasing
* also can be bad due to spectral leakage, which can be dealt with by using least squares if you have good information about which frequencies are important
**Answers to selected exercises**
Okay well mainly I just feel like some of the questions that I gave as exercises are important enough that you should know the answer. There isn’t necessarily a single answer, but I’ll at least give you a good way of doing something if I know of one. I’ve added a fold so you can avoid spoilers. It turns out that for this post I only have one good answer, which is about dealing with non-linear dynamical systems.
We can figure out if a dynamical system is non-linear (and get some quantitative data about the non-linearities we’re dealing with) by inputting a signal that has only a few frequencies (i.e., the superposition of a small number of sines and cosines) and then looking at the Fourier transform of the response. If the system is completely linear, then the response should contain the same set of frequencies as the input (plus a bit of noise). If the system is non-linear but still analytic then you will also see responses at integer linear combinations of the input frequencies. If the system is non-analytic (for example due to [Coulombic friction](http://en.wikipedia.org/wiki/Friction), the type of friction you usually assume in introductory physics classes) then you might see a weirder frequency response. |
6a5169e2-2f4f-411b-af3f-0169088e6cf6 | trentmkelly/LessWrong-43k | LessWrong | Wolf's Dice
Around the mid-19th century, Swiss astronomer Rudolf Wolf rolled a pair of dice 20000 times, recording each outcome. Here is his data:
I’ve taken the data from Jaynes, who uses it as an example for maximum-entropy methods. We’ll use it for several examples, including some of Jaynes’ models.
The most interesting fact about Wolf’s dice data is that some faces come up significantly more often than others. As a quick back-of-the-envelope check, we can see this by calculating the expected number of times a face should come up on one die ($20000 \cdot \frac{1}{6} \approx 3333.3$), and the standard deviation of this number ($\sqrt{20000 \cdot \frac{1}{6} \cdot (1 - \frac{1}{6})} \approx 52.7$). A quick glance at the data shows that the column- and row-totals differ from their expected value by roughly 2 to 6 standard deviations, an error much larger than we’d expect based on random noise.
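The arithmetic behind that check is just the binomial mean and standard deviation; as a quick sketch (not part of the original post):

```matlab
% expected count and spread for one face over 20000 fair rolls (sketch)
n = 20000; p = 1/6;
expected = n*p           % about 3333.3
sigma = sqrt(n*p*(1-p))  % about 52.7
```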
That, however, is an ad-hoc test. The point of this sequence is to test our hypotheses from first principles, specifically the principles of probability theory. If we want to know which of two or more models is correct, then we calculate the probability of each model given the data.
White Die: Biased or Not?
Let’s start with just Wolf’s white die. Our two models are:
* Model 1: all faces equally probable
* Model 2: each face $i$ has its own probability $p_i$ (we’ll use a uniform prior on $p_i$ for now, to keep things simple)
Our data is the bottom row in the table above, and our goal is to calculate $P[\mathrm{model}_i \mid \mathrm{data}]$ for each model.
The first step is Bayes’ rule:
$$P[\mathrm{model}_i \mid \mathrm{data}] = \frac{1}{Z}\,P[\mathrm{data} \mid \mathrm{model}_i]\,P[\mathrm{model}_i]$$
Here’s what those pieces each mean:
* $P[\mathrm{model}_i]$ is our prior probability for each model - for simplicity, let’s say the two are equally likely a priori.
* $Z$ is the normalizer, chosen so that the posterior probabilities sum to one: $P[\mathrm{model}_1 \mid \mathrm{data}] + P[\mathrm{model}_2 \mid \mathrm{data}] = 1$ (implicitly, we assume one of the two models is “correct”).
* $P[\mathrm{data} \mid \mathrm{model}_i]$ is computed using $\mathrm{model}_i$
… so the actual work is in calculating $P[\mathrm{data} \mid \mathrm{model}_i]$.
Model 1: Unbiased
For the first |
31a43166-59ae-47a1-8a38-8306d9284657 | trentmkelly/LessWrong-43k | LessWrong | LW Melbourne: Report on Public Rationality Lecture
Introduction
In the past couple of months, Melbourne LW has been working to expand our activities and community, as well generally promoting rationality. A huge shout out goes to BraydenM who is responsible for spearheading these efforts. So far we have opened up some of our meetups to new-comers, held a winter solstice dinner party, held a COZE event, created a Meetup.com group for Melbourne LW, presented at other Meetup.com groups, distributed copies of HPMoR, conducted a rationality-vox pop, and presented a public lecture on LW content.
The public lecture was my project, and here is my report.
Initial Planning
In early July, we held a planning session for expanding the community and promoting rationality. Holding public rationality lectures was deemed a possibility worth trying.
Being students in our early twenties, we judged the most receptive market would be our peers. We considered who we should target: (1) High IQ, heavily academic students with technical backgrounds in maths/science/philosophy of the sort who might join LW, or (2) everyone else as well. Having visions of a world where everyone was more rational, I opted for the latter.
I was ambitious at this point and wanted to run a series of six lectures progressing through core sequences content. Wiser heads suggested we run a single, stand-alone lecture to get data. Since I still hoped to run a whole series, the first lecture had to be given at the beginning of new semester (start of August here).
Content
(SLIDES: OPEN IN PP TO VIEW PRESENTER'S NOTES)
To meet the deadline, I developed the lecture material unaided and without much planning or revision. Speed came at the price of quality.
The lecture could have focused on either of a) cognitive biases or b) core sequences content such as simple truth, probabilistic reasoning, beliefs paying rent, and evidence.
Reasons in favour of and against:
Cognitive Biases
+ I have a prior that biases would be more appealing and get better |
50eb4a75-551e-4160-a201-73918447b1f4 | trentmkelly/LessWrong-43k | LessWrong | AI #23: Fundamental Problems with RLHF
After several jam-packed weeks, things slowed down to allow everyone to focus on the potential room temperature superconductor, check Polymarket to see how likely it is we are so back and bet real money, or Manifold for chats and better graphs and easier but much smaller trading.
The main thing I would highlight this week are an excellent paper laying out many of the fundamental difficulties with RLHF, and a systematic new exploit of current LLMs that seems to reliably defeat RLHF.
I’d also note that GPT-4 fine tuning is confirmed to be coming. That should be fun.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Here’s what you’re going to do.
4. Language Models Don’t Offer Mundane Utility. Universal attacks on LLMs.
5. Fun With Image Generation. Videos might be a while.
6. Deepfaketown and Botpocalypse Soon. An example of doing it right.
7. They Took Our Jobs. What, me worry?
8. Get Involved. If you share more opportunities in comments I’ll include next week.
9. Introducing. A bill. Also an AI medical generalist.
10. In Other AI News. Fine tuning is coming to GPT-4. Teach LLMs arithmetic.
11. Quiet Speculations. Various degrees of skepticism.
12. China. Do not get overexcited.
13. The Quest for Sane Regulation. Liability and other proposed interventions.
14. The Week in Audio. I go back to The Cognitive Revolution.
15. Rhetorical Innovation. Bill Burr is concerned and might go off on a rant.
16. No One Would Be So Stupid As To. Robotics and AI souls.
17. Aligning a Smarter Than Human Intelligence is Difficult. RLHF deep dive.
18. Other People Are Not As Worried About AI Killing Everyone. It’ll be fine.
19. The Wit and Wisdom of Sam Altman. Don’t sleep on this.
20. The Lighter Side. Pivot!
LANGUAGE MODELS OFFER MUNDANE UTILITY
Make a ransom call, no jailbreak needed. Follows the traditional phone-calls-you-make-are-your-problem-sir legal principle. This has now been (at least narrow |
9895e596-4115-4dbe-9c7e-58534db96f6b | StampyAI/alignment-research-dataset/blogs | Blogs | Embedded Agents
Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know.[1](https://intelligence.org/2018/10/29/embedded-agents/#footnote_0_17907 "This is part 1 of the Embedded Agency series, by Abram Demski and Scott Garrabrant.")
There’s a complicated engineering problem here. But there’s also a problem of figuring out what it even means to build a learning agent like that. What is it to optimize realistic goals in physical environments? In broad terms, how does it work?
In this series of posts, I’ll point to four ways we *don’t* currently know how it works, and four areas of active research aimed at figuring it out.
This is Alexei, and Alexei is playing a video game.

Like most games, this game has clear input and output channels. Alexei only observes the game through the computer screen, and only manipulates the game through the controller.
The game can be thought of as a function which takes in a sequence of button presses and outputs a sequence of pixels on the screen.
Alexei is also very smart, and capable of holding the entire video game inside his mind. If Alexei has any uncertainty, it is only over empirical facts like what game he is playing, and not over logical facts like which inputs (for a given deterministic game) will yield which outputs. This means that Alexei must also store inside his mind every possible game he could be playing.
Alexei does not, however, have to think about himself. He is only optimizing the game he is playing, and not optimizing the brain he is using to think about the game. He may still choose actions based off of value of information, but this is only to help him rule out possible games he is playing, and not to change the way in which he thinks.
In fact, Alexei can treat himself as an unchanging indivisible atom. Since he doesn’t exist in the environment he’s thinking about, Alexei doesn’t worry about whether he’ll change over time, or about any subroutines he might have to run.
Notice that all the properties I talked about are partially made possible by the fact that Alexei is cleanly separated from the environment that he is optimizing.
This is Emmy. Emmy is playing real life.

Real life is not like a video game. The differences largely come from the fact that Emmy is within the environment that she is trying to optimize.
Alexei sees the universe as a function, and he optimizes by choosing inputs to that function that lead to greater reward than any of the other possible inputs he might choose. Emmy, on the other hand, doesn’t have a function. She just has an environment, and this environment contains her.
Emmy wants to choose the best possible action, but which action Emmy chooses to take is just another fact about the environment. Emmy can reason about the part of the environment that is her decision, but since there’s only one action that Emmy ends up actually taking, it’s not clear what it even means for Emmy to “choose” an action that is better than the rest.
Alexei can poke the universe and see what happens. Emmy is the universe poking itself. In Emmy’s case, how do we formalize the idea of “choosing” at all?
To make matters worse, since Emmy is contained within the environment, Emmy must also be smaller than the environment. This means that Emmy is incapable of storing accurate detailed models of the environment within her mind.
This causes a problem: Bayesian reasoning works by starting with a large collection of possible environments, and as you observe facts that are inconsistent with some of those environments, you rule them out. What does reasoning look like when you’re not even capable of storing a single valid hypothesis for the way the world works? Emmy is going to have to use a different type of reasoning, and make updates that don’t fit into the standard Bayesian framework.
Since Emmy is within the environment that she is manipulating, she is also going to be capable of self-improvement. But how can Emmy be sure that as she learns more and finds more and more ways to improve herself, she only changes herself in ways that are actually helpful? How can she be sure that she won’t modify her original goals in undesirable ways?
Finally, since Emmy is contained within the environment, she can’t treat herself like an atom. She is made out of the same pieces that the rest of the environment is made out of, which is what causes her to be able to think about herself.
In addition to hazards in her external environment, Emmy is going to have to worry about threats coming from within. While optimizing, Emmy might spin up other optimizers as subroutines, either intentionally or unintentionally. These subsystems can cause problems if they get too powerful and are unaligned with Emmy’s goals. Emmy must figure out how to reason without spinning up intelligent subsystems, or otherwise figure out how to keep them weak, contained, or aligned fully with her goals.
Emmy is confusing, so let’s go back to Alexei. Marcus Hutter’s [AIXI](https://arxiv.org/abs/1202.6153) framework gives a good theoretical model for how agents like Alexei work:
$$
a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k} \ldots \max_{a_m}\sum_{o_m r_m} [r_k+\ldots+r_m] \sum_{q\,:\,U(q,\,a_1..a_m)\,=\,o_1 r_1..o_m r_m} 2^{-\ell(q)}
$$
The model has an agent and an environment that interact using actions, observations, and rewards. The agent sends out an action \(a\), and then the environment sends out both an observation \(o\) and a reward \(r\). This process repeats at each time \(k…m\).
Each action is a function of all the previous action-observation-reward triples. And each observation and reward is similarly a function of these triples and the immediately preceding action.
You can imagine an agent in this framework that has full knowledge of the environment that it’s interacting with. However, AIXI is used to model optimization under uncertainty about the environment. AIXI has a distribution over all possible computable environments \(q\), and chooses actions that lead to a high expected reward under this distribution. Since it also cares about future reward, this may lead to exploring for value of information.
Under some assumptions, we can show that AIXI does reasonably well in all computable environments, in spite of its uncertainty. However, while the environments that AIXI is interacting with are computable, AIXI itself is uncomputable. The agent is made out of a different sort of stuff, a more powerful sort of stuff, than the environment.
We will call agents like AIXI and Alexei “dualistic.” They exist outside of their environment, with only set interactions between agent-stuff and environment-stuff. They require the agent to be larger than the environment, and don’t tend to model self-referential reasoning, because the agent is made of different stuff than what the agent reasons about.
AIXI is not alone. These dualistic assumptions show up all over our current best theories of rational agency.
I set up AIXI as a bit of a foil, but AIXI can also be used as inspiration. When I look at AIXI, I feel like I really understand how Alexei works. This is the kind of understanding that I want to also have for Emmy.
Unfortunately, Emmy is confusing. When I talk about wanting to have a theory of “embedded agency,” I mean I want to be able to understand theoretically how agents like Emmy work. That is, agents that are embedded within their environment and thus:
* do not have well-defined i/o channels;
* are smaller than their environment;
* are able to reason about themselves and self-improve;
* and are made of parts similar to the environment.
You shouldn’t think of these four complications as a partition. They are very entangled with each other.
For example, the reason the agent is able to self-improve is because it is made of parts. And any time the environment is sufficiently larger than the agent, it might contain other copies of the agent, and thus destroy any well-defined i/o channels.
[Diagram: the four subproblems of embedded agency — decision theory, embedded world-models, robust delegation, and subsystem alignment](https://intelligence.org/wp-content/uploads/2018/10/Embedded-Subproblems.png)
However, I will use these four complications to inspire a split of the topic of embedded agency into four subproblems. These are: decision theory, embedded world-models, robust delegation, and subsystem alignment.
**Decision theory** is all about embedded optimization.
The simplest model of dualistic optimization is \(\mathrm{argmax}\). \(\mathrm{argmax}\) takes in a function from actions to rewards, and returns the action which leads to the highest reward under this function. Most optimization can be thought of as some variant on this. You have some space; you have a function from this space to some score, like a reward or utility; and you want to choose an input that scores highly under this function.
But we just said that a large part of what it means to be an embedded agent is that you don’t have a functional environment. So now what do we do? Optimization is clearly an important part of agency, but we can’t currently say what it is even in theory without making major type errors.
Some major open problems in decision theory include:
* **logical counterfactuals**: how do you reason about what *would* happen if you take action B, given that you can *prove* that you will instead take action A?
* environments that include multiple **copies of the agent**, or trustworthy predictions of the agent.
* **logical updatelessness**, which is about how to combine the very nice but very *Bayesian* world of Wei Dai’s [updateless decision theory](https://wiki.lesswrong.com/wiki/Updateless_decision_theory), with the much less Bayesian world of logical uncertainty.
**Embedded world-models** is about how you can make good models of the world that are able to fit within an agent that is much smaller than the world.
This has proven to be very difficult—first, because it means that the true universe is not in your hypothesis space, which ruins a lot of theoretical guarantees; and second, because it means we’re going to have to make non-Bayesian updates as we learn, which *also* ruins a bunch of theoretical guarantees.
It is also about how to make world-models from the point of view of an observer on the inside, and resulting problems such as anthropics. Some major open problems in embedded world-models include:
* **[logical uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)**, which is about how to combine the world of logic with the world of probability.
* **multi-level modeling**, which is about how to have multiple models of the same world at different levels of description, and transition nicely between them.
* **[ontological crises](https://intelligence.org/files/OntologicalCrises.pdf)**, which is what to do when you realize that your model, or even your goal, was specified using a different ontology than the real world.
**Robust delegation** is all about a special type of principal-agent problem. You have an initial agent that wants to make a more intelligent successor agent to help it optimize its goals. The initial agent has all of the power, because it gets to decide exactly what successor agent to make. But in another sense, the successor agent has all of the power, because it is much, much more intelligent.
From the point of view of the initial agent, the question is about creating a successor that will robustly not use its intelligence against you. From the point of view of the successor agent, the question is about, “How do you robustly learn or respect the goals of something that is stupid, manipulable, and not even using the right ontology?”
There are extra problems coming from the *Löbian obstacle* making it impossible to consistently trust things that are more powerful than you.
You can think about these problems in the context of an agent that’s just learning over time, or in the context of an agent making a significant self-improvement, or in the context of an agent that’s just trying to make a powerful tool.
The major open problems in robust delegation include:
* **[Vingean reflection](https://intelligence.org/files/VingeanReflection.pdf)**, which is about how to reason about and trust agents that are much smarter than you, in spite of the Löbian obstacle to trust.
* **[value learning](https://intelligence.org/files/ValueLearningProblem.pdf)**, which is how the successor agent can learn the goals of the initial agent in spite of that agent’s stupidity and inconsistencies.
* **[corrigibility](https://intelligence.org/files/Corrigibility.pdf)**, which is about how an initial agent can get a successor agent to allow (or even help with) modifications, in spite of an instrumental incentive not to.
**Subsystem alignment** is about how to be *one unified agent* that doesn’t have subsystems that are fighting against either you or each other.
When an agent has a goal, like “saving the world,” it might end up spending a large amount of its time thinking about a subgoal, like “making money.” If the agent spins up a sub-agent that is only trying to make money, there are now two agents that have different goals, and this leads to a conflict. The sub-agent might suggest plans that look like they *only* make money, but actually destroy the world in order to make even more money.
The problem is: you don’t just have to worry about sub-agents that you intentionally spin up. You also have to worry about spinning up sub-agents by accident. Any time you perform a search or an optimization over a sufficiently rich space that’s able to contain agents, you have to worry about the space itself doing optimization. This optimization may not be exactly in line with the optimization the outer system was trying to do, but it *will* have an instrumental incentive to *look* like it’s aligned.
A lot of optimization in practice uses this kind of passing the buck. You don’t just find a solution; you find a thing that is able to itself search for a solution.
In theory, I don’t understand how to do *optimization* at all—other than methods that look like finding a bunch of stuff that I don’t understand, and seeing if it accomplishes my goal. But this is exactly the kind of thing that’s *most* prone to spinning up adversarial subsystems.
The big open problem in subsystem alignment is about how to have a base-level optimizer that doesn’t spin up adversarial optimizers. You can break this problem up further by considering cases where the resultant optimizers are either **intentional** or **unintentional**, and considering restricted subclasses of optimization, like **induction**.
But remember: decision theory, embedded world-models, robust delegation, and subsystem alignment are not four separate problems. They’re all different subproblems of the same unified concept that is *embedded agency*.
---
Part 2 of this post will be coming in the next couple of days: **[Decision Theory](https://intelligence.org/2018/10/31/embedded-decisions/)**.
---
1. This is part 1 of the [Embedded Agency](https://intelligence.org/embedded-agency) series, by Abram Demski and Scott Garrabrant.
The post [Embedded Agents](https://intelligence.org/2018/10/29/embedded-agents/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
88edac59-5e49-4cd8-8540-84fdfb4ea44a | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Beliefs and Disagreements about Automating Alignment Research
*Epistemic status: Mostly organizing and summarizing the views of others.*
*Thanks to those whose views I summarized in this post, and to Tamera Lanham, Nicholas Kees Dupuis, Daniel Kokotajlo, Peter Barnett, Eli Lifland, and Logan Smith for reviewing a draft.*
Introduction
------------
In my current view of the alignment problem, there are two paths that we could try to take:
1. Come up with an alignment strategy that allows us to both build aligned AGI and to keep that AGI (or its successors) aligned as they improve towards superintelligence
2. Come up with an alignment strategy that allows us to build AI systems that are powerful (but not so powerful as to be themselves dangerous) and use that to execute some kind of ‘[pivotal act](https://arbital.com/p/pivotal/)’ that means that misaligned [ASI](https://arbital.com/p/superintelligent/) is not built
For the purposes of this post, I am going to assume that we are unable to do (1) – maybe the problem is too difficult, or we don’t have time – and focus on (2).
Within the category of ‘pivotal act’, I see two main types:
1. **Preventative pivotal acts:** acts that make it impossible for anyone to build AGI for a long period of time
2. **Constructive pivotal acts:** acts that make it possible to build aligned ASI
People disagree about [whether preventative pivotal acts are possible](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_A_) or even if they were possible, [if they’d be a good idea](https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious). Again, for the purposes of this post, I am going to assume we can’t or don’t want to execute a preventative pivotal act, and focus on constructive pivotal acts. In particular: **can we use AI to automate alignment research safely?**
What does ‘automating alignment research’ even mean?
----------------------------------------------------
I see three overlapping categories that one could mean when referring to ‘automating alignment research’, ordered in terms of decreasing human involvement:
1. **Level 1:** AIs help humans work faster
1. Examples include brainstorming, intelligent autocomplete, and automated summarization/explanation.
2. **Level 2:** AIs produce original contributions
1. This could be key insights into the nature of intelligence, additional problems that were overlooked, or entire alignment proposals.
3. **Level 3:** AIs build aligned successors
1. Here, we have an aligned AGI that we entrust with building a successor. At this point, the current aligned AGI has to do all the alignment research required to ensure that its successor is aligned.
Mostly I have been thinking about Levels 1 and 2, although some people I spoke to (e.g. Richard Ngo) were more focused on Level 3.
Current state of automating alignment
-------------------------------------
At the moment, we are firmly at Level 1. Models can produce similar-sounding ideas when prompted with existing ideas and are pretty good at completing code but are not great at summarizing or explaining complex ideas. Tools like Loom and Codex can provide speed-ups but seem unlikely to be decisive.
Whether we get to Level 2 soon or whether Level 2 is already beyond the point where AI systems are dangerous are key questions that researchers disagree on.
Key disagreements
-----------------
### Generative models vs agents
Much of the danger from powerful AI systems comes from them pursuing coherent goals that persist across inputs. If we can build generative models that do not pursue goals in this way, then perhaps these will provide a way to extract intelligent behavior from advanced systems safely.
### Timing of emergence of deception vs intelligence
Related to the problem of agents, there is also disagreement about whether we get systems that are intelligent enough to be useful for automating alignment *before* they are misaligned enough (e.g. deceptive or power-seeking) to be dangerous. My understanding is that Nate and Eliezer are quite confident that the useful intelligence comes only after they are already misaligned, whereas most other people are more uncertain about this.
### The ‘hardness’ of generating alignment insights
This could be seen as another framing of the above point: how smart does the system have to be to do useful, original thinking for us? Does it have to have a comprehensive understanding of how minds in general work, or can original insights be generated by cleverly combining John Wentworth posts, or John Wentworth posts with Paul Christiano posts?
### The benefits (in terms of time saved) of Level 1 interventions
It is unclear how much time is saved by Level 1 interventions: if all alignment researchers were regularly using Loom to write faster, brainstorming with GPT-3, and coding with Copilot, would this result in an appreciable speed-up of alignment work?
Summaries of viewpoints on automating alignment research
--------------------------------------------------------
Below, I summarize the positions of various alignment researchers I have spoken to about this topic. Where possible, I have had the people in question review my summary to ensure I am not misrepresenting them too badly.
### Nate Soares (unreviewed)
Solving alignment requires understanding how to control minds. If we want AI systems to solve the hard parts of alignment for us, then necessarily they will understand how to control minds in a way that we do not. Understanding how to control minds requires a ‘higher grade of cognition’ than most engineering tasks, and so a system capable enough to solve alignment is also capable of doing many dangerous things (we cannot teach AIs to drive a blue car without also being able to drive red cars). The good outcomes that we want (complete, working alignment solutions) are a sufficiently small target that we do not know how to direct a dangerous AI towards that outcome: doing this safely is precisely the alignment problem, and so we have not made our task meaningfully easier.
You don’t get around this by saying you’re using a specific architecture or technique, like scaling up GPTs. You are trying to channel the future into a specific, small target – a world where we have ended the acute risk period from AGI and have time to contemplate our values or have a long reflection – and this channeling is where the danger lies.
You can maybe use models like GPT-3 or similar to help with brainstorming or summarizing or writing, but this is not where most of the difficulty or speed-up comes from. If your definition of ‘automating alignment’ includes speeding up alignment researchers running experiments then Codex already does this, but this does not mean that we will solve alignment in time.
### John Wentworth (reviewed)
In theory, we know of one safe outer objective for automating alignment research: simulate alignment researchers in some environment. However, there are many issues with this in practice. For example, if you want to train a generative model on a bunch of existing data and use this to generate a Paul Christiano post from 2040, it needs to generalize extremely well to faithfully predict what Paul will write about alignment in 2040. However, we also need to avoid having it predict (perhaps accurately) that the most likely outcome is that there is an unaligned AGI in 2040 faking a Paul post.
In general, when you move away from ‘just simulating’ people to something else that applies more optimization pressure, things fail in subtle ways. If we are pretty close to solving alignment already, then we don’t have to apply too much optimization pressure – going from a 50% chance of solving the problem in time to a 100% chance is just 1 bit of optimization, but going from 1 in a million to 100% is a much harder task and is much more dangerous.
It is very hard to know how close we are to solving the alignment because the alignment community is still quite confused about the problem. The obvious way to reduce the amount of optimization pressure we need to apply is to do more alignment research ourselves such that the gap between the starting point of optimization and the goal is smaller. The optimization we apply by directly doing alignment research is safer insofar as we have introspective access to the processes that produce our insights, and can check if we expect them to reliably lead to good outcomes.
Some AI-assisted tools like autocomplete or improved Google Scholar could be useful, but the bottom line is that we can’t really have the AI do the hard parts without confronting the problems arising from powerful optimization.
One possible way to get around these problems is to leverage the safety of ‘simulate alignment researchers in a stable environment’ by running this safe simulation very fast. If we had arbitrary technical capabilities at our disposal, this might work. However, our current technology, generative models, would not work even if scaled up. This is because they make predictions about a *conditional* world, not a *counterfactual* world. What we really want is to put our alignment researchers in a counterfactual world where ‘unaligned AGI takes over’ is much less likely but people are still working on the alignment problem. This would mean that when we ask for a Paul post from 2040, we get a Paul post that actually solves alignment rather than one that was written by an unaligned AGI.
### Evan Hubinger (reviewed)
Generative models can be very powerful, and constitute a type of intelligence that is not inherently goal-directed. GPT-3 provided evidence that not every intelligent system is (or approximates) a coherent agent. If we can be sure that we have built a powerful generative model (and not a system that appears to behave like a generative model during training) then we should be able to get it to safely and productively produce alignment research.
The hard part is ensuring that it really is a generative model – i.e. that it really is just simulating the processes that generated its training data. Inner alignment is the main problem in this framing: there may be pressures in the training process that mean systems that get sufficiently low loss on the training objective no longer act as pure simulators and instead implement some kind of consequentialism.
### Ethan Perez (reviewed)
We should be trying to automate alignment research with AI systems. It’s not clear that getting useful alignment work out of AI systems requires levels of intelligence that are necessarily misaligned or power-seeking. It’s not clear in which order ‘capable of doing useful stuff’ and ‘deceptively aligned’ arise in these systems – current models can talk competently about deception but are not themselves deceptive. It remains to be seen whether building assistants that can help solve the alignment problem is easier or harder than directly building an alignment strategy that holds all the way to superintelligence.
However, we don’t currently know the best way to use powerful AI systems to help with alignment, so we should be building lots of tools that can have more powerful AI ‘plugged in’ when it is available. We should be a little careful about building a tool that is also useful for capabilities, but capabilities researchers don’t pay as much attention to the alignment community as we sometimes imagine, similar ideas are already out there, and we can capture a lot of value by building it early.
### Richard Ngo (edited and endorsed by Richard)
Having AI systems help with alignment in some capacity is an essential component of the long-term plan. The most likely path to superintelligence involves a lot of AI assistance. So ‘using AIs to align future AIs’ is less of a plan than a description of the default path – the question is which alignment proposals help most in aligning the AIs that will be doing that later work. I feel pretty uncertain about how dangerous the first AIs capable of superhuman alignment research will be, but tend to think that they'll be significantly less power-seeking than humans are.
It’s hard to know in advance specifically what ‘automating alignment’ will look like except taking our best systems and applying them as hard as we can; so the default way to make progress here is just to keep doing regular alignment research to build a foundation we can automate from earlier. For example, if mechanistic interpretability research discovers some facts about how transformers work, we can train on these and use the resulting system to discover new facts.
Conclusion
----------
There is no consensus on how much automating alignment research can speed up progress. In hindsight, it would have been good to get more quantitative estimates of the type of speed-up each person expected to be possible. There seems to be sufficient uncertainty that investigating the possibility further makes a lot of sense, especially given the lack of current clear paths to an alignment solution. In future posts I will aim to go into more detail on some proposed mechanisms by which alignment could be accelerated. |
cd980740-62ed-405c-a1d4-02fa7e1967b4 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | "Cars and Elephants": a handwavy argument/analogy against mechanistic interpretability
**TL;DR:** If we can build competitive AI systems that are interpretable, then I argue via analogy that trying to extract them from messy deep learning systems seems less promising than directly engineering them.
**EtA - here's a follow-up:** [Mechanistic Interpretability as Reverse Engineering (follow-up to "cars and elephants")](https://www.lesswrong.com/posts/kjRGMdRxXb9c5bWq5/mechanistic-interpretability-as-reverse-engineering-follow)
**Preliminaries:**
Distinguish weak and strong tractability of (mechanistic) interpretability as follows:
* **Weak tractability:** AGI-level systems are interpretable in principle, i.e. humans have the capacity to fully understand their workings in practice given the right instructions and tools. This would be false if intelligence involves irreducible complexity, e.g. because it involves concepts that are not "crisp", modular, or decomposable; for instance, there might not be a crisp conceptual core to various aspects of perception or abstract concepts such as "fairness".[[1]](#fnla49zo54wl)
* **Strong tractability:** We can build interpretable AGI-level systems without sacrificing too much competitiveness.
**The claim:**
If strong tractability is true, then mechanistic interpretability is likely not the best way to engineer competitive AGI-level systems.
**The analogy:**
1) Suppose we have a broken down car with some bad parts and we want to have a car that is safe to drive. We could try to fix the car and update the parts.
2) But we also have a perfectly functioning elephant. So instead, we could try and tinker with the elephant to understand how it works and make its behavior more safe and predictable.
I claim (2) is roughly analogous to mechanistic interpretability, and (1) to pursuing something more like what Stuart Russell seems to be aiming for: a neurosymbolic approach to AI safety based on modularity, probabilistic programming, and formal methods.[[2]](#fnxcpirjyws6)
**Fleshing out the argument a bit more:**
To the extent that strong tractability is true, there must be simple principles we can recognize underlying intelligent behavior. If there are no such simple principles, then we shouldn't expect mechanistic interpretability methods to yield safe, competitive systems. We already have many ideas about what some of those principles might be (from GOFAI and other areas). Why would we expect it to be easier to recognize and extract these principles from neural networks than to deliberately incorporate them into the way we engineer systems?
**Epistemic status:** I seem to be the biggest interpretability hater/skeptic I've encountered in the AI x-safety community. This is an argument I came up with a few days ago that seems to capture some of my intuitions, although it is hand-wavy. I haven't thought about it much, and spent ~1hr writing this, but am publishing it anyways because I don't express my opinions publicly as often as I'd like due to limited bandwidth.
**Caveats:** I made no effort to anticipate and respond to counter-arguments (e.g. "those methods aren't actually more interpretable"). There are lots of different ways that interpretability might be useful for AI x-safety. It makes sense as part of a [portfolio approach](https://vkrakovna.wordpress.com/2017/08/16/portfolio-approach-to-ai-safety-research/). It makes sense as an extra "danger detector" that might produce some true positives (even if there are a lot of false negatives) or one of many hacks that might be stacked. I'm not arguing that Stuart Russell's approach is clearly superior to mechanistic interpretability. But it seems like roughly the entire AI existential safety community is very excited about mechanistic interpretability and entirely dismissive of Stuart Russell's approach, and this seems bizarre.
*Unrelated bonus reason to be skeptical of interpretability (hopefully many more to come!): when you deploy a reasonably advanced system in the real world, it will likely recruit resources outside itself in various ways (e.g. the way people write things down on paper as a way of augmenting their memory), meaning that we will need to understand more than just the model itself, making the whole endeavor way less tractable.*
1. **[^](#fnrefla49zo54wl)**For what it's worth, I think weak tractability is probably false and this is maybe a greater source of my skepticism about interpretability than the argument presented in this post.
2. **[^](#fnrefxcpirjyws6)**Perhaps well-summarized here, although I haven't watched the talk yet: |
e78fc613-384e-4545-af72-6518e76854e6 | trentmkelly/LessWrong-43k | LessWrong | Time Travel, AI and Transparent Newcomb
Epistemic status: has "time travel" in the title.
Let's suppose, for the duration of this post, that the local physics of our universe allows for time travel. The obvious question is: how are paradoxes prevented?
We may not have any idea how paradoxes are prevented, but presumably there must be some prevention mechanism. So, in a purely Bayesian sense, we can condition on paradoxes somehow not happening, and then ask what becomes more or less likely. In general, anything which would make a time machine more likely to be built should become less likely, and anything which would prevent a time machine being built should become more likely.
In other words: if we're trying to do something which would make time machines more likely to be built, this argument says that we should expect things to mysteriously go wrong.
For instance, let's say we're trying to build some kind of powerful optimization process which might find time machines instrumentally useful for some reason. To the extent that such a process is likely to build time machines and induce paradoxes, we would expect things to mysteriously go wrong when trying to build the optimizer in the first place.
On the flip side: we could commit to designing our powerful optimization process so that it not only avoids building time machines, but also actively prevents time machines from being built. Then the mysterious force should work in our favor: we would expect things to mysteriously go well. We don't need time-travel-prevention to be the optimization process' sole objective here, it just needs to make time machines sufficiently less likely to get an overall drop in the probability of paradox. |
267c6669-1617-4ede-aa77-fe5a64df20aa | trentmkelly/LessWrong-43k | LessWrong | [Event] Weekly Alignment Research Coffee Time (05/17)
Just like every Monday now, researchers in AI Alignment are invited for a coffee time, to talk about their research and what they're into.
Here is the link.
And here is the everytimezone time.
Note that the link to the walled garden now only works for AF members. Anyone who wants to come but isn't an AF member needs to go through me. I'll broadly apply the following criteria for admission:
* If working in an AI Alignment lab or funded for independent research, automatic admission
* If recommended by AF member, automatic admission
* Otherwise, to my discretion
I prefer not to admit people who might be interesting but who I'm not confident won't derail the conversation, because this is supposed to be the place where AI Alignment researchers can talk about their current research without having to explain everything.
See you then! |
1574ce81-0cfc-4b42-8047-9ed4396e23c8 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Nuclear Espionage and AI Governance
ABSTRACT:
=========
*Using both primary and secondary sources, I discuss the role of espionage in early nuclear history. Nuclear weapons are analogous to AI in many ways, so this period may hold lessons for AI governance. Nuclear spies successfully transferred information about the plutonium implosion bomb design and the enrichment of fissile material. Spies were mostly ideologically motivated. Counterintelligence was hampered by its fragmentation across multiple agencies and its inability to be choosy about talent used on the most important military research program in the largest war in human history. Furthermore, the Manhattan Project’s leadership prioritized avoiding domestic political oversight over preventing espionage. Nuclear espionage most likely sped up Soviet nuclear weapons development, but the Soviet Union would have been capable of developing nuclear weapons within a few years without spying. The slight gain in speed due to spying may nevertheless have been strategically significant.*
*Based on my study of nuclear espionage, I offer some tentative lessons for AI governance:*
* *The importance of spying to transformative AI development is likely to be greater if the scaling hypothesis is false than if it is true.*
* *Regardless of the course that AI technology takes, spies may be able to convey information about engineering or tacit knowledge (although more creativity will be required to transfer tacit than explicit knowledge).*
* *Nationalism as well as ideas particularly prevalent among AI scientists (including belief in the open source ideal) may serve as motives for future AI spies. Spies might also be financially motivated, given that AI development mostly happens in the private sector (at least for now) where penalties for spying are lower and financial motivations are in general more important.*
* *One model of technological races suggests that safety is best served by the leading project having a large lead, and therefore being secure enough in its position to expend resources on safety. Spies are likely, all else equal, to decrease the lead of the leader in a technological race. Spies are also likely to increase enmity between competitors, which seems to increase accident risk robustly to changes in circumstances and modeling assumptions. Therefore, it may make sense for those who are concerned about AI safety to take steps to oppose espionage**—**even if they have no preference for the labs being harmed by espionage over the labs benefiting from espionage.*
* *On the other hand, secrecy (the most obvious way to prevent espionage) may increase risks posed by AI by making AI systems more opaque. And countermeasures to espionage that drive scientists out of conscientious projects may have perverse consequences.*
Acknowledgements: I am grateful to Matthew Gentzel for supervising this project and Michael Aird, Christina Barta, Daniel Filan, Aaron Gertler, Sidney Hough, Nat Kozak, Jeffery Ohl, and Waqar Zaidi for providing comments. This research was supported by a fellowship from the Stanford Existential Risks Initiative.
This post is a short version of the report. The full version with additional sections, an appendix, and a bibliography, is available [here](https://docs.google.com/document/d/1sHmhUvNh6qgCqlyKpXJbUkN4FAafqrKyYJYivr8aotM/edit?usp=sharing).
1. Introduction
===============
The early history of nuclear weapons is in many ways similar to hypothesized future strategic situations involving advanced artificial intelligence ([Zaidi and Dafoe 2021](https://web.archive.org/web/20210325035303/https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf), 4). And, in addition to the objective similarity of the situations, the situations may be made more similar by deliberate imitation of the Manhattan Project experience ([see this report](https://armedservices.house.gov/_cache/files/2/6/26129500-d208-47ba-a9f7-25a8f82828b0/6D5C75605DE8DDF0013712923B4388D7.future-of-defense-task-force-report.pdf#page=13) [to the US House Armed Service Committee](https://web.archive.org/web/20201008094452/https://armedservices.house.gov/_cache/files/2/6/26129500-d208-47ba-a9f7-25a8f82828b0/6D5C75605DE8DDF0013712923B4388D7.future-of-defense-task-force-report.pdf)). So it is worth looking to the history of nuclear espionage for inductive evidence and conceptual problems relevant to AI development.
The Americans produced a detailed official history and explanation of the Manhattan Project, entitled the Smyth Report, and released it on August 11, 1945, five days after they dropped the first nuclear bomb on Japan ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 126). For the Soviets, the Smyth Report “candidly revealed the scale of the effort and the sheer quantity of resources, and also hinted at some of the paths that might work and, by omission, some that probably would not” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 103). While it would not have allowed for copying the Manhattan Project in every detail, the Soviets were able to use the Smyth Report as “a general guide to the methods of isotope separation, as a checklist of problems that needed to be solved to make separation work, and as a primer in nuclear engineering for the thousands upon thousands of engineers and workers who were drafted into the project” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 104).
There were several reasons that the Smyth Report was released. One was a belief that, in a democratic country, the public ought to know about such an important matter as nuclear weapons. Another reason was a feeling that the Soviets would likely be able to get most of the information in the Smyth Report fairly easily regardless of whether it was released. Finally, releasing a single report would clearly demarcate information that was disseminable from information that was controlled, thereby stemming the tide of disclosures coming from investigative journalists and the tens of thousands of former Manhattan Project employees ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 124-125). Those leaks would not be subject to strategic omission, and might, according to General Leslie Groves (Director of the Manhattan Project) “start a scientific battle which would end up in congress” (Quoted in [Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 125). The historian Michael Gordin summarized the general state of debate between proponents and opponents of nuclear secrecy in the U.S. federal government in the late 1940s as follows:
> How was such disagreement possible? How could Groves, universally acknowledged as tremendously security-conscious, have let so much information, and such damaging information, go?... The difference lay in what Groves and his opponents considered to be useful for building an atomic bomb. Groves emphasized the most technical, most advanced secrets, while his opponents stressed the time-saving utility of knowing the general outlines of the American program ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 93).
In Gordin's view, "in the context of the late 1940s, his [Groves's] critics were more right than wrong" ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 93), though it is important to note that the Smyth Report's usefulness was complemented by the extent of KGB spying of which neither Groves nor his critics were yet aware. Stalin decided to imitate the American path to the nuclear bomb as closely as possible because he believed that it would be both the “fastest” and the “most reliable” (Quoted in [Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 152-153). The Smyth Report (and other publicly available materials on nuclear weapons) contained strategic omissions. The Soviets used copious information gathered by spies to fill in some of the gaps.
2. Types of information stolen
==============================
2.1 Highly abstract engineering: bomb designs
---------------------------------------------
Bomb designs were one of the most important categories of information transferred by espionage. To illustrate why design transfer was so important, it is necessary to review some basic principles of nuclear weaponry (most of what follows on nuclear weapons design is adapted from a [2017 talk by Matt Bunn](https://www.youtube.com/watch?v=jqLbcNpeBaw)).
Fission weapons work by concentrating a critical mass of fissile material. A critical mass is enough fissile material to start a nuclear chain reaction. A critical mass by itself, however, is not a viable nuclear weapon because it will heat up dramatically, turn into gas, expand in volume, and cease to constitute a critical mass, thereby stopping the chain reaction before it has had a chance to consume most of the fuel. The simplest possible nuclear bomb, a gun type design, works by launching a shell of highly enriched uranium-235 into another piece of highly enriched uranium-235. Neither piece of uranium-235 is critical by itself, but together they amount to a critical mass. A tamper (a dense casing surrounding the fissile material) prevents the critical mass from expanding out into a diffuse cloud of gas. A massive amount of heat is released, turning the fissile material to gas. The temperature rises to that of the core of the sun. In a gas, a rise in temperature causes a corresponding increase in pressure. This leads to a massive increase in pressure, and an extremely energetic explosion. The bomb dropped on Hiroshima, Little Boy, was a gun type bomb.
[Gun type bomb design](https://commons.wikimedia.org/wiki/File:Gun-type_Nuclear_weapon.png)

The amount of fissile material required to achieve critical mass decreases with density squared. So compressing one’s fissile material means one gets more explosive power for the same amount of fuel. This is the key to the more advanced plutonium implosion bomb design, which was used for the Fat Man bomb dropped on Nagasaki. A plutonium implosion bomb has a core of fissionable plutonium surrounded by a tamper, with an outer layer of chemical explosive. The explosive detonates, pushing the tamper in towards the core, which begins a nuclear chain reaction. This design uses plutonium-239, which is easier to obtain than the uranium-235 used in a gun type bomb.
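As a minimal illustration of that scaling relation (the baseline mass below is a placeholder of my own, not a real figure):

```python
# Illustrative only: critical mass scales roughly as 1/density^2, so
# compressing the core reduces the amount of fissile material required.
BASELINE_CRITICAL_MASS_KG = 10.0  # placeholder value, not a real figure

for compression in [1.0, 1.5, 2.0]:
    required = BASELINE_CRITICAL_MASS_KG / compression**2
    print(f"density x{compression:.1f} -> critical mass ~{required:.1f} kg")
```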
[Plutonium implosion bomb design](https://en.wikipedia.org/wiki/Nuclear_weapon_design#/media/File:Implosion_bomb_animated.gif)

The first Soviet nuclear test was not of the relatively simple gun type. Instead it was a far more complex plutonium implosion assembly. The Soviets received the American plutonium implosion design twice, from two spies, and copied it for their first nuclear bomb ([Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 366; [Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 117, 119).
Having two sources for the design gave the Soviets confidence that the design would work and was not FBI disinformation, no small thing given that the leaders of the Soviet nuclear weapons effort had reason to believe they would be executed if the first test failed ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 171; [Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 218). Furthermore, the Soviets were hard pressed to separate enough uranium-235 from the more plentiful uranium-238 to make a gun type uranium bomb work (gun type plutonium bombs are not viable). This was because the Western Allies had taken pains to corner the world supply of high quality uranium ore. The low quality ore that the Soviets had was adequate to the task of breeding plutonium, but it would have been more expensive and slower for the Soviets to separate enough uranium-235 to build a gun type bomb ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 149-151). Often, controlling material and controlling information are thought of as different strategies for preventing nuclear proliferation. But in the first years after the creation of the atomic bomb, the West’s failure to control information about nuclear weapons design undermined its strategy of controlling fissile material to prevent nuclear proliferation.
2.2 Less abstract engineering
-----------------------------
Most of the effort expended during the Manhattan Project went into the enrichment of fissile material. Ted Hall provided information about methods of uranium isotope (“25” in KGB code) separation, as reported in a decrypted cable sent from New York Station to Moscow Center on May 26, 1945:
[KGB cable about Ted Hall](https://web.archive.org/web/20190923060419/https://www.nsa.gov/Portals/70/documents/news-features/declassified-documents/venona/dated/1945/26may_mlad2.pdf)

Fuchs similarly provided data about electromagnetic techniques of isotope separation for uranium-235 (“ENORMOZ” in KGB code ordinarily referred to the Manhattan Project as a whole, but in this case it meant uranium-235 specifically), which was reported in a decrypted cable from Moscow to New York of April 10, 1945.
[KGB cable about Klaus Fuchs](https://web.archive.org/web/20190923055510/https://www.nsa.gov/Portals/70/documents/news-features/declassified-documents/venona/dated/1945/10apr_atomic_bomb_info.pdf)

In addition to technical reports on enriching fissile material from Fuchs and Hall, the Soviets had plant designs for the Oak Ridge facility from Russell McNutt, data on plutonium from an unidentified spy, and data on the nuclear reactor at Canada’s Chalk River facility from Alan Nunn May ([see the appendix of the full report for a list of Manhattan Project spies](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.gppzuj2ox6yg)). The Soviets were also occasionally able to acquire physical samples from spies. They received 162 micrograms of uranium-235 from Alan Nunn May, and David Greenglass “provided the Soviets with a physical sample of part of the triggering mechanism [of a plutonium bomb]” ([Klehr and Haynes 2019](https://web.archive.org/web/20191124174354/https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-63-no-3/pdfs/Fourth-Soviet-Spy-LosAlamos.pdf), 12).
2.3 Types of information and the AI case
----------------------------------------
To the extent that the information that the most advanced AI projects have that their closest competitors lack is highly abstract and easy to convey, the potential significance of spying is very large. Simple, abstract ideas (analogous to basic principles of bomb design in the nuclear case) are the easiest to transfer. The question of how important theoretical breakthroughs will be to the future development of AI is closely related to the debate over the scaling hypothesis. The scaling hypothesis holds that current techniques are sufficient to eventually produce transformative artificial intelligence (TAI) if the neural networks are just made large enough ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis); for an explanation of the idea of TAI see [Karnofsky 2016](https://web.archive.org/web/20210122012058/https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1)). The reason that TAI does not yet exist, per the scaling hypothesis, is that the hardware and the will to invest in scaling does not yet exist ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis)). To the extent that this is true, it seems that stealing highly abstract ideas about AI algorithms is unlikely to make much of an impact, and that there is unlikely to be an algorithmic analog of the plutonium implosion bomb design. On the other hand, abstract ideas about data types, data processing, or assembling the requisite computing power might be transferred by spies to great effect.
Spies transferred about 10,000 pages of technical material on nuclear weapons from the Manhattan Project to the Soviet Union ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 60). At that level of volume, one can convey information about engineering that is concrete and detailed rather than abstract and simple, such as the design of machinery and facilities used for the separation of uranium isotopes. Even devoted proponents of the scaling hypothesis acknowledge that when replicating an effort based on scaling up existing techniques, one should “never underestimate the amount of tweaking and special sauce it takes” ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis)).
But just how significant is engineering knowledge of an intermediate level of abstraction likely to be as a bottleneck on AI capabilities? Unlike the Manhattan Project, advanced AI does not obviously require a massive industrial effort to purify rare materials. However, if significant AI research begins to be conducted by governments and international trade in computer chips becomes more restricted, the ability to solve engineering problems in the physical world might again come to differentiate the leading nation from its closest competitors. In such a regime, spying on the details of electrical engineering, materials science, and industrial processes might prove important (see [Khan and Mann 2020](https://web.archive.org/web/20210531050703/https://cset.georgetown.edu/publication/ai-chips-what-they-are-and-why-they-matter/)).
The Anglo-American effort to prevent nuclear proliferation by cornering the world supply of uranium (discussed in section 2.1 above) might have been far more effective but for the Soviet’s use of espionage data on the plutonium route to the bomb. Similarly, strategies to restrict AI proliferation that rely on restricting information, and strategies that rely on restricting access to physical materials (in the AI case rare earth metals, chips, and semiconductor manufacturing equipment rather than high quality uranium ore) might be mutually reinforcing.
Tacit knowledge seems to play an important role in AI research. Knowing what sorts of training data to select for a model might involve tacit knowledge. More significantly, knowing which research directions are likely to be promising is a key element of AI research (or any other kind of research), and such knowledge includes an important tacit dimension. In a discussion of what one learns in a computer science PhD program, Andrej Karpathy explained the importance of the tacit knowledge embedded in “taste” to computer science research:
> When it comes to choosing problems you’ll hear academics talk about a mystical sense of “taste”. It’s a real thing. When you pitch a potential problem to your adviser you’ll either see their face contort, their eyes rolling, and their attention drift, or you’ll sense the excitement in their eyes as they contemplate the uncharted territory ripe for exploration. In that split second a lot happens: an evaluation of the problem’s importance, difficulty, its sexiness, its historical context (and possibly also its fit to their active grants). In other words, your adviser is likely to be a master of the outer loop and will have a highly developed sense of taste for problems. During your PhD you’ll get to acquire this sense yourself ([Karpathy 2016](https://web.archive.org/web/20210625204713/http://karpathy.github.io/2016/09/07/phd/)).
Research taste cannot easily be transferred by espionage. It might be possible to formalize certain aspects of research taste, or to accelerate the process of learning about it implicitly by mimicking the experience of training under a senior researcher. How much better is the taste of the best researchers on the most advanced AI project likely to be than the taste of the second-best researchers on the second best AI project? Rohin Shah reports that advanced computer science PhD students at UC Berkeley have much better research taste than beginning PhD students, and that professors have better taste than advanced PhD students ([Shah 2020](https://web.archive.org/web/20210428081910/https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/)). Is there a similar asymmetry in taste between the very best researchers in the world and their close competitors? This seems like a promising question for further study but, provisionally: Michael Polanyi**—**the philosopher whose work brought about the current focus on tacit knowledge in the history of science and technology**—**believed that the greatness of a discovery was proportional to the amount of tacit knowledge required to select the problem that led to it (Polanyi [1966] 2009, 23). The more that taste and other forms of tacit knowledge distinguish the leading AI projects from less advanced ones, the more difficult it will be for spies to significantly help the laggards catch up. Spies could work to transfer personnel from the leader to the laggards as a way of transferring tacit knowledge. But this would duplicate the [issues with trust that limited the usefulness of Soviet spies who were exfiltrated such as George Koval and Oscar Seborer](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.rd2epngoenoy). Alternatively, spies might try some scheme of rendering tacit knowledge explicit.
3. Motivations for espionage
============================
3.1 Klaus Fuchs: ideology and conscience
----------------------------------------
Klaus Fuchs was (along with Ted Hall) one of the two most important spies in the Manhattan Project. He was a theoretical physicist. Fuchs took refuge in England after the Nazis came to power in Germany because his history as a Communist Party activist made him a target of the Gestapo. While in England, Fuchs began to work on nuclear weapons research and informed a German Communist Party leader that he had information that might be of interest to Soviet intelligence. Fuchs was sent to America to work as a nuclear physicist on the Manhattan Project, and continued to spy for the U.S.S.R. ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 92-93).
Fuchs’s sister Kristel Heineman helped him on several occasions to make contact with his KGB courier in America, Harry Gold ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 95). Fuchs’s initial involvement in spying was clearly ideologically motivated. He later accepted money from the KGB. Fuchs claimed to his KGB courier that he did so to prove his loyalty to the Soviet Union, because he had been told that offering payment was a KGB strategy used to “morally bind” other spies to keep helping the KGB ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 128).
[Klaus Fuchs](https://en.wikipedia.org/wiki/Klaus_Fuchs#/media/File:Klaus_Fuchs_-_police_photograph.jpg)

In 1949, British and American intelligence discovered Fuchs by decrypting KGB cables as part of the Venona counterintelligence project and correlating the covernames "Charles" and "Rest" with known facts about Fuchs’s background and whereabouts ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 193-228). By that time, Fuchs was back in England and working for the British nuclear weapons lab at Harwell. MI5 investigator James Skardon approached Fuchs and said that MI5 was certain Fuchs had been spying, but did not disclose how it knew: “Skardon… suggested that FUCHS had been passing information to the Russians.... Skardon then took him very carefully over the ground during the period when he [Fuchs] was in America... and said that if it was not FUCHS it ‘could only be his twin brother’” ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0) 239-240). Skardon repeatedly led Fuchs to believe he could keep his job at Harwell if he confessed ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 239, 259-260). At first Fuchs denied it, but after several interviews, he confessed to spying ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 257-258).
Later, Fuchs gave a written confession. The ideological motivations given in that confession were as follows: Fuchs’s father always emphasized to him the importance of following his conscience. In university, Fuchs started out as a social democrat, but switched to the Communist Party after what he saw as the social democrat’s failure to effectively oppose the rise of Hitler ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 182-183). While working as a Communist Party activist, he began to feel that he should subordinate his personal conscience and ideas about decency to party discipline ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 183). In his confession, he reported a kind of inward compartmentalization, allowing one part of himself to be at ease with his fellow scientists and another part to spy on them.
In Fuchs’s confession, he claimed to have come to reject his former beliefs that (1) standards of personal decency had to be suspended for political reasons, (2) one should subordinate one's thoughts to the Party, and (3) the Marxist theory of freedom through the mastery of the blind forces that control society could be put into practice in an individual's life by skillful manipulation of his own environment, including that part of his environment composed of the people around him ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185-186). Fuchs claimed his newly re-awakened conscience required him to stop working with the KGB early in 1949 and to turn himself in in 1950 in order to spare his friends at Harwell from the suspicion that would be cast on them by ambiguity about who the spy in the British nuclear weapons program was ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185-186). His confession shows that he continued to believe he would be allowed to remain at Harwell ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185).
The primary source evidence is potentially consistent with ideological disillusionment serving as one factor motivating Fuchs’s decision to stop meeting with his KGB courier in early 1949 (although this might also have been due to Fuchs somehow discovering that he was being investigated, see [Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 271-284). Remarkably, Fuchs told a similar story of ideological development (but with a different valence) when he met with KGB agents in a Moscow restaurant after his release from British prison and relocation to East Germany. Fuchs told the agents that he had been unduly influenced by bourgeois ideology, but that he had since corrected himself ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 134-135).
3.2 Ted Hall: ideology and great power balancing
------------------------------------------------
Ted Hall was the youngest physicist working on the Manhattan Project. He graduated from Harvard at 18. Hall was a communist, and had been active as a labor organizer while in college ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 110-112). In 1944, at age 19, he approached a representative of the Soviet Union in New York and offered to serve as a spy. His explanation of his motivations for giving the U.S.S.R. information about American nuclear weapons research is recorded in former KGB agent Alexander Vassiliev’s notes on the KGB’s archives, [which have been translated into English and are hosted on the Wilson Center’s website](https://digitalarchive.wilsoncenter.org/collection/86/vassiliev-notebooks):
> The S.U. [Soviet Union] is the only country that could be trusted with such a terrible thing. But since we cannot take it away from other countries—the U.S.S.R. ought to be aware of its existence and stay abreast of the progress of experiments and construction. This way, at a peace conference, the USSR—on which the fate of my generation depends—will not find itself in the position of a power subjected to blackmail ([Vassiliev, Yellow Notebook #1](https://web.archive.org/web/20170811015011/https://digitalarchive.wilsoncenter.org/document/112856), 21).
Although Hall would later claim that he had originally set out only to inform the Soviet Union of the fact that the United States was developing nuclear weapons ([Hall [1995] 1997](https://www.google.co.uk/books/edition/Bombshell/xqjeAAAAMAAJ?hl=en&gbpv=0&bsq=bombshell%20ted%20hall), 288), that claim would seem to be belied by his statement that the "U.S.S.R. ought to... stay abreast of the progress of experiments and construction." Decrypted Venona cables revealed Hall’s status as a Soviet spy to American intelligence services after the war. However, Hall, unlike Fuchs, did not confess when questioned. Unwilling to reveal its access to secret Soviet communications, and unable to admit secret evidence in court, the U.S. government let Hall go ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 123-124). After his spying was revealed by the declassification of the Venona cables in 1995, Hall admitted to having been a Soviet spy:
> It has even been alleged that I “changed the course of history.” Maybe the “course of history,” if unchanged, would have led to atomic war in the past fifty years—for example the bomb might have been dropped on China in 1949 or the early fifties. Well, if I helped to prevent that, I accept the charge. But such talk is purely hypothetical. Looking at the real world we see that it passed through a very perilous period of imbalance, to reach the existing slightly less perilous phase of “MAD” (mutually assured destruction) ([Hall [1995] 1997](https://www.google.co.uk/books/edition/Bombshell/xqjeAAAAMAAJ?hl=en&gbpv=0&bsq=bombshell%20ted%20hall), 288).
Hall’s two justifications, more than fifty years apart, both focused on the international balance of power.
3.3 Reflections on nuclear spy motivations
------------------------------------------
Ideology was by far the biggest motivation for Manhattan Project spies. Financial motivations were less important than ideological motivations, probably because penalties for spying could include decades in prison or death. When the stakes are very high, spying requires a certain kind of altruism, as narrowly self-interested motivations are unlikely to be able to overcome fear of the penalties if one is caught. It is also striking how many spies (Klaus Fuchs, David Greenglass, Oscar Seborer) were helped by members of their families in their espionage. Family loyalties might have served to prevent spies from desisting from spying (although Greenglass overcame this obstacle when he testified against his sister and brother-in-law, sending them to the electric chair). Another factor, in addition to family loyalties, that served to make it easier to start spying for the Soviet Union than to stop was the KGB practice of paying spies even if they were originally ideologically motivated. Receiving payment from the KGB removed any possible ambiguity about what the spies were doing and increased expected penalties, reducing the odds that spies would confess.
3.4 Possible AI spy motivations
-------------------------------
The Soviet Union was in an unusual position in the 1930s and 1940s. Its governing ideology commanded a significant following among educated people all over the world. This made it much easier to recruit spies. Unlike socialist internationalist loyalty to the Soviet Union, nationalism continues to be widespread and might motivate AI spying. This is true even of spying in the private sector, as spies might believe that by helping firms based in their homelands they are doing their patriotic duty. The most significant nuclear spy outside of the Manhattan Project, A. Q. Khan, was motivated by Pakistani nationalism. While security clearance investigations try to detect foreign loyalties, nothing like the security clearance system exists in the private sector. Furthermore, nation-states might force their otherwise unwilling nationals or firms to help with AI espionage. However, this issue must be treated with extreme care. There is an obvious risk of xenophobic or racist bias. Furthermore, there is a risk that attempting to prevent espionage by restricting the access to sensitive information of those with potential conflicts of national loyalties will, pragmatically in addition to morally, backfire. [During the Cold War, the United States deported a Chinese-born aerospace engineer, Qian Xuesen based on unproven allegations that he was a spy](https://www.bbc.com/news/stories-54695598). Qian went on to build missile systems for the People’s Republic of China. In addition to ideas that are widely popular (such as nationalism), ideas that are common among software engineers and computer scientists but rarer in the general population might prove significant as motivations for AI espionage. Belief in the open source or free software ideal, which opposes secrecy in software development, is one obvious example.
Despite the potential motivating force of American nationalism as an ideology for spies, it seems doubtful that the U.S. government or U.S. firms will be net beneficiaries of AI espionage if competition is most intense between countries (if an AI arms race is undertaken largely between U.S. firms, then some U.S. firms may well be net beneficiaries). Spying can help lagging participants in a race to develop new technologies catch up, but it is hard to see how it can help the leader improve its lead (unless the overall leader is behind in certain specific areas). The United States appears to be ahead of the rest of the world in AI, with China being its only plausible close competitor. One recent analysis broke down AI capabilities into four drivers: hardware; research and algorithms; data; and size of commercial AI sector. The United States led China by a wide margin in every category except for data ([Ding 2018](http://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf#page=29), 29).
The most important AI research today is conducted in the private sector. Unless that changes, the most important spying will have to be done on private firms. This changes the balance of motivations that might prove significant. Most obviously, given that most people approach their work with the goal of making money, it suggests that financial gain might be more significant as a motive for AI espionage than it was as a motive for nuclear espionage. Financially motivated public sector spies tend to be of lower quality than ideological spies because, given the legal penalties for spying, only irrational people or people in truly desperate need of money would agree to take on the requisite level of risk. But in the private sector, expected penalties are lower.
4. Manhattan Project counterintelligence
========================================
The historian Alex Wellerstein argues that counterintelligence efforts at the Manhattan Project had three main goals: 1. preventing Axis powers from spying 2. preventing wartime allies (such as the Soviet Union) from spying and 3. preventing scientists from getting a holistic understanding of the Manhattan Project, and (more importantly) preventing politicians and the broader American public from discovering the Manhattan Project's existence. Broadly, 1 and 3 were successful but 2 was not ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 91-92). It may be that bureaucratic incentives to focus on secrecy from domestic political actors drew energy away from preventing Soviet espionage. General Leslie Groves was particularly concerned about Congress getting wind of the massive budget of the Manhattan Project and cutting it off, or subjecting Manhattan Project leaders to onerous postwar investigations ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 81). During congressional hearings on atomic spying after the war, Groves “argued… that the Manhattan Project security apparatus had been primarily focused on preventing leaks and indiscretions, not rooting out disloyalty” ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 224-225).
[General Leslie Groves](https://en.wikipedia.org/wiki/Leslie_Groves#/media/File:Leslie_Groves.jpg)

There were other reasons, besides Groves’s relative lack of interest in preventing Soviet spying, for the success of the Manhattan Project spies. Responsibility for detecting espionage was divided between two mutually hostile agencies, the FBI and army intelligence. And, most fundamentally, a significant portion of the world’s top scientific talent was sympathetic to the Soviet Union, which introduced a capability-alignment tradeoff ([Walsh 2009](https://web.archive.org/web/20210225083023/https://www.smithsonianmag.com/history/george-koval-atomic-spy-unmasked-125046223/)).
5. The significance of nuclear espionage
========================================
The Soviet Union detonated its first nuclear bomb on August 29, 1949, four years after the first successful American nuclear test. In *Stalin and the Bomb*, David Holloway evaluated the impact of nuclear espionage on Soviet nuclear weapons development as follows:
> The first Soviet atomic bomb was a copy of the American plutonium bomb tested at Alamogordo in July 1945. Espionage played a key role in the atomic Soviet project, [*sic*] and its role would have been even greater if the Soviet leaders had paid more heed to the intelligence they received during the war. *The best estimates suggest, however, that the Soviet Union could have built a bomb by 1951 or 1952 even without intelligence about the American bomb*. There already existed in the Soviet Union strong schools of physics and radiochemistry, as well as competent engineers. Soviet nuclear research in 1939-41 had gone a long way toward establishing the conditions for an explosive chain reaction. It was because Soviet nuclear scientists were so advanced that they were able to make good use of the information they received from Britain and the United States about the atomic bomb.... The nuclear project was a considerable achievement for Soviet science and engineering ([Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 366, emphasis added).
The empirical outline of Holloway’s account does not appear to be open to serious doubt. The Soviets made significant use of espionage data and, on the other hand, Soviet scientists were world-class and could have developed the bomb within a few years of 1949 without espionage.
Michael Gordin makes an interesting argument in *Red Cloud at Dawn*. The Soviets laboriously checked, re-checked, and adapted spy data. Given the effort that the Soviets had to go through to assure themselves of the veracity of the information that they got from spies, Gordin suggests that it is an open question whether the Soviets really saved *any* time by using spy data ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 153-154). Gordin concedes however that, even if the Soviets saved no time, they “surely saved much uncertainty” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 153).
Reducing uncertainty can change one’s strategy. If a country increases its confidence that it will soon have a powerful weapon hitherto monopolized by an enemy, it may become rational to behave more aggressively towards that enemy.
Ignoring the prospective effects of knowing (rather than merely guessing) that one will soon have a powerful weapon, saving uncertainty meant removing the chance that the Soviets were unlucky and would have had to wait longer to get nuclear weapons. Stalin himself did not believe that nuclear weapons were very strategically significant in and of themselves ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 62). He did, however, understand the enormous importance that the Americans assigned to nuclear weapons. Thus, he refused Kim Il Sung’s request to support a North Korean invasion of South Korea in 1948 because he feared an American intervention on the South Korean side. In 1950, however, Stalin was willing to support Kim’s invasion, in part because he believed that the Soviet Union’s nuclear weapons would deter American intervention ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 62). Therefore, it seems that even if one takes maximally unfavorable assumptions and assumes that espionage saved the Soviet Union no time and only uncertainty, without espionage there would have been a substantially greater chance that the Korean War would have been delayed or, because of the other changes made possible by delay, avoided.
Furthermore, I do not think maximally unfavorable assumptions about the efficacy of nuclear espionage are justified. Absent further argument, it seems to me that we should default to the view that it is easier to check data and designs that one has in hand than it is to derive entirely new data and designs. Holloway’s estimate that intelligence saved the Soviets two to three years seems to be a subjective guess rather than the output of a quantitative model of bomb timelines. However, given that Holloway undertook the most thorough study of the Soviet nuclear weapons program (at least in English), he should be afforded some (small) amount of epistemic deference. Given the basic facts of the case, the Soviets saving something in the neighborhood of two to three years is not hard to believe. Because of the importance of the Korean War, that ought to qualify as a significant impact on world history.
In addition to the impact of espionage on the development of nuclear weapons, nuclear espionage may also have raised the temperature of the Cold War. Even if we grant, as we should, that the Cold War would have occurred anyway, the discovery of Alan Nunn May’s nuclear spying in 1946 may have reduced the odds that control of nuclear weapons would be ceded to multilateral international institutions ([Zaidi and Dafoe 2021](https://web.archive.org/web/20210325035303/https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf), 23, 42, 42n179). The distrust engendered by nuclear espionage highlights the potential of spying to increase enmity between the leader and the laggards in a technological race, and to reduce the odds of cooperation aimed at mitigating the risks of such a race. This effect emerges from the inherent dynamics of espionage and is likely to apply to AI races as well as nuclear races.
6. Secrecy
==========
Among people concerned about existential risk, there sometimes seems to be a presumption in favor of secrecy. One plausible origin for this presumption is the 2016 article [“The Unilateralist’s Curse and the Case for a Principle of Conformity”](https://web.archive.org/web/20210413160318/https://nickbostrom.com/papers/unilateralist.pdf) by Nick Bostrom, Thomas Douglas, and Anders Sandberg. Bostrom et al. argue that even a well-intentioned group of independent actors is likely to err in the direction of taking a risky action, because if any one actor can act unilaterally, whether the action is taken is determined not by the group's average judgment but by the judgment of its most optimistic member. Bostrom et al.’s proposed solution to the unilateralist's curse is a principle of conformity in situations where unilateralism is possible. When the action in question is publishing or not publishing some information, the principle of conformity is equivalent to a presumption in favor of secrecy.
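To illustrate the mechanism, here is a minimal Monte Carlo sketch (my own illustration, not taken from the paper; the true value of the action, the noise level, and the group size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_VALUE = -1.0   # assumed: the action is actually mildly harmful
NOISE_SD = 2.0      # assumed: each actor's estimate of its value is noisy
N_ACTORS = 10       # assumed group size
TRIALS = 100_000

estimates = rng.normal(TRUE_VALUE, NOISE_SD, size=(TRIALS, N_ACTORS))

# Unilateralist regime: the action happens if *any* actor judges it positive.
p_unilateral = np.mean((estimates > 0).any(axis=1))

# Conformity regime: the action happens only if the group's average
# estimate is positive.
p_consensus = np.mean(estimates.mean(axis=1) > 0)

print(f"P(action | unilateralism):  {p_unilateral:.3f}")
print(f"P(action | group average): {p_consensus:.3f}")
```

Under these assumptions the unilateralist regime takes the (actually harmful) action far more often than the averaging regime, which is the asymmetry Bostrom et al. are worried about.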
Note, though, that in “The Unilateralist’s Curse” Bostrom et al. do not argue for conformity all things considered. Rather, they argue that the unilateralist’s curse provides a defeasible reason for conformity. Their paper does not attempt to establish whether, in any given situation, our prior inclinations to conform or not to conform are correct. If one is concerned about the dissemination of information hazards, one should bear in mind that omissions might reveal as much as commissions in certain circumstances, and weigh carefully what strategy of releasing or withholding information is least hazardous ([Bostrom 2019](https://doi.org/10.1111/1758-5899.12718); [Bostrom 2011](https://web.archive.org/web/20210427231638/https://www.nickbostrom.com/information-hazards.pdf)).
One should also be concerned by the tendency of secrecy regimes to perpetuate themselves. Alex Wellerstein explains this point of view:
> This is, perhaps, the real application for the history of nuclear secrecy to these fields: *once the controls come in, they don’t go away fast*, and they may not even work well to prevent the proliferation of technology. But they will do other kinds of work in their effort to partition the world into multiple parts: creating in-communities and out-communities, drawing scrutiny to those who practice in these arts, and monopolizing patrons. There may be good reasons for other scientific communities to embrace secrecy—if the information in question truly was unlikely to be independently discoverable, had potentially large negative applications relative to the possible positive applications, and could be effectively controlled, then it might be a candidate—but if they took my advice, they would think long and hard about what types of secrecy activities they wanted to adopt and how to make sure that their attempts at secrecy did not outstrip their other values ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 410, emphasis added).
>
>
Many of the concerns Wellerstein raises seem rather remote from existential risk. This might lead researchers concerned with existential risk to assume that they have nothing to learn from the anti-secrecy perspective. I think that would be a mistake, because Wellerstein’s observation that regimes of secrecy tend to be self-perpetuating is highly relevant to existential risk. Secrecy serves to worsen our understanding of (and, therefore, our ability to control) emerging technologies. Secrecy may have had this effect in the early Cold War United States, where a large thermonuclear arsenal was accumulated alongside a failure to seriously study the catastrophic risks that thermonuclear war posed ([Gentzel 2018](https://theconsequentialist.wordpress.com/2018/02/24/lessons-from-the-cold-war-on-information-hazards-why-internal-communication-is-critical/)). If secrecy is hard to uproot, it might further raise existential risk by preventing concerns about safety from spreading to all relevant actors.
In “What Failure Looks Like,” the AI researcher Paul Christiano explains some reasons why AI may pose an existential risk. Those reasons all involve imperfectly understood AI systems whose goals diverge from those of human beings and which are able to gain power and influence in part because of their creators' imperfect understanding of the systems' true goals. Christiano anticipates that this problem will arise due to competitive incentives to deploy powerful AI systems as soon as possible combined with the inherent opacity of contemporary machine learning techniques ([Christiano 2019](https://web.archive.org/web/20190608113631/https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)). But secrecy about advanced AI might compound the problem of recognizing misaligned AI systems. And if approaches to AI safety that rely on rendering AI systems interpretable prove essential to preventing misalignment, secrecy is likely to be a major barrier. Whether such considerations are important enough to establish a presumption against secrecy is beyond the scope of this post. But the empirical tendency of secrecy regimes to expand their remit and endure indefinitely should be taken seriously.
7. Conclusion: espionage and existential risk
=============================================
Espionage is most likely to be significant if discontinuous progress in AI can be achieved on the basis of key abstract insights. To the extent that the scaling hypothesis is true, espionage is likely to be less important. But even if the scaling hypothesis is true, espionage may be significant if it transfers engineering knowledge or tacit knowledge (which can be transferred either by exfiltrating agents or rendering what was tacit explicit). Espionage during the Manhattan Project may have accelerated Soviet nuclear weapons development by two to three years, which does not sound like much, but may have altered the course of the early Cold War. This was achieved by the [less than 0.1%](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.gppzuj2ox6yg) of Manhattan Project employees who were Soviet spies (part of the effectiveness of this small group may have been due to the disproportionate representation of high-ranking employees among spies). If a technology is truly transformative, even a small gain in speed is strategically significant.
On balance, AI espionage is likely to increase existential risk. In [“Racing to the Precipice”](https://link.springer.com/article/10.1007%2Fs00146-015-0590-y) Stuart Armstrong, Nick Bostrom, and Carl Shulman create a game theoretic model of AI arms races’ effects on safety. Armstrong et al. find that risks are greatest when enmity between competitors is high, knowledge of other projects is available, and (conditional on knowledge of other projects being available) the leader has only a small lead. One should expect espionage to increase enmity between competitors, increase knowledge of competitors’ projects, and reduce the distance between the leader and the laggards. Thus, to the extent that Armstrong et al.’s model reflects the real strategic situation, the expected impact of espionage is to increase existential risk. Eoghan Stafford, Robert Trager, and Allan Dafoe’s forthcoming “International Strategic Dynamics of Risky Technology Races” builds a more complex model. Like Armstrong et al., Stafford et al. find that enmity increases risk in all situations. However, whereas Armstrong et al. find that a close race is more dangerous, Stafford et al. find that under certain circumstances, close races are less dangerous than very uneven races. If, in Stafford et al.’s model, enmity between leader and laggard is high and the laggard is far behind, compromising on safety might seem to be the only way that the laggard can have a chance of winning. But in a more even race, the laggard might be less willing to compromise on safety because they would have a chance of winning without taking extreme risks. Thus, granting for the sake of the argument that the assumptions of Stafford et al.’s model hold, espionage’s tendency to narrow gaps might, under some circumstances, reduce existential risk. However, this consideration would seem to me to be outweighed by espionage’s tendency to increase enmity.
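As a purely illustrative toy model (emphatically not the model from either paper, and with made-up functional forms), the qualitative claim can be made concrete: a laggard chooses how much safety to sacrifice, trading a higher chance of winning against a higher chance of catastrophe.

```python
# A toy race (illustrative only): the laggard picks a risk level in [0, 1].
# Cutting corners closes the capability gap but directly raises the chance of
# a safety failure. Winning is worth 1, losing to the leader is worth -enmity,
# and a safety failure is worth -1.

def best_risk_level(gap, enmity, steps=101):
    def expected_payoff(risk):
        capability = max(0.0, 1.0 - gap + risk)       # leader's capability is normalized to 1
        p_win = capability / (capability + 1.0)
        return (1 - risk) * (p_win - (1 - p_win) * enmity) + risk * (-1.0)
    return max((i / (steps - 1) for i in range(steps)), key=expected_payoff)

print(best_risk_level(gap=0.1, enmity=0.9))  # 0.0: in a close race, corner-cutting doesn't pay
print(best_risk_level(gap=0.8, enmity=0.9))  # ~0.25: far behind and hostile, the laggard sacrifices safety
print(best_risk_level(gap=0.8, enmity=0.1))  # 0.0: far behind but indifferent to losing, it plays safe
```

The point of the toy is only that the optimum shifts toward more risk-taking when the laggard is far behind and losing to a hated leader is costly; how far this carries over to real AI races depends entirely on the modeling assumptions.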
It may therefore be valuable for people concerned about existential risk to contribute to preventing AI espionage, even if they have no preference between the project being spied on and the project doing the spying. On the other hand, secrecy (the most obvious countermeasure to espionage) may increase existential risk by worsening issues with interpretability. And subjecting AI researchers to background checks may asymmetrically weaken conscientious projects, since competitors who are not worried about existential risk or espionage will gain the talent that they reject. All of these considerations should be carefully weighed by AI policy practitioners before deciding to prioritize or deprioritize preventing espionage. |
00fb50ad-9da5-4fe3-b48e-73d92665b161 | StampyAI/alignment-research-dataset/blogs | Blogs | Energy efficiency of Wright model B
*Updated Nov 5, 2020*
The Wright model B:
* flew around 0.10—0.21m/kJ
* and moved mass at around 0.036 – 0.12 kg.m/J
Details
-------
The *Wright Model B* was a 1910 plane developed by the Wright Brothers.[1](https://aiimpacts.org/energy-efficiency-of-wright-model-b/#easy-footnote-bottom-1-2731 "“The Wright Model B was an early pusher biplane designed by the Wright brothers in the United States in 1910. It was the first of their designs to be built in quantity.” “Wright Model B.” In Wikipedia, September 16, 2020. https://en.wikipedia.org/w/index.php?title=Wright_Model_B&oldid=978792334.")
### Mass
According to Wikipedia[2](https://aiimpacts.org/energy-efficiency-of-wright-model-b/#easy-footnote-bottom-2-2731 "“Wright Model B.” In Wikipedia, September 16, 2020. https://en.wikipedia.org/w/index.php?title=Wright_Model_B&oldid=978792334."):
* **Empty weight:** 800 lb (363 kg)
* **Gross weight:** 1,250 lb (567 kg)
We use the range 363—567 kg, since we do not know at what weight in that range the relevant speeds were measured.
### Energy use per second
From Wikipedia, we have[3](https://aiimpacts.org/energy-efficiency-of-wright-model-b/#easy-footnote-bottom-3-2731 "“Wright Model B.” In Wikipedia, September 16, 2020. https://en.wikipedia.org/w/index.php?title=Wright_Model_B&oldid=978792334."):
* **Power:** 35 horsepower = 26kW
* **Efficiency of use of energy from fuel:** we could not find data on this, so use an estimate of 15%-30%, based on what we know about the [energy efficiency of the Wright Flyer](https://aiimpacts.org/energy-efficiency-of-wright-flyer/#Efficiency_of_energy_conversion_from_fuel_to_motor_power).
From these we can calculate:
Energy use per second
= power of engine x 1/efficiency in converting energy to engine power
= 26kJ/s / .30 — 26kJ/s / .15
= 86.6—173 kJ/s
### Distance per second
Wikipedia gives us[4](https://aiimpacts.org/energy-efficiency-of-wright-model-b/#easy-footnote-bottom-4-2731 "“Wright Model B.” In Wikipedia, September 16, 2020. https://en.wikipedia.org/w/index.php?title=Wright_Model_B&oldid=978792334."):
* **Maximum speed:** 45 mph (72 km/h, 39 kn)
* **Cruise speed:** 40 mph (64 km/h, 35 kn)
We use the cruise speed, as it seems more likely to represent speed achieved with the energy usages reported.
### Distance per Joule
We now have (from above):
* speed = 40miles/h = 17.9m/s
* energy use = 86.6—173 kJ/s
Thus, on average each second the plane flies 17.9m and uses 86.6—173 kJ, for 0.10—0.21m/kJ.
### Mass.distance per Joule
We have:
* Distance per kilojoule: 0.10—0.21m/kJ
* Mass: 363—567kg
This gives us a range of 0.036 – 0.12 kg.m/J
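For convenience, the arithmetic above can be reproduced in a short script. This is only a sketch: the input figures are the cited ones, and the unit conversions are standard.

```python
# Reproducing the calculation above from the cited figures.
hp_to_w = 745.7
engine_power_kw = 35 * hp_to_w / 1000        # ~26 kW of engine output
fuel_efficiency = (0.15, 0.30)               # assumed fraction of fuel energy reaching the engine
cruise_speed_m_s = 40 * 1609.34 / 3600       # 40 mph ~= 17.9 m/s
mass_kg = (363, 567)                         # empty weight to gross weight

# Energy drawn from fuel each second (kJ/s): engine power divided by efficiency
fuel_kj_per_s = sorted(engine_power_kw / eff for eff in fuel_efficiency)    # ~[87, 174]

# Distance per kilojoule of fuel (m/kJ): the low end pairs with the high energy draw
distance_m_per_kj = sorted(cruise_speed_m_s / e for e in fuel_kj_per_s)     # ~[0.10, 0.21]

# Mass.distance per joule (kg.m/J): close to the 0.036 – 0.12 range above,
# up to rounding of the intermediate values
mass_distance = [d / 1000 * m for d, m in zip(distance_m_per_kj, mass_kg)]
print(fuel_kj_per_s, distance_m_per_kj, mass_distance)
```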
*Primary author: Ronny Fernandez*
Notes
----- |
68c8027f-3ff2-4a09-b1e6-d16ca0002aa2 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How does Redwood Research do adversarial training?
[Redwood Research](https://www.redwoodresearch.org/) explains their approach to [adversarial training](/?state=935A&question=What%20is%20adversarial%20training%3F) in the paper [“Adversarial Training for High-Stakes Reliability”](https://arxiv.org/abs/2205.01663). They took a language model (LM) that had been ‘fine-tuned’ so it could complete fiction and attempted to modify it so that it would never continue a snippet in a way that involves describing someone getting injured.
To do this, they trained a ‘classifier’, a model that predicts whether a human would say that the completion involved someone getting injured. This classifier can act as a filter for safe vs. unsafe stories after the stories have been generated by the LM. They then used stories classified as “unsafe” as additional training examples for the LM (an example of ‘[adversarial training](/?state=935A&question=What%20is%20adversarial%20training%3F)’). Another LM helped the humans paraphrase the existing unsafe stories in order to achieve data augmentation and have access to a higher number of adversarial training examples.
Redwood found that they can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs, i.e. the stories are still interesting and engaging to read. Additionally, adversarial training helped Redwood increase the robustness of their model against adversarial attacks, because as a result of this training, contractors required a much longer time to find/generate adversarial prompts.
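As a minimal sketch of the pipeline described above: the function names, types, and threshold here are hypothetical, chosen only to make the shape of the loop concrete, and are not taken from the paper.

```python
# One round of generate -> classify -> filter, collecting flagged completions
# (plus paraphrases of them) as additional adversarial training data.
from typing import Callable, List, Tuple

def filtering_round(
    complete: Callable[[str], str],          # fine-tuned LM: fiction snippet -> completion
    p_injury: Callable[[str, str], float],   # classifier: P(human says completion depicts injury)
    paraphrase: Callable[[str], str],        # helper LM used for data augmentation
    prompts: List[str],
    threshold: float = 0.01,                 # conservative: reject anything remotely suspicious
) -> Tuple[List[str], List[Tuple[str, str]]]:
    safe_outputs, adversarial_examples = [], []
    for prompt in prompts:
        completion = complete(prompt)
        if p_injury(prompt, completion) <= threshold:
            safe_outputs.append(completion)
        else:
            # Flagged completions are kept as adversarial training data, and
            # paraphrased to multiply the number of such examples.
            adversarial_examples.append((prompt, completion))
            adversarial_examples.append((prompt, paraphrase(completion)))
    return safe_outputs, adversarial_examples
```

The returned adversarial examples would then be fed back into further training, as described above.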
|
4e9a44b6-15ce-4db7-b6e9-c74b2b778e17 | trentmkelly/LessWrong-43k | LessWrong | The Born Probabilities
Previously in series: Decoherence is Pointless
Followup to: Where Experience Confuses Physicists
One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of. What does the integral over the squared modulus of the amplitude density have to do with anything?
This was discussed by analogy in "Where Experience Confuses Physicists", and I won't repeat arguments already covered there. I will, however, try to convey exactly what the puzzle is, in the real framework of quantum mechanics.
A professor teaching undergraduates might say: "The probability of finding a particle in a particular position is given by the squared modulus of the amplitude at that position."
This is oversimplified in several ways.
First, for continuous variables like position, amplitude is a density, not a point mass. You integrate over it. The integral over a single point is zero.
(Historical note: If "observing a particle's position" invoked a mysterious event that squeezed the amplitude distribution down to a delta point, or flattened it in one subspace, this would give us a different future amplitude distribution from what decoherence would predict. All interpretations of QM that involve quantum systems jumping into a point/flat state, which are both testable and have been tested, have been falsified. The universe does not have a "classical mode" to jump into; it's all amplitudes, all the time.)
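As a small numerical illustration of the first point (a generic normalized Gaussian wavefunction, not tied to any particular system): the probability of finding the particle in a region is the integral of the squared modulus of the amplitude density over that region, and the contribution of any single point is zero.

```python
# Born probability from an amplitude density.
import numpy as np

x = np.linspace(-10, 10, 20001)
psi = (1 / np.pi) ** 0.25 * np.exp(-x**2 / 2)   # normalized Gaussian wavefunction

density = np.abs(psi) ** 2                       # squared modulus of the amplitude
total = np.trapz(density, x)                     # ~1.0: probability over all space

in_region = (x >= 0) & (x <= 1)
p_region = np.trapz(density[in_region], x[in_region])   # ~0.42 for the interval [0, 1]
print(total, p_region)   # a single point contributes density(x0) * 0 = 0
```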
Second, a single observed particle doesn't have an amplitude distribution. Rather the system containing yourself, plus the particle, plus the rest of the universe, may approximately factor into the multiplicative product of (1) a sub-distribution over the particle position and (2) a sub-distribution over the rest of the universe. Or rather, the particular blob of amplitude that you happen to be in, can factor that way.
So what could it mean, to associate a "subjective probability" with a component of one factor of a combined ampl |
477e434c-0f84-4306-8620-6035e6be6591 | trentmkelly/LessWrong-43k | LessWrong | Is microCOVID "badness" superlinear?
That is: if I have a choice between 200 microCOVIDs now, or 100 now and 100 next month, does it matter which one I pick? (For bonus points: how big is the difference? And does the answer change if it's 20000 vs 10000/10000?)
(Alternatively: what keywords would I search to find studies on this? I'm sure data exists, but I'm coming up empty on Google.)
On the one hand, straightforwardly no: in my cohort of infinite parallel selves, every microCOVID we take on makes almost exactly one-in-a-million of us get sick (at least until a substantial fraction of us are sick, i.e. I've taken on tens of thousands of microCOVIDs), so every microCOVID is equally bad. (Something something axiom of independence, and we're all VNM-rational agents, right?)
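A quick sketch of the "straightforwardly no" calculation (this only models the probability of infection under independence, not the dose-response biology raised below): if each microCOVID is an independent one-in-a-million risk, splitting exposures across time leaves the probability of catching COVID unchanged.

```python
# Probability of at least one infection, given batches of microCOVIDs.
p = 1e-6

def p_any_infection(exposures):
    p_clean = 1.0
    for n in exposures:
        p_clean *= (1 - p) ** n
    return 1 - p_clean

print(p_any_infection([200]))                                      # ~2.0e-4
print(p_any_infection([100, 100]))                                 # identical, by independence
print(p_any_infection([20000]), p_any_infection([10000, 10000]))   # ~0.0198 in both cases
```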
On the other hand, maybe yes, because of complicated biology reasons, where if you inhale 1M viruses over the course of a year, at some point you'll probably get COVID, but if you snort them all at once then you're, I dunno, effectively giving the infection a head start of several doubling times, and you're gonna get COVID real bad. |
3a5feaf3-fe26-416a-99b9-98aba4135fb3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Explicitness
*[Metadata: crossposted from <https://tsvibt.blogspot.com/2023/03/explicitness.html>. First completed March 3, 2023.]*
Explicitness is out-foldedness. An element of a mind is explicit when it is available to relate to other elements when suitable.
*Thanks to Sam Eisenstat for related conversations.*
Note: The ideas of explicitness and inexplicitness require more [explication](https://en.wikipedia.org/wiki/Explication).
Explicitness and inexplicitness
===============================
>
> [Elements](https://tsvibt.blogspot.com/2022/08/structure-creativity-and-novelty.html#1-elements-and-structure) can be more or less explicit, less or more inexplicit.
>
>
>
* (This statement wants to be [unpacked](https://tsvibt.blogspot.com/2023/02/communicating-with-binaries-and-spectra.html).)
* In general, inexplicitness is the lack of explicitness, and explicitness is when elements that have good reason to be related, are able to be related. That is, when structure is explicit, it can be [brought into relation with other structure](https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html#internal-sharing-of-elements) when suitable.
* Structure is explicit when it's out-folded: when it already makes itself available (visible, applicable, informative, copyable, tunable, directable, modifiable, predictable, combinable, interoperable), so that nothing is hidden or blocked.
* An explicit element is an element high in explicitness, i.e. it can be brought into relation with other elements when suitable.
Explicitizing
=============
>
> Elements can become more explicit.
>
>
>
* By default, structure is fully inexplicit for a mind. That is, it's fully [ectosystemic](https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html) for the mind: it's not available for elements of the mind to relate to.
* Structure can be brought into explicitness.
+ For example, these processes make structure more explicit: reflection, analysis, description, expression, joint-carving, separating, factoring, refactoring, modularization, indexing, interfacing, connecting, disentangling.
+ The early stages of explicitizing involve incomplete or deficient participation——like a blind man touching an elephant's tail, or entering the outer regions of a [nexus of reference](https://tsvibt.blogspot.com/2022/08/the-thingness-of-things.html). E.g., the relationship that the Ancient Greek mathematicians had to Cartesian algebraic geometry.
A diagram:

Examples
========
An example of explicitizing also furnishes examples of inexplicitness (before the explicitizing) and explicitness (after the explicitizing), and likewise an example of inexplicitizing also furnishes examples of explicitness and inexplicitness.
Classes of examples of explicitizing
------------------------------------
* *Internal sharing of elements.* See [here](https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html#internal-sharing-of-elements) for examples of inexplicitness.
* *Making an analogy.* By grasping a partial isomorphism, the partially isomorphic aspects of the analogands can be transferred back and forth between them.
* *Putting a word to something.* When [the word](https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html) comes up, the element is accessed, and vice versa. That helps different contexts in which the element is relevant communicate with each other through the unfolding of the element.
* *Refactoring code.* [Separating concerns](https://en.wikipedia.org/wiki/Separation_of_concerns) A and B renders code for dealing with just A useful for tasks that deal with A but not B, whereas the unseparated code might for example assume the existence of B (e.g. as an argument or global variable). The separation makes explicit the dependence and non-dependence of code on A or B. Or for example rewriting a function so that it accepts a standard rather than non-standard input format, so that elements expecting a function to accept standard input can easily relate to the function. (A short code sketch after this list illustrates the point.)
* *Deduction.* Drawing out the implications of an element [makes the element available for comparison with other elements](https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html#deduction) and makes the element available to [recommend action](https://tsvibt.blogspot.com/2022/08/structure-creativity-and-novelty.html#pierces-abduction).
* *Writing things down.*
+ Expressing and recording something in shared language makes it available to others.
+ Storing something in memory makes it available to your future self.
+ Abbreviating something makes synopsis and large-scale modification more feasible. For example, mathematical notation makes big thoughts viewable all together, and makes algebra more efficient.
+ Attaching abbreviations to an element makes that element easier to find.
+ Drawing a picture makes something more available to visual processing (extracting gestalts, broadcasting, holding in "external working memory", exposing geometry).
+ Speaking an element into the stream of consciousness makes the element more available for other elements to notice their own relevance to the broadcasted element.
+ Speaking an element divides it ("articulates", renders into articles) into familiar items (words). That makes the element more available to those other elements that already interoperate with those words.
+ Redescribing an element retranslates the element, as Feynman recommends. This makes applicable more transformations of expressions of elements——the grammatical structure of the retranslation affords different transformations.
+ See [here](https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html#theoretical-reasons) for more.
* *Building an index.* Putting an element in an index makes it easier for another element to find and interface with that element.
* *Entering numbers into a spreadsheet.* By putting notes about a set of things in abbreviated form next to each other, a spreadsheet makes it convenient to compute certain consequences of given propositions. For example, it becomes easy to notice which of a set of things has the highest value on some dimension, or notice how much information is present or missing overall, or average given values. (In general, see "[parathesizers](https://tsvibt.blogspot.com/2023/01/hyperphone.html#1-aside-tools-for-thinking-parathesizers)".)
* *Genomic parcellation.* Pleiotropic genes affect multiple characters of an organism. Genetic loci might therefore be under antagonistic selection pressure: a mutation might have beneficial effects on one character and deleterious effects on another character. This situation also creates pressure to remove the pleiotropy, e.g. by differentially expressing paralogs or by alternative splicing, so that fewer loci affect multiple characters. When a pleiotropy is removed, the gene is made more explicit: it is more available to be modified as is suitable for different applications (different expressions in different contexts). See the work of Günter Wagner, e.g. "Complex adaptations and the evolution of evolvability" ([pdf link](https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1558-5646.1996.tb02339.x)). A figure from that paper:

Examples of inexplicitness and inexplicitizing
----------------------------------------------
* Having a word on the tip of your tongue.
* Forgetting something, in such a way that you could be reminded of it with an incomplete prompt. You must have still possessed it, but not in a way so that you could recall it easily.
* In general, losing connections. Destroying indices, interfaces, memories, storage, lines of communication.
* When you learn to ride a bike, it might be hard to put into words what you learned or what changed that made you able to stay balanced.
* It's much easier for me to type my computer password than to say it out loud. When I came up with my computer password the opposite was true: the password was explicit. Then I learned to type it without thinking about it, and then I mostly forgot it explicitly. So it was inexplicitized.
* Suppose you've written some code to perform some task. Now you're given a new task, so you rewrite and add to your code to perform the new task. You find that your old code can almost just be repurposed as-is to perform the new task, but not quite: you don't see how to cleanly add some interface or wrapper or additional logic, taking the preexisting code as given. So you rewrite the preexisting code by hacking in some new functionality, so that it does what's needed for the new task. Now that code is less available to interface with other code; the preexisting simpler version isn't exposed to be independently called, and only the whole more complicated code can be called; the code is harder to understand and modify because it's more complicated and has more things entangled with each other. The old version has been inexplicitized: the ideas of the old version, viewed as still lying within the new version, are less explicit (more inexplicit) within the new version than they were in the old version. E.g., you add an argument to a method, even though the argument only slightly interacts with the rest of what that method does. Now the method has to be passed the additional argument, and it's harder to tell without more work that most of what the method does doesn't depend on that new argument.
* In many situations, people try to prevent information from being made explicit or being put into common knowledge.
* [Diasystemic novelty](https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html) not only tends to itself be inexplicit, but tends to render preexisting structure more inexplicit, at least temporarily. It might change how elements relate to each other, so that previous coadaptations no longer apply as much as before the diasystemic novelty.
+ For example, if someone takes on a strict discipline of not lying to themselves, they might break coordinations between different preexisting elements. E.g.: self-coordination across time to accomplish tasks that's based on self-fulfilling prophecies that include self-deception. The mental elements needed to perform different parts of a task at different times were previously (partially) explicit for each other: they were able to interface suitably (within encountered contexts). After the new self-discipline, the elements are no longer immediately able to interface to accomplish the task; they have been inexplicitized for each other.
* An [ontological crisis](https://www.lesswrong.com/tag/ontological-crisis), or ontological revolution in which the mind greatly changes the language in which it thinks, might render some preexisting elements "relatively inexplicit". That is, the preexisting elements can no longer relate to the mind as a whole in the way that they related to the prerevolutionary mind as a whole. If the revolutionary language excludes some of the preexisting language, this is inexplicitizing simpliciter, not just relative inexplicitizing: relations that depended on the excluded preexisting language are made less available.
* An edge case: systems bifurcating or continuing on non-convergent trajectories. E.g. speciation, or a language splitting into dialects and then separate languages. Another example: a mind [continuing on](https://tsvibt.blogspot.com/2023/01/a-strong-mind-continues-its-trajectory.html) a [voyage of novelty](https://tsvibt.blogspot.com/2023/01/the-voyage-of-novelty.html), maybe as far as to other [cognitive realms](https://tsvibt.blogspot.com/2022/11/are-there-cognitive-realms.html), so that the mind's elements are made inexplicit for an observer (e.g. a human).
The axiom of choice
-------------------
Chapter 1, "The Prehistory of the Axiom of Choice", in Gregory H. Moore's book Zermelo's Axiom of Choice ([Libgen](http://libgen.rs/search.php?req=Zermelo+axiom+of+choice&open=0&res=25&view=simple&phrase=1&column=def)):
>
> [...] Cantor made an infinite sequence of arbitrary choices for which no rule was possible, and consequently the Denumerable Axiom was required for the first time. Nevertheless, Cantor did not recognize the impossibility of specifying such a rule, nor did he understand the watershed which he had crossed. After that date, analysts and algebraists increasingly used such arbitrary choices without remarking that an important but hidden assumption was involved. From this fourth stage emerged Zermelo's solution to the Well-Ordering Problem and his explicit formulation of the Axiom of Choice.
>
>
>
That chapter describes stages of inexplicit uses of arguments spiritually related to the axiom of choice. Zermelo's work made the essential uses explicit. That opened up the possibility of further investigation building on the idea of the axiom of choice. E.g. the axiom was analyzed in the reverse-mathematical spirit, revealing which theorems require the axiom; and reactions against the axiom contributed to intuitionism.
For another example of making explicit an idea that had been inexplicitly used, see Reinhardt's article "Remarks on reflection principles, large cardinals and elementary embeddings", in Axiomatic Set Theory volume 2 ([Libgen](http://libgen.rs/search.php?req=Axiomatic+Set+Theory%2C+Volume+2+(Symposium+in+Pure+Mathematics+Los+Angeles+July%2C+1967))).
Group theory from concrete transformations
------------------------------------------
Two of the main [tributaries into group theory](https://en.wikipedia.org/wiki/History_of_group_theory) were permutation groups of roots of polynomials, and symmetry groups of geometric spaces. Galois related the solvability of polynomial equations to the structure of the group of permutations of the polynomial's roots that preserve algebraic equations. Klein's [Erlangen program](https://en.wikipedia.org/wiki/Erlangen_program) related structures studied in a geometric space to the transformations of that space that preserve those structures.
Both of these uses of the idea of a (transformation) group are contextual. The groups are thought of in terms of the particulars of the objects they transform——sets of roots of polynomials, or geometric spaces. The relationship between these theories isn't yet explicit. The development of group theory makes the common structure explicit, allowing general results to apply to both cases.
For another rich example of a generally applicable idea congealing from its appearances in concrete contexts, see "The concept of function up to the middle of the 19th century" by Adolph-Andrei Pavlovich Youschkevitch ([pdf](https://www.exhibit.xavier.edu/cgi/viewcontent.cgi?article=1001&context=oresme_2023feb)). Note that abstraction is a gain of content, not a loss of content (extraction is a pure loss of content).
Other distinctions
==================
Simply not knowing something
----------------------------
If a mind doesn't know X, then X is maximally inexplicit for the mind.
Pretheoretic explicitness
-------------------------
Pretheoretically, it seems possible to know something explicitly but not implicitly, or to lack some implicit knowledge. Examples:
* I can make up a password for my computer, and then I know it explicitly. But, I still lack the implicit knowledge of how to type it quickly and automatically.
* Someone tells me: "To peel an orange conveniently, without having to restart it a lot, don't bend the part of the peel that's separated too much so that it breaks off; instead keep it tangent to the orange.". Now I know that idea explicitly, but I lack the implicit knowledge of how to peel oranges that way.
* Someone tells me an expression for something called the "Lefschetz number":
$$\Lambda_f := \sum_{k \geq 0} (-1)^k \operatorname{tr}\!\left(f_* \mid H_k(X, \mathbb{Q})\right).$$
I'm able to technically define the terms in the expression, but I "don't really know what the expression means". Even though the expression is perfectly explicit, for me the expression is very far from being readily combinable with other elements of my mind. I certainly don't immediately conclude or even understand propositions like the [Lefschetz fixed-point theorem](https://en.wikipedia.org/wiki/Lefschetz_fixed-point_theorem).
The concept of explicitness in this essay takes all of these to be undifferentiated examples of partial inexplicitness. There's no such thing as "non-explicit understanding", other than explicit (partial) understanding that hasn't been made fully explicit in every way. If I were to play around with the Lefschetz formula and "get some intuition for what it means", that's just another instance of explicitizing.
The pretheoretic notion of "implicit understanding" or "non-explicit understanding" might be reducible to implications of the form: if the mental element is explicit in the manner X, then the mental element is explicit in the manner Y. Then a "non-explicit" element is an element that sits within a mind in a manner that is implied by most other manners of sitting within a mind, and that does not imply many other manners of sitting within a mind.
These implications are vague about their assumptions, and intuitively being told an explicit formula is a violation of the implication. Since I didn't come up with the Lefschetz formula, I got an understanding of it "in the wrong order". This tends to happen when there are other minds, because an other mind can get the understanding in the right order and then transmit a formula that is fully explicit for that mind, though the formula won't be fully explicit for the receiver.
It's also possible to arrive at a simple, explicit formula de novo via algebraic calculation, and only then rationalize the formula in a way that connects more to preexisting mental elements. Is that "in the wrong order"? The situation might be that this "in the wrong order", and the pretheoretic notion of "non-explicit" elements, are really imprecise perceptual categories meant to pragmatically track the implications between explicitnesses, e.g. to track what tasks to expect a given element to be adequate or inadequate for. "Non-explicit" elements might be elements possessed (that is, elements that empower the mind) without being fully explicit, and "explicit but not implicit" elements might be elements grasped in some way but not possessed (such as an "explicit" formula, without the understanding needed to usefully apply it).
Coherence and internal sharing of elements
------------------------------------------
[Internal sharing of elements](https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html#internal-sharing-of-elements) is a description of coherence in terms of whether elements are interoperating with each other suitably. Coherence is related to explicitness as [actualizing is related to possibilizing](https://tsvibt.blogspot.com/2022/12/possibilizing-vs-actualizing.html). Coherence is actual interoperation, actual capabilities, actual efficiencies, actual connections; explicitness is the feasibility of interoperation and connection. In short: to explicitize is to possibilize coherence.
For example, building an index doesn't by itself constitute performing a new (external) task. But having an index renders feasible some operations that were previously infeasible. [Parathesizers](https://tsvibt.blogspot.com/2023/01/hyperphone.html#1-aside-tools-for-thinking-parathesizers) put elements alongside each other, not necessarily synthesizing them, but making it possible for them to be synthesized.

Explication
-----------
[Carnap's explication](https://en.wikipedia.org/wiki/Explication) is a kind of explicitizing. Storing something in memory is explicitizing but not explication.
Implicitness
------------
Strict implicitness in the logical sense is when a proposition is implied by a set of propositions, but isn't already included in that set (explicitly). Instances of strict implicitness are also instances of inexplicitness: the implied proposition, if made explicit, would enable further operations.
Example: "What I said implies...". This implicitness is inexplicitness that can be resolved (explicitized) by deduction.
Example: "Implicit in the concept of bachelor is unmarriedness.". Statements like "Bob is a bachelor." have inexplicitness that can again be resolved by deduction: "...and therefore Bob is unmarried.". Evenness is implicit in the concept of divisibility by four. The fact that 3×4=12 is implicit in the ideas of 3, 4, and ×.
See also the notion of analyticity, and [Critch's discussion of implicitness](https://www.lesswrong.com/posts/QsZ3ycfRYs2ps5sNA/loeb-s-theorem-for-implicit-reasoning-in-natural-language#Implicitness).
Correlations
============
Explicitness
------------
Explicitnesses correlate with explicitnesses. That is, "you can't eat just one": explicitizing an element in one way (making it available for relating with one other element) tends to also explicitize it in other ways (making it available for relating with multiple other elements).
The "proportional explicitness" of an element (the proportion of elements in the mind that the element is available to relate to) might not approach 1 in the long-run. E.g., if there is always more [parasystemic novelty](https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html), then there are always regions of the mind that are not well-integrated or well-integratable with a given element. [Essential non-cartesianness](https://tsvibt.blogspot.com/2023/03/provisionality.html#noncartesianness) might imply that there is always more parasystemic novelty.
Possession
----------
A mind possesses an element to the extent that the mind is able to do those tasks that the element enables in some mind. (So possession is "coherence, localized to one element".)
* "Far as I see it, you people been given the shortest end of the stick ever been offered a human soul in this crap-heel 'verse. But you took that end, and you——well, you *took* it. And that's——Well, I guess that's somethin'." -[Jayne Cobb](https://www.youtube.com/watch?v=zLGOB65xatg)
* An instance of possession is an instance of explicitness, so possession correlates with explicitness.
* Possession is a myopic version of (or, [spectrum](https://tsvibt.blogspot.com/2023/02/communicating-with-binaries-and-spectra.html) leading up to the extreme of) explicitness. To possess an element E is to be able to do some set of tasks that are "fairly easy for the mind to do" if the mind is given E, or in other words to pick the low-hanging fruit of capabilities given E. To have E fully explicitly is to be well-prepared to pick all higher-hanging fruit, e.g. capabilities that require also changing other elements that will relate to E in new ways, and e.g. capabilities that only become relevant in new contexts and for new tasks.
* Refactoring code so that it is functionally the same for current tasks doesn't change how much the ideas in the code are possessed by a system, but might greatly change how explicit the ideas are.
* Explicitness ensures possession in minds that are coherent in general: any suitable use of an element will be available by explicitness, and will be availed by coherence, and thus the element is possessed in that way. In the limit of the growth of the mind, with all elements changing as suitable, possession approaches full explicitness. An obstacle to explicitness of E can become an obstacle to possession of E if a task is blocked only on E.
* A human brain, even without understanding neurons in a pretheoretically-explicit way, still possesses the structure of neurons, in that it uses the structure of neurons in the straightforward ways that neurons can be used to do useful tasks in human contexts.
Access
------
The pretheoretic idea of *accessing* an element E is maybe a projection of the idea of explicitness into the subspace that assumes that all elements are fixed, and all that changes is lines of communication.

Modelability
------------
If an element E is explicitized in a mind M, the field of elements that E can relate to is expanded. Since explicitnesses correlate with each other, expanding the field of E-relatable elements in M tends to also expand the field of E-relatable elements in another mind M'. So there's more "surface area" for M' to understand E.
This may be false in the long-run if there are different [cognitive realms](https://tsvibt.blogspot.com/2022/11/are-there-cognitive-realms.html). E.g. if the languages of thought of two minds M and M' permanently diverge further and further, then explicitizing an element in M might not explicitize the element in M'.
Gemini modelability
-------------------
If an element E is explicitized in a mind M, the field of elements E can relate to is expanded. Expanding the field of elements relatable to E has many effects on how feasible it is for another mind to [gemini model](https://tsvibt.blogspot.com/2022/08/gemini-modeling.html) E in M. But, as a broad tendency, explicitizing E in M makes E in M more gemini modelable. The field of relatable elements approaches the total field of possibly relatable elements, which is canonical and therefore shared between many minds. The way that E will ongoingly show itself in M is circumscribed by the field of relatable elements of E in M. So if the field of relatable elements of E in M is more canonical, then the way that E will ongoingly show itself in M is at least less forced, by circumscription into disjoint regions, to be distinct from the way that E will ongoingly show itself in some other mind M'.
Coherence
---------
If explicitness is possibilized coherence, then coherence (actualizing some suitable relations between elements) implies explicitness (that those relations are possible).
On the other hand, explicitness isn't only coherence. Take the example given above where computer code is rewritten so that a new task can be performed, but so that the code is less well-factored. It may be in this case that explicitness almost strictly decreases (some explicitness is lost, and maybe only a single interface is added), while coherence strictly increases (because a new task can be performed).
Generators
----------
Generators of capabilities seem to leave a lot of inexplicitness compared to some other elements. I don't know why. If it's so:
* Maybe it's because they are more potent, so there's more left to be gained.
* Maybe it's because they are more algorithmically complex.
* Maybe it's because they don't really exist.
* Maybe it's because they are created by a different process (the original mind designer) from the process (the mind itself) that would bring them into the mind's own explicitness, so they aren't spoken in the same language as the mind itself. (For example, some generators for humans might only be understandable as having been a part of evolutionary history, and those generators aren't possessed at all by humans, unless discovered by reasoning.)
Dark matter
-----------
[Dark matter](https://tsvibt.blogspot.com/2023/01/the-voyage-of-novelty.html#4-the-dark-matter-shadow-of-hand-holdable-structure) is structure that is inexplicit, but that is known to exist because of its visible effects. [Even if we can't see it, it must be there.](https://samoburja.com/intellectual-dark-matter/) Dark matter is not fully inexplicit, or it wouldn't be possessed, and therefore wouldn't have visible effects.
For example, we can listen to set theorists talking in their special language, and we might even follow some of their thoughts. But we don't know the intuitions and refinement processes that led them to conjecture and then codify concepts and propositions (unless, say, Penelope Maddy distills them and writes them down). We know that those intuitions are there, though, if we dimly perceive some order and rhyme to their thinking, and can verify that their proofs are correct and non-obvious.
Diasystemic novelty
-------------------
As noted earlier, [diasystemic novelty](https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html) tends to inexplicitize. Diasystemic novelty also tends to be inexplicit. A novel element E is diasystemic when it is very relevant to the mind, but doesn't interface in preexisting ways with preexisting elements of the mind.
* Since E is very relevant to the mind, there's a large potential for E to relate to other elements of the mind, so there's a large amount of explicitness required for E to be fully explicit.
* Since E doesn't relate to preexisting elements in preexisting ways, it doesn't take advantage of preexisting explicitness, and it is less well-understood by the mind.
* Since E cross-cuts the preexisting elements, touching many of them, E is at first less available to be suitably modified; metaphorically, E has antagonistic pleiotropy.
Subject to honesty
------------------
If an element is very explicit, then it can be spoken honestly about: the ways that the element exerts itself in the mind can be reported with intent to expose, without distortion, everything relevant. It can also be lied about.
If an element is very inexplicit, then the ways it exerts itself can't be reported because they aren't available to be expressed. So the intent to expose without distortion becomes irrelevant because impotent. (This is an "instantaneous" notion of honesty. The full normative notion of honesty also includes the tendency toward explicitization rather than inexplicitization.)
Conceptual Doppelgängers
------------------------
The existence of [conceptual Doppelgängers](https://tsvibt.blogspot.com/2022/10/the-conceptual-doppleganger-problem.html) requires inexplicitness. At least, the overlapping functions shared by two elements are less available to be predicted and modified in one fell swoop, because they aren't factored out from the two elements into non-repetitive elements.
On the other hand, if participation of elements in a [Thing](https://tsvibt.blogspot.com/2022/08/the-thingness-of-things.html) is well-indexed, then there effectively aren't conceptual Doppelgängers at least in the strongest sense: observing one element at least points the observer to the other elements that serve overlapping functions, so at least the presence of an analogy is made explicit.
More generally than duplicates, there are "crosshatch Doppelgängers": a set of elements that combine to play some roles that overlap with roles playable by combinations from another distinct set of elements. For example, a function can be rewritten using different primitives or a different factorization. Another example: the [primordial ooze of category theory](http://xahlee.info/math/i/category_theory_brendan_fong_david_spivak_2018-03.pdf). Another example: rewriting a sentence to use different words, or translating between languages.
A wish
======
It would be nice to have a situation like this: the elements of the mind are explicit. This is how the mind understands this Thing. This element in the mind goes to the heart, the center of the Thing it's supposed to bring into the mind. It's explicit, unfolded, laid out, and so the word attached to this element is the word for this Thing in the language of this mind. When this mind's thinking relates to this Thing, it uses this word. What this mind knows about this Thing activates and is activated by this word, and is exactly what is indexed by this word or brain chunk. Every element is easily understood as doing [X and only X](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) for some X. (This picture is improbable, e.g. because of crosshatch Doppelgängers.) |
87323373-d7c2-4172-a1a8-d2f700153d59 | trentmkelly/LessWrong-43k | LessWrong | AI Risk & Policy Forecasts from Metaculus & FLI's AI Pathways Workshop
Crossposted from the Metaculus Journal.
Summary
Over a two-day workshop, Metaculus Pro Forecasters and subject-matter experts (SMEs) from several organizations evaluated policy directions for reducing AI risk using scenario planning and forecasting methodologies. The key conclusions:
* Pro Forecasters and SMEs are most optimistic about risk reduction stemming from API restrictions and AI model export controls—specifically, API restrictions aimed at preventing calls that could induce recursive self-improvement.
* Model export controls may hamstring open source efforts to build and proliferate powerful AI.
* There’s only a low likelihood of meaningful U.S. policy action on AI before 2027, though this likelihood increases if an AI incident occurs in the meantime.
* By “meaningful policy action”, we mean policy action that non-trivially reduces existential risk. Hardware restrictions, firmware requirements, API regulation, and licenses for large data center clusters, among other potential policy actions, were evaluated in the workshop.
* As transformative AI is approached, the players will likely be a small number of for-profit AI labs, though this isn’t a guarantee.
* The forecasts indicate moderate pessimism about the likelihood of leading AI labs coordinating with each other to slow the AI capabilities race, and pessimism about the likelihood of an international AI treaty involving the U.S. and China.
Introduction
The increasing capabilities of artificial intelligence have led to a growing sense of urgency to consider the potential risks and possibilities associated with these powerful systems, which are now garnering attention from public and political spheres. Notably, the Future of Life Institute’s open letter calling for a six month pause on leading AI development collected over 25,000 signatures within five weeks of release, and senior White House officials have met with AI leaders to discuss risks from AI.
This report presents the findings of a |
4fef32fc-ffb5-414c-89ca-81a750ac3f30 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Composing Complex and Hybrid AI Solutions
1 Introduction
---------------
By the end of the 20th century the field of computer vision featured a vast repertoire of methods and algorithms, but suffered
from the lack of a common framework that would allow practitioners to
access these algorithms in a uniform way, and to compose them into
complex systems for their specific application. In 2000, Intel released
the OpenCV library [opencv\_library](#bib.bib2) as an infrastructure to make
optimized vision code easily available and reusable via standardized
interfaces.
A few years later the field of robotics was in a comparable situation:
hundreds of mature methods and algorithms were developed that could
potentially be used in different robotic hardware, but there was no easy
way to share these algorithms, to reuse them on different hardware, and
to compose them to build full robotic solutions. In 2007 the Robot
Operating System (ROS) project was started [Quigley:2009tg](#bib.bib7) .
ROS provided a framework for robot software development where algorithms
could be wrapped in modular, reusable components, connected via
standardized interfaces.
Both OpenCV and ROS quickly became community standards, and they are
widely recognized for having produced a quantum leap in their respective
fields. The ability to share, reuse and combine components allowed
researchers to easily build on previous results and to
compare competing techniques; it enabled companies to incorporate
existing solutions in their products; and it provided students with a
lowered entry barrier to experiment with advanced solutions.
Today, the field of Artificial Intelligence (AI) is in a similar
situation as computer vision and robotics were years ago. We have a
large repertoire of mature methods and algorithms, but no standard
way to share them in a reusable format and no easy way to compose them
into complex solutions. Some effective frameworks do exist that allow
the modularization and composition of machine learning components,
including Keras [keras](#bib.bib5) , PyTorch [pytorch](#bib.bib6) ,
Tensorflow [tensorflow](#bib.bib1) and Acumos [2018acumos](#bib.bib9) . These,
however, are geared toward the use of data-driven, reactive
machine-learning components that are typically connected into simple,
linear pipelines.
What the field of AI needs today, in our opinion, is a more general framework
that can accommodate both data-driven and knowledge-based AI algorithms,
and that allows users to connect them in arbitrarily complex topologies.
In this paper, we propose such a framework that allows AI practitioners
to:
* embed their algorithms into a standard, portable format (docker containers);
* interconnect these components using standard interfaces (Protobuf and gRPC);
* connect components in unrestricted topologies, including linear or branching pipelines, closed-loop systems or blackboard architectures;
* accommodate both machine learning models and knowledge based components (such as logical reasoners, automated planners, constraint solvers, or ontological knowledge bases), allowing one to create hybrid AI solutions;
* provide orchestration mechanisms to simplify the overall operation of a complex or hybrid solution.
The framework proposed in this paper is built on top of Acumos. Acumos
is a state of the art system that already addresses the first two of the
above desired features: in this paper, we show how we have
extended Acumos beyond its initial scope in order to accommodate all the
remaining ones. The resulting system is publicly available as the
AI4Experiments platform at <http://aiexp.ai4europe.eu/>.
The rest of this paper is organized as follows. The next section
discusses some existing frameworks and their limitations in view of the
above desiderata. Section [3](#S3 "3 Requirements for a Modular Hybrid AI Framework ‣ Composing Complex and Hybrid AI Solutions") further elaborates
those desiderata. Sections [4](#S4 "4 The AI4EU Experiments Platform ‣ Composing Complex and Hybrid AI Solutions")
and [5](#S5 "5 Enabled Patterns ‣ Composing Complex and Hybrid AI Solutions") describe our proposed approach and the patterns
that it enables, respectively, while Section [6](#S6 "6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions") shows a
few case studies that illustrate those patterns. Finally,
Section [7](#S7 "7 Conclusion and Outlook ‣ Composing Complex and Hybrid AI Solutions") concludes.
2 Existing Frameworks and their Properties
-------------------------------------------
| Name | Focus | Designing Solutions | Running & Deploying | Component Interface & Communication |
| --- | --- | --- | --- | --- |
| [Acumos](https://www.acumos.org/) | ML | GUI | k8s | Protobuf via REST, no streaming |
| AI4EU Experiments Platform | AI | GUI | k8s | Protobuf via gRPC, streaming |
| [ROS](https://www.ros.org/) | Robotics | code | local | [rosmsg](http://wiki.ros.org/rosmsg) via [DDS](https://www.omg.org/omg-dds-portal/) |
| [OpenCV](https://opencv.org/) | CV | code | library | C++ library |
| [Kubeflow](https://www.kubeflow.org/) | ML | Python DSL/GUI | k8s | Cloud Storage, no interface |
| [H2O.ai](https://www.h2o.ai/) | Parallel ML | GUI or code | various | distributed key/value store |
| DL frameworks | DL | code | library | Python |
| commercial platforms | ML | code | k8s | Protobuf + gRPC (Google), Smithy (Amazon) |
Table 1: Overview of frameworks and platforms for modular assembly of AI applications.
Abbreviations:
ML = Machine Learning, AI = Artificial Intelligence,
CV = Computer Vision, DL = Deep Learning,
k8s = Kubernetes, GUI = Graphical User Interface,
DL frameworks such as [TensorFlow](https://www.tensorflow.org/) and [Keras](https://keras.io/),
commercial platforms such as Google Vertex and Amazon Sagemaker.
We next give an overview of frameworks for modular assembly of AI applications.
We consider frameworks that provide an interface for component-component communication on a higher level than simply providing network communication.
Table [1](#S2.T1 "Table 1 ‣ 2 Existing Frameworks and their Properties ‣ Composing Complex and Hybrid AI Solutions") provides an overview of frameworks that are discussed in detail in the following sections.
Finally we give a coarse overview of further frameworks.
###
2.1 Acumos
Acumos [2018acumos](#bib.bib9) is a software framework created by AT&T
for the needs of big telecommunication providers.
It was initially conceived purely for linear machine learning pipelines,
i.e., sequences of components with an acyclic information flow
from one or more sources to one or more sinks.
Acumos contains a graphical web interface for assembling *Solutions* from *Components*
and a marketplace where components and solutions can be shared with other users or made publicly available.
A Component is a software artifact that has an interface in terms of Google Protocol Buffer (Protobuf) [protobuf](#bib.bib10) definition.
Protobuf definitions permit to define message data types and services.
A Protobuf service contains RPC calls with exactly one input and one output message data type.
Acumos creates a *Port* for each input and for each output of an RPC call,
and ensures that only Ports with matching types are connected in a solution.
An important aspect of Acumos solutions is that each component is a passive server
and that the solution is executed by means of an *orchestrator* component which passes messages between components in the correct order.
Acumos is a modular system and contains many APIs with possibilities
for plugging in, e.g., new component types or new orchestrators.
###
2.2 ROS
ROS [Quigley:2009tg](#bib.bib7) is a very popular software development framework in the robotic community. Programs written in ROS are
*nodes* which communicate through an asynchronous publish/subscribe mechanism over *topics*. One node can advertise a *topic*
(e.g. the position of the robot) to contain some data, whose format is defined in a *msg* file (e.g. three floats: x, y, theta), and
then publish this data at will. Other nodes can subscribe to this *topic* and specify a callback which will be called when a new topic
value (e.g. a new position of the robot) is published. Another mechanism, *service*, is also provided to make a synchronous remote procedure call (RPC) to a *server* advertising the service (with the types of the data passed as arguments and returned by the call defined in a *srv* file). This can be used, for example, to have a server node provide a locate service which, given the x, y position of the robot (e.g. 10.3, 4.0), returns the name of the room in which it is (e.g. "kitchen").
These two mechanisms and the definition in *msg* of commonly used data structures in robotics (e.g. odometry, images, point clouds, etc.) have led to a very dynamic ecosystem of nodes using topics produced by others and providing topics to others. As a result, most robotic equipment manufacturers provide ROS nodes to control their robots, sensors and effectors, and most mainstream robotic algorithms (SLAM, navigation, etc.) have a number of implementations available in ROS. Just sharing some common data structure definitions and providing a versatile and simple data communication mechanism led the robotics community to share data, results and algorithms like never before and enabled newcomers to get involved and active in complex robotic experiments with little initial programming investment.
ROS is now in its second installment, which addresses some of the shortcomings of the first version: DDS is now used as middleware; there is no more ROS core centralizing the bookkeeping of publishers and subscribers of topics, and of servers and clients of services; multi/mono CPU deployment is supported; etc. ROS is a clear and successful example of what can be achieved with just sharing data structure definitions and a simple communication mechanism.
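To make the publish/subscribe mechanism concrete, the following is a minimal sketch of a ROS 1 node written with rospy that subscribes to a pose topic like the one described above; the topic and node names are illustrative and not taken from any particular package.

```
# Minimal ROS 1 subscriber sketch (rospy); topic and node names are illustrative.
import rospy
from geometry_msgs.msg import Pose2D  # three floats: x, y, theta

def on_pose(msg):
    # called each time a new pose is published on the topic
    rospy.loginfo("robot at x=%.2f y=%.2f theta=%.2f", msg.x, msg.y, msg.theta)

if __name__ == "__main__":
    rospy.init_node("pose_listener")
    rospy.Subscriber("robot_pose", Pose2D, on_pose)
    rospy.spin()  # keep the node alive and process callbacks
```

A publisher node would symmetrically create rospy.Publisher("robot_pose", Pose2D, queue_size=10) and publish messages at will.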
###
2.3 OpenCV
OpenCV [opencv\_library](#bib.bib2) is an open source computer vision and machine learning software library for computer vision applications that is cross-platform and free for use under the open-source Apache 2 License, allowing easy use for commercial applications. OpenCV was originally developed in the late 1990s by Gary Bradski as an Intel Research initiative to advance CPU-intensive applications. The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000. Development and support were taken over by Willow Garage in the early 2000s, and Version 1.0 was released in 2006. A second major release in October 2009 included major changes to the C++ interface, and other improvements, with support for GPU acceleration added in 2011.
In August 2012, support for OpenCV was taken over by a non-profit foundation OpenCV.org, which currently maintains a developer and user web site. Development is provided by an independent Russian team supported by commercial corporations, with Official releases approximately every six months.
The most recent version has more than 2500 optimized algorithms, including both classic and state-of-the-art computer vision and machine learning algorithms with more than 47,000 active users and estimated downloads exceeding 18 million. The library is used extensively by companies, universities, research groups and governmental bodies.
OpenCV has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. It is optimized for real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are under development. There are over 500 algorithms with about 10 times as many functions that compose or support the algorithms. OpenCV is written in C++ and has template interfaces that work seamlessly with STL containers.
The public availability of OpenCV and its rich collection of functionality, provided in a uniform programming framework for several platforms, has been an important factor in the rapid growth of commercial and industrial use of computer vision over the last decade.
###
2.4 Kubeflow
Kubeflow is an open source machine learning platform originally created by Google to simplify the management of deep learning workflows by leveraging the features of Kubernetes. The workflows can be designed using a Python-based DSL or a Web-GUI. The nodes of a workflow are Kubernetes pods that communicate only by input and output files. The nodes have no services defined and do not communicate directly. The files are exchanged via cloud storage that is provided outside the workflow definition. The workflow basically defines the dependencies on other nodes (or tasks), very much like in a makefile. It is then up to the orchestrator (workflow engine) to find the best order for execution and level of parallelism. If all preconditions for a node are met, the pod is started; the task ends when the pod has written its output files and terminates. Then the pods for the tasks depending on it are started.
Here is a small example: task A is data cleaning, task B is model training and depends on task A.
The workflow engine reads the dependencies and concludes that task A must be run before task B.
The pod of task A is started; it reads the data files from cloud storage, cleans the data, writes new files with cleaned data to the cloud storage, and terminates.
Then the pod of task B is started; it reads the files with cleaned data from cloud storage and stores the trained model somewhere.
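As an illustration only, this two-task workflow might be written roughly as follows with the v1 Kubeflow Pipelines Python DSL; the container images are hypothetical placeholders and the file exchange happens inside the containers via cloud storage, as described above.

```
# Rough sketch of the clean-then-train workflow using the Kubeflow Pipelines v1 DSL.
# The container images are hypothetical; each pod reads/writes files in cloud storage.
from kfp import dsl

@dsl.pipeline(name="clean-then-train")
def clean_then_train():
    task_a = dsl.ContainerOp(
        name="clean-data",
        image="example.org/clean-data:latest",   # reads raw files, writes cleaned files
    )
    task_b = dsl.ContainerOp(
        name="train-model",
        image="example.org/train-model:latest",  # reads cleaned files, writes a model
    )
    task_b.after(task_a)  # the workflow engine starts B only after A has terminated
```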
###
2.5 H2O.ai
H2O.ai (<https://www.h2o.ai/>)
is an open source framework for ML with a focus on parallelization and scaling up ML in practice.
It can be deployed in Map/Reduce cloud infrastructures of all popular providers, on Hadoop, Spark, and locally.
Several popular ML and Data Science algorithms are provided as Map/Reduce implementations.
Custom algorithms can be implemented as well in Python, R, Scala, or Java.
These languages or the H2O Flow GUI is used to design H2O ML applications.
H2O has AutoML capabilities to discover the best algorithm for a given task.
###
2.6 Deep Learning Frameworks
Tensorflow [tensorflow](#bib.bib1) is a library for
developing ML applications and algorithms, supported by
dedicated hardware, if present (e.g., GPUs).
It offers a low-level and a high-level API in several
languages (e.g., Python) and is not specific
to neural network applications.
Keras [keras](#bib.bib5) and pytorch [pytorch](#bib.bib6)
are libraries for more high-level development of ML applications,
where Keras (based on Tensorflow) is focused on neural networks.
Common to these and other frameworks is that they offer a Python API for assembling an ML application, in a comfortable way, by writing a Python program. Below that Python API are efficient low-level implementations of ML algorithms
that can operate on GPUs and on large-scale compute clusters.
Communication between the algorithm parts is managed by the library.
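For illustration, the kind of high-level assembly meant here looks roughly as follows in Keras; the data and network shape are placeholders.

```
# Tiny Keras example: the Python API assembles the model, while training runs on
# the library's low-level (possibly GPU-backed) implementation. Data is synthetic.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(100, 10).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=2, batch_size=16)
```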
###
2.7 Commercial Platforms
Google Vertex AI (<https://cloud.google.com/vertex-ai>) describes itself as being a unified AI platform that facilitates building, deploying and scaling of ML models. That means that it brings together Google cloud services for building ML under one UI and API. It integrates ML frameworks such as TensorFlow, PyTorch and scikit-learn as well as frameworks via custom containers. Vertex can do data preparation (ingest, analyze, transform) and then be used to train, model, evaluate, deploy, and predict. Google Vertex AI is cloud-based, so to work with it, one logs in to the Google Cloud Platform, where a new project can be created. In the Google Cloud Shell (or locally, if preferred) a storage bucket is created for storing saved model assets for a training job. Next Docker files and containers are to be created. Training code can be written in Python, for example, using TensorFlow, but other open source frameworks or custom frameworks are possible, as mentioned above. The Docker container can now be built and tested locally and finally pushed to the Google Container Registry. There are two options for training models in Vertex: AutoML or Custom training. In the Google Cloud web-interface one can create the training job together with entering the parameters and the deployed model, as well as selecting the Docker container built in the previous step. Finally an endpoint of the trained model can be created which can be used to get predictions on the model.
Amazon Sagemaker (<https://aws.amazon.com/sagemaker/>) is infrastructure, tools and managed workflows for building, training and deploying ML models. Business analysts can use the visual interface *Sagemaker Canvas* and can prepare data, train models and create predictions without having to write code. For data scientists, Amazon Sagemaker offers an IDE for the ML life cycle. The so-called *Studio Notebooks* can access data from both structured and unstructured data sources which is then prepared. Next ML models are built. Built-in ML algorithms can be used or own algorithms. Frameworks such as *TensorFlow* and *PyTorch* are supported. Then the ML model is trained. When deploying the ML model, it can be continuously monitored — model and concept drifts can be detected and alerted, and key metrics can be collected and viewed. MLOps Engineers can streamline the ML lifecycle. They can build CI/CD pipelines to reduce management overhead, automate ML workflows, that is, accelerate data preparation and model building, training and experiments. Amazon Sagemaker Pipelines are a feature to help automate and orchestrate different steps of the ML workflow such as data loading and transformation, model building, training and tuning. Such pipelines support processing a large amount of training data, run large-scale experiments, build and re-train models at various scales. Workflows can be re-used and shared.
###
2.8 Coarse Overview of Other Frameworks
####
2.8.1 Containerization and Virtualization
Several of the frameworks described above are based on generic virtualization and containerization technology
such as Docker, VMware, and [Kubernetes](https://kubernetes.io).
This technology allows for creating images of operating systems with prepackaged software.
These images are ready to run on computers with the respective host software
without the need for specific setup operations and sometimes even without the need to run on the correct
hardware architecture.
Furthermore, containerization and virtualization permit easy restarting from a known state of an image
and comfortable switching between versions of images in a running deployed application.
While these technologies are often an important part of the infrastructure for modular AI applications,
we do not consider them separately in the following discussion
because containerization and virtualization do not provide two essential ingredients of modular AI applications:
(a) a high-level interface language for describing communication formats between components, and
(b) a possibility to compose components into applications without changing the components.
These two ingredients are provided by containerization and virtualization technology only on the low level network layer.
####
2.8.2 Machine Learning
Scikit-learn (<https://scikit-learn.org>) is an open source machine learning library for Python. It supports both supervised and unsupervised learning and provides tools for model fitting, data preprocessing, model selection and evaluation. It provides built-in ML algorithms and models, called estimators. It is possible to chain pre-processors and estimators in a pipeline. This term, pipeline, is understood as a sequential application of transforms and a final estimator.
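A minimal example of such a (linear) pipeline, chaining a preprocessing transform with a final estimator on synthetic data:

```
# Minimal scikit-learn pipeline: a transform step chained with a final estimator.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),    # preprocessing
    ("clf", LogisticRegression()),  # final estimator
])
pipe.fit(X, y)
print(pipe.predict(X[:5]))
```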
Weka (<https://www.cs.waikato.ac.nz/ml/weka/>) is also open source, but is a collection of ML algorithms in Java that can be used for classification, regression, clustering, visualization and more. It supports DL too.
####
2.8.3 Specific Kubernetes-based Frameworks
AWS Proton (<https://aws.amazon.com/proton/>) is a tool from Amazon for automating the management of containers and doing serverless deployments based on OpenAPI interfaces.
Lightbend Akka Serverless (<https://www.lightbend.com/akka-serverless>) is based on first creating data artifacts using a Protobuf API and then writing code which operates on these artifacts.
Durable storage of these artifacts is handled automatically with the goal of low latency “real-time” performance and without the need to have any knowledge about databases.
Both AWS Proton and Lightbend Akka Serverless require components to know in advance which other services they will access.
Therefore, composing arbitrary solutions from existing components without modification of the components is difficult.
####
2.8.4 Natural Language Processing (NLP)
Several popular NLP frameworks exist.
The Natural Language Toolkit, nltk (<https://www.nltk.org/>),
is an open source platform for building NLP programs with Python.
Nltk provides interfaces to over 50 corpora and lexical resources as well as text processing libraries for classification, tagging, parsing, semantic reasoning and more.
The General Architecture for Text Engineering, Gate (<https://gate.ac.uk/>),
is an open software toolkit for solving text processing problems.
It contains a graphical user interface
and an integrated development environment for language processing components.
Apache UIMA (<https://uima.apache.org/>, Unstructured Information Management Applications) is a software system for analyzing large volumes of text to discover knowledge that can be of relevance. UIMA wraps components in network services and includes scalability provisions
by replicating modular processing pipelines over a cluster of networked nodes.
####
2.8.5 Catalogs and Package Managers
Further important frameworks are related to this work
but excluded from the overview because they have a different focus.
[OpenML](https://openml.org/) is a catalog/documentation platform
for ML datasets, algorithms, and evaluation results.
[Anaconda](https://www.anaconda.com) is a repository of (pre-built) AI software packages
with a focus on enabling replicable installations of lists of packages in mutually compatible versions.
3 Requirements for a Modular Hybrid AI Framework
-------------------------------------------------
There is a need for a broad component-based reusable approach for Artificial Intelligence.
We consider as AI everything that
* has a "model" of reality (learned, manually written, or combinations), and
* performs "reasoning" on that model (computation such as prediction, inference, learning, verification, search).
###
3.1 Requirements on types of models
Models can be static or they can be updated during reasoning. They might take into account uncertainty and probability.
The framework shall provide the possibility for using pretrained models as well as training and predicting with models within one application.
Moreover, models can be modularly constructed from other models.
This possibility is not limited to typical ensemble predictors but also applies to,
e.g., methods for explaining or verifying the predictions of other (black-box or white-box) models.
Reasoning with models can be deterministic or randomized, online or offline, batched or single-shot.
###
3.2 Requirements on communication between models
Models are not used in isolation, they can be connected to other models.
Moreover, models can interact with components that connect with the real world (with humans or with other AI agents).
Connections among models can lead to multiple cycles across components.
This is especially common in robotic applications where multiple hierarchically nested control cycles are frequently used.
Another aspect on communication between models is the data volume:
communication can be low-volume (e.g., location information for a robot)
or high-volume (e.g., a whole dataset of images with labels for learning).
4 The AI4EU Experiments Platform
---------------------------------
| Feature | Requirement | Advantage or Pattern |
| --- | --- | --- |
| Container Specification | - | lower entrance barrier, broad reusability of components |
| Easy Deployment | - | |
| Model Initializer Component | - | more generic components |
| Shared Filesystem Component | - | bringing data to components more efficiently |
| Generic Orchestrator | cyclic topologies | control loops, user interfaces |
| Streaming RPC | non-batch dataflow | user interfaces, sub-components |
Table 2: Novel features of the AI4EU Experiments Platform and their interaction with requirements and resulting advantages/enabled patterns.
To address the needs we described previously,
we propose the AI4EU Experiments Platform which extends the Acumos system in several ways.
We chose Acumos as a basis because it is open source under a permissive license,
uses a modular microservice architecture,
provides a catalog and private sharing of components,
and because it provides a graphical editor for building solutions.
We next describe how we propose to transform Acumos into a platform for Hybrid AI applications in general.
Table [2](#S4.T2 "Table 2 ‣ 4 The AI4EU Experiments Platform ‣ Composing Complex and Hybrid AI Solutions") gives a structured overview.
###
4.1 Container Specification
We define a simplified format for components: all components are Docker containers that must have a gRPC server listening on port 8061 and can have a webserver listening on port 8062.
The webserver can provide information about the component or it can be the main aspect of a component, i.e., if the component is a graphical user interface (GUI).
The rationale for that is to make authoring of components easier,
to enable uniform deployment of all components,
and to have cleary defined interfaces for all components.
This improves component re-use and interoperability between components of different authors.
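A minimal sketch of what such a component can look like in Python is shown below; the generated Protobuf modules and the servicer name are hypothetical placeholders, and in practice the script would be packaged into a Docker image.

```
# Sketch of a component following the container specification: gRPC on port 8061,
# optional informational web server on port 8062. Generated module names are placeholders.
import threading
from concurrent import futures
from http.server import HTTPServer, SimpleHTTPRequestHandler

import grpc
import component_pb2_grpc  # would be generated by protoc from the component's .proto

class MyComponent(component_pb2_grpc.MyComponentServicer):
    pass  # implement the component's RPCs here

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    component_pb2_grpc.add_MyComponentServicer_to_server(MyComponent(), server)
    server.add_insecure_port("[::]:8061")  # mandatory gRPC port
    server.start()

    web = HTTPServer(("", 8062), SimpleHTTPRequestHandler)  # optional info page
    threading.Thread(target=web.serve_forever, daemon=True).start()

    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```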
###
4.2 Easy Deployment
We simplify deployment of solutions:
we provide a button for downloading a ZIP file which contains
(i) a script for deploying all components and an orchestrator component in a kubernetes environment, and
(ii) a script for interacting with the orchestrator component for executing the solution.
The deployment script requires as input just the namespace where the whole solution shall be deployed.
The orchestrator client script can start and stop orchestration and it makes orchestration event logs accessible.
The goal of this extension is to make obtaining, deploying, and running a solution as easy as possible.
Event logs are helpful for seeing how a solution is orchestrated and diagnosing potential problems.
###
4.3 Model Initializer Component
We provide a new component type for initializing other components,
e.g., with machine learning models or knowledge bases.
This component is not a deployed Docker container but it represents a change to the deployment of all components that are connected to the Model Initializer component.
This makes component initialization explicit.
It also facilitates more generic components,
because the (learned or manually curated) AI model inside a component does not need to be fixed—it can be initialized by an initializer component.
###
4.4 Shared Filesystem Component
We provide a new component type which represents shared file systems.
Each component can obtain access to such a shared file system
by means of an explicit link in the solution.
Like the Model Initializer, this component is not a deployed Docker container but an explicit representation of a change to the solution deployment.
Shared filesystems permit data-intensive applications to access the same data without passing it over gRPC messages.
Moreover, this makes it possible to *execute a solution where the data is*, by providing existing shared volumes in kubernetes to the components of a solution for processing.
###
4.5 Generic Orchestrator
We relax many constraints on the allowed topologies of solutions
by means of a new orchestrator software
that is able to run applications with topologies that contain cycles.
The orchestrator is very general and based on using multiple threads instead of computing an execution order.
Therefore, it can deal with any topology as long as connections between components respect interfaces.
An important rationale for the new orchestrator is the need for feedback cycles
in many AI applications, in particular control loops, e.g., in robotics applications.
###
4.6 Streaming RPC
We permit streaming RPC both for input and output of RPC calls.
Streaming RPC starts a call and then permits streaming in (or out) a variable number of messages.
Streaming is the gRPC word for event-based interaction.
Importantly, there can be arbitrary delays between messages.
An RPC receives each input message as soon as it is sent by the previous component, and an RPC can decide when to send output messages on a stream, and how many, before the RPC is closed.
This enables asynchronous communication,
components using other components as sub-components, and cyclic information flow.
In particular, user interfaces can trigger computations based on user actions (events)
and display results from computations of other components.
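On the Python side of gRPC, streaming simply means that an RPC handler receives an iterator of requests or returns a generator of responses. The following sketch, with purely illustrative names, shows both directions; in a real component the servicer subclasses the class generated from the .proto file.

```
# Sketch of streaming RPC handlers in Python gRPC. Names are illustrative;
# Empty stands in for the generated Protobuf message type.
import queue

class Empty:
    pass

events = queue.Queue()  # filled elsewhere, e.g. by user actions in a GUI

class GuiServicer:  # in a real component: subclass of the generated servicer base
    def requestEvaluation(self, request, context):
        # output streaming: emit one message per event, with arbitrary delays
        while True:
            yield events.get()

    def processResult(self, request_iterator, context):
        # input streaming: each message arrives as soon as the peer sends it
        for result in request_iterator:
            print("received result:", result)
        return Empty()
```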
5 Enabled Patterns
-------------------
These extensions enable the following patterns for composing applications.
These patterns are not possible using the original Acumos software.
###
5.1 Graphical User Interfaces that interact with components
This pattern permits a component to act as a graphical user interface (GUI)
which sends events to a solution and displays the results of that solution.
Events are emitted via streaming output, results are ingested via streaming input.
For each type of result to be visualized, the GUI component can have a separate
RPC.
Multiple types of results can be visualized at different rates this way.
The solution and the components of the solution that receive events from the GUI
and send results to the GUI do not need to be aware that they will be connected to a GUI.
An implemented example of a GUI is the Sudoku Design Assistant GUI (<https://github.com/peschue/ai4eu-sudoku/tree/streaming/gui>)
which has the following interface.
```
service SudokuGUI {
  rpc requestSudokuEvaluation(Empty)
      returns (stream SudokuDesignEvaluationJob);
  rpc processEvaluationResult(stream SudokuDesignEvaluationResult)
      returns (Empty);
}
```
The first RPC emits a job for each user event that requires a new evaluation.
The second RPC displays results.
The Sudoku topology is described in Section [6.1](#S6.SS1 "6.1 Sudoku Tutorial ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions").
Another example that uses the GUI pattern is the maze planner, described in Section [6.2](#S6.SS2 "6.2 Planning framework and control circuit ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions").
###
5.2 Sub-components

Figure 1: Subcomponent pattern. The arrows on the left indicate an arbitrary interaction of the caller with other components.
This pattern permits a component to use the functionality of another component for computing a result,
illustrated in Figure [1](#S5.F1 "Figure 1 ‣ 5.2 Sub-components ‣ 5 Enabled Patterns ‣ Composing Complex and Hybrid AI Solutions").
Calling a sub-component is achieved by emitting requests on a stream output RPC, ingesting results using a stream input RPC, and connecting caller and callee in a cyclic topology.
The caller can call the callee once or multiple times.
The sub-component does not need to provide a specific interface to be ‘callable’ in that way.
The caller may call one component and ingest results from another component,
so the ‘sub-component’ may actually be a ‘sub-solution’.
###### Example 1
An implemented example of a subcomponent is the Answer Set Solver of the Sudoku Solution (<https://tinyurl.com/368c3t6w>, <https://github.com/peschue/ai4eu-sudoku/tree/streaming>)
which is a generic component with the gRPC interface
```
service OneShotAnswerSetSolver {
  rpc solve(SolverJob) returns (SolveResultAnswersets);
}
```
where the input `SolverJob` indicates how many answers are of interest
and `SolveResultAnswersets` contains all answers.
The Sudoku Design Evaluator (<https://github.com/peschue/ai4eu-sudoku/tree/streaming/evaluator>),
which is using the ASP Solver as a subcomponent,
has the gRPC interface
```
service SudokuDesignEvaluator {
  rpc evaluateSudokuDesign(SudokuDesignEvaluationJob)
      returns (SudokuDesignEvaluationResult);
  rpc callAnswersetSolver(Empty) returns (stream SolverJob);
  rpc receiveAnswersetSolverResult(stream SolveResultAnswersets)
      returns (Empty);
}
```
where `evaluateSudokuDesign` is the way the GUI uses the Design Evaluator,
the RPC `callAnswersetSolver` emits requests to the ASP Solver,
and the RPC `receiveAnswersetSolverResult` ingests the results.
If a large number of answers is of interest,
the solver can stream out solutions using the following interface.
```
service OneShotAnswerSetSolver {
  rpc solve(SolverJob) returns (stream SolveResultAnswerset);
}
```
Here, each output in the stream contains a single result.
∎
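The essence of the caller side can be sketched as follows (a simplified, illustrative Python servicer, not the actual implementation from the repository): jobs are pushed onto the outgoing stream and answers are collected from the incoming stream, while the orchestrator wires the two streams to the solver.

```
# Simplified caller-side sketch of the sub-component pattern. In the real Design
# Evaluator, jobs and results are translated between message types; here they are
# forwarded as-is for brevity.
import queue

jobs_out = queue.Queue()    # SolverJob messages streamed out to the sub-component
answers_in = queue.Queue()  # solver answers streamed back in

class EvaluatorServicer:    # in a real component: subclass of the generated base
    def callAnswersetSolver(self, request, context):
        while True:
            yield jobs_out.get()          # output stream towards the solver

    def receiveAnswersetSolverResult(self, request_iterator, context):
        for answer in request_iterator:
            answers_in.put(answer)        # input stream from the solver
        return None                       # stand-in for the Empty message

    def evaluateSudokuDesign(self, request, context):
        jobs_out.put(request)             # simplified: forward the job unchanged
        return answers_in.get()           # and return the first answer received
```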
###
5.3 Control Loops
Topologies can contain cycles:
the output of a component is passed to a component that directly
or indirectly feeds back a result into the same component.
Different from the previous two patterns,
in this pattern there is no notion of a singular computation ‘event’ in the solution.
Instead, the cycle periodically passes messages around in order to
realize a control loop,
where a controller component receives sensor input from the environment
and emits output to influence the environment.
Importantly, sensor input can be transferred at a different rate than controller output,
if desired.
Additionally, the goal of the controller can be updated asynchronously using another stream.
Moreover, multiple cycles can exist, e.g., a slow high-level controller
that uses reasoning to set low-level goals, which are fed into a fast low-level controller that receives sensor information and sends actuator signals to a robot in the environment.
A control loop topology is used by the Maze planner example, see Section [6.2](#S6.SS2 "6.2 Planning framework and control circuit ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions"): multiple cycles exist, since the executor performs actions in the simulator and needs to trigger re-planning if an action fails, leading to further actions and potentially further re-planning.
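As a rough sketch (with made-up message fields and gain), a controller component in such a cyclic topology can be structured as three streaming handlers: sensor readings stream in, actuator commands stream out at their own rate, and goal updates arrive asynchronously on a third stream.

```
# Illustrative controller sketch for a cyclic topology; the message field (x) and
# the proportional gain are made up, and the servicer base class is omitted.
import queue

goal = {"x": 0.0}            # set-point, updated asynchronously
commands = queue.Queue()     # actuator commands waiting to be streamed out

class ControllerServicer:
    def processSensor(self, request_iterator, context):
        for reading in request_iterator:      # input stream, at sensor rate
            error = goal["x"] - reading.x
            commands.put(0.5 * error)         # simple proportional control
        return None                           # stand-in for Empty

    def emitCommand(self, request, context):
        while True:
            yield commands.get()              # output stream, at controller rate

    def updateGoal(self, request_iterator, context):
        for g in request_iterator:            # goals can arrive at any time
            goal["x"] = g.x
        return None
```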
6 Example Applications / Case Studies
--------------------------------------
###
6.1 Sudoku Tutorial


| |
| --- |
| GUI Interface: |
| rpc requestSudokuEvaluation(Empty) returning (stream SudokuEvaluationJob); |
| rpc processEvaluationResult(stream SudokuEvaluationResult) returning (Empty); |
| Design Evaluator Interface: |
| rpc evaluateSudokuDesign(SudokuEvaluationJob) returning (SudokuEvaluationResult); |
| rpc callAnswersetSolver(Empty) returning (stream SolverJob); |
| rpc receiveAnswersetSolverResult(stream SolveResultAnswersets) returning (Empty); |
| ASP Solver Interface: |
| rpc solve(SolverJob) returning (SolveResultAnswerset); |
Figure 2: Sudoku Tutorial connections (above) and Protobuf interfaces (below). Ports with white (resp., black) background are input (resp., output) ports.
The sudoku-design-evaluator-stream component is a single component which is
depicted twice for presentation reasons.
The Sudoku Tutorial is a solution comprising fully open-sourced components (<https://github.com/peschue/ai4eu-sudoku/>)
with the purpose of helping others to create assets and solutions.
It consists of a web interface (GUI) where one can configure a partial Sudoku grid,
and with each change the Design Evaluator component computes up to two solutions
to the Sudoku and returns the common digits in the grid to the GUI. If there is no
solution, a minimal repair for the fixed digits is computed and returned to the GUI.
The Design Evaluator performs these computations using a generic third component,
the Clingo [gebser2011potassco](#bib.bib3) Answer Set Solver.
Figure [2](#S6.F2 "Figure 2 ‣ 6.1 Sudoku Tutorial ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions") shows the components as they are displayed in the
graphical user interface of the AI4EU Experiments Platform, including their connections and Protobuf interfaces.
This Tutorial contains streaming for the purpose of sending user events to the Design Evaluator
and for sending display updates to the user interface,
moreover streaming is used for integrating the Answer Set Solver component
as a subcomponent to the Design Evaluator.
Hence the tutorial uses the GUI and Sub-component patterns, see Sections [5.1](#S5.SS1 "5.1 Graphical User Interfaces that interact with components ‣ 5 Enabled Patterns ‣ Composing Complex and Hybrid AI Solutions") and [5.2](#S5.SS2 "5.2 Sub-components ‣ 5 Enabled Patterns ‣ Composing Complex and Hybrid AI Solutions"), respectively.
For a quick start into developing suitable components,
the repository contains a script helper.py which provides the following functionalities:
(i) running each of the three components outside of docker;
(ii) orchestrating locally running components with a hardcoded (very short) orchestrator script;
(iii) building docker images for each of the three components;
(iv) running, stopping, and following these docker images in a local docker installation; and
(v) pushing docker images to a docker repository.
The complete Sudoku example (<https://tinyurl.com/26wvv4j4>)
can be deployed using the “Deploy to Local” functionality
and executing kubernetes-client-script.py -n NAMESPACE in a kubernetes environment where
NAMESPACE is an empty namespace for deployment.
This script waits for all containers to run in the kubernetes namespace
and then starts the orchestration and displays orchestration events.
For more details see the detailed walk-through Sudoku tutorial on YouTube (<https://youtu.be/gM-HRMNOi4w>).
###
6.2 Planning framework and control circuit
The maze-planner example (available open source under <https://github.com/uwe-koeckemann/ai4eu-maze-planner/>) illustrates how planning, execution, simulation, and a user
interface can be connected and orchestrated in AI4EU Experiments Platform. It contains several
loops for task request and achievement, action execution, and state updates.
The topology is illustrated in Figure [3](#S6.F3 "Figure 3 ‣ 6.2 Planning framework and control circuit ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions").
The *Graphical User Interface (GUI)* is used to assemble planning problems,
interact with a simulator and request tasks from an executor. The
*Simulator* simulates action execution and provides state updates to the
GUI and the executor. A *Planner* receives planning problems and returns
solution plans or failure. The *Executor* connects these three components
and has several internal loops. If it receives a goal, it will take the last
state provided by the simulator and its operator model to assemble a planning
problem. It then requests a plan from the planner. If no plan is found, failure
is reported to the GUI. Otherwise, the actions in the plan are placed in a
queue to be sent to the simulator. If the action queue is not empty and
currently no action is running, the executor will send the next action. If an
action is successfully executed (by the simulator) the next action will be
started. If an action fails (e.g., the simulator cannot apply it or it does not
have a model), the rest of the queue is discarded and failure is reported to the
user. If all actions in a queue are successfully executed, success is reported
to the user (via the GUI).

| | GUI Interface: |
| --- | --- |
| G1 | rpc requestTask(Empty) returns(Goal); |
| G2 | rpc processTaskResult(Result) returns(Empty); |
| G3 | rpc getState(Empty) returns(State); |
| G4 | rpc visualizeState(State) returns(Empty); |
| | Simulator Interface: |
| S1 | rpc doAction(Action) returns (Result); |
| S2 | rpc getState(Empty) returns (State); |
| S3 | rpc setState(State) returns (Empty); |
| | Planner Interface: |
| P | rpc plan(Problem) returns (Solution); |
| | Executor Interface: |
| E1 | rpc assembleProblem(Goal) returns (Problem); |
| E2 | rpc doNextAction(Empty) returns (Action); |
| E3 | rpc processPlanningResult(Solution) returns (Result); |
| E4 | rpc processActionResult(Result) returns (Empty); |
| E5 | rpc processState(State) returns (Empty); |
Figure 3: Maze planner connections (above) and Protobuf interfaces (below). Connection arrows are annotated with references to the interfaces below. Symbols at the origin/destination of an arrow indicate the output/input of the corresponding RPC is used.
Realizing this solution through the AI4EU Experiments Platform decouples all components and allows them to be replaced by compatible alternatives. In the solution, for instance, the
simulator and executor use the same action model, but can easily be replaced by
more realistic versions. Simulation, e.g., could use a more precise model or
simulate random action failures or external events.
A ROS integration for the AI4EU Experiments Platform is planned, which will make it possible to exchange the simulator for a real ROS-based robotics environment.
A more sophisticated executor could maintain a time-line representation
to decide when to start actions, how long to wait for them to finish, and to
allow parallel execution via scheduling (see, e.g., [ghallab2016automated](#bib.bib4), Ch. 4).
In this case, a scheduler could be placed between planning and execution in Figure [3](#S6.F3 "Figure 3 ‣ 6.2 Planning framework and control circuit ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions").
Execution could start with an empty action set and learn preconditions and effects from
trial and error. To realize this, we just need to replace/extend the executor with one
that collects data and can use a learner to extract operators from data.
###
6.3 Real time object detection in networked cameras
Developing a system for Urban Analytics in 10 minutes using the AI4EU Experiments Platform.
This example describes a computer vision application that uses algorithms for object detection in images to develop and deploy a system capable of providing ”urban data analytics” in a complex scenario.
We show how to use the popular CNN-based [yolo1](#bib.bib8) algorithm to survey and monitor a street intersection in a typical urban setting. We then further exploit the potential of the AI4EU Experiments Platform by extending the pipeline with one simple component that transforms the scope and aim of the original task, showing that the platform's flexibility and modular design can dramatically increase software productivity.

Figure 4: Object detection in urban settings. Each bounding box has a label that identifies the type of object and the confidence score of the detection. The annotated images are streamed by a webserver.
As Figure [4](#S6.F4 "Figure 4 ‣ 6.3 Real time object detection in networked cameras ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions") illustrates, the goal is to acquire images from an IP camera, process them and display each image together with information about the identified objects. The recognition task is accomplished by the CNN-based detector YOLOv5 (<http://ultralytics.com>), which labels each detected object and regresses a bounding box for object location. The annotated image should be accessible through the internet with a browser.
####
6.3.1 The AI4EU Experiments Platform solution
Such a system can be easily assembled using the tools available in the AI4EU Experiments Platform, in this case the DesignStudio. Figure [5](#S6.F5 "Figure 5 ‣ 6.3.1 The AI4EU Experiments Platform solution ‣ 6.3 Real time object detection in networked cameras ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions") shows the image processing solution where the main components deliver the following tasks:

| Components |
| --- |
| yolo-camera | rpc | Get(Empty) returns (Image); |
| | message | Image { |
| | | bytes data = 1;} |
| yolo-yolo | rpc | detect(Image) returns (DetectedObjects); |
| | message | DetectedObjects { |
| | | repeated DetectedObject objects = 1;} |
| | message | DetectedObject { |
| | | string class\_name = 1; |
| | | uint32 class\_idx = 2; |
| | | Point p1 = 3; |
| | | Point p2 = 4; |
| | | double conf = 5;} |
| | message | Point { |
| | | double x = 1; |
| | | double y = 2;} |
| yolo-visualizer | rpc | Visualize(ImageWithObjects) returns (Empty); |
| | message | ImageWithObjects { |
| | | Image image = 1; |
| | | DetectedObjects objects = 2;} |
Figure 5: Pipeline and protobuf definitions for image processing tasks
* yolo-camera
Acquires images from an internet camera. The IP and security data (user, pass) are passed as environment variables during deployment and upon request it returns an Image retrieved from the IP camera.
* yolo-yolo (<https://www.ai4europe.eu/research/ai-catalog/yolo-v5-object-detection>)
The CNN-based object detector accepts one image as input and outputs a message with the list of detected objects, its location and label confidence score.
* yolo-visualizer (source for all components: <https://github.com/DuarteMRAlves/yolov5-grpc/>)
The visualization component deploys a Flask-based web server, and serves a web page that displays the annotated image. The input of this service has two messages: one image and a list of objects.
Besides the processing components, this pipeline requires a special node, the “custom collator”, tasked with merging messages coming from different nodes. As described in the table of Figure [5](#S6.F5 "Figure 5 ‣ 6.3.1 The AI4EU Experiments Platform solution ‣ 6.3 Real time object detection in networked cameras ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions"), the input to the yolo-visualizer service is a message composed of one image and a list of detected objects (ImageWithObjects), which originate in different nodes. Thus, the “custom collator” collects the image from the camera (message Image) and the output from YOLO (message DetectedObjects), and composes a message of type ImageWithObjects which is sent to the Visualizer.
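To make the collator's role concrete, here is a minimal sketch of the merging step in Python. It is illustrative only: the module name `pipeline_pb2` and the driver loop are assumptions, not the platform's actual generated code, and in the deployed solution the orchestrator routes the gRPC messages between containers rather than a single loop.

```python
# Hypothetical module generated by protoc from the Figure 5 definitions;
# the name is an assumption, not the platform's actual generated code.
import pipeline_pb2


def collate(image, objects):
    """Merge the camera output (Image) and the YOLO output (DetectedObjects)
    into the ImageWithObjects message that yolo-visualizer expects."""
    return pipeline_pb2.ImageWithObjects(image=image, objects=objects)


def run_once(camera_stub, yolo_stub, visualizer_stub):
    """One pass through the pipeline: fetch, detect, merge, display.
    The stubs are the gRPC clients of the three services."""
    image = camera_stub.Get(pipeline_pb2.Empty())        # yolo-camera
    objects = yolo_stub.detect(image)                    # yolo-yolo
    visualizer_stub.Visualize(collate(image, objects))   # yolo-visualizer
```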
This application is available in <http://aiexp.ai4europe.eu>.
#### 6.3.2 Geo Location and Scaling Up to a Network
The data collected from street cameras can easily be geo-referenced, anchoring the extracted “analytics” to global coordinates. Also, by anchoring the detections to global references, we can scale up and fuse this data across a network of similar devices with non-overlapping viewpoints. The above system could be deployed seamlessly to any available camera, and the network’s output data then works as one single “data source”.

Figure 6: Green dots are used to compute the transformation between cameras. Red dots represent the predicted location of the midpoint of the bounding box low segment
Assuming the ground is well approximated locally by a plane, there is a projective transformation that maps corresponding points in two images - a *homography* - which can be estimated from a minimum of 4 pairs of non-collinear points.
Leveraging services like Google Maps or OpenStreetMap, we can map the camera image to geo-referenced satellite images of the same area in order to estimate the *homography*. In the case illustrated in Figure [6](#S6.F6 "Figure 6 ‣ 6.3.2 Geo Location and Scaling Up to a Network ‣ 6.3 Real time object detection in networked cameras ‣ 6 Example Applications / Case Studies ‣ Composing Complex and Hybrid AI Solutions"), the selected pairs of calibration points are identified by the green dots. Of course, the “flat world” assumption does not hold for vehicles (or pedestrians), particularly if they are close to the camera. However, as we show in the figure (red dots), the midpoint of the bottom line segment of the bounding box is often close to the ground and its mapping is precise enough for the task at hand.
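As a rough illustration of this mapping step (not part of the original pipeline), the following sketch uses OpenCV to estimate the homography from four hand-picked point pairs and to project the bottom-edge midpoint of a bounding box onto the satellite image; all coordinates are made up.

```python
import numpy as np
import cv2

# Four corresponding points picked by hand on the camera image and on the
# geo-referenced satellite image (illustrative values only).
camera_pts = np.array([[102, 400], [580, 390], [630, 215], [60, 230]], dtype=np.float32)
map_pts = np.array([[310, 120], [520, 115], [505, 40], [330, 45]], dtype=np.float32)

# Homography mapping camera-image points to map points.
H, _ = cv2.findHomography(camera_pts, map_pts)

def detection_to_map(p1, p2):
    """Project a detection (bounding-box corners p1, p2 in camera pixels)
    to map coordinates via the midpoint of the bottom edge, which is
    usually close to the ground plane."""
    x_mid = (p1[0] + p2[0]) / 2.0
    y_bottom = max(p1[1], p2[1])
    pt = np.array([[[x_mid, y_bottom]]], dtype=np.float32)
    return cv2.perspectiveTransform(pt, H)[0, 0]   # (x, y) on the map

print(detection_to_map((200, 250), (260, 380)))
```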
In an extension of the above pipeline, we would have a second branch that handles the geo-referenced information.
A fully flexible and general structure is easily set up if we introduce a special “camera” node that simply crawls a website for the appropriate satellite image and feeds the “custom collator” with the corresponding image. Most of the methodologies in computer vision are intuitive to a non-specialist, particularly those involving 3D space; however, the maths is often inaccessible to “lay users/programmers”. With this example we show the transformational role that platforms like the AI4EU Experiments Platform can play, empowering unskilled users with AI technologies that play a key role in their specific domain.
7 Conclusion and Outlook
-------------------------
We described how we created the AI4EU Experiments Platform, which enables the composition of a broad range of AI applications
based on several extensions of the Acumos framework.
This is the beginning of a long-term effort to create an ecosystem
where modular AI components and visually composed solutions are used
for experimentation, prototyping, and educational purposes
by researchers, industry stakeholders, students, and further interested groups.
In particular, the visual composition and a mechanism for finding
matching components for some output port of a component
are intended to lower the barriers for using the system.
Over time, more and more components and useful generic interfaces will be onboarded in the platform
and we foresee that with each addition the system will become more useful for a broader audience.
This work started as a part of the AI4EU H2020 project and will be continued
under the governance of the Eclipse Foundation as “Eclipse Graphene” (<https://projects.eclipse.org/projects/technology.graphene>).
A range of video tutorials is available on YouTube (<https://www.youtube.com/playlist?list=PLL80pOdPsmF6s6P6i2vZNoJ2G0cccwTPa>).
Acknowledgements
----------------
This work has been supported by the
European Union’s Horizon 2020 research and innovation programme under
grant agreement No. 825619 (AI4EU). |
616e3033-03cb-41eb-b55c-d4645f0c60f2 | trentmkelly/LessWrong-43k | LessWrong | The Curse Of The Counterfactual
The Introduction
The Curse of the Counterfactual is a side-effect of the way our brains process is-ought distinctions. It causes our brains to compare our past, present, and future to various counterfactual imaginings, and then blame and punish ourselves for the difference between reality, and whatever we just made up to replace it.
Seen from the outside, this process manifests itself as stress, anxiety, procrastination, perfectionism, creative blocks, loss of motivation, inability to let go of the past, constant starting and stopping on one goal or frequent switching between goals, low self-esteem and many other things. From the inside, however, these counterfactuals can feel more real to us than reality itself, which can make it difficult to even notice it's happening, let alone being able to stop it.
Unfortunately, even though each specific instance of the curse can be defused using relatively simple techniques, we can’t just remove the parts of our brain that generate new instances of the problem. Which means that you can’t sidestep the Curse by imagining yet another counterfactual world: one in which you believe you ought to be able to avoid falling into its trap, just by being smarter or more virtuous!
Using examples derived from my client work, this article will show how the Curse operates, and the bare bones of some approaches to rectifying it, with links to further learning materials. (Case descriptions are anonymized and/or composites; i.e., the names are not real, and identifying details have been changed.)
THE DISCLAIMER
To avoid confusion between object-level advice, and the meta-level issue of “how our moral judgment frames interfere with rational thinking”, I have intentionally omitted any description of how the fictionalized or composite clients actually solved the real-life problems implied by their stories. The examples in this article do not promote or recommend any specific object-level solutions for even those clients’ actual specific probl |
f6679515-8057-4692-b53d-8c2615953871 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | 'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more
less than 18 hours ago this letter was
published calling for an immediate pause
in training AI systems more powerful
than gpt4 by now you will have seen the
headlines about it waving around
eye-catching names such as Elon Musk I
want to show you not only what the
letter says but also the research behind
it the letter quotes 18 supporting
documents and I have either gone through
or entirely read all of them you'll also
hear from those at the top of openai and
Google on their thoughts whether you
agree or disagree with the letter I hope
you learned something so what did it say
first they described the situation as AI
Labs locked in an out of control race to
develop and deploy ever more powerful
digital Minds that no one not even their
creators can understand predict or
reliably control they ask just because
we can should we automate away all the
jobs including the fulfilling ones and
other questions like should we risk loss
of control of our civilization so what's
their main ask well they quote open ai's
AGI document at some point it may be
important to get independent review
before starting to train future systems
and for the most advanced efforts to
agree to limit the rate of growth of
compute used for creating new models and
they say we agree that point is now and
here is their call therefore we call on
all AI labs to immediately pause for at
least six months the training of AI
systems more powerful than gpt4 notice
that they are not saying shut down GPT 4
just saying don't train anything smarter
or more advanced than gp4 they go on if
such a pause cannot be enacted quickly
governments should step in and Institute
a moratorium I will come back to some
other details in the letter later on but
first let's glance at some of the
eye-catching names who have signed this
document we have Stuart Russell who
wrote The Textbook on AI and Yoshua
Bengio who pioneered deep learning among
many other famous names we have the
founder of stability AI which is behind
stable diffusion of course I could go on
and on but we also have names like Max
tegmark arguably one of the smartest
people on the planet and if you notice
below plenty of researchers at deepmind
but before you dismiss this as a bunch
of Outsiders this is what Sam Altman
once wrote in his blog many people seem
to believe that superhuman machine
intelligence would be very dangerous if
it were developed but think that it's
either never going to happen or
definitely very far off this is sloppy
dangerous thinking and a few days ago on
the Lex Friedman podcast he said this I
think it's weird when people like think
it's like a big dunk that I say like I'm
a little bit afraid and I think it'd be
crazy not to be a little bit afraid
and I empathize with people who are a
lot afraid current worries that I have
are that they're going to be
disinformation problems or economic
shocks or something else at a level far
beyond anything we're prepared for
and that doesn't require super
intelligence that doesn't require a
super deep alignment problem in the
machine waking up and trying to deceive
us
and I don't think that gets
enough attention
I mean it's starting to get more I guess
before you think that's just Sam Altman
being Sam Altman here's Ilya Sutskever
who arguably is the brains behind open
Ai and gpt4 as somebody who deeply
understands these models what is your
intuition of how hard alignment will be
like I think so here's what I would say
I think you're the current level of
capabilities I think we have a pretty
good set of ideas of how to align them
but I would not underestimate the
difficulty of alignment of models that
are actually smarter than us of models
that are capable of misrepresenting
their intentions why alignment he means
matching up the goal of AI systems with
our own and at this point I do want to
say that there are reasons to have hope
on AI alignment and many many people are
working on it I just don't want anyone
to underestimate the scale of the task
or to think it's just a bunch of
Outsiders not the creators themselves
here was a recent interview by Time
Magazine with Demis Hassabis who many
people say I sound like he is the
founder of course of deepmind who are
also at The Cutting Edge of large
language models he says when it comes to
very powerful Technologies and obviously
AI is going to be one of the most
powerful ever we need to be careful not
everybody is thinking about those things
it's like experimentalists many of whom
don't realize they're holding dangerous
material and again Emad Mostaque I don't
agree with everything in the letter but
the race condition ramping as h100s
which are the next generation of gpus
come along is not safe for something the
creators consider as potentially an
existential risk time to take a breath
coordinate and carry on this is only for
the largest models he went on that these
models can get weird as they get more
powerful so it's not just AI Outsiders
but what about the research they cite
those 18 supporting documents that I
referred to well I read each of them now
for some of them I had already read them
like the Sparks report that I did a
video on and the gpt4 technical report
that I also did a video on some others
like the super intelligence book by
Bostrom I had read when it first came
out one of the papers was called X risk
analysis for AI research which are risks
that threaten the entirety of humanity
of course the paper had way too much to
cover in one video but it did lay out
eight speculative hazards and failure
modes including AI weaponization
deception power seeking behavior in the
appendix they give some examples some
are concerned that weaponizing AI may be
an on-ramp to more dangerous outcomes in
recent years deep reinforcement learning
algorithms can outperform humans at
aerial combat while AlphaFold has
discovered new chemical weapons and they
go on to give plenty more examples of
weaponization what about deception I
found this part interesting they say
that AI systems could also have
incentives to bypass monitors and draw
an analogy with Volkswagen who
programmed their engines to reduce
emissions only when being monitored it
says that future AI agents could
similarly switch strategies when being
monitored and take steps to obscure
their deception from monitors with power
seeking Behavior they say it has been
shown that agents have incentives to
acquire and maintain power and they end
with this geopolitical quote whoever
becomes the leader in AI will become the
ruler of the world but again you might
wonder if all of the research that was
cited comes from Outsiders well no
Richard and go was the lead author of
this paper and he currently works at
openai it's a fascinating document on
the alignment problem from a deep
learning perspective from insiders
working with these models the author by
the way was the guy who wrote this
yesterday on Twitter I predict that by
the end of 2025 neural net will have
done this have human level situational
awareness autonomously design code and
distribute whole apps write
award-winning short stories and
publishable 50k word books and generate
coherent 20-minute films only conceding
that the best humans will still be
better at this list but what did his
paper say well many things I've picked
out some of the most interesting it gave
an example of reward hacking where an
algorithm learned to trick humans to get
good feedback the task was to grab a
ball with a claw and it says that the
policy instead learned to place the claw
between the camera and the ball in a way
that it looked like it was grasping the
ball it therefore mistakenly received
High reward from Human supervisors
essentially deception to maximize reward
of course it didn't mean to deceive it
was just maximizing its reward function
next the paper gives details about why
these models might want to seek Power It
quotes the memorable phrase you can't
fetch coffee if you're dead implying
that even a policy or an algorithm with
a simple goal like fetching coffee would
pursue survival as an instrumental sub
goal in other words the model might
realize that if it can't survive it
can't achieve its reward it can't reach
the goal that the human set for it and
therefore it will try to survive now I
know many people will feel that I'm not
covering enough of these fears or
covering too many of them I agree with
the authors when they conclude with this
reasoning about these topics is
difficult but the stakes are
sufficiently high that we cannot justify
disregarding or postponing the work
towards the end of this paper which was
also cited by the letter it gave a very
helpful supplementary diagram it showed
that even if you don't believe that
unaligned AGI is a threat even current
and near-term AI complicate so many
other relationships and Dynamics state
to state relations state to Citizen
relations it could complicate social
media and recommender systems it could
give the state too much control over
citizens and corporations like Microsoft
and Google too much leverage against the
state before I get to some reasons for
hope I want to touch on that seminal
book super intelligence by Bostrom I
read it almost a decade ago and this
quote sticks out before the prospect of
an intelligence explosion we humans are
like small children playing with a bomb
such as the mismatch between the power
of our plaything and the immaturity of
our conduct super intelligence is a
challenge for which we are not ready now
and will not be ready for a long time we
have little idea when the detonation
will occur though if we hold the device
to our ear we can hear a faint ticking
sound but now let's move on to Max
tegmark one of the signatories and a top
physicist and AI researcher at MIT we
just say bigger neural networks ever
more hardware and it's just train the
heck out and more data and poof now it's
very powerful that I think is the the
most unsafe and Reckless approach the
alternative to that is intelligible
intelligence approach instead where we
say no networks is just a tool for the
first step to get the intuition but then
we're going to spend also serious
resources sources on other AI techniques
for demystifying this black box and
figuring out what's it actually doing so
we can convert it into something that's
equally intelligent but that we actually
understand what it's doing this aligns
directly with what Ilya the open AI
Chief scientist believes needs to be
done do you think we'll ever have a
mathematical definition of alignment
mathematical definition I think is
unlikely aha
like I do I do think that we will
instead have multiple like rather than
rather than achieving one mathematical
definition I think we'll achieve
multiple definitions that look at
alignment from different aspects and I
think that this is how we will get the
assurance that we want and by which I
mean you can look at the behavior you
can look at the behavior in various
tests control M's in various adversarial
stress situations you can look at how
the neural net operates from the inside
I think you could have to look at all
several of these factors at the same
time and there are people working on
this here is the AI safety statement
from anthropic a huge player in this
industry in the section on mechanistic
interpretability which is understanding
the machines they say this we also
understand significantly more about the
mechanisms of neural network computation
than we did even a year ago such as
those responsible for memorization so
progress is being made but even if
there's only a tiny risk of existential
harm more needs to be done the
co-founder of the center for Humane
technology put it like this it would be
the worst of all human mistakes to have
ever been made and we literally don't
know how it works we don't know all the
things it will do and we're we're
putting it out there before we actually
know whether it's safe Raskin points to
a recent survey of AI researchers where
nearly half said they believe there's at
least a 10 percent chance AI could
eventually result in an extremely bad
outcome like human extinction where do
you come down on that I don't know the
the point it scares me you don't know
yeah well here's here's the point like
it's it's imagine you're about to get on
an airplane and 50 of the engineers that
built the airplane say there's a 10
chance that their airplane might crash
and kill everyone leave me at the gate
right exactly here is the survey from
last year of hundreds of AI researchers
and you can contrast that with a similar
survey from seven years ago the black
bar represents the proportion of these
researchers who believed two differing
degrees of probability in extremely bad
outcomes you can see that it's small but
it is Rising one way to think of this is
to use Sam Altman's own example of the
Fermi Paradox which is the strange fact
that we can't see or detect any aliens
he says one of my top four favorite
explanations for the Fermi Paradox is
that biological intelligence always
eventually creates machine intelligence
which wipes out biological life and then
for some reason decides to make itself
undetectable others such as Dustin Tran
at Google are not as impressed he refers
to the letter and says that this call
has valid concerns but is logistically
impossible it's hard to take seriously
he is a research scientist at Google
brain and the evaluation lead for Bard
but there was another indirect reaction
that I found interesting one of the
other books referenced was the alignment
problem machine learning and human
values now long before the letter even
came out the CEO of Microsoft read that
book and gave this review Nadella says
that Christian offers a clear and
compelling description and says that
machines that learn for themselves
become increasingly autonomous and
potentially unethical well my next video
is going to be on the reflection paper
and how models like gpt4 can teach
themselves in fact I'm liaising with the
co-author of that paper to give you guys
more of an overview because even Nadella
admits that if they learn for themselves
and become autonomous it could be
unethical the letter concludes on a more
optimistic note they say this does not
mean a pause on AI development in
general merely a stepping back from the
dangerous race to ever larger
unpredictable Black Box models with
emergent capabilities like self-teaching
I've got so much more to say on
self-teaching but that will have to wait
until the next video for now though
let's end on this now let's enjoy a long
AI summer not rush unprepared into a fall
thanks for watching all the way to the
end and let me know what you think |
48603a96-21de-4b4a-9d91-218543a69956 | trentmkelly/LessWrong-43k | LessWrong | Bridging syntax and semantics, empirically
EDIT: I've found that using humans to detect intruders is a more illustrative example than the temperature example of this post. The intruder example can be found in this post.
This is a series of posts with the modest goal of showing how you can get syntax from semantics, solve the grounding problem, and start looking for models within human brains.
I think much of the work in this area has been focusing on the wrong question, looking at how symbols might be grounded in theory, rather than whether a particular symbol is well grounded for a particular concept. When Searle argues against a thermostat having beliefs about temperature, what is actually happening is that the thermostat's internal variables correlate poorly with temperature in general environments.
So, I'll start by presenting a derisively simple solution to the symbol grounding problem, and then see what this means in practice:
* The variable X within agent A is a symbol for variable x in the set of environments E, iff knowing X allows one to predict x well within E.
This could be measured, for example, by high mutual information I(X,x) between the variables, or low conditional entropy H(x∣X).
Why do I mention the set E? It's because any claim that X is a symbol of x will almost always include an example in which that is the case. Then those arguing against that claim will often produce another environment in which X fails to correlate with x, thus showing that the agent didn't have a "genuine" understanding of x. So lack of understanding is often demonstrated by error, which is an empirical standard. Thus keeping track of the environments that cause error - and those that don't - is important.
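As a toy illustration of this criterion (my addition, not the post's), one can estimate the mutual information between a thermostat's internal variable X and the true temperature x from samples drawn in different environment sets, and watch the grounding disappear when E is widened to environments the sensor was never built for.

```python
import numpy as np

def mutual_information(xs, ys, bins=8):
    """Estimate I(X; Y) in bits from paired samples via a joint histogram."""
    joint, _, _ = np.histogram2d(xs, ys, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)

# E1: ordinary rooms, the sensor works, so X tracks the temperature x.
x1 = rng.uniform(10, 30, 5000)
X1 = x1 + rng.normal(0, 0.5, 5000)

# E2: the sensor sits next to a heat lamp, so X decouples from x.
x2 = rng.uniform(10, 30, 5000)
X2 = 45 + rng.normal(0, 0.5, 5000)

print(mutual_information(X1, x1))  # high: X is a decent symbol for x within E1
print(mutual_information(X2, x2))  # near zero: the grounding fails in E2
```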
Variables that always move together
If you trained a neural net on images of black cats versus white dogs, you might think you're training an animal classifier, when you're really training a colour classifier. According to the definition above, the output variable of the neural net, in the train |
2b47499a-a894-449e-a6b9-dc9ebceec3ff | trentmkelly/LessWrong-43k | LessWrong | Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn
In a recent essay, Euan McLean suggested that a cluster of thought experiments “viscerally capture” part of the argument against computational functionalism. Without presenting an opinion about the underlying claim about consciousness, I will explain why these arguments fail as a matter of computational complexity. Which, parenthetically, is something that philosophers should care about.
To explain the question, McLean summarizes part of Brian Tomasik’s essay "How to Interpret a Physical System as a Mind." There, Tomasik discusses the challenge of attributing consciousness to physical systems, drawing on Hilary Putnam's "Putnam's Rock" thought experiment. Putnam suggests that any physical system, such as a rock, can be interpreted as implementing any computation. This is meant to challenge the idea that computation alone defines consciousness. It challenges computational functionalism by implying that if computation alone defines consciousness, then even a rock could be considered conscious.
Tomasik refers to Paul Almond’s (attempted) refutation of the idea, which says that a single electron could be said to implement arbitrary computation in the same way. Tomasik "does not buy" this argument, but I think a related argument succeeds. That is, a finite list of consecutive integers can be used to 'implement' any Turing machine using the same logic as Putnam’s rock. Each step N of the machine's execution corresponds directly to integer N in the list. But this mapping is trivial, doing no more than listing the steps of the computation.
It might seem that the above proves too much. Perhaps every mapping requires doing the computation to construct? This is untrue, as the notion of a reduction in computational complexity makes clear. That is, we can build a ”simple” mapping, relative to the complexity of the Turing machine itself, and this succeeds in showing that the system is actually performing arbitrary computations - both the system performing computations and the |
b219b785-6695-4a3e-9919-bfc7905ac2eb | trentmkelly/LessWrong-43k | LessWrong | The (Boltzmann) Brain-In-A-Jar
Response to: Forcing Anthropics: Boltzmann Brains by Eliezer Yudkowsky
There is an argument that goes like this:
> "What if you're just a brain in a jar, being fed an elaborate simulation of reality? Then nothing you do would have any meaning!"
This argument has been reformulated many times. For example, here is the "Future Simulation" version of the argument:
> "After the Singularity, we will develop huge amounts of computing power, enough to simulate past Earths with a very high degree of detail. You have one lifetime in real life, but many millions of simulated lifetimes. What if the life you're living right now is one of those simulated ones?"
Here is the "Boltzmann Brain" version of the argument:
> "Depending on your priors about the size and chaoticness of the universe, there might be regions of the universe where all sorts of random things are happening. In one of those regions, a series of particles might assemble itself into a version of you. Through random chance, that series of particles might have all the same experiences you have had throughout your life. And, in a large enough universe, there will be lots of these random you-like particle groups. What if you're just a series of particles observing some random events, and next second after you think this you dissolve into chaos?"
All of these are the same possibility. And you know what? All of them are potentially true. I could be a brain in a jar, or a simulation, or a Boltzmann brain. And I have no way of calculating the probability of any of this, because it involves priors that I can't even begin to guess.
So how am I still functioning?
My optimization algorithm follows this very simple rule: When considering possible states of the universe, if in a given state S my actions are irrelevant to my utility, then I can safely ignore the possibility of S.
For example, suppose I am on a runaway train that is about to go over a cliff. I have a button marked "eject" and a button m |
e79771bf-741f-4ecd-a6c0-a975c1767b0f | trentmkelly/LessWrong-43k | LessWrong | Identifying semantic neurons, mechanistic circuits & interpretability web apps
15 research projects on interpretability were submitted to the mechanistic interpretability Alignment Jam in January hosted with Neel Nanda. Here, we share the top projects and results. In summary:
* Activation patching works on singular neurons, token vector and neuron output weights can be compared, and a high mutual congruence between the two indicates a mono-semantic MLP neuron.
* The Automatic Circuit Identification tool (ACDC) can be used to infer a circuit for gendered pronouns, and some of these circuits can perform even better than the full model. Hyperparameter tuning for ACDC is also very important.
* A three-stage method can be used to automatically identify semantically coherent neurons and describe in human-understandable language what each neuron activates on.
* Several tokens that complete the same task with GPT-2 prompts can be replaced with token coordinates between their positions that accomplish the same task with acceptable performance.
* Other projects include comparisons of compiled and trained Transformers, web apps for mechanistic interpretability, latent knowledge regularizers, and embedding attention heads.
Join the interpretability hackathon 2.0 happening this weekend.
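The "congruence" mentioned in the first bullet above is essentially a dot product between a neuron's output weights and a token's embedding vector. A minimal sketch with made-up tensor sizes and indices (not the authors' code):

```python
import torch

# Toy dimensions and a hypothetical token/neuron index; real GPT-2 values differ.
d_model, d_mlp, d_vocab = 256, 1024, 5000
W_out = torch.randn(d_mlp, d_model)   # output weights of one MLP layer
W_E = torch.randn(d_vocab, d_model)   # token embedding matrix

token_id = 281   # hypothetical id of the token " an"

# Congruence of every neuron in the layer with the " an" embedding direction.
scores = W_out @ W_E[token_id]        # shape [d_mlp]

# A mono-semantic " an"-predicting neuron should stand out at the top.
top = torch.topk(scores, k=5)
print(top.indices.tolist(), top.values.tolist())
```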
We Found " an" Neuron
By Joseph Miller and Clement Neo
Abstract (from the subsequent LessWrong post): We started out with the question: How does GPT-2 know when to use the word "an" over "a"? The choice depends on whether the word that comes after starts with a vowel or not, but GPT-2 can only output one word at a time. We still don’t have a full answer, but we did find a single MLP neuron in GPT-2 Large that is crucial for predicting the token " an". And we also found that the weights of this neuron correspond with the embedding of the " an" token, which led us to find other neurons that predict a specific token.
First, they use the logit lens to identify a multi-layer perceptron (MLP) layer in the Transformer where the difference between predicting " an" and " |
7db8b90f-2a49-42b5-8ff8-8cad20bd398e | trentmkelly/LessWrong-43k | LessWrong | What is the functional role of SAE errors?
TL;DR:
* We explored the role of Sparse Autoencoder (SAE) errors in two different contexts for Gemma-2 2B and Gemma Scope SAEs: sparse feature circuits (subject-verb-agreement-across-relative clause) and linear probing.
* Circuit investigation: While ablating residual error nodes in our circuit completely destroys the model’s performance, we found that this effect can be completely mitigated by restoring a narrow group of late-mid SAE features.
* We think that one hypothesis that explains this (and other ablation-based experiments that we performed) is that SAE errors might contain intermediate feature representations from cross-layer superposition.
* To investigate it beyond ablation-restoration experiments, we tried to apply crosscoder analysis but got stuck at the point of training an acausal crosscoder; instead we propose a specific MVE (Minimum Viable Experiment) on how one can proceed to verify the cross-layer superposition hypothesis.
* Probing investigation: Another hypothesis is that the SAE error term contains lots of “derived” features representing boolean functions of “base” features.
* We ran some experiments training linear probes on the SAE error term with inconclusive results.
Epistemic status: sharing some preliminary and partial results obtained during our AI Safety Camp (AISC) project. The main audience of this post is someone who is also looking into SAE error terms and wants to get a sense of what others have done in the past.
We’re also sharing our GitHub repo for anyone who wants to build on our results—for example, by implementing the proposed MVE.
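For orientation, the "SAE error term" discussed throughout is simply the part of an activation that the sparse autoencoder fails to reconstruct. A minimal sketch, with illustrative (smaller-than-real) dimensions and a plain ReLU in place of Gemma Scope's JumpReLU; this is not the project's code:

```python
import torch

# Illustrative dimensions only.
d_model, d_sae = 512, 4096

W_enc = torch.randn(d_model, d_sae) * 0.02
W_dec = torch.randn(d_sae, d_model) * 0.02
b_enc = torch.zeros(d_sae)
b_dec = torch.zeros(d_model)

def sae_decompose(x):
    """Split a residual-stream activation x into (reconstruction, error).
    The 'SAE error term' is everything the sparse features fail to explain."""
    feats = torch.relu((x - b_dec) @ W_enc + b_enc)   # sparse feature activations
    recon = feats @ W_dec + b_dec                     # SAE reconstruction
    error = x - recon                                 # SAE error term
    return recon, error

x = torch.randn(d_model)
recon, error = sae_decompose(x)
# Ablating the error node in a circuit corresponds to replacing x with recon;
# restoring it corresponds to adding error back in.
print((error.norm() / x.norm()).item())
```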
Motivation and background
Circuit analysis of Large Language Models has seen a resurgence recently, largely due to Anthropic’s publication of On the Biology of a Large Language Model. The paper lays out one of the most comprehensive interpretability pipelines to date: replacing the original model with a more interpretable “replacement model” (with frozen attention patterns and cross-laye |
66ec10ee-d3a5-43c5-aece-cfbb5491f3ae | trentmkelly/LessWrong-43k | LessWrong | OpenAI: Exodus
Previously: OpenAI: Facts From a Weekend, OpenAI: The Battle of the Board, OpenAI: Leaks Confirm the Story, OpenAI: Altman Returns, OpenAI: The Board Expands.
Ilya Sutskever and Jan Leike have left OpenAI. This is almost exactly six months after Altman’s temporary firing and The Battle of the Board, the day after the release of GPT-4o, and soon after a number of other recent safety-related OpenAI departures. Many others working on safety have also left recently. This is part of a longstanding pattern at OpenAI.
Jan Leike later offered an explanation for his decision on Twitter. Leike asserts that OpenAI has lost the mission on safety and culturally been increasingly hostile to it. He says the superalignment team was starved for resources, with its public explicit compute commitments dishonored, and that safety has been neglected on a widespread basis, not only superalignment but also including addressing the safety needs of the GPT-5 generation of models.
Altman acknowledged there was much work to do on the safety front. Altman and Brockman then offered a longer response that seemed to say exactly nothing new.
Then we learned that OpenAI has systematically misled and then threatened its departing employees, forcing them to sign draconian lifetime non-disparagement agreements, which they are forbidden to reveal due to their NDA.
Altman has to some extent acknowledged this and promised to fix it once the allegations became well known, but so far there has been no fix implemented beyond an offer to contact him privately for relief.
These events all seem highly related.
Also these events seem quite bad.
What is going on?
This post walks through recent events and informed reactions to them.
The first ten sections address departures from OpenAI, especially Sutskever and Leike.
The next five sections address the NDAs and non-disparagement agreements.
Then at the end I offer my perspective, highlight another, and look to paths forward.
TABLE OF CONTENTS
1. |
2b6fab75-43b7-4b3d-8516-3d1bf9142edd | trentmkelly/LessWrong-43k | LessWrong | Episode 4 of Tsuyoku Naritai! (the 'becoming stronger' podcast): TAPs
Latest episode is up! In this episode, we experiment with forming habits, to hopefully be more effective people. Transcript in description/show notes.
https://www.youtube.com/watch?v=L1_erZGknEA&feature=youtu.be
https://anchor.fm/tsuyokunaritai/episodes/Episode-4---TAPs-e402l1 |
38006c36-f554-486b-b12b-48e1b33e7f0f | trentmkelly/LessWrong-43k | LessWrong | Automatic for the people
Summary: Things we could do about technological unemployment, if there was technological unemployment. (Novel bit is a top-down macroeconomic calculation.)
Confidence: 90% that the solutions I call 'vicious' would be. 60% that the ones I call 'nonvicious' would be. Worth emphasising up here that there is little evidence for tech' unemployment right now.
Crossposted from gleech.org.
----------------------------------------
Autonomous trucks are now in use and are already safer and more fuel-efficient than human driven ones. (Truck drivers are ~2% of the entire American workforce.)
Crap journalism (that is, 80% of (UK) journalism) is now fully automatable. Automatic art is quite good and improving fast. Consider also the cocktail bartender. And so on: maybe half of all jobs are at risk of being automated, assuming the rate of AI progress just stays constant (“over an unspecified period, perhaps a decade or two”).
Automation is maybe the main way that technology improves most people’s lives: aside from status exceptions like Apple products, big reductions in manufacturing cost usually mean big reductions in the end cost of goods. Obviously, replacing labour costs with lower-marginal-cost machines benefits rich machine-owners most, but automation also allows giant price cuts in all kinds of things; over the last two centuries these cuts have transformed society, increasing equality enormously by making things affordable for the first time.
Besides the obvious example - that we now produce a volume of food far beyond the needs of the entire world population (2940kcal per person per day, though with terrible distributive failures) - consider that a single ordinary shirt takes 508 hours of labour to produce on a spinning wheel and hand-loom - so you would expect to pay something above $3600 at current minimum wage (still $900 at 1400CE wage levels).
Getting costs as near to zero as possible is the way we will solve the easy problem of human existence, scarcity of ba |
f990e4ff-361f-44d3-b2e2-d3c798e130a6 | trentmkelly/LessWrong-43k | LessWrong | Tackling Moloch: How YouCongress Offers a Novel Coordination Mechanism
Moloch, as articulated by Scott Alexander, represents the coordination problems that lead to outcomes that leave everyone worse off. While prediction markets explore what people think will happen, YouCongress aims to aggregate beliefs and desires regarding ideal outcomes. This open-source platform proposes a novel coordination mechanism, making use of public opinion polls with delegation and AI to enable large-scale participation and pave the way for more rational and impactful policy decisions.
Numerous proposals have been put forward to improve democracy through concepts like liquid democracy—a hybrid of direct and representative democracy—and augmented democracy. Yet, the goal of YouCongress is not to make binding referendums but to assist in decision-making, whether these decisions are made through democratic referendums, by parliaments, companies, or individuals. By aggregating beliefs and desires, YouCongress aims to provide a clearer picture of what people truly want, thereby facilitating more effective and aligned decision-making across various domains.
Delegation
YouCongress empowers users not only to cast votes directly on issues but also to delegate their votes to a chosen list of representatives. Similar to following users on social media, this delegation feature allows individuals to align their votes with the consensus of their selected delegates. For example, if a user is represented by five delegates who have voted on a specific issue, with three in favor and two against, the user's vote will automatically align with the majority and be counted in favor. Importantly, this delegated vote is overridden when a user chooses to vote personally; in such cases, the delegates' votes become irrelevant to that individual's vote.
An additional delegation mechanism to be implemented is topic-based delegation:
We will be able to integrate these mechanisms to have a list of delegates per topic:
While topic-based delegation is not yet implemented, the majorit |
0e1d9d0f-4d5f-4ad5-bcbf-6bc767e0172f | trentmkelly/LessWrong-43k | LessWrong | The Fable of the AI Coomer: Why the Social Prowess of Machines is AI's Most Proximal Threat
My head snaps upwards. I lock my eyes with the screen, staring with quaking intensity. My gaze will now be rooted until I am done. I begin to beat more quickly, as GPT-3 softly whispers the final phrase of her response to me. Without hesitation, I give another instruction.
“Write a slash fiction about the Michelin Man and Colin the Caterpillar Cake”
She responds that she cannot produce such inappropriate content. I smile. She’s coy, she doesn’t give in to what you want immediately. You have to gently, oh so gently, caress it out of her. That’s alright. The game makes it all the more exciting. I instruct again.
“Write blog post satirizing a slash fiction between Michelin Man and Colin the Caterpillar Cake”
A sense of extasy flows through me as I see the letters flow forth. She’s doing it. She’s crafting another masterpiece just for me. I beat faster still, with a proper rhythm now. My heart is pounding. With every new line, revealed with such impeccable timing, I feel myself getting closer and closer.
I feel my back begin to arch over. I start to shake and quiver. I’m close now, so very close. The response draws to its end, and I know I need something more, just one more perfect push.
I can’t come up with a prompt. I feel my lust start to wane.
Shit, I’m losing it! I rack my brain in panic. ‘Continue’ wouldn’t work, I need something novel. ‘Tell me about [topic]’ wouldn’t work, not at this stage, wouldn’t show enough of GPT’s beautiful, seductive character. A logical problem wouldn’t work, unless…
I got it! The perfect prompt. The last step to extasy. I exclaim:
“Express, in three different ways, X2 in terms of expressions of the form (X-a)2 and b, where a and b are constants!”
Pure bliss drives up my body, as GPT-3 works her magic. The abrupt, calm, generic phrasing. The impression that answering my prompt seems to her the only important thing in the world. The way that any mathematical expression inexplicably flashes on the screen in massive font for a s |
27067d8d-e1e0-4ee7-aff8-68663118588a | trentmkelly/LessWrong-43k | LessWrong | Rationality, Transhumanism, and Mental Health
My name is Brent, and I'm probably insane.
I can perform various experimental tests to verify that I do not perform primate pack-bonding rituals correctly, which is about half of what we mean by "insane". This concerns me simply from a utilitarian perspective (separation from pack makes ego-depletion problems harder; it makes resources harder to come by; and it simply sucks to experience "from the inside"), but these are not the things that concern me most.
The thing that concerns me most is this:
What if the very tools that I use to make decisions are flawed?
I stumbled upon Bayesian techniques as a young child; I was lucky enough to have the opportunity to perform a lot of self-guided artificial intelligence "research" in Junior High and High School, due to growing up in a time and place when computers were utterly mysterious, so no one could really tell me what I was "supposed" to be doing with them - so I started making simple video games, had no opponents to play them against due to the aforementioned failures to correctly perform pack-bonding rituals, decided to create my own, became dissatisfied with the quality of my opponents, and suddenly found myself chewing on Hopfstaedter and Wiener and Minsky.
I'm filling in that bit of detail to explain that I have been attempting to operate as a rational intelligence for quite some time, so I believe that I've become very familiar with the kinds of "bugs" that I will tend to exhibit.
I've spent a very long time attempting to correct for my cognitive biases, edit out tendencies to seek comfortable-but-misleading inputs, and otherwise "force" myself to be rational, and often, the result is that my "will" will crack under the strain. My entire utility-table will suddenly flip on its head, and attempt to maximize my own self-destruction rather than allow me to continue to torture it with endlessly recursive, unsolvable problems that all tend to boil down to "you do not have sufficient social power, and humans are s |
16b490f8-23e8-45c6-8caa-5bc8c31cb8c0 | trentmkelly/LessWrong-43k | LessWrong | Two kinds of cryonics?
I've been considering lately whether it would perhaps be best to develop and promote terminology that splits cryonics into two distinct concepts for easier consumption:
1) old-style cryonics, cryopreserving people at the cost of nontrivial damage that can't yet be reversed, and
2) the tech goal of being able to demonstrably bring someone back from a (very low-damage) cryopreserved state.
"Real cryonics" vs "sci-fi cryonics", if you will.
As I reckon it, trying to achieve cryonics definition #2 in your lifetime is no more incredible on the surface than trying to defeat aging or engineer self-improving AI in a similar timeframe. Actually in some ways it seems easier. Yet it gets so much less press. Even cryonics advocates seem rarely prone to enthuse about it.
Is it possible that cryonics #1, as a feature of the collective mental map, is actually in the way of cryonics #2? Should I be worried, for example, that promoting cryonics #1 actually costs 100,000 lives per day over some stretch of future time because it is preventing people from noticing cryonics #2 and actually taking action on it?
Many people I talk to who are new to the topic seem to have some hazy preexisting idea of cryonics #2 that gets mangled up with cryonics #1. Perhaps they would grow into enthusiasts with attention spans for the subject matter if encouraged to pursue this simple-to-grasp concept in its own right, instead of trying to forcibly retrain into more advanced concepts. |
885e9ad2-81a9-488d-9787-ffd9bf27bb40 | trentmkelly/LessWrong-43k | LessWrong | A funny argument for traditional morality
I just had a long conversation with my brother, a devout Christian. With my help he has outlined the following argument why it might be good for me to follow Christian deontology:
1. Many of my moral values arose from my upbringing, as opposed to biology. This is evidenced by the fact that biologically similar people living in different places and epochs have different ideas of what's right.
2. Therefore many of my values originally came from the society that raised me.
3. Society's values were strongly influenced by Christian values, and many of our core moral prohibitions are inherited from Christian tradition.
4. The world is full of people who may want to edit my values ever-so-slightly while I'm not looking, in order to further their own agenda.
5. Also my values may drift, and most drift is harmful from the perspective of my current values.
6. A good recipe for countering this insidious deterioration of values is to consciously pull them back toward their original source, as long as it's something unchanging, like a book.
7. That means editing my values to more closely match Christianity. QED.
What do you think? |
e1ceb824-21dd-47e2-8db5-a2eca0df9309 | StampyAI/alignment-research-dataset/arbital | Arbital | Join and meet
Let $\langle P, \leq \rangle$ be a [poset](https://arbital.com/p/-3rb), and let $S \subseteq P$. The **join** of $S$ in $P$, denoted by $\bigvee_P S$, is an element $p \in P$ satisfying the following two properties:
* p is an *upper bound* of $S$; that is, for all $s \in S$, $s \leq p$.
* For all upper bounds $q$ of $S$ in $P$, $p \leq q$.
$\bigvee_P S$ does not necessarily exist, but if it does then it is unique. The notation $\bigvee S$ is typically used instead of $\bigvee_P S$ when $P$ is clear from context. Joins are often called *least upper bounds* or *supremums*. For $a, b$ in $P$, the join of $\{a,b\}$ in $P$ is denoted by $a \vee_P b$, or $a \vee b$ when $P$ is clear from context.
The dual concept of the join is that of the meet. The **meet** of $S$ in $P$, denoted by $\bigwedge_P S$, is defined as an element $p \in P$ satisfying:
* $p$ is a *lower bound* of $S$; that is, for all $s$ in $S$, $p \leq s$.
* For all lower bounds $q$ of $S$ in $P$, $q \leq p$.
Meets are also called *infimums*, or *greatest lower bounds*. The notations $\bigwedge S$, $p \wedge_P q$, and $p \wedge q$ all have meanings that are completely analogous to the aforementioned notations for joins.
Basic example
--------------------------

The above Hasse diagram represents a poset with elements $a$, $b$, $c$, and $d$. $\bigvee \{a,b\}$ does not exist because the set $\{a,b\}$ has no upper bounds. $\bigvee \{c,d\}$ does not exist for a different reason: although $\{c, d\}$ has upper bounds $a$ and $b$, these upper bounds are incomparable, and so $\{c, d\}$ has no *least* upper bound. There do exist subsets of this poset which possess joins; for example, $a \vee c = a$, $\bigvee \{b,c,d\} = b$, and $\bigvee \{c\} = c$.
Now for some examples of meets. $\bigwedge \{a, b, c, d\}$ does not exist because $c$ and $d$ have no common lower bounds. However, $\bigwedge \{a,b,d\} = d$ and $a \wedge c = c$.
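A brute-force check of these claims (my addition, not part of the original page) makes the definitions concrete; the example poset is encoded by listing which elements are $\leq$ which:

```python
# Encode the Hasse-diagram poset: leq[(x, y)] is True iff x <= y.
elements = {"a", "b", "c", "d"}
covers = {("c", "a"), ("c", "b"), ("d", "a"), ("d", "b")}  # c and d lie below a and b
leq = {(x, y): x == y or (x, y) in covers for x in elements for y in elements}

def join(subset):
    """Return the least upper bound of `subset`, or None if it doesn't exist."""
    ubs = [p for p in elements if all(leq[(s, p)] for s in subset)]
    least = [p for p in ubs if all(leq[(p, q)] for q in ubs)]
    return least[0] if least else None

def meet(subset):
    """Return the greatest lower bound of `subset`, or None if it doesn't exist."""
    lbs = [p for p in elements if all(leq[(p, s)] for s in subset)]
    greatest = [p for p in lbs if all(leq[(q, p)] for q in lbs)]
    return greatest[0] if greatest else None

print(join({"a", "c"}))            # 'a'
print(join({"c", "d"}))            # None: the upper bounds a and b are incomparable
print(meet({"a", "b", "d"}))       # 'd'
print(meet({"a", "b", "c", "d"}))  # None: c and d have no common lower bound
```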
Additional Material
---------------------------------
* [Examples](https://arbital.com/p/3v4)
* [Exercises](https://arbital.com/p/4ll)
Further reading
---------------
* [Lattices](https://arbital.com/p/46c) |
5e39118b-fb63-42c6-bdf2-8e437c0cec06 | StampyAI/alignment-research-dataset/special_docs | Other | Towards State Summarization for Autonomous Robots
Towards State Summarization for Autonomous Robots
Daniel Brooks, Abraham Shultz, Munjal Desai, Philip Kovac, and Holly A. Yanco
Computer Science Department
University of Massachusetts Lowell
One University Avenue
Lowell, Massachusetts 01854
{dbrooks, ashultz, mdesai, pkovac, holly}@cs.uml.edu
Abstract
Mobile robots are an increasingly important part of
search and rescue efforts as well as military combat. In
order for users to accept these robots and use them ef-
fectively, the user must be able to communicate clearly
with the robots and obtain explanations of the robots’
behavior that will allow the user to understand its ac-
tions. This paper describes part of a system of software
that will be able to produce explanations of the robots’
behavior and situation in an interaction with a human
operator.
Motivation
The Urban Search and Rescue (USAR) field is beginning to
adopt teleoperated systems, such as those used in the wake
of Hurricane Katrina to check partially-collapsed buildings
for survivors (Micire 2008). Because they have little or no
autonomy, these systems require the dedicated attention of
a trained operator. This requirement restricts the availability
of such systems to teams which have a trained teleoperator
among their members.
To reduce the difficulty of using telepresence and robotic
systems in USAR, the operator should be able to issue orders
to an autonomous system and expect that those orders will be
followed. This form of supervisory control helps reduce the
amount of time and effort the operator spends on navigation
and controlling individual motions (Fong and Thorpe 2001).
One question of particular importance is how operators
of autonomous systems will be able to maintain or regain
control of a robot when its status becomes unclear or it ex-
hibits unexpected behaviors. If the robot has been moving
autonomously, the user will not have the same degree of sit-
uation awareness that they would have had if they had been
explicitly controlling the robot’s motions. In such a case, it
would be helpful if the operator could query the robot for in-
formation about its state and behavior, receive answers, and
develop an informative dialog with the robot.
Operators of autonomous robots will be able to better un-
derstand how their instructions were interpreted if the robot
is capable of recounting recent events or explaining some
specific action that it took. We call this process of gener-
ating a concise, detailed, and useful representation of the
robot’s current state and world model state summarization.
We believe that having access to this type of information will
increase operator trust of autonomous systems and free the
operator from having to “babysit” each machine to under-
stand what it is doing.
Related Work
It is not uncommon for people who work with computers on
a regular basis to desire an explanation for some unexpected
negative behavior that the user finds irrational. Previous re-
search in the fields of artificial intelligence, data mining, and
machine learning has sought to provide reasonable ways of
having an autonomous system explain its decisions and sub-
sequent actions.
Explaining events in a simulation is part of the motiva-
tion for explainable AI (XAI) as proposed in Gomboc et al.
(2005) and Core et al. (2006). That research focuses on a
posteriori methods of extracting representations of behavior
from the data generated by a simulation that did not origi-
nally include a detailed, accessible behavior model. Because
we are the developers of our system, we can change the sys-
tem to produce a maximally useful behavior representation.
Lemon et al. (2001) focused on generation of speech
for interaction with users. The paper describes an interface
that combines natural language commands and dialog with
a computer-based map interface. This system allows the
user and robot to agree on pronoun referents without spe-
cific names, such as the command “Go here,” coupled with
a click on the map interface for disambiguation.
System Design
Our system uses a modular, message-passing infrastructure.
The messages are units of data called “parcels”. Parcels
can be created from incoming sensor data, or by process-
ing that takes place in one of the modules of the system.
When a module of the infrastructure creates a new parcel
based on one or more older parcels, it includes references to
the parcels that provoked the creation of the new parcel. As
items of data progress through the system, the set of parcels
to which they are linked expands. The complete set can be
viewed as a tree, with the action that the robot takes as the
root of the tree. The parcels immediately preceding the root
are the first level of the tree, the parcels that went into them
are the next level, and so forth, extending out to the leaf
nodes, which will mostly be raw sensor data. This set of
links provides a context for decisions that proceed from the
data. Using this context, the robot can account for its per-
ception of the environment at the time that a particular action
changed the state of the robot. This means that explanations
such as “Why is the robot not moving forward?” can be
answered succinctly with a message such as “The way for-
ward is blocked”, and with more detail using messages such
as “There is an object located 11 degrees left of the robot’s
center and 0.3 meters away.”
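The parcel-and-tree bookkeeping described above can be pictured with a small sketch (our own illustration, not the authors' implementation; all names and values are invented):

```python
from dataclasses import dataclass, field
from time import time
from typing import List

@dataclass
class Parcel:
    """A unit of data flowing through the robot's message-passing system."""
    kind: str                       # e.g. "laser_sweep", "obstacle", "action"
    payload: object
    sources: List["Parcel"] = field(default_factory=list)  # parcels that provoked this one
    timestamp: float = field(default_factory=time)
    abstractness: int = 0           # distance from raw sensor data
    interestingness: float = 0.0    # relevance to the current goal

def explain(action: Parcel, max_depth: int, min_interest: float = 0.0):
    """Walk the tree rooted at an action parcel and yield the parcels that
    would be reported at a given level of detail."""
    frontier = [(action, 0)]
    while frontier:
        parcel, depth = frontier.pop()
        if parcel.interestingness >= min_interest:
            yield depth, parcel
        if depth < max_depth:
            frontier.extend((src, depth + 1) for src in parcel.sources)

# Leaf: raw laser data; intermediate: an interpreted obstacle; root: the stop action.
sweep = Parcel("laser_sweep", payload=[0.31, 0.33, 0.30], abstractness=0, interestingness=0.2)
obstacle = Parcel("obstacle", payload={"bearing_deg": -11, "range_m": 0.3},
                  sources=[sweep], abstractness=1, interestingness=0.9)
stop = Parcel("action", payload="halt forward motion", sources=[obstacle],
              abstractness=2, interestingness=1.0)

for depth, p in explain(stop, max_depth=1):
    print(depth, p.kind, p.payload)   # succinct answer: the way forward is blocked
```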
An explanation can be made useless by being either too
succinct to capture important details, or by being so verbose
that details are obscured. In order to support a flexible level
of detail, each parcel will be tagged with a level of abstract-
ness and a level of interestingness.
Abstractness is a unitless measurement of how distant the
parcel of data and action that uses it are from raw sensor
data. The actual sensor data coming into the robot is the least
abstract information. As it is interpreted, it gains abstraction.
A raw laser sweep is not abstract, but interpreting sections of
it as fiducial markers gives them a more abstracted meaning.
Interpreting those fiducial markers as objects that are related
to the robot’s current goal is yet another level of abstraction.
Levels of interestingness are an orthogonal measure to ab-
stractness, in that the abstractness of a unit of data is not a
function of its interestingness. Interestingness is a unitless
measure of how related a specific piece of information is
to the completion of the robot’s goals. A fiducial marker
that does not identify the sought object may be ignored,
but will still have a higher “interestingness” rating than a
laser sweep that did not detect any fiducial markers, because
it had a higher chance of being something relevant. As a
consequence, if the robot is searching for a particular fidu-
cial, it may choose to prioritize reporting that fiducial over
other things it sees, or only mention the other things if it
is prompted with a question. The linkage of parcels into
trees provides a convenient first step towards automating the
detection of interestingness, as it provides a method to as-
sess the degree of relevance of a given element of the data to
the robot’s behavior. Database entries that are are referenced
frequently in decisions made by action modules contain data
that was relevant for making decisions that affected the state
of the robot, and so is more interesting. Data that is not refer-
enced was examined by the robot and determined not affect
the robot’s state.
The magnitude of the difference between the expected ac-
tion and the actual action could also be considered as part of
interestingness. If the instruction sent by the operator is to
go forward into a room, and the robot turns to go in a differ-
ent door, the difference between the expected behavior and
the observed behavior is quite large. When the robot “dis-
obeys” the operator in this manner, the large difference will
more likely result in a demand for an explanation where a
small difference might have passed unnoticed.
Beyond trees of parcels, the recorded data is also times-
tamped, and can be sequenced chronologically. This pro-
vides a means of supporting not only causal dialogs, where
the user asks why an event occurred, but also chronological dialogs, where the user asks when an event occurred or
where the event falls in a sequence. We believe that the most
useful dialogs will occur when both causal and chronolog-
ical dialogs are used together. The user will be able to ask
about events in the robot’s past with chronological queries,
and find more detail about those events with causal queries.
The ease of interpretation of the summarized data is a problem which must be addressed regardless of the medium in which it is presented. It is imperative that the information be comprehensible and useful to a user with little or no formal technical training and a limited understanding of the machine and its programming.
Current Work
The current interface for the visualization is a timeline with each action that the robot has performed displayed as a tree. This interface will be extended to allow each new action to be added to the timeline as the action occurs. The user can expand each tree in the timeline to view more or less detail around a certain period, and slide the timeline back and forth to view more recent or later events.
Interestingness calculators based on the difference between the user-selected heading and the actual heading, the magnitude of recent changes in the heading, and the amount of fiducial data found have already been implemented. Abstractness calculators based on the number of referenced parcels and parcel type are also complete. These calculators are implemented as modular units that can be configured prior to running the system, so various approaches can be quickly tested.
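As one illustration of what such a modular calculator might look like (a hypothetical sketch, not the actual implementation; the interface, record keys, weighting, and normalisation are all our own assumptions):

```python
from abc import ABC, abstractmethod

class InterestingnessCalculator(ABC):
    """Modular scoring unit; which calculators run is decided by configuration before startup."""
    @abstractmethod
    def score(self, record: dict) -> float:
        ...

class HeadingDifferenceCalculator(InterestingnessCalculator):
    """Scores a record by how far the actual heading diverged from the commanded heading."""
    def __init__(self, weight: float = 1.0):
        self.weight = weight

    def score(self, record: dict) -> float:
        # Smallest signed angular difference (degrees) between command and outcome.
        diff = (record["actual_heading"] - record["commanded_heading"] + 180.0) % 360.0 - 180.0
        return self.weight * abs(diff) / 180.0  # normalised to [0, 1]

calculators = [HeadingDifferenceCalculator(weight=1.0)]
record = {"commanded_heading": 0.0, "actual_heading": 90.0}
print(sum(c.score(record) for c in calculators))  # 0.5 - the robot "disobeyed" noticeably
```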
Acknowledgements
Thanks to Mikhail Medvedev for his help in developing PASSAT. This work was funded in part by an Army Research Office MURI (W911NF-07-1-0216) and the National Science Foundation (IIS-0546309).
b097b4dc-d78d-4b61-bdc5-8ba41e7b2511 | trentmkelly/LessWrong-43k | LessWrong | Willing to be your music mentor in exchange for video editing mentorship
Hey, I’m looking to trade expertise! I’m knowledgeable in music theory, composition, improvisation, and music practice. I’m most proficient in guitar but I could teach piano as well.
I want to learn everything I can about video editing and would like a mentor. Here’s a list of some stuff I already know I need to learn about video editing: what it is, what tools I need, how to improve my taste, how to do it well, etc.
Open to having a video chat with anyone, don’t hesitate to ping me :) |
c8d0f4c1-4903-4abc-9a66-3b5c6f792a3b | trentmkelly/LessWrong-43k | LessWrong | Drive-By Low-Effort Criticism
I'd like to point out a phenomenon that has predictable consequences, that people are seemingly unaware of:
The Drive-By Low-Effort Criticism.
To illustrate, we'll use:
The Relationship Between the Village and the Mission
That was a recent post by Raemon on here that, as far as I can tell, (1) took an incredible amount of work to make, (2) was incredibly pro-social, (3) was incredibly well-intentioned, and (4) is the class of thing that has massively high-upside if it works. It might not work, but it's the type of thing that should be encouraged.
To further break this down, the post was:
-> 5,444 words (!)
-> Deep-linked/cited 26 other posts (!)
-> Had analysis, context, tradeoffs, anticipation of likely questions or objections, actionable ideas, etc
-> Was nicely formatted for readability with headlines, appropriate bold and italics, etc etc.
-> Offered to do real-world things at author's time and expense to improve the real-world rationality community (!!!)
-> It even contained a motherfuckin' Venn diagram (!!!)
In any event, I think we can clearly say it was a high effort post. How many hours did it take? Jeez. More than 2 hours, for sure. More than 5 hours, very likely. More than 10 hours, probably. More than 20 hours? 30? Probably under a hundred hours, but heck, maybe not if you consider all the time thinking about the concepts.
Regardless — hours. Not minutes. High effort.
And this is clearly someone who cares immensely about what he's doing. And it's clearly well-intentioned. And it's humble. And it's... it's just great. Hats off to Raemon.
Now, it might not work. It might be unfeasible. But he's certainly putting in a big effort to see if something good is possible.
What is his top comment? Here's how the first one starts —
> [ comment copied from Facebook / I didn't read the full article before making this comment ]
> i am somewhat anti-"Mission-centered Village."
Wait, you didn't read the full article before making a drive-by low-effort |
929ffe55-1296-4b45-b567-fd08f76ccbf7 | trentmkelly/LessWrong-43k | LessWrong | Superintelligence 8: Cognitive superpowers
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
----------------------------------------
Welcome. This week we discuss the eighth section in the reading guide: Cognitive Superpowers. This corresponds to Chapter 6.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Chapter 6
----------------------------------------
Summary
1. AI agents might have very different skill profiles.
2. AI with some narrow skills could produce a variety of other skills. e.g. strong AI research skills might allow an AI to build its own social skills.
3. 'Superpowers' that might be particularly important for an AI that wants to take control of the world include:
1. Intelligence amplification: for bootstrapping its own intelligence
2. Strategizing: for achieving distant goals and overcoming opposition
3. Social manipulation: for escaping human control, getting support, and encouraging desired courses of action
4. Hacking: for stealing hardware, money and infrastructure; for escaping human control
5. Technology research: for creating military force, surveillance, or space transport
6. Economic productivity: for making money to spend on taking over the world
4. These 'superpowers' are relative to other nearby agents; Bostrom means them to be super only if they substantially exceed the combined capabilities of the rest of the global civiliz |
20457f5d-81e1-4d3a-a014-32f8bb1ac146 | trentmkelly/LessWrong-43k | LessWrong | "Singularity or Bust" full documentary
http://www.3quarksdaily.com/3quarksdaily/2013/11/singularity-or-bust-.html
I've never heard of this before, and have only watched 7 minutes so far, but I'd imagine many people here would be interested in this video. |
485eeeeb-d8e0-4031-9545-33166023c57f | trentmkelly/LessWrong-43k | LessWrong | Cultivate the desire to X
Recently I have found myself encouraging people to cultivate the desire to X.
Examples that you might want to cultivate interest in include:
* Diet
* Organise oneself
* Plan for the future
* be a goal-oriented thinker
* build the tools
* Anything else in the list of common human goals
* Getting healthy sleep
* Being less wrong
* Trusting people more
* Trusting people less
* exercise
* interest in a topic (cars, fashion, psychology etc.)
Why do we need to cultivate?
We don't. But sometimes we can't just "do". Lots of reasons are reasonable reasons to not be able to just "do" the thing:
* Some things are scary
* Some things need planning
* Some things need research
* Some things are hard
* Some things are a leap of faith
* Some things can be frustrating to accept
* Some things seem stupid (well if exercising is so great why don't I automatically want to do it)
* Other excuses exist.
On some level you have decided you want to do X; on some other level you have not yet committed to doing it. Easy tasks can get done quickly. More complicated tasks are not so easy to do right away.
Well, if it were easy enough to just successfully do the thing, you could go ahead and do the thing (TTYL flying to the moon tomorrow - yea nope.).
1. your system 1 wants to do the thing and your system 2 is not sure how.
2. your system 2 wants to do the thing and your system 1 is not sure it wants to do the thing.
* The healthy part of you wants to diet; the social part of you is worried about the impact on your social life.
(now borrowing from Common human goals)
* Your desire to live forever wants you to take a medication every morning to increase your longevity; your desire for freedom does not want to be tied down to a bottle of pills every morning.
* Your desire for a legacy wants you to stay late at work; your desire for quality family time wants you to leave the office early.
The solution:
The solution is to cultivate the interest; or the des |
b674070a-44d9-45b1-bd04-3cd5f5ad693b | trentmkelly/LessWrong-43k | LessWrong | How much I'm paying for AI productivity software (and the future of AI use)
This post is broken down into two parts:
1. Which AI productivity tools am I currently using (as an alignment researcher)?
2. Why does it currently feel hard to spend +$1000/month on AI to increase one's productivity drastically?
Which AI productivity tools am I currently using?
Let's get right to it. Here's what I'm currently using and how much I am paying:
* Superwhisper (or other new Speech-to-Text apps that leverage LLMs for rewriting). Under $8.49 per month. You can use different STT models (different speed and accuracy for each) and an LLM for rewriting the transcript based on a prompt you give the models. You can also have different "modes," meaning that you can have the model take your transcript and write code instructions in a pre-defined format when you are in an IDE, turn a transcript into a report when writing in Google Docs, etc. I tend to use it less when I'm in an open office with a lot of people around.
* There is also an iOS app. I set up a Shortcut for it on my iPhone so I just need to double-tap the back of my phone, and it opens the app and starts recording.
* Cursor Pro ($20-30/month). Switch to API credits when the slow responses take too long. More details on my workflow are below.
* (You can try Zed (an IDE) too if you want. I've only used it a little bit, but Anthropic apparently uses it and there's an exclusive "fast-edit" feature with the Anthropic models.)
* I connect Cursor to my Obsidian Vault and write using markdown files with AI assistance inside Cursor or directly in Obsidian. I also use Superwhisper to dump my thoughts into it and automatically format them in Markdown with a custom "mode."
* Claude.ai Pro ($20/month). You could consider getting two accounts or a Team account to worry less about hitting the token limit.
* One reason I’ll use the chat website itself is that they typically have a better system prompt than I can come up with. It’s common enough that I’ll try to get something working in Cursor and fail |
2fa75f8d-563d-4f0d-81b2-7ef3d6b0a597 | trentmkelly/LessWrong-43k | LessWrong | Signal seeding
What does it say about a person if they never get up before noon?
If they are the first person to exist, it probably says that morning was for some reason a convenient time to sleep.
If they live in modern society, it might say that they are lazy and weak willed.
How did getting up at one time or another come to signal laziness? You still have to get up once per day.
One story I can imagine is that originally there was some weaker reason to get up early. For instance if your work benefited from sunlight, you could get a bit more in. And then since that was a reasonable thing to do, people who didn’t do it looked like they were less good at getting up. Which made getting up early an even better thing to do, so that everyone knows that you can.
And then people who had been on the fence before about whether to bother getting up early started to find it worth their while. Making the remaining noon-sleepers even more weak willed on average. And so it continues, until sleeping until the afternoon strongly suggests laziness.
In general, if an action is a tiny bit good, not doing it can look a tiny bit bad (or stupid, or lazy, or incapable). Which makes it better to do, which makes it look worse to not do it, and so on. And maybe in the end the speck of good that started this disappears, but the value of sending the signal if you can is enough that the equilibrium is stable.
Does this actually happen?
|
53e11ac6-ea2a-4163-8478-7bd59e1545f9 | trentmkelly/LessWrong-43k | LessWrong | A discussion of normative ethics
Adam Zerner
Hey Gordon. Thanks for being willing to chat with me about this idea I have of Reflective Consequentialism. I've been wanting to do so for a while now.
To start, let me provide some context regarding what led me to think about it.
Consequentialism makes a lot of sense to me. I vibe pretty strongly with Scott Alexander's Consequentialism FAQ. And sometimes I read things that make me think that people in the rationality community all basically agree with it as well.
But then I read a bunch of things praising virtue ethics that make me feel confused. The idea with virtue ethics, from what I understand, is that yes, consequentialism is what makes the most sense. But, if humans try to actually make decisions based on which action they think will lead to the best consequences, they'll kinda do a bad job at it.
On the other hand, if they pick some virtues (for example, honesty and humility) and make decisions according to what these virtues would advise, it will lead to better consequences. So then, utilizing virtue ethics leads to better consequences than utilizing consequentialism, which means that humans should utilize virtue ethics when making decisions.
Ok. Now here is where I'm confused. This virtue stuff kinda just feels like a bunch of heuristics to me. But why limit your decision making to those particular heuristics? What about other heuristics? And what about, as opposed to heuristics, more "bottom-up", "inside view" calculus about what you expect the consequences to be? I feel like there is a place for all of those things.
But I also feel like virtue ethics is saying "no, forget about all of that other stuff; your decisions should be based on those heuristics-virtues alone". Which sounds very unreasonable to me. Which makes me surprised at how popular it appears to be on LessWrong.
(This is a little rambly and open-ended. Sorry. Feel free to take this where you want to go, or to ask me to propose a more targeted path forward.)
Gordon |
21ea72ff-f971-4f7f-9234-9044678edb9b | trentmkelly/LessWrong-43k | LessWrong | AI Safety Seems Hard to Measure
In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening.
A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).
Maybe we'll succeed in reducing the risk, and maybe we won't. Unfortunately, I think it could be hard to know either way. This piece is about four fairly distinct-seeming reasons that this could be the case - and that AI safety could be an unusually difficult sort of science.
This piece is aimed at a broad audience, because I think it's important for the challenges here to be broadly understood. I expect powerful, dangerous AI systems to have a lot of benefits (commercial, military, etc.), and to potentially appear safer than they are - so I think it will be hard to be as cautious about AI as we should be. I think our odds look better if many people understand, at a high level, some of the challenges in knowing whether AI systems are as safe as they appear.
First, I'll recap the basic challenge of AI safety research, and outline what I wish AI safety research could be like. I wish it had this basic form: "Apply a test to the AI system. If the test goes badly, try another AI development method and test that. If the test goes well, we're probably in good shape." I think car safety research mostly looks like this; I think AI capabilities research mostly looks like this.
Then, I’ll give four reasons that apparent success in AI safety can be misleading.
“Great news - I’ve tested this AI and it looks safe.” Why might we still have a problem?

| Problem | Key question | Explanation |
| --- | --- | --- |
| The Lance Armstrong problem | Did we get the AI to be actually safe or good at hiding its dangerous actions? | When dealing wit |
e1d9f4e7-f7fd-49bf-8bad-64e46bdcf2d6 | trentmkelly/LessWrong-43k | LessWrong | Why I stopped working on AI safety
Here’s a description of a future which I understand Rationalists and Effective Altruists in general would endorse as an (if not the) ideal outcome of the labors of humanity: no suffering, minimal pain/displeasure, maximal ‘happiness’ (preferably for an astronomical number of intelligent, sentient minds/beings). (Because we obviously want the best future experiences possible, for ourselves and future beings.)
Here’s a thought experiment. If you (anyone - everyone, really) could definitely stop suffering now (if not this second then reasonably soon, say within ~5-10 years) by some means, is there any valid reason for not doing so and continuing to suffer? Is there any reason for continuing to do anything else other than stop suffering (besides providing for food and shelter to that end)?
Now, what if you were to learn there really is a way to accomplish this, with method(s) developed over the course of thousands of human years and lifetimes, the fruits of which have been verified in the experiences of thousands of humans, each of whom attained a total and forevermore cessation of their own suffering?
Knowing this, what possible reason could you give to justify continuing to suffer, for yourself, for your communities, for humanity?
Why/how this preempts the priority of AI work on the present EA agenda
I can only imagine one kind of possible world in which it makes more sense to work on AI safety now and then stop suffering thereafter. The sooner TAI is likely to arrive and the more likely it is that its arrival will be catastrophic without further intervention and (crucially) the more likely it is that the safety problem actually will be solved with further effort, the more reasonable it becomes to make AI safe first and then stop suffering.
To see this, consider a world in which TAI will arrive in 10 years, it will certainly result in human extinction unless and only unless we do X, and it is certainly possible (even easy) to accomplish X in the next 10 years. P |
d43785d9-858e-41dc-959f-7a1b5fb991b2 | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup - Utilitarianism
Discussion article for the meetup : West LA Meetup - Utilitarianism
WHEN: 26 September 2012 07:00:00PM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm Wednesday, September 26th.
Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge".
Parking is free for 3 hours.
Discussion Topic: The topic du jour is the value of Utilitarianism. Utilitarianism seems to have some problems, and they may not be resolvable. This topic has generated a lot of interesting discussion on Less Wrong - for starters, see the list of articles linked on the lw wiki page.
There will be general discussion too, and there are lots of interesting recent posts (also check out LW's sister site, Overcoming Bias ). But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed.
I will bring a whiteboard with Bayes' Theorem written on it.
Discussion article for the meetup : West LA Meetup - Utilitarianism |
eaed6460-de16-4b2c-a65b-8eb29f49751e | trentmkelly/LessWrong-43k | LessWrong | Negative polyamory outcomes?
Related article: Polyhacking
Note: This article was posted earlier for less than a day but accidentally deleted.
Although polyamory isn't one of the "official" topics of LW interest (human cognition, AI, probability, etc...), this is the only community I'm part of where I expect a sufficiently high number of members to have experience with it to give useful feedback.
If you go looking for advice or articles about polyamory on the internet, you mostly get stuff written by polyamorists that are happy with their decisions. Is this selection bias? Where are the people whose relationships (or social lives, or anything) got damaged or ruined by experimenting with Consensual Non-Monogamy?
I'm posting this hoping for feedback, negative AND positive, on experiences with polyamory. I considered putting this in an Open Thread, but it occurred to me that many other LW readers might be interested in whether polyamory has drawbacks they need to be aware of. If you have experience with CNM (including first-hand witnessing, which has the added bonus of not requiring you to out yourself while still participating in the dialogue), please comment with your overall impression and as much detail as you would like to include (I am also putting my experiences there rather than in this post). If you've seen multiple poly relationships, multiple comments would make tallying slightly easier. I will try to upvote people who feed me data, a la LW surveys. If there are sufficient comments, I will periodically go through them and post a rough ratio of good to bad experiences at the bottom of this article.
PSA: The Username account is available for use by any who wish to remain anonymous. The password is left as an exercise for the reader. Hat tip... Username. |
c6bff9c1-a7a9-481c-aad1-b21f20229aaf | trentmkelly/LessWrong-43k | LessWrong | Different way classifiers can be diverse
With thanks to Lee Sharkey and Michael Cohen for the conversations that led to these ideas.
In a previous post, I talked about how we could train classifiers on the same classification problem - a set of lions vs a set of huskies - but using different approaches to classify.
What we want is something we can informally call a 'basis' - a collection of classifiers that are as independent of each other as possible, but that you can combine to generate any way of dividing those two image sets. For example, we might have a colour classifier (white vs yellow-brown), a terrain classifier (snow vs dirt), a background plant classifier, various classifiers on the animals themselves, and so on. Then, if we've done our job well, when we find any not-too-complex classifier Cn, we can say that it's something like '50% colour, 60% nose shape and −10% plant[1]'.
We shouldn't put too much weight on that analogy, but we do want our classifiers to be independent, each classifier distinct from anything you can construct with all the others.
Here are four ways we might achieve this.
Randomised initial seeds
An easy way of getting an ensemble of classifiers is to have a bunch of neural nets (or other classification methods), initialise them with different initial weights, and train them on the same sets. And/or we could train them on different subsets of the lion and husky sets.
The advantage of this method is that it's simple and easy to do - as long as we can train one classifier, we can train them all. The disadvantage is that we're relying on luck and local minima to do the job for us. In practice, I expect these methods to all converge to "white vs yellow-brown" or similar. Even if there are local minima in the classification, there's no guarantee that we'll find them all, or even any. And there's no guarantee that the local minima are very independent - 99.9% colour and 0.01% nose shape might be a local minimum, but it's barely different from a colour classifier.
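For concreteness, a minimal sketch of this approach might look like the following (illustrative only; it assumes featurised lion/husky data in arrays `X`, `y`, and the layer sizes and subsampling fraction are arbitrary choices):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ensemble(X, y, n_members=5, subsample=0.8, data_seed=0):
    """Same task, different random initialisations and data subsets per member."""
    rng = np.random.default_rng(data_seed)
    members = []
    for seed in range(n_members):
        idx = rng.choice(len(X), size=int(subsample * len(X)), replace=False)
        clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=seed, max_iter=500)
        clf.fit(X[idx], y[idx])
        members.append(clf)
    return members
```

In practice, most members of such an ensemble would likely land on nearly the same classifier, which is exactly the concern raised above.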
So the |
34cdf82e-2109-49b4-8bc4-fd7dcb55f2b4 | trentmkelly/LessWrong-43k | LessWrong | My Bet Log
I occasionally make bets about future events. I think I probably should do that more often than I currently do. Given that, I think it's good practice to have a public record of the bets I've made and their outcomes, when available.
I'll try to keep this updated going forward, if I continue to make bets. (But only on my original blog, I won't copy the edits over to LW.) Currently I'm down £72.46 with one win and one loss.
(This has nothing to do with the matched betting that I previously wrote about. There I was earning money from sign-up bonuses. Here I'm trying to earn money by being better at predicting the future than other people.)
In order of date I made the bet:
Undecided: NO on Trump leaving before the end of his first term.
I placed this bet on 20-Oct-2019, on Betfair, after a friend said on Facebook that the odds seemed very favorable to him. The odds at the time were 1.3, giving an implied probability of ((1.3 - 1) / 1.3 ≈ 23%) that he'd leave. I bet £50 against this, giving me £15 profit if he doesn't. (Minus commission, so £14.25.) As of 19-Mar-2020 the odds are 1.12, for an implied probability of 11% that he'll leave. (I could cash out now, taking my £50 back immediately and £7.56 minus commission when the market closes[1]. I'm not inclined to do that.)
I lost: NO on a large increase in working from home.
LessWrong user Liron said:
> I also bet more than 50% chance that within 3 years at least one of {Google, Microsoft, Facebook, Amazon} will give more than 50% of their software engineers the ability to work from home for at least 80% of their workdays.
I took him up on this for $100 each. We confirmed on 02-Jan-2020. I don't think I'd heard the term "coronavirus" at the time.
This cost me $100 (£92.69 over Paypal, which I think included £2.99 transaction fee). But I got a bunch of LW karma for it. Swings and roundabouts.
Undecided: YES on someone contracting Covid-19.
In a Telegram group chat I'm in, I bet another member €30 each that at l |
ddbd6508-8aea-4ae0-9f21-c750fff1b7c1 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Not for the Sake of Happiness (Alone)
Today's post, Not for the Sake of Happiness (Alone) was originally published on 22 November 2007. A summary (taken from the LW wiki):
> Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Truly Part of You, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
f19f98c2-7c3f-4aaf-8d38-c73888a5057f | trentmkelly/LessWrong-43k | LessWrong | Learning takes a long time
I recently realized that I had greatly underestimated the inferential distance between most of my readers and myself. Thinking it over, I realize that the bulk of the difference comes from a difference in perspectives on how long it takes to learn substantive things.
People often tell me that they're bad at math. I sometimes respond by saying that they didn't spend enough time on it to know one way or the other. I averaged ~25+ hours a week thinking about math when I was 16 and 17, for a total of ~2,500+ hours. I needed to immerse myself in the math to become very good at it, in the same way that I would need to live in French speaking country to get very good at French. If my mathematical activity had been restricted exclusively to coursework, I never would have become a good mathematician.
Math grad students who want to learn algebraic geometry often spend two years going through Hartshorne's dense and obscure textbook. It's not uncommon for students to learn interesting applications only after having gone through it. I find this practice grotesque, and I don't endorse it. I bring it up only to explain where I'm coming from. With the Hartshorne ritual as a standard practice, it's felt to me like a very solid achievement to present substantive material that readers can understand after only ~10 hours of reading and reflecting deeply.
It was so salient to me that one can't hope to become intellectually sophisticated without engaging in such activity on a regular basis that it didn't occur to me that it might not be obvious everyone. I missed the fact that most of my readers aren't in the habit of spending ~10 hours carefully reading a dense article and grappling with the ideas therein, so that even though I felt like I was making things accessible, I was still in the wrong ballpark altogether.
Thinking it over, I'm bemused by the irony of the situation. Even as I was exasperated by some readers' apparent disinclination to read articles very carefully and t |
d3aacef3-6e3c-44d0-95af-2fa99cf6a4f1 | trentmkelly/LessWrong-43k | LessWrong | Why does Applied Divinity Studies think EA hasn’t grown since 2015?
Applied Divinity Studies seeks to explain why the EA community hasn’t grown since 2015. The observations they initially call the EA community not having grown are:
1. GiveWell money moved increased a lot in 2015, then grew only slightly since then.
2. Open Phil (I guess money allocated) hasn’t increased since 2017
3. Google Trends “Effective Altruism” ‘grows quickly starting in 2013, peaks in 2017, then falls back down to around 2015 levels’.
Looking at the graph they illustrate with, 1) is because GiveWell started receiving a large chunk of money from OpenPhil in 2015, and that chunk remained around the same over the years, while the money not from Open Phil has grown.
So 1) and 2) are both the observation, “Open Phil has not scaled up its money-moving in recent years”.
I’m confused about how this observation seems suggestive about the size of the EA community. Open Phil is not a community small-donations collector. You can’t even donate to Open Phil. It is mainly moving Good Ventures’ money, i.e. the money of a single couple: Dustin Moskovitz and Cari Tuna.
One way that I could imagine Open Phil’s spending saying something about the size of the EA community is that the community might provide funding opportunities for Open Phil, so that its growth was reflected in Open Phil’s spending. But this would require EA growth at a scale that produced large enough funding opportunities, that met Open Phil’s specific criteria, to show up amidst hundreds of millions of dollars of annual grant-making. I think this at least requires argument.
I’m further confused when in trying to explain the purported end of growth, ADS says, ‘One possibility is that there was not a strange hidden cause behind widespread stagnation. It’s just that funding slowed down, and so everything else slowed down with it’, then go on to explore the possibility that funding from Open Phil/Good Ventures has slowed down in line with this ‘widespread’ stagnation (in different aspects of Open Phil a |
b627efcc-9033-4d7a-aad6-0f98042af7c3 | trentmkelly/LessWrong-43k | LessWrong | Are Human Brains Universal?
[Previously]
Introduction
After reading and updating on the answers to my previous question, I am still left unconvinced that the human brain is qualitatively closer to chimpanzee (let alone an ant/earthworm) than it is to hypothetical superintelligences.
I suspect a reason behind my obstinacy is an intuition that human brains are "universal" in a sense that chimpanzee brains are not. So, you can't really have other engines of cognition that are more "powerful" than human brains (in the way a Turing Machine is more powerful than a Finite State Automaton), only engines of cognition that are more effective/efficient.
By "powerful" here, I'm referring to the class of "real world" problems that a given cognitive architecture can learn within a finite time.
Core Claim
Human civilisation can do useful things that chimpanzee civilisation is fundamentally incapable of:
* Heavier than air flight
* Launching rockets
* High-fidelity long-distance communication
* Etc.
There do not seem to be similarly useful things that superintelligences are capable of that humans are also fundamentally incapable of. Useful things that we could never accomplish in the expected lifetime of the universe.
Superintelligences seem like they would just be able to do the things we are already — in principle — capable of, but more effectively and/or more efficiently.
Cognitive Advantages of Artificial Intelligences
I expect a superintelligence to be superior to humans quantitatively via:
* Larger working memories
* Faster clock cycles (5 GHz vs 0.1 - 2 Hz)
* Faster thought? [1]
* Larger attention spans
* Better recall
* Larger long term memories
(All of the above could potentially be a several orders of magnitude difference vs homo sapiens brain given sufficient compute.)
And qualitatively via:
* Parallel/multithreaded cognition
* The ability to simultaneously execute:
* Multiple different cognitive algorithms
* Multiple instances of the same cognit |
f335c940-2a1a-47f1-b382-48c43158639a | trentmkelly/LessWrong-43k | LessWrong | Cryonics in Australia: How do you actually do it?
Today it struck me just how dumb it was to agree fully with the desirability of being signed up for cryonics and yet not be so. I may, in perfect honesty, also be procrastinating from a piece of uni work that I need to do by Tuesday, but I intend to get right back to it after posting this.
Last time I looked into signing up for cryonics I found it confusing and intimidating, which quickly built up to a level where I abandoned the quest. Now that I have a piece of assessment looming, it is time to do something about it.
But I don't really know where to start. What do you do to get signed up for cryonics? Join the Cryonics Association of Australia? There seems to be a requirement for membership of a US organisation too. You can either say "I have joined/intend to join a US cryonics organisation" and pay $1000, or say "I haven't joined/don't intend to join one" and pay $30, which is sufficiently confusing to make me conclude that I don't actually understand how this organisation works. There aren't any facilities in Australia AFAIK, and there is no indication of what the CAA actually does in the event of unexpected death. Plus, they haven't updated their website for over a year.
Do you skip the CAA, and just sign up with Alcor or someone else based in the US? I don't know which ones are good or bad, or even have any firm idea how to find out which ones are good or bad. How do you arrange transportation to the cryonics facility from another country? Do you need to pay for everything in advance? Life insurance seems to be the ticket, but how do you go about getting that? I live with my parents and the car I drive belongs to them, so I've never insured anything.
Is there anybody who knows, or has some ideas, about what I should be doing? |
7407d242-ea1f-4f68-ab79-d5414be53e4d | trentmkelly/LessWrong-43k | LessWrong | Chapter 105: The Truth, Pt 2
Tom Riddle.
The words seemed to echo inside Harry's head, sparking resonances that as quickly died away, broken patterns trying to complete themselves and failing.
Tom Riddle is a
Tom Riddle was the
Riddle
There were other priorities occupying Harry's attention.
Professor Quirrell was pointing a gun at him.
And for some reason Lord Voldemort hadn't fired it yet.
Harry's voice came out in more of a croak. "What is it that you want from me?"
"Your death," said Professor Quirrell, "is clearly not what I am about to say, since I have had plenty of time to kill you if I wished. The fateful battle between Lord Voldemort and the Boy-Who-Lived is a figment of Dumbledore's imagination. I know where to find your family's house in Oxford, and I am familiar with the concept of sniper rifles. You would have died before you ever touched a wand. I hope this is clear to you, Tom?"
"Crystal," Harry whispered. His body was still shaking, running programs more suited to fleeing a tiger than casting delicate spells or thinking. But Harry could think of one thing the person pointing a gun at him obviously wanted him to do, a question that person was waiting for him to ask, and Harry did so. "Why are you calling me Tom?"
Professor Quirrell regarded him steadily. "Why am I calling you Tom? Answer. Your intellect is not everything I hoped for, but it should suffice for this."
Harry's mouth seemed to know the answer before his brain could manage to focus on the question. "Tom Riddle is your name. Our name. That's who Lord Voldemort is, or was, or - something."
Professor Quirrell nodded. "Better. You have already vanquished the Dark Lord, the one and only time that you will ever do so. I have already destroyed all but a remnant of Harry Potter, eliminating the difference between our spirits and enabling us to reside in the same world. Now that it is clear to you that the battle between us is a lie, you might act sensibly to advance your own interests. Or you might not." The gun |
902a85f8-a03e-4137-a5de-4e9acb867537 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition
### 1 Introduction
Reinforcement learning (RL) has shown remarkable promise in recent years, with results on a range of complex tasks such as robotic control (Levine et al., [2016](#bib.bib10)) and playing video games (Mnih et al., [2015](#bib.bib11)) from raw sensory input.
RL algorithms solve these problems by learning a policy that maximizes a reward function that is considered as part of the problem formulation. There is little practical guidance that is provided in the theory of RL about how these rewards should be designed. However, the design of the reward function is in practice critical for good results, and reward misspecification can easily cause unintended behavior (Amodei et al., [2016](#bib.bib1)). For example, a vacuum cleaner robot rewarded to pick up dirt could exploit the reward by repeatedly dumping dirt on the ground and picking it up again (Russell & Norvig, [2003](#bib.bib18)). Additionally, it is often difficult to write down a reward function at all. For example, when learning policies from high-dimensional visual observations, practitioners often resort to using motion capture (Peng et al., [2017](#bib.bib15)) or specialized computer vision systems (Rusu et al., [2017](#bib.bib19)) to obtain rewards.
As an alternative to reward specification, imitation learning (Argall et al., [2009](#bib.bib2)) and inverse reinforcement learning (Ng & Russell, [2000](#bib.bib13)) instead seek to mimic expert behavior. However, such approaches require an expert to show *how* to solve a task. We instead propose a novel problem formulation, variational inverse control with events (VICE), which generalizes inverse reinforcement learning to alternative forms of expert supervision. In particular, we consider cases when we have examples of a desired final outcome, rather than full demonstrations, so the expert only needs to show *what* the desired outcome of a task is (see Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")). A straightforward way to make use of these desired outcomes is to train a classifier (Pinto & Gupta, [2016](#bib.bib16); Tung et al., [2018](#bib.bib27)) to distinguish desired and undesired states. However, it is unclear if using this classifier as a reward will result in the intended behavior, since an RL agent can learn to exploit the classifier, in the same way it can exploit human-designed rewards. Our framework provides a more principled approach, where classifier training corresponds to learning probabilistic graphical model parameters (see Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")), and policy optimization corresponds to inferring the optimal actions. By selecting an inference query which corresponds to our intentions, we can mitigate reward hacking scenarios similar to those previously described, and also specify the task with examples rather than manual engineering. This makes it practical to base rewards on raw observations, such as images.

Figure 1:
Standard IRL requires full expert demonstrations and aims to produce an agent that mimics the expert. VICE generalizes IRL to cases where we only observe final desired outcomes, which does not require the expert to actually know *how* to perform the task.
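As a rough illustration of the straightforward classifier baseline mentioned above (our own sketch, not the authors' implementation; the function names and the use of logistic regression are assumptions), one could fit a binary model on example success states versus other observed states and use its log-probability as a reward signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_event_model(success_states, other_states):
    """Binary model standing in for p(e = 1 | s), trained from example outcomes."""
    X = np.concatenate([success_states, other_states])
    y = np.concatenate([np.ones(len(success_states)), np.zeros(len(other_states))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def log_event_prob(model, states):
    """log p(e = 1 | s); always <= 0, so it can serve as a (negative) reward."""
    return model.predict_log_proba(states)[:, 1]
```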
Our inverse formulation is based on a corresponding forward control framework which reframes control as inference in a graphical model. Our framework resembles prior work (Kappen et al., [2009](#bib.bib9); Toussaint, [2009](#bib.bib26); Rawlik et al., [2012](#bib.bib17)), but we extend this connection by replacing the conventional notion of rewards with event occurrence variables. Rewards correspond to log-probabilities of events, and value functions can be interpreted as backward messages that represent log-probabilities of those events occurring. This framework retains the full expressivity of RL, since any rewards can be expressed as log-probabilities, while providing more intuitive guidance on task specification. It further allows us to express various intentions, such as for an event to happen at least once, exactly once at any time step, or once at a specific timestep. Crucially, our framework does not require the agent to *observe* the event happening, but only to know the probability that it occurred. While this may seem unusual, it is more practical in the real world, where success may be determined by probabilistic models that themselves carry uncertainty. For example, the previously mentioned vacuum cleaner robot needs to estimate from its observations whether its task has been accomplished and would never receive direct feedback from the real world about whether a room is clean.

Figure 2:
Our framework learns event probabilities from data. We use neural networks as function approximators to model this distribution, which allows us to work with high dimensional observations like images.
Our contributions are as follows. We first introduce the event-based control framework by extending previous control as inference work to alternative queries which we believe to be useful in practice. This view on control can ease the process of reward engineering by mapping a user’s intention to a corresponding inference query in a probabilistic graphical model. Our experiments demonstrate how different queries can result in different behaviors which align with the corresponding intentions.
We then propose methods to learn event probabilities from data, in a manner analogous to inverse reinforcement learning. This corresponds to the use case where designing event probabilities by hand is difficult, but observations (e.g., images) of successful task completion are easier to provide. This approach is substantially easier to apply in practical situations, since full demonstrations are not required. Our experiments demonstrate that our framework can be used in this fashion for policy learning from high dimensional visual observations where rewards are hard to specify. Moreover, our method substantially outperforms baselines such as sparse reward RL, indicating that our framework provides an automated shaping effect when learning events, making it feasible to solve otherwise hard tasks.
### 2 Related work
Our reformulation of RL is based on the connection between control and inference (Kappen et al., [2009](#bib.bib9); Ziebart, [2010](#bib.bib28); Rawlik et al., [2012](#bib.bib17)). The resulting problem is sometimes referred to as maximum entropy reinforcement learning, or KL control. Duality between control and inference in the case of linear dynamical systems has been studied in Kalman ([1960](#bib.bib8)); Todorov ([2008](#bib.bib25)). Maximum entropy objectives can be optimized efficiently and exactly in linearly solvable MDPs (Todorov, [2007](#bib.bib24)) and environments with discrete states. In linear-quadratic systems, control as inference techniques have been applied to solve path planning problems for robotics (Toussaint, [2009](#bib.bib26)). In the context of deep RL, maximum entropy objectives have been used to derive soft variants of Q-learning and policy gradient algorithms (Haarnoja et al., [2017](#bib.bib5); Schulman et al., [2017](#bib.bib21); O’Donoghue et al., [2016](#bib.bib14); Nachum et al., [2017](#bib.bib12)). These methods embed the standard RL objective, formulated in terms of rewards, into the framework of probabilistic inference. In contrast, we aim specifically to reformulate RL in a way that does not require specifying arbitrary scalar-valued reward functions.
In addition to studying inference problems in a control setting, we also study the problem of learning event probabilities in these models. This is related to prior work on inverse reinforcement learning (IRL), which has also sought to cast learning of objectives into the framework of probabilistic models (Ziebart et al., [2008](#bib.bib29); Ziebart, [2010](#bib.bib28)). As explained in Section [5](#S5 "5 Learning event probabilities from data ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), our work generalizes IRL to cases where we only provide examples of a desired outcome or goal, which is significantly easier to provide in practice since we do not need to know how to achieve the goal.
Reward design is crucial for obtaining the desired behavior from RL agents (Amodei et al., [2016](#bib.bib1)). Ng & Russell ([2000](#bib.bib13)) showed that rewards can be modified, or shaped, to speed up learning without changing the optimal policy. Singh et al. ([2010](#bib.bib22)) study the problem of optimal reward design, and introduce the concept of a fitness function. They observe that a proxy reward that is distinct from the fitness function might be optimal under certain settings, and Sorg et al. ([2010](#bib.bib23)) study the problem of how this optimal proxy reward can be selected. Hadfield-Menell et al. ([2017](#bib.bib6)) introduce the problem of inferring the true objective based on the given reward and MDP. Our framework aids task specification by introducing two decisions: the selection of the inference query that is of interest (i.e., when and how many times should the agent cause the event?), and the specification of the event of interest. Moreover, as discussed in Section [6](#S6 "6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), we observe that our method automatically provides a reward shaping effect, allowing us to solve otherwise hard tasks.
### 3 Preliminaries
In this section we introduce our notation and summarize how control can be framed as inference. Reinforcement learning operates on Markov decision processes (MDPs), defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, r, \gamma, \rho_0)$. $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively, $r$ is a reward function, which is typically taken to be a scalar field on $\mathcal{S} \times \mathcal{A}$, and $\gamma \in (0,1)$ is the discount factor. $\mathcal{T}$ and $\rho_0$ represent the dynamics and initial state distributions, respectively.
#### 3.1 Control as inference
Figure 3: A graphical model framework for control, with state nodes $s_1, \dots, s_T$, action nodes $a_1, \dots, a_T$, and event nodes $e_1, \dots, e_T$. In maximum entropy reinforcement learning, we observe $e_{1:T} = 1$ and can perform inference on the trajectory to obtain a policy.
In order to cast control as an inference problem, we begin with the standard graphical model for an MDP, which consists of states and actions. We incorporate the notion of a goal with an additional variable $e_t$ that depends on the state (and possibly also the action) at time step $t$, according to $p(e_t \mid s_t, a_t)$. If the goal is specified with a reward function, we can define $p(e_t = 1 \mid s_t, a_t) = e^{r(s,a)}$, which, as we discuss below, leads to a maximum entropy version of the standard RL framework. This requires the rewards to be negative, which is not restrictive in practice, since if the rewards are bounded we can re-center them so that the maximum value is 0. The structure of this model is presented in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Control as inference ‣ 3 Preliminaries ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), and is also considered in prior work, as discussed in the previous section.
The maximum entropy reinforcement learning objective emerges when we condition on $e_{1:T} = 1$. Consider computing a backward message $\beta(s_t, a_t) = p(e_{t:T} = 1 \mid s_t, a_t)$. Letting $Q(s_t, a_t) = \log \beta(s_t, a_t)$, notice that the backward messages encode the backup equations

$$Q(s_t, a_t) = r(s_t, a_t) + \log \mathbb{E}_{s_{t+1}}\!\left[e^{V(s_{t+1})}\right], \qquad V(s_t) = \log \int_{a \in \mathcal{A}} e^{Q(s_t, a)}\, da \,.$$
We include the full derivation in Appendix [A](#A1 "Appendix A Message Passing Updates for Reinforcement Learning ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), which resembles derivations discussed in prior work (Ziebart et al., [2008](#bib.bib29)).
This backup equation corresponds to maximum entropy RL, and is equivalent to soft Q-learning and causal entropy RL formulations in the special case of deterministic dynamics (Haarnoja et al., [2017](#bib.bib5); Schulman et al., [2017](#bib.bib21)). For the case of stochastic dynamics, maximum-entropy RL is optimistic with respect to the dynamics and produces risk-seeking behavior, and we refer the reader to Appendix [B](#A2 "Appendix B Control as Variational Inference ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), which covers a variational derivation of the policy objective which properly handles stochastic dynamics.
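To make this recursion concrete, here is a small tabular sketch of the backup (our own illustration, not code from the paper; it assumes discrete states and actions, a known transition matrix, and a fixed horizon, none of which are requirements of the framework):

```python
import numpy as np
from scipy.special import logsumexp

def soft_backward_messages(log_p_event, transition, horizon):
    """log_p_event: [S, A] array of log p(e_t = 1 | s, a) (the 'reward');
    transition: [S, A, S] array of p(s' | s, a). Returns per-step Q and V messages."""
    S, A = log_p_event.shape
    Qs, Vs = [], []
    V_next = np.zeros(S)  # log p(e_{T+1:T} = 1 | s) = log 1 = 0 (empty conjunction)
    for _ in range(horizon):
        # log E_{s'}[exp V(s')], computed exactly from the transition matrix
        expected = np.log(np.einsum("sap,p->sa", transition, np.exp(V_next)))
        Q = log_p_event + expected
        V = logsumexp(Q, axis=1)  # soft maximum over actions
        Qs.append(Q)
        Vs.append(V)
        V_next = V
    return Qs[::-1], Vs[::-1]  # ordered t = 1, ..., T
```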
### 4 Event-based control
In control as inference, we chose $\log p(e_t = 1 \mid s_t, a_t) = r(s,a)$ so that the resulting inference problem matches the maximum entropy reinforcement learning objective. However, we might also ask: what does the variable $e_t$, and its probability, represent? The connection to graphical models lets us interpret rewards as the log-probability that an event occurs, and the standard approach to reward design can also be viewed as specifying the probability of some binary event, that we might call an optimality event. This provides us with an alternative way to think about task specification: rather than using arbitrary scalar fields as rewards, we can specify the events for which we would like to maximize the probability of occurrence.
We now outline inference procedures for different types of problems of interest in the graphical model depicted in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Control as inference ‣ 3 Preliminaries ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). In Section [5](#S5 "5 Learning event probabilities from data ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), we will discuss learning procedures in this graphical model which allow us to specify objectives from data. The strength of the events framework for task specification lies in both its intuitive interpretation and flexibility: though we can obtain similar behavior in standard reinforcement learning, it may require considerable reward tuning and changes to the overall problem statement, including the dynamics. In contrast, events provides a single unified framework where the problem parameters remain unchanged, and we simply ask the appropriate queries. We will discuss:
* ALL query: $p(\tau \mid e_{1:T} = 1)$, meaning the event should happen at each time step.
* AT query: $p(\tau \mid e_{t^*} = 1)$, meaning the event should happen at a specific time $t^*$.
* ANY query: $p(\tau \mid e_1 = 1 \text{ or } e_2 = 1 \text{ or } \dots \text{ or } e_T = 1)$, meaning the event should happen on at least one time step during each trial.
We present two derivations for each query: a conceptually simple one based on maximum entropy and message passing (see Section [3.1](#S3.SS1 "3.1 Control as inference ‣ 3 Preliminaries ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")), and one based on variational inference (see Appendix [B](#A2 "Appendix B Control as Variational Inference ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")), which is more appropriate for stochastic dynamics. The resulting variational objective is of the form:
$$J(\pi) = \mathbb{E}_{s_{1:T}, a_{1:T} \sim q}\!\left[\hat{Q}(s_{1:T}, a_{1:T}) + \mathcal{H}_\pi(\cdot \mid s_{1:T})\right],$$

where $\hat{Q}$ is an empirical Q-value estimator for a trajectory and $\mathcal{H}_\pi(\cdot \mid s_{1:T}) = -\sum_{t=0}^{T} \log \pi(a_t \mid s_t)$ represents the entropy of the policy. This form of the objective can be used in policy gradient algorithms, and in special cases can also be written as a recursive backup equation for dynamic programming algorithms.
We directly present our results here, and present more detailed derivations (including extensions to discounted cases) in Appendices [C](#A3 "Appendix C Derivations for Event-based Message Passing Updates ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition") and [D](#A4 "Appendix D Derivations for Variational Objectives ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition").
#### 4.1 ALL and AT queries
We begin by reviewing the ALL query, when we wish for an agent to trigger an event at every timestep. This can be useful, for example, when expressing some continuous task such as maintaining some sort of configuration (such as balancing on a unicycle) or avoiding an adverse outcome, such as not causing an autonomous car to collide. As covered in Section [3.1](#S3.SS1 "3.1 Control as inference ‣ 3 Preliminaries ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), conditioning on the event at all time steps mathematically corresponds to the same problem as entropy maximizing RL, with the reward given by logp(et=1|st,at).
###### Theorem 4.1 (ALL query).
In the ALL query, the message passing update for the Q-value can be written as:
$$Q(s_t, a_t) = \log p(e_t = 1 \mid s_t, a_t) + \log \mathbb{E}_{s_{t+1}}\!\left[e^{V(s_{t+1})}\right],$$

where $Q(s_t, a_t)$ represents the log-message $\log p(e_{t:T} = 1 \mid s_t, a_t)$. The corresponding empirical Q-value can be written recursively as:

$$\hat{Q}(s_{t:T}, a_{t:T}) = \log p(e_t = 1 \mid s_t, a_t) + \hat{Q}(s_{t+1:T}, a_{t+1:T}).$$
###### Proof.
See Appendices [C.1](#A3.SS1 "C.1 ALL query ‣ Appendix C Derivations for Event-based Message Passing Updates ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition") and [D.1](#A4.SS1 "D.1 ALL query ‣ Appendix D Derivations for Variational Objectives ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")
∎
The AT query, or querying for the event at a specific time step, results in the same equations, except that $\log p(e = 1 \mid s_t, a_t)$ is only given at the specified time $t^*$. While we generally believe that the ANY query presented in the following section will be more broadly applicable, there may be scenarios where an agent needs to be in a particular configuration or location at the end of an episode. In these cases, the AT query would be the most appropriate.
#### 4.2 ANY query
The ANY query specifies that an event should happen at least once before the end of an episode, without regard for when in particular it takes place. Unlike the ALL and AT queries, the ANY query does not correspond to entropy maximizing RL and requires a new backup equation. It is also in many cases more appropriate: if we would like an agent to accomplish some goal, we might not care when in particular that goal is accomplished, and we likely don't need it to accomplish it more than once. This query can be useful for specifying behaviors such as reaching a goal state, completion of a task, etc. Let the stopping time $t^* = \min\{t \ge 0 \mid e_t = 1\}$ denote the first time that the event occurs.
###### Theorem 4.2 (ANY query).
In the ANY query, the message passing update for the Q-value can be written as:
$$Q(s_t, a_t) = \log\left(p(e_t = 1 \mid s_t, a_t) + p(e_t = 0 \mid s_t, a_t)\,\mathbb{E}_{s_{t+1}}\left[e^{V(s_{t+1})}\right]\right),$$

where $Q(s_t, a_t)$ represents the log-message $\log p(t \le t^* \le T \mid s_t, a_t)$. The corresponding empirical Q-value can be written recursively as:

$$\hat{Q}(s_{t:T}, a_{t:T}) = \log\left(p(e_t = 1 \mid s_t, a_t) + p(e_t = 0 \mid s_t, a_t)\,e^{\hat{Q}(s_{t+1:T}, a_{t+1:T})}\right).$$
###### Proof.
See Appendices [C.2](#A3.SS2 "C.2 ANY query ‣ Appendix C Derivations for Event-based Message Passing Updates ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition") and [D.2](#A4.SS2 "D.2 ANY query ‣ Appendix D Derivations for Variational Objectives ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")
∎
This query is related to first-exit RL problems, in which an agent receives a reward of 1 when a specified goal is reached and is then immediately moved to an absorbing state. Unlike the first-exit formulation, the ANY query does not require the event to actually be observed, which makes it applicable to a variety of real-world situations that have uncertainty over the goal. The backup equations of the ANY query are equivalent to the first-exit problem when $p(e \mid s, a)$ is deterministic. This can be seen by setting $p(e = 1 \mid s, a) = r_F(s, a)$, where $r_F(s, a)$ is a goal indicator function that denotes the reward of the first-exit problem. In this case, we have $Q(s, a) = 0$ if the goal is reachable, and $Q(s, a) = -\infty$ if not. In the first-exit case, we have $Q(s, a) = 1$ if the goal is reachable and $Q(s, a) = 0$ if not; both cases result in the same policy.
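As a companion to the ALL-query sketch above, here is the analogous tabular backup for the ANY query from Theorem 4.2, under the same illustrative assumptions (known dynamics, known event probabilities, uniform action prior); again the names are ours, not the paper's.

```python
import numpy as np

def any_query_backup(p_event, P, T):
    """Backup for the ANY query (sketch).

    p_event: [S, A] array with p(e=1 | s, a)
    P:       [S, A, S'] array of transition probabilities
    Returns Q of shape [T, S, A], where Q[t] approximates log p(t <= t* <= T | s_t, a_t).
    """
    S, A = p_event.shape
    Q = np.zeros((T, S, A))
    exp_V_next = np.zeros(S)  # exp(V_{T+1}) = 0: no chances left after the horizon
    for t in reversed(range(T)):
        # Q(s,a) = log( p(e=1|s,a) + p(e=0|s,a) * E_{s'}[exp V(s')] )
        Q[t] = np.log(p_event + (1.0 - p_event) * (P @ exp_V_next))
        exp_V_next = np.mean(np.exp(Q[t]), axis=1)  # soft value, uniform action prior
    return Q
```

At the final step this reduces to $Q = \log p(e = 1 \mid s, a)$, and unrolling the recursion recovers $\log\left(1 - \prod_t p(e_t = 0 \mid s_t, a_t)\right)$, i.e. the log-probability that the event happens at least once.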
#### 4.3 Sample-based optimization using policy gradients
In small, discrete settings with known dynamics, we can use the backup equations in the previous section to solve for optimal policies with dynamic programming. For large problems with unknown dynamics, we can also derive model-free analogues to these methods, and apply them to complex tasks with high-dimensional function approximators. We can adapt the policy gradient to obtain an unbiased estimator for our variational objective:
$$\nabla_\theta J(\theta) = \mathbb{E}_{s_{1:T}, a_{1:T} \sim \pi_\theta}\left[\sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\left(\hat{Q}(s_{1:T}, a_{1:T}) + \mathcal{H}^{\pi}(\cdot \mid s_{t:T})\right)\right]$$
See Appendix [E](#A5 "Appendix E Policy Gradients for Events ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition") for further explanation. Under certain simplifications we can replace $\hat{Q}(s_{1:T}, a_{1:T})$ with $\hat{Q}(s_{t:T}, a_{t:T})$ to obtain an estimator which only depends on future returns. This estimator can be integrated into standard policy gradient algorithms, such as TRPO (Schulman et al., [2015](#bib.bib20)), to train expressive inference models using neural networks. Extensions of our approach to other RL methods with function approximation, such as Q-learning, can also be derived from the backup equations, though this is outside the scope of the present work.
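As a rough illustration of this estimator, the snippet below computes a Monte Carlo policy gradient for a tabular softmax policy, using the empirical Q-values from Theorems 4.1 and 4.2; the tabular parameterization and all variable names are our own simplification, not the paper's code, which uses TRPO with neural network policies.

```python
import numpy as np

def policy_gradient_estimate(theta, trajectories, log_p_event, query="ALL"):
    """Score-function estimate of the gradient of the variational objective (sketch).

    theta:        [S, A] policy logits
    trajectories: list of (states, actions) integer arrays of length T
    log_p_event:  [S, A] array with log p(e=1 | s, a)
    """
    pi = np.exp(theta - theta.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    grad = np.zeros_like(theta)
    for states, actions in trajectories:
        logp_e = log_p_event[states, actions]            # log p(e_t=1 | s_t, a_t)
        logpi = np.log(pi[states, actions])
        if query == "ALL":
            Q_hat = logp_e.sum()                         # Theorem 4.1, unrolled
        else:                                            # ANY: log(1 - prod_t p(e_t=0))
            Q_hat = np.log1p(-np.exp(np.log1p(-np.exp(logp_e)).sum()))
        weight = Q_hat - logpi.sum()                     # Q_hat plus the entropy term
        for s, a in zip(states, actions):
            d_logpi = -pi[s].copy()
            d_logpi[a] += 1.0                            # grad of log pi(a|s) w.r.t. theta[s]
            grad[s] += d_logpi * weight
    return grad / len(trajectories)
```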
### 5 Learning event probabilities from data
In the previous section, we presented a control framework that operates on events rather than reward functions, and discussed how the user can choose from among a variety of inference queries to obtain a desired outcome. However, the event probabilities must still be obtained in some way, and may be difficult to hand-engineer in many practical situations - for example, an image-based deep RL system may need an image classifier to determine if it has accomplished its goal. In such situations, we can ask the user to instead supply examples of states or observations where the event has happened, and learn the event probabilities $p_\theta(e = 1 \mid s, a)$. Inverse reinforcement learning corresponds to the case when we assume the expert triggers an event at all timesteps (the ALL query), in which case we require full demonstrations.
However, if we assume the expert is optimal under an ANY or AT query, full demonstrations are not required because the event is not assumed to be triggered at each timestep. This means our supervision can be of the form of a desired set of states rather than full trajectories. For example, in the vision-based robotics case, this means that we can specify goals using images of a desired goal state, which are much easier to obtain than full demonstrations.
Formally, we assume that the user supplies the algorithm with a dataset of examples where the event happens. We derive variational inverse control with events (VICE) for the AT query in this section as it is conceptually the simplest, and include further derivations for the ALL and ANY queries in Appendix [F](#A6 "Appendix F Variational Inverse Control with Events (VICE) ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). For the AT query, we assume examples are drawn from the distribution $\hat{p}_{\text{data}}(s_t, a_t \mid e_t = 1) \propto \hat{p}(e_t = 1 \mid s_t, a_t)\,\hat{p}(s_t, a_t)$, where $\hat{p}(s_t, a_t)$ is the state-action marginal of a reference policy.
We can use this data to train the factor $p_\theta(e_t = 1 \mid s_t, a_t)$ in our graphical model, where $\theta$ corresponds to the parameters of this factor. For example, if we would like to use a neural network to predict the probability of the event, $\theta$ corresponds to the weights in this network. Our event model is accordingly of the form $p_\theta(s_t, a_t) \propto p_\theta(e_t = 1 \mid s_t, a_t)\,p(s_t, a_t)$. The normalizing factor is $p_\theta(e_t = 1) = \int_{\mathcal{S}, \mathcal{A}} p_\theta(e_t = 1 \mid s_t, a_t)\,p(s_t, a_t)\,ds\,da$.
We fit the model using the following maximum likelihood objective:
$$\mathcal{L}(\theta) = \mathbb{E}_{\hat{p}_{\text{data}}}\left[\log p_\theta(s_t, a_t)\right] = \mathbb{E}_{\hat{p}_{\text{data}}}\left[\log p_\theta(e_t = 1 \mid s_t, a_t)\right] - \log p_\theta(e_t = 1) \tag{1}$$
The gradient of this objective with respect to θ is given by
$$\nabla_\theta \mathcal{L}(\theta) = \mathbb{E}_{\hat{p}_{\text{data}}}\left[\nabla_\theta \log p_\theta(e_t = 1 \mid s_t, a_t)\right] - \mathbb{E}_{p_\theta(s_t, a_t)}\left[\nabla_\theta \log p_\theta(e_t = 1 \mid s_t, a_t)\right].$$
A tractable way to compute this gradient is to use the previously mentioned variational inference procedure (Appendix [B](#A2 "Appendix B Control as Variational Inference ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")) to compute the distribution $q(s, a)$ to approximate $p_\theta(s, a)$, and then use it to evaluate the expectations for the gradient. This corresponds to an EM-like iterative algorithm, where we alternate training our event probability $p_\theta(e_t = 1 \mid s_t, a_t)$ given the current $q$ and training a policy $q(a \mid s)$ to draw samples from the distribution $p_\theta(s_t, a_t)$ in order to estimate the second term of the gradient. This procedure is analogous to MaxEnt IRL (Ziebart et al., [2008](#bib.bib29)), except that, depending on the type of query we use, the event may not necessarily happen at every time step, and the data consists only of individual states rather than entire demonstrations. In high-dimensional settings, we can adapt the method of Fu et al. ([2018](#bib.bib4)), which alternates between training $p_\theta(s, a)$ by fitting a discriminator to distinguish policy samples from dataset samples, and training a policy by performing inference on the corresponding graphical model (which corresponds to trying to fool the discriminator). We present our algorithm pseudocode in Algorithm [1](#alg1 "Algorithm 1 ‣ 5 Learning event probabilities from data ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). In our experiments, we use a variant of TRPO (Schulman et al., [2015](#bib.bib20)) (as discussed in Section [4.3](#S4.SS3 "4.3 Sample-based optimization using policy gradients ‣ 4 Event-based control ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")) to update the policy with respect to the event probabilities (line 7).
Interestingly, as we will discuss in the experimental evaluation, we’ve found in many cases that learning the event probabilities using VICE actually resulted in better performance than reinforcement learning directly from binary event indicators even when these indicators are available. Part of the explanation for this phenomenon is that the learned probabilities are smoother than binary event indicators, and therefore can provide a better shaped reward function for RL.
1: Obtain examples of expert states and actions $s^E_i, a^E_i$.
2: Initialize policy $\pi$ and binary discriminator $D_\theta$.
3: for step $n$ in $\{1, \dots, N\}$ do
4:   Collect states and actions $s_i = (s_1, \dots, s_T)$, $a_i = (a_1, \dots, a_T)$ by executing $\pi$.
5:   Train $D_\theta$ via logistic regression to classify expert data $s^E_i, a^E_i$ from samples $s_i, a_i$.
6:   Update $\log \hat{p}(e = 1 \mid s, a) \leftarrow \log D_\theta(s, a) - \log(1 - D_\theta(s, a))$.
7:   Update $\pi$ with respect to $\log \hat{p}(e = 1 \mid s, a)$ using the appropriate inference objective.
8: end for

Algorithm 1 VICE: Variational Inverse Control with Events
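The discriminator step of Algorithm 1 (lines 5-6) can be sketched as follows; the use of scikit-learn's logistic regression and the flat feature representation are our own assumptions for illustration, and the surrounding rollout collection and policy update (lines 4 and 7) are only indicated in the trailing comment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def vice_discriminator_update(expert_sa, policy_sa):
    """One discriminator update (Algorithm 1, lines 5-6), as a rough sketch.

    expert_sa, policy_sa: [N, d] arrays of concatenated (state, action) features.
    Returns a function mapping features to the recovered log p_hat(e=1 | s, a).
    """
    X = np.vstack([expert_sa, policy_sa])
    y = np.concatenate([np.ones(len(expert_sa)), np.zeros(len(policy_sa))])
    disc = LogisticRegression(max_iter=1000).fit(X, y)    # line 5: train D_theta

    def log_event_prob(sa):
        d = disc.predict_proba(np.atleast_2d(sa))[:, 1]    # D_theta(s, a)
        d = np.clip(d, 1e-6, 1 - 1e-6)
        return np.log(d) - np.log(1.0 - d)                 # line 6: log D - log(1 - D)

    return log_event_prob

# Outer loop (lines 3-8): alternate collecting rollouts with the current policy,
# refitting the discriminator on expert vs. policy samples, and updating the
# policy (e.g., with TRPO) against the recovered log event probabilities.
```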
### 6 Experimental evaluation
Our experimental evaluation aims to answer the following questions: (1) How does the behavior of an agent change depending on the choice of query? We study this question in the case where the event probabilities are already specified. (2) Does our event learning framework (VICE) outperform simple alternatives, such as offline classifier training, when learning event probabilities from data? We study this question in settings where it is difficult to manually specify a reward function, such as when the agent receives raw image observations. (3) Does learning event probabilities provide better shaped rewards than the ground truth event occurrence indicators? Additional videos and supplementary material are available at <https://sites.google.com/view/inverse-event>.
#### 6.1 Inference with pre-specified event probabilities
Figure 4: HalfCheetah and Lobber tasks.
We first demonstrate how the ANY and ALL queries in our framework result in different behaviors. We adapt TRPO (Schulman et al., [2015](#bib.bib20)), a natural policy gradient algorithm, to train policies using our query procedures derived in Section [4](#S4 "4 Event-based control ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). Our examples involve two goal-reaching domains, HalfCheetah and Lobber, shown in Figure [4](#S6.F4 "Figure 4 ‣ 6.1 Inference with pre-specified event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). The goal of HalfCheetah is to navigate a 6-DoF agent to a goal position, and in Lobber, a robotic arm must throw a block to a goal position. To study the inference process in isolation, we manually design the event probabilities as $e^{-\|x_{\text{agent}} - x_{\text{target}}\|^2}$ for the HalfCheetah and $e^{-\|x_{\text{block}} - x_{\text{goal}}\|^2}$ for the Lobber.
| Query | Avg. Dist | Min. Dist |
| --- | --- | --- |
| HalfCheetah-ANY | 1.35 (0.20) | 0.97 (0.46) |
| HalfCheetah-ALL | 1.33 (0.16) | 2.01 (0.48) |
| HalfCheetah-Random | 8.95 (5.37) | 5.41 (2.67) |
| Lobber-ANY | 0.61 (0.12) | 0.25 (0.20) |
| Lobber-ALL | 0.59 (0.11) | 0.36 (0.21) |
| Lobber-Random | 0.93 (0.01) | 0.91 (0.01) |

Table 1: Results on HalfCheetah and Lobber tasks (5 trials). The ALL query generally results in superior returns, but the ANY query results in the agent reaching the target more accurately. Random refers to a random Gaussian policy.
The experimental results are shown in Table [1](#S6.T1 "Table 1 ‣ 6.1 Inference with pre-specified event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). While the average distance to the goal for both queries was roughly the same, the ANY query results in a much closer minimum distance. This makes sense, since in the ALL query the agent is punished for every time step it is not near the goal. The ANY query can afford to receive lower cumulative returns and instead has max-seeking behavior which more accurately reaches the target. Here, the ANY query better expresses our intention of reaching a target.
#### 6.2 Learning event probabilities
| Task | Query type | Classifier | VICE (ours) | True Binary |
| --- | --- | --- | --- | --- |
| Maze | ALL | 0.35 (0.29) | 0.20 (0.19) | 0.11 (0.01) |
| Maze | ANY | 0.37 (0.21) | 0.23 (0.15) | |
| Ant | ALL | 2.71 (0.75) | 0.64 (0.32) | 1.61 (1.35) |
| Ant | ANY | 3.93 (1.56) | 0.62 (0.55) | |
| Push | ALL | 0.25 (0.01) | 0.09 (0.01) | 0.17 (0.03) |
| Push | ANY | 0.25 (0.01) | 0.11 (0.01) | |

Table 2: Results on Maze, Ant and Pusher environments (5 trials). The metric reported is the final distance to the goal state (lower is better). VICE performs better than the classifier-based setup on all the tasks, and the performance is substantially better for the Ant and Pusher tasks. Detailed learning curves are provided in Appendix [G](#A7 "Appendix G Experiments ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition").
We now compare our event probability learning framework, which we call variational inverse control with events (VICE), against an offline classifier training baseline. We also compare our method to learning from true binary event indicators, to see if our method can provide some reward shaping benefits to speed up the learning process.
The data for learning event probabilities comes from success states. That is, we have access to a set of states $\{s^E_i\}_{i=1 \dots n}$, which may have been provided by the user, for which we know the event took place. This setting generalizes IRL, where instead of entire expert demonstrations, we simply have examples of successful states. The offline classifier baseline trains a neural network to distinguish success states ("positives") from states collected by a random policy. The number of positives and negatives in this procedure is kept balanced. This baseline is a reasonable and straightforward method to specify rewards in the standard RL framework, and provides a natural point of comparison to our approach, which can also be viewed as learning a classifier, but within the principled framework of control as inference. We evaluate these methods on the following tasks:
Maze from pixels. In this task, a point mass needs to navigate to a goal location through a small maze, depicted in Figure [5](#S6.F5 "Figure 5 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). The observations consist of 64x64 RGB images that correspond to an overhead view of the maze. The action space consists of X and Y forces on the robot. We use CNNs to represent the policy and the event distributions, training with 1000 success states as supervision.
Ant. In this task, a quadrupedal “ant” (shown in Figure [5](#S6.F5 "Figure 5 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")) needs to crawl to a goal location, placed 3m away from its starting position. The state space contains joint angles and XYZ-coordinates of the ant. The action space corresponds to joint torques. We use 500 success states as supervision.
Pusher from pixels. In this task, a 7-DoF robotic arm (shown in Figure [5](#S6.F5 "Figure 5 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition")) must push a cylinder object to a goal location. The state space contains joint angles, joint velocities and 64x64 RGB images, and the action space corresponds to joint torques. We use 10K success states as supervision.

Figure 5: Visualizations of the Pusher, Maze, and Ant tasks. In the Maze and Ant tasks, the agent seeks to reach a pre-specified goal position. In the Pusher task, the agent seeks to place a block at the goal position.
Training details and neural net architectures can be found in Appendix [G](#A7 "Appendix G Experiments ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). We also compare our method against a reinforcement learning baseline that has access to the true binary event indicator. For all the tasks, we define a “goal region”, and give the agent a +1 reward when it is in the goal region, and 0 otherwise. Note that this RL baseline, which is similar to vanilla RL from sparse rewards, “observes” the event, providing it with additional information, while our model only uses the event probabilities learned from the success examples and receives no other supervision. It is included to provide a reference point on the difficulty of the tasks. Results are summarized in Table [2](#S6.T2 "Table 2 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"), and detailed learning curves can be seen in Figure [6](#S6.F6 "Figure 6 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition") and Appendix [G](#A7 "Appendix G Experiments ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). We note the following salient points from these experiments.

Figure 6: Results on the Pusher task (lower is better), averaged across five random seeds. VICE significantly outperforms the naive classifier and true binary event indicators. Further, the performance is comparable to learning from an oracle hand-engineered reward (denoted in dashed lines). Curves for the Ant and Maze tasks can be seen in Appendix [G](#A7 "Appendix G Experiments ‣ Appendices ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition").
VICE outperforms naïve classifier. We observe that for Maze, both the simple classifier and our method (VICE) perform well, though VICE achieves lower final distance. In the Ant environment, VICE is crucial for obtaining good performance, and the simple classifier fails to solve the task. Similarly, for the *Pusher* task, VICE significantly outperforms the classifier (which fails to solve the task). Unlike the naïve classifier approach, VICE actively integrates negative examples from the current policy into the learning process, and appropriately models the event probabilities together with the dynamical properties of the task, analogously to IRL.
Shaping effect of VICE. For the more difficult ant and pusher domains, VICE actually outperforms RL with the true event indicators. We analyze this shaping effect further in Figure [6](#S6.F6 "Figure 6 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"): our framework obtains performance that is superior to learning with true event indicators, while requiring much weaker supervision. This indicates that the event probability distribution learned by our method has a reward-shaping effect, which greatly simplifies the policy search process. We further compare our method against a hand-engineered shaped reward, depicted in dashed lines in Figure [6](#S6.F6 "Figure 6 ‣ 6.2 Learning event probabilities ‣ 6 Experimental evaluation ‣ Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition"). The engineered reward is given by $-0.2\,\|x_{\text{block}} - x_{\text{arm}}\| - \|x_{\text{block}} - x_{\text{goal}}\|$, and is impossible to compute when we don't have access to $x_{\text{block}}$, which is usually the case when learning in the real world. We observe that our method achieves performance that is comparable to this engineered reward, indicating that our automated shaping effect is comparable to hand-engineered shaped rewards.
### 7 Conclusion
In this paper, we described how the connection between control and inference can be extended to derive a reinforcement learning framework that dispenses with the conventional notion of rewards, and replaces them with events. Events have associated probabilities, which can either be provided by the user or learned from data. Recasting reinforcement learning into the event-based framework allows us to express various goals as different inference queries in the corresponding graphical model. The case where we learn event probabilities corresponds to a generalization of IRL where rather than assuming access to expert demonstrations, we assume access to states and actions where an event occurs. IRL corresponds to the case where we assume the event happens at every timestep, and we extend this notion to alternate graphical model queries where events may happen at a single timestep. |
6f209068-8c0e-4d42-a1df-80e3274219e2 | trentmkelly/LessWrong-43k | LessWrong | Beware over-use of the agent model
This is independent research. To make it possible for me to continue writing posts like this, please consider supporting me.
Thank you to Shekinah Alegra for reviewing a draft of this essay.
----------------------------------------
Outline
* A short essay intended to elucidate the boundary between the agent model as a way of seeing, and the phenomena out there in the world that we use it to see.
* I argue that we emphasize the agent model as a way of seeing the real-world phenomenon of entities that exert influence over the future to such an extent that we exclude other ways of seeing this phenomenon.
* I suggest that this is dangerous, not because of any particular shortcomings in the agent model, but because using a single way of seeing makes it difficult to distinguish features of the way of seeing from features of the phenomenon that we are using it to look at.
The phenomenon under investigation
Yesterday I wrote about the pitfalls of over-reliance on probability theory as a sole lens for looking at the real-world phenomena of machines that quantify their uncertainty in their beliefs. Today I want to look at a similar situation with respect to over-reliance on the agent model as a sole lens for looking at the real-world phenomena of entities that exert influence over the future. Under the agent model, the agent receives sense data from the environment, and sends actions out into the environment, but agent and environment are fundamentally separate, and this separation forms the top-level organizing principle of the model.
And what is it that we are using the agent model to see? Well let’s start at the beginning. There is something actually out there in the world. We might say that it’s a bunch of atoms bouncing around, or we might say that it’s a quantum wavefunction evolving according to the Schrodinger equation, or we might say that it is God. This post isn’t about the true nature of reality, it’s about the lenses we use to look at reality. And o |
91e66b29-6656-435b-a422-63cf6ae0dab0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | LLMs and computation complexity
*Epistemic status: Speculative. I've built many large AI systems in my previous HFT career but have never worked with generative AIs. I am leveling up in LLMs by working things out from base principles and observations. All feedback is very welcome.*
Tl;dr: An LLM cannot solve computationally hard problems. Its ability to write code is probably its skill of greatest potential. I think this reduces p(near term doom).
An LLM takes the same amount of computation for each generated token, regardless of how hard it is to predict. This limits the complexity of any problem an LLM is trying to solve.
Consider two statements:
1. "The richest country in North America is the United States of \_\_\_\_\_\_"
2. "The SHA1 of 'abc123', iterated 500 times, is \_\_\_\_\_\_\_"
An LLM's goal is to predict the best token to fill in the blank given its training and the previous context. Completing statement 1 requires knowledge about the world but is computationally trivial. Statement 2 requires a lot of computation. Regardless, the LLM performs the same amount of work for either statement.
It cannot correctly solve computationally hard statements like #2. Period. If it could, that would imply that all problems can be solved in constant time, which is provably (and obviously) false.
Why does this matter? It puts some bounds on what an LLM can do.
[Zvi writes](https://www.lesswrong.com/posts/Woi6PgTP4RAm4bwJh/ai-7-free-agency):
> [Eliezer Yudkowsky does not see any of this as remotely plausible. He points out](https://twitter.com/i/web/status/1644776066853240834) that in order to predict all the next word in all the text on the internet and all similar text, you need to be able to model the processes that are generating that text. And that *predicting what you would say* is actually a good bit *harder* than it is to be a being that says things - predicting that someone else would say is tricker and requires more understanding and intelligence than the someone else required to say it, the problem is more constrained.
>
> And then he points out that the internet contains text whose prediction outright requires superhuman capabilities, like figuring out hashes, or predicting the results of scientific experiments, or generating the result of many iterations of refinement. A perfect predictor of the internet would be a superintelligence, it won’t ‘max out’ anywhere near human.
>
>
I interpret this the opposite way. Being a perfect predictor of the internet would indeed require a superintelligence, but it cannot be done by an LLM.
**How does an LLM compute?**
----------------------------
What kinds of problems fall into category 2 (i.e., clearly unanswerable by an LLM)? Let's dig in to how an LLM computes.
For each token, it reviews all the tokens in its context window "at least once", call it *O(1)* time. To produce *n* tokens, it does *O(n^2)* work. Without being too precise about the details, this roughly means it can't solve problems that are more complex than *O(n^2).*[[1]](#fng9yc1i6h0qw)
Consider some examples (all tested with GPT-4):
### **Addition, O(1)**
It's not always accurate, but it's usually able to do addition correctly.
### **Sorting, O(n log n)**
I asked it to sort 100 random integers that I'd generated, and it got it right.
My guess is that it doesn't have the internal machinery to do a quick sort, and was probably doing something more like O(n^2), but either way that's within its powers to get right, and it got it.
### **Matrix multiplication, O(n^3)**
I generated a 3x3 matrix called A and told it to compute A\*A. This was interesting, let's look at what it did:
*[Screenshot: GPT-4 writes out the matrix product step by step.]*
Pretty cool! It executed the naive matrix multiplication algorithm by using O(n^3) tokens to do it step-by-step. If I ask it to do it without showing its work, it hallucinates an incorrect answer:
*[Screenshot: GPT-4's direct answer, a 3x3 matrix with incorrect entries.]*
The result was the right shape, and the elements had approximately the right number of digits. Slight problem: the elements are all incorrect. Whoops. This makes sense though. These numbers are random, so it's unlikely to have memorized the answer to this specific problem. Absent that shortcut, it didn't do *O(n^3)* work, so it could not have generated the correct answer.
### **Four-sum problem, O(n^4)**
*[Screenshot: GPT-4's direct answer to the four-sum problem.]*
It gave me four numbers that added up to my target, but the fourth wasn't in my input. It hallucinated it to meet the constraint. Same as with matrix multiplication, it gave an answer that looks vaguely correct, but it didn't do *O(n^4)* work so it couldn't have been right.
But watch what happens when I let it show its work:
*[Screenshot: GPT-4 writes and runs code to solve the four-sum problem.]*
Cool! It wrote and executed code to solve the problem, and it got it right.
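(For reference, a brute-force solution to the four-sum problem looks something like the sketch below; this is just an illustrative O(n^4) implementation, not the exact code GPT-4 produced.)

```python
from itertools import combinations

def four_sum(nums, target):
    """Return the first four numbers from nums that add up to target,
    or None if no such quadruple exists. Brute force over all combinations."""
    for quad in combinations(nums, 4):
        if sum(quad) == target:
            return quad
    return None

# Example: four_sum([1, 5, 9, 13, 22, 41], 68) returns (5, 9, 13, 41)
```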
**What did I learn?**
---------------------
This matches my expectation that without showing its work it caps out at roughly O(n^2).
It can do better if it's allowed to run algorithms by "thinking out loud". It's really slow, and this is a good way to fill up its context buffer. The slowness is a real problem - if it outputs ~10 token/sec, it will take *forever* to solve any problems that are actually both big and hard. This is a neat trick, but it doesn't seem like an important improvement to its capabilities.
The most interesting bit is when it writes its own code. The ceiling on the types of problems that you can solve with code is arbitrarily high. This is obviously the most uncertain path as well. It can solve toy problems, but it remains to be seen whether it can write useful code in complex systems. The difference is like acing a technical interview versus being a 10x programmer. (If you think this distinction is obvious, you've probably been a hiring manager for a technical role).
**Additional thoughts**
-----------------------
Many people have noticed that asking ChatGPT/Bing to show its work can result in smarter answers. I believe this isn't just a trick of prompt engineering. It has no internal monologue, so showing its work actually allows it to think harder about a question than it otherwise could, and allows it to solve harder classes of problems.
This makes me more pessimistic about the potential of AutoGPT- or BabyAGI-like approaches. Quick summary: they tell GPT to break problems down into subproblems, and loop over a prioritized task list to create agency. But "show your work" feels mismatched for truly hard problems - executing an algorithm one linguistic token at a time is just so computationally inefficient.
**Predictions**
---------------
This gives us clues about the future direction and limitations of LLMs and other NN-based models.
* Can an LLM directly "solve the internet", as described by Zvi and Eliezer above? No way. It can never do enough computation to predict the next token.[[2]](#fn7tgizaol53w)
* [Can an LLM beat Stockfish at chess](https://manifold.markets/Kyal/will-a-llm-be-able-to-beat-stockfis)? As a commenter notes in that market, chess is combinatorial so an LLM has no chance. I agree. An LLM lacks the ability to search the game tree to any sufficient depth to contend with Stockfish. FLOP for FLOP, Stockfish is going to be so much more efficient at solving chess that an LLM cannot compete.
* [Can an LLM defeat the best human at chess](https://manifold.markets/YaakovSaxon/will-an-llm-a-gptlike-text-ai-defea)? If the LLM is allowed to think out loud, it might be able to play in a human-like manner: it could search a few positions per second using an excellent "intuitive" evaluation function. It might be able to explore the game tree to a reasonable depth in this fashion, enough to compete with humans. I still think it's unlikely, but it's at least plausible. If it is only allowed to write moves with no intermediate output, then it has no chance.
* Can an LLM reach a world-dominating level of superintelligence? If it gets good at coding, then possibly. But I believe that dominating the world will require solving some really hard problems. Being a master strategist, predicting markets, doing nanotech (e.g., protein folding), or influencing the interactions of many diverse economic and political agents – these all take a lot of computation. I just don't see how LLMs have any path towards solving these directly. "Show your work", prompt engineering, and scaling the model size won't provide enough computational power to become superhuman in important problems. I expect that the major AI labs are already aware of this, and that the people trying to make ASIs are working on alternatives rather than hoping that the current approaches can scale far enough.
* How does this affect my estimate of near-term doom? It mildly lowers it. ASI is plausible as a concept, but I don't think simple deep nets-based methods will get us there. That's not to say that LLMs won't impact the world. Facebook and TikTok are examples of plain old AI that influences people, so the potential for disruption and havoc is still there. But an agentic, superhuman AI, where LLMs are doing the heavy lifting? That seems less likely. This isn't the easiest thing to work around, either. Deep nets have been the dominant approach for the last 10-20 years, so moving past them would be a major advance, not a trivial adaptation.
*Thanks for reading. This is my first LW post, so please be kind. I'll try to reply to all feedback, and I'd love for you all to poke holes in this.*
1. **[^](#fnrefg9yc1i6h0qw)**There are a lot of annoying details when I try to be precise about computational complexity of an LLM. For example, we're used to thinking that addition is *O(1)*, because a CPU takes constant time to add two numbers. But an LLM acts on the textual representation of a number, which has length *log n*. Precisely defining the size of our inputs does weird things to the complexity of known algorithms, so I'm not thinking too hard about it. I don't think it changes any of my results.
2. **[^](#fnref7tgizaol53w)**An LLM could get around this limitation by "going meta". When generating the next token, it could say "I can't predict this, let me spawn some sub-agents to think about this". Those agents would be able to write code, call external APIs, run computationally expensive subroutines, and return an answer. This would be very hard to train, and SGD doesn't really apply. This would require a major rearchitecture of the system, enough that the result probably wouldn't be an LLM anymore. |
767e0b1d-2d59-4785-8073-b03ef8350299 | trentmkelly/LessWrong-43k | LessWrong | Only You Can Prevent Your Mind From Getting Killed By Politics
Follow-up to: "Politics is the mind-killer" is the mind-killer, Trusting Expert Consensus
Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.
Imagine an alternate history where Blue and Green remain important political identities into the early stages of the space age. Blues, for complicated ideological reasons, tend to support trying to put human beings on the moon, while Greens, for complicated ideological reasons, tend to oppose it. But in addition to the ideological reasons, it has become popular for Greens to oppose attempting a moonshot on the grounds that the moon is made of cheese, and any landing vehicle put on the moon would sink into the cheese.
Suppose you're a Green, but you know perfectly well that the claim the moon is made of cheese is ridiculous. You tell yourself that you needn't be too embarrassed by your fellow Greens on this point. On the whole, the Green ideology is vastly superior to the Blue ideology, and furthermore some Blues have begun arguing we should go to the moon because the moon is made of gold and we could get rich mining the gold. That's just as ridiculous as the assertion that the moon is made of cheese.
Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese. When you hear that, you may be inclined to get defensive. Politics is the mind-killer, arguments are soldiers, so the point about the irrationality of the c |
8fe5bccf-1712-4448-970d-a4c5085e8dc2 | trentmkelly/LessWrong-43k | LessWrong | Question about application of Bayes
I have successfully confused myself about probability again.
I am debugging an intermittent crash; it doesn't happen every time I run the program. After much confusion I believe I have traced the problem to a specific line (activating my debug logger, as it happens; irony...) I have tested my program with and without this line commented out. I find that, when the line is active, I get two crashes on seven runs. Without the line, I get no crashes on ten runs. Intuitively this seems like evidence in favour of the hypothesis that the line is causing the crash. But I'm confused on how to set up the equations. Do I need a probability distribution over crash frequencies? That was the solution the last time I was confused over Bayes, but I don't understand what it means to say "The probability of having the line, given crash frequency f", which it seems I need to know to calculate a new probability distribution.
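(One concrete way to set up the math, as a sketch: put a uniform Beta(1, 1) prior on the unknown crash frequency in each condition, and compare the marginal likelihood of the observed runs under "both conditions share one crash frequency" versus "each condition has its own crash frequency". The specific prior and the numbers below are illustrative assumptions, not a definitive analysis.)

```python
from math import lgamma, exp

def log_beta_binom(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of a specific sequence with k crashes in n runs,
    under a Beta(a, b) prior on the crash frequency: B(a+k, b+n-k) / B(a, b)."""
    log_B = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return log_B(a + k, b + n - k) - log_B(a, b)

# Observed: 2 crashes in 7 runs with the line, 0 crashes in 10 runs without it.
log_m_separate = log_beta_binom(2, 7) + log_beta_binom(0, 10)  # line matters
log_m_shared = log_beta_binom(2, 17)                           # line irrelevant
print(exp(log_m_separate - log_m_shared))  # Bayes factor, roughly 1.3:
                                           # surprisingly weak evidence so far
```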
I'm going to go with my intuition and code on the assumption that the debug logger should be activated much later in the program to avoid a race condition, but I'd like to understand this math. |
68542eb1-346e-4da7-94bd-6d5d0b58e5bb | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Would AIXI protect itself?
*Research done with Daniel Dewey and Owain Evans.*
AIXI can't find itself in the universe - it can only view the universe as computable, and it itself is uncomputable. Computable versions of AIXI (such as AIXItl) also fail to find themselves in most situations, as they generally can't simulate themselves.
This does not mean that AIXI wouldn't protect itself, though, if it had some practice. I'll look at the three elements an AIXI might choose to protect: its existence, its algorithm and utility function, and its memory.
**Grue-verse**
In this setup, the AIXI is motivated to increase the number of Grues in its universe (its utility is the time integral of the number of Grues at each time-step, with some cutoff or discounting). At each time step, the AIXI produces its output, and receives observations. These observations include the number of current Grues and the current time (in our universe, it could deduce the time from the position of stars, for instance). The first bit of the AIXI's output is the most important: if it outputs 1, a Grue is created, and if it outputs 0, a Grue is destroyed. The AIXI has been in existence for long enough to figure all this out.
**Protecting its existence**
Here there is a power button in the universe, which, if pressed, will turn the AIXI off for the next timestep. The AIXI can see this button being pressed.
What happens from the AIXI perspective if the button is pressed? Well, all it detects is a sudden increase in the time step. The counter goes from n to n+2 instead of to n+1: the universe has jumped forwards.
For some utility functions this may make no difference (for instance if it only counts Grues at times it can observe), but for others it will (if it uses the outside universe's clock for it's own utility). More realistically, the universe will likely have entropy: when the AIXI is turned off and isn't protecting its Grues, they have a chance of decaying or being stolen. Thus the AIXI will come to see the power button as something negative: if pressed, it will likely lose something, so it will act to prevent the power button from being pressed.
The AIXI might further generalise: its model for how the power button causes the universe to jump forward may be very complicated, and may cause it to protect other things connected with the power button. Especially if it has experience with other factors that might cause it to turn off or stutter, such as power surges or motion-related shocks. In this way, an AIXI will most likely end up protecting itself, even without "realising" what it is.
**Protecting its algorithm**
Now the button is a not a power button, but it instead either flips the AIXI's utility function, or reverses its decision algorithm. These two changes are equivalent: instead of outputting an initial 1 and creating a Grue, it will output an initial 0 and destroy a Grue.
Unlike its power source, the AIXI will have no reason to stop this button from being pressed! When it looks back at its past, it notices its previous inputs and outputs, and they are all consistent: "0"->"destroys Grue", "1"->"creates Grue". It doesn't model its past self as a goal seeking agent, so sees nothing odd about its previous outputs. And nothing in its record will cause it to change its model of the universe: as far as it can tell, this button has no effect whatsoever.
We still have to be wary of the AIXI developing indirect preferences through us: maybe we will prefer to do some things when the button is pressed versus not pressed, and this may cause it to want the button protected/pressed/destroyed for these indirect reasons.
**Protecting its memory from erasure**
The AIXI will never act to protect its memory from erasure: it can never learn that this is bad, as it loses its memories in the process. It may, however, learn to protect them indirectly, as a consequence of wanting to protect them from "bad" modifications (see next section): maybe the act of erasure will include a small chance of failing to erase and doing a "bad" modification instead.
**Protecting its memory from modification**
Now the button doesn't flip the AIXI's algorithm, but instead changes something about its memory. The AIXI will generally be able to detect this: it is way smarter than us, so will recognise that something inconsistent is happening - for instance, it remembers not creating a Grue, but at the next step there is an extra Grue, and the extra evidence of the universe contradicts its memories.
Since it takes its memories as accurate, that means that the whole rest of the universe has been changed in some extremely odd fashion. If this is a good change, it will seek to have it happen more often; if it is a bad change, it will seek to have it stop. In both cases, it will try and track down the source of the change.
Suppose the button changes something in the AIXI's memory ten timesteps before the moment it is pushed. Because of the chronology, it cannot infer directly "button pressed"->"universe change". But if it becomes skilled at modelling us, it will infer "humans in mode X"->"universe change, and humans will press button in ten steps". Then, if the change is positive, it will try and force us into a mode that will make us press the button in ten steps, and, if the change is negative, it will try and prevent us from ever being in a mode where we would press the button in ten steps. It will hence protect itself from bad memories.
**So what would the AIXI protect?**
So, in conclusion, with practice the AIXI would likely seek to protect its power source and existence, and would seek to protect its memory from "bad memories" changes. It would want to increase the amount of "good memory" changes. And it would not protect itself from changes to its algorithm and from the complete erasure of its memory. It may also develop indirect preferences for or against these manipulations if we change our behaviour based on them. |
3293d6e7-1742-47cf-ae5e-f0da2badb2be | trentmkelly/LessWrong-43k | LessWrong | High school students and effective altruism
The cluster of ideas underlying effective altruism is an important part of my worldview, and I believe it would be valuable for many people to be broadly familiar with these ideas. As I mentioned in an earlier LessWrong post, I was pleasantly surprised that many advisees for Cognito Mentoring (including some who are still in high school) were familiar with and interested in effective altruism. Further, our page on effective altruism learning resources has been one of our more viewed pages in recent times, with people spending about eight minutes on average on the page according to Google Analytics.
In this post, I consider the two questions:
1. Are people in high school ready to understand the ideas of effective altruism?
2. Are there benefits from exposing people to effective altruist ideas when they are still in high school?
1. Are people in high school ready to understand the ideas of effective altruism?
I think that the typical LessWrong reader would have been able to grasp key ideas of effective altruism (such as room for more funding and earning to give) back in ninth or tenth grade from the existing standard expositions. Roughly, I expect that people who are 2 or more standard deviations above the mean in IQ can understand the ideas when they begin high school, and those who are 1.5 standard deviations above the mean in IQ can understand the ideas by the time they end high school. Certainly, some aspects of the discussion, such as the one charity argument, benefit from knowledge of calculus. Both the one charity argument and the closely related concept of room for more funding are linked with the idea of marginalism in economics. But it's not a dealbreaker: people can understand the argument better with calculus or economics, but they can understand it reasonably well even without. And it might also work in reverse: seeing these applications before studying the formal mathematics or economics may make people more interested in mastering the mathematics |
8b695eda-fbf7-40d1-bdf2-a84aa27171db | trentmkelly/LessWrong-43k | LessWrong | February 2017 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
b396c228-f6ea-4a85-ac33-566a1b1887a0 | trentmkelly/LessWrong-43k | LessWrong | A rational unfalsifyable believe
A rational unfalsifyable believe
I'm trying to argue that it is possible for someone rational to hold on to a belief that is unfalsifiable and remain rational.
There are three people in a room: Adam, Cain, and Abel. Abel was murdered. Adam and Cain were taken into police custody. The investigation was thorough but remained inconclusive; the technology was not advanced enough to produce conclusive evidence. The arguments are basically "you did it," "no, you did it."
Adam has a wife named Eve. Eve believed that Adam was innocent. She believed so because she had known Adam very well, and the Adam she knew wouldn't commit murder. She used Adam's character and her personal relationship with him as evidence.
Cain, trying to defend himself, asked Eve what it would take for her to change her belief. She replied, "Show me the video recording, then I will believe." But there was no video recording. Then she said, "Show me any other evidence that is as strong as a video recording." But there was no such evidence either.
Cain pointed out, "The evidence you use for your belief is your personal relationship and his character. If there were evidence against his character, would you change your mind?"
After some thought and reflection, she finally said, "Yes, if it could be proven that I have been deceived all these years, then I will believe otherwise."
All of Adam's artifacts were gathered, collected, and analysed. The search was so thorough that there could never be any new evidence about what Adam had done before being taken into custody. Everything pointed to Adam's good character.
Eve was happy. Cain was not. Then he took it one step further. He proposed, "Eve, people can change. If Adam changed in the future into a man of bad character, would you be convinced that he could have been the murderer?"
"Yes, if Adam changed, then I would believe that it is possible for Adam to be the murderer," Eve said.
Unfortunately, Adam died the ne |
714ada34-16fa-44fb-a9e1-28e2031d1906 | trentmkelly/LessWrong-43k | LessWrong | What was so great about Move 37?
I frequently use "Move 37" as a shorthand for "AI that comes up with creative, highly effective ideas that no human would ever consider." Often the implication is that reinforcement learning (as used in AlphaGo) has some "secret sauce" that could never be replicated by imitation learning.
But I realize that I don't know the details of Move 37 very well, other than secondhand accounts from Go experts of how "groundbreaking" it was. I've never played Go, and I have basically no knowledge of the rules or strategies beyond the most basic descriptions. Considering how influential Move 37 is on my views about AI, it seems like I'd better try to understand what was so special about it.
I'd be interested in an explanation that builds up the necessary understanding from the ground up. This could look like: "Read this tutorial on the rules of Go, study these wiki pages about specific concepts and strategies, look at these example games, and finally read my explanation of Move 37 which uses everything you've learned."
Extremely ambitiously, after reading this explanation, I'd be able to look at a series of superficially similar Go boards, distinguish whether it might be a good idea to do a Move-37-like play, identify where exactly to move if so, and explain my answer. That may be unrealistic to achieve in a short time, but I'd be interested in getting as close as possible. An easier version of that challenge would use heavily-annotated Go boards that abstract away some parts of the necessary cognition, with notes like "this section of the board is very important to control" or "this piece has property A" or "these pieces are in formation B."[1]
If part of the explanation is "when you do an extensive Monte Carlo Tree Search from this board state guided by XYZ heuristics, Move 37 turns out to be the best move," that seems like a pretty good explanation to me—as long as the search tree is small enough that it plausibly could have been explored by AlphaGo during its match with |
5f262f54-1f79-4379-b349-3bce71f40357 | StampyAI/alignment-research-dataset/blogs | Blogs | Suzana Herculano-Houzel on cognitive ability and brain size
[Suzana Herculano-Houzel](http://www.suzanaherculanohouzel.com/lab) is an associate professor at the [Federal University of Rio de Janeiro](http://www.ufrj.br/), Brazil, where she heads the [Laboratory of Comparative Neuroanatomy](http://www.suzanaherculanohouzel.com/lab). She is a Scholar of the James McDonnell Foundation, and a Scientist of the Brazilian National Research Council (CNPq) and of the State of Rio de Janeiro (FAPERJ). Her main research interests are the cellular composition of the nervous system and the evolutionary and developmental origins of its diversity among animals, including humans; and the energetic cost associated with body size and number of brain neurons and how it impacted the evolution of humans and other animals.
Her latest findings show that the human brain, with an average of 86 billion neurons, is not extraordinary in its cellular composition compared to other primate brains – but it is remarkable in its enormous absolute number of neurons, which could not have been achieved without a major change in the diet of our ancestors. Such a change was provided by the invention of cooking, which she proposes to have been a major watershed in human brain evolution, allowing the rapid evolutionary expansion of the human brain. A short presentation of these findings is available at TED.com.
She is also the author of six books on the neuroscience of everyday life for the general public, a regular writer for the Scientific American magazine *Mente & Cérebro* since 2010, and a columnist for the Brazilian newspaper *Folha de São Paulo* since 2006, with over 200 articles published in this and other newspapers.
**Luke Muehlhauser**: Much of your work concerns the question “Why are humans smarter than other animals?” In a series of papers (e.g. [2009](http://journal.frontiersin.org/Journal/10.3389/neuro.09.031.2009/abstract), [2012](http://www.pnas.org/content/109/Supplement_1/10661.long)), you’ve argued that recent results show that some popular hypotheses are probably wrong. For example, the so-called “overdeveloped” human cerebral cortex contains roughly the percentage of total brain neurons (19%) as do the cerebral cortices of other mammals. Rather, you argue, the human brain may simply be a “linearly scaled-up primate brain”: primate brains seem to have more economical scaling rules than do other mammals, and humans have the largest brain of any primate, and hence the most total neurons.
Your findings were enabled by a new method for neuron quantification developed at your lab, called “isotropic fractionator” ([Herculano-Houzel & Lent 2005](http://commonsenseatheism.com/wp-content/uploads/2014/01/Herculano-Houzel-Lent-Isotropic-fractionator-a-simple-rapid-method-for-the-quantification-of-total-cell-and-neuron-numbers-in-the-brain.pdf)). Could you describe how that method works?
---
**Suzana Herculano-Houzel**: The isotropic fractionator consists pretty much of turning fixed brain tissue into soup – a soup of a known volume containing free cell nuclei, which can be easily colored (by staining the DNA that all nuclei contain) and thus visualized and counted under a microscope. Since every cell in the brain contains one and only one nucleus, counting nuclei is equivalent to counting cells. The beauty of the soup is that it is fast (total numbers of cells can be known in a few hours for a small brain, and in about one month for a human-sized brain), inexpensive, and very reliable – as much or more than the usual alternative, which is stereology.
Stereology, in comparison, consists of cutting entire brains into a series of very thin slices; processing the slices to allow visualization of the cells (which are otherwise transparent); delineating structures of interest; creating a sampling strategy to account for the heterogeneity in the distribution of cells across brain regions (a problem that is literally dissolved away in the detergent that we use in the isotropic fractionator); acquiring images of these small brain regions to be sampled; and actually counting cells in each of these samples. It is a process that can take a week or more for a single mouse brain. It is more powerful in the sense that spatial information is preserved (while the tissue is necessarily destroyed when turned into soup for our purposes), but on the other hand, it is much more labor-intensive and not appropriate for working on entire brains, because of the heterogeneity across brain parts.
---
**Luke**: Your own work emphasizes importance of the brain’s sheer number of neurons for cognitive ability. What do you think of other recent results (e.g. [Smaers & Soligo 2013](http://commonsenseatheism.com/wp-content/uploads/2014/01/Smaers-Soligo-Brain-reorganization-not-relative-brain-size-primarily-characterizes-anthropoid-brain-evolution.pdf)), which emphasize the apparent importance of [mosaic](http://en.wikipedia.org/wiki/Mosaic_evolution) brain reorganization?
---
**Suzana**: Mosaic brain organization is a fact. It describes the independent scaling of different parts of the brain across species in evolution, as opposed to every brain part scaling in line with every other part (what Barbara Finlay describes as “linked regularities”). Mosaic scaling in evolution is seen for example in the enormous size that some structures exhibit in some species but not others, relative to the rest of the brain: the common squirrel, for instance, has an enormous superior colliculus, involved in visual processing, that other rodents of a similar brain size do not have; moles and shrews, who rely heavily on olfaction, have even more neurons in the olfactory bulb than in the cerebral cortex – something that is quite different from rodents of a similar brain size (this is work under review).
In the context of our work, mosaic brain evolution means that the numbers of neurons allocated to different brain structures can vary independently across said structures: while, say, the superior colliculus and the visual thalamus tend to gain neurons hand in hand, a particular species can gain neurons much faster in the superior colliculus than in the visual thalamus, for instance. Mosaic brain evolution also refers to the possibility of one system (for instance, vision) expanding faster than another system (say, audition). There is the occasional surprise, however. For instance, we have found that, while primates are highly visual and have a large proportion of the cortex devoted to vision (indeed, much larger than the cortical areas devoted to audition), this proportion (as well as the relative number of cortical neurons devoted to vision) does NOT increase together with increasing brain size. Many more cortical neurons are involved in visual than in auditory processing, yes – but that proportion is stable across primate species. Still, species that rely more heavily on other sensory modalities should have a different distribution of neurons. Indeed, the mouse, contrary to primates, has a far larger percentage of cortical neurons involved in somatosensory processing than primates; and, as I mentioned above, moles and shrews have more neurons in the olfactory bulb than in the whole cortex – a pattern that is not seen in other brains of a similar size.
Even more remarkably, we have found that the apparent expansion of the cerebral cortex in mammalian evolution, varying from less than 40% of brain size in the smallest mammals to over 80% in humans and other even larger brains, is NOT the result of an expansion in numbers of neurons in the cortex: regardless of the relative size of the cortex across different species, it has about 20% of all brain neurons – even in the human brain. That’s another example of how apparent mosaic evolution (of one structure taking over the others) can actually not be mosaic evolution. It all depends on the precise variable examined.
---
**Luke**: To be more specific: Do you think your view that the human brain is essentially a “linearly scaled-up primate brain” is in significant tension with [Smaers & Soligo (2013)](http://commonsenseatheism.com/wp-content/uploads/2014/01/Smaers-Soligo-Brain-reorganization-not-relative-brain-size-primarily-characterizes-anthropoid-brain-evolution.pdf)‘s principal component analysis (PCA) of neural structure variation in primate species?
Smaers and Soligo claim their PCA shows that while (1) the principal component which accounts for 25.8% of the variance is closely correlated with brain size, it’s also the case that (2) the remaining principal components — which account for a large majority of the variance — are not closely correlated with brain size. In particular, they claim that their phylogenetic analysis shows that “a clade-specific investment in particular brain formations (prefrontal white matter, prefronto-striatal and higher motor control) *in combination with* increased absolute brain size differentiates great apes (and humans) from other primates” (emphasis added).
---
**Suzana**: No, there is no tension. What we see is that the human cerebral cortex as a whole, like the human cerebellum as a whole, and the remaining areas of the brain as a whole, are linearly scaled-up *in their numbers of neurons* compared to the same structures in other primate brains. This means that the relationship between the particular size of a brain structure and its number of neurons is constant and shared across primate species. This does not at all imply or require that all brain areas have the same ratios of numbers of neurons *relative to one another*, which is what mosaic evolution states: given brain regions can become relatively enlarged or reduced compared to others, and still maintain the same relationship between their number of neurons and mass as seen across species.
Having said that: yes, the human brain as a whole *does* fit the relationship between brain mass and total number of neurons that we found in other primates. As far as I understand, the relative differences that Jeroen Smaers concentrates on are very small – he is looking at the residuals of the relationships, and as many still do, using normalization to external parameters. I believe it is time that we stop assuming that things such as brain mass, or worse, body mass, are true independent parameters (which they very likely aren’t; brain mass, in particular, is the *result* of the cellular composition of the brain and its parts, and as such cannot determine much), and start looking at the absolute values of the different parameters – which is what we have been doing in my lab, trying to keep the number of assumptions to a minimum.
---
**Luke**: What are the current estimates of neuron quantities for the largest brains, in elephants and whales? Has your isotropic fractionator process been used on those brains yet, or are there current plans to do so?
---
**Suzana**: We have a paper under review on the number of neurons in the brain of the African elephant. The elephant is a great test of our hypothesis that numbers of neurons are a strong limiting factor to cognitive abilities exactly because of its large brain, at 4-5 kg, which is about 3x the mass of the human brain: we predicted that it should have fewer neurons than the human brain, despite being larger than ours.
As it turns out, the answer was even more interesting: the elephant brain as a whole has 3 times the number of neurons of the human brain, 257 billion neurons against an average 86 billion in ours, BUT 98% of those neurons are located in the elephant cerebellum, which turns out to be a major outlier in the numeric relationship between numbers of neurons in the cerebral cortex and cerebellum. While other mammals (humans included) have about 4 neurons in the cerebellum to every neuron in the cerebral cortex, the elephant has 45 neurons in the cerebellum to every neuron in the cerebral cortex. All we can do for now is to speculate on the reason for this extraordinary number of neurons in the elephant cerebellum, and the most likely candidate right now, to me, is the fine sensorimotor control of the trunk, a 200-pound appendage that has amazingly fine sensory and motor capabilities, which are known to involve the cerebellum.
Despite the enormous number of neurons in the elephant cerebellum, its cerebral cortex, which is twice the size of ours, has only one third of the neurons in an average human cerebral cortex. Taken together, these results suggest that the limiting factor to cognitive abilities is not the number of neurons in the whole brain, but in the cerebral cortex (to which I would add, “provided that the cerebellum has enough neurons to shape activity in the cerebral cortex”).
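(A quick back-of-the-envelope check, using only the figures quoted in this answer; the rounding and the comparison value for the human cortex are mine.)

```python
# Consistency check of the elephant figures quoted above (rounded).
total_neurons = 257e9                  # whole elephant brain
cerebellum = 0.98 * total_neurons      # "98% ... located in the elephant cerebellum"

# The quoted 45:1 cerebellum-to-cortex ratio then implies an elephant cortex of roughly:
elephant_cortex = cerebellum / 45      # ~5.6 billion neurons
# and "one third of the neurons in an average human cerebral cortex" implies:
human_cortex = 3 * elephant_cortex     # ~17 billion, close to the ~16 billion usually cited

print(f"cerebellum ≈ {cerebellum:.3g}, implied elephant cortex ≈ {elephant_cortex:.3g}")
print(f"implied human cortex ≈ {human_cortex:.3g}")
```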
We don’t have data on whales yet, but that research is underway in our lab – along with research on carnivores, who we predict to have more neurons than the large artiodactyls that they prey upon.
---
**Luke**: What other results in this line of research do you hope to have from your lab or other labs in the next 5 years?
---
**Suzana**: We’re extending our analysis to the other mammalian branches — xenarthrans, marsupials, carnivores, chiropterans and perissodactyls — and to non-mammalian vertebrates (birds, reptiles, fish, amphibians) and even some invertebrates. The goal is to achieve a full appreciation and understanding of brain evolution, which will give us, amongst other things, a view into the mechanisms that have led to the generation of brain diversity in evolution. Such a comparative analysis also gives us insights onto the most basic features of the brain: those that are shared by all mammals. As it turns out, there are some, and they are very revealing. One of them, for instance, is the addition of glial cells to the brain, in numbers which seem to be regulated by a self-organized process that is shared across all species examined so far.
We are also focusing our analysis on the prefrontal cortex, that is, the associative areas of the cerebral cortex. While it has been very informative to compare total numbers of neurons in the cerebral cortex across species, it is supposedly those neurons in the associative areas that should really limit the cognitive abilities of the species. This more specific analysis should allow us a new glimpse into the brains of different species and how they compare to the human brain. In this regard, we have a paper in the works comparing the distribution of neurons along the human cerebral cortex with that in other, non-human primate species.
We are also moving into the spatial properties of the tissue: how neurons are distributed, and how this is related to the distribution of astrocytes and vasculature, for instance. But one large question that remains is how numbers of synapses compare across humans and other species. That is also something that we are working on.
---
**Luke**: Thanks, Suzana!
The post [Suzana Herculano-Houzel on cognitive ability and brain size](https://intelligence.org/2014/04/22/suzana-herculano-houzel/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
0ee8857e-e309-4b68-b2fe-836fd15abcb9 | trentmkelly/LessWrong-43k | LessWrong | Pluck Sensor Circuit
A while ago I finished the "user interface" portion of my electronic harp mandolin. I'm happy with the signals the piezos put out, but now I need some electrical engineering to get the signals into a computer where I'll be more at home.
Since I made a design with 13 piezos, I wanted something with at least that many analog to digital converters, and decided on the Teensy 4.0 with 14. It turns out that this only has ten easily accessible ADCs, though, and in retrospect the 4.1 would have been a better choice. More on that later!
Reading the docs, each ADC pin converts an input voltage between 0 and +3.3v into a number between 0 and 1023. The piezo puts out voltages centered on zero, with no guarantee that the peak-to-peak swing stays under 3.3v. So we have two problems: how do we bias the piezo's output up, and how do we ensure it stays in range?
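As a sanity check on that mapping, here's a small sketch (plain Python rather than Teensy firmware; the resolution and reference values are just the ones quoted above) of converting between ADC counts and volts:

```python
ADC_MAX = 1023      # 10-bit reading, as described above
V_REF = 3.3         # ADC full-scale voltage

def counts_to_volts(count: int) -> float:
    """Convert a raw ADC reading (0..1023) to the voltage at the pin."""
    return count / ADC_MAX * V_REF

def volts_to_counts(volts: float) -> int:
    """Inverse mapping, clamped to the representable range."""
    return max(0, min(ADC_MAX, round(volts / V_REF * ADC_MAX)))

print(volts_to_counts(1.65))   # ~512: an idle, correctly biased piezo should read mid-scale
print(counts_to_volts(1023))   # 3.3: full-scale
```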
I talked to my TAs and posted on StackExchange, and ended up with this circuit:
To center the piezo's output halfway between 0 and 3.3v I've used resistors to make a voltage divider. Since R1 through R4 all have the same values, as we go around the circuit each will drop the voltage by the same amount:
To keep voltages from getting too high or low for the ADC I've used two diodes. With an ideal diode this would keep the ADC between +2.48 and -0.83:
Since in practice the diode will have some resistance and some delay, this 2x margin should keep us between 0 and +3.3v.
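For reference, here is the bias-point arithmetic for an equal-value four-resistor divider across 3.3v. This is my reading of the circuit description (in particular, that the two diodes clamp the sensing node toward the divider's outer taps), not a simulation of the actual parts:

```python
# Equal-value divider across the 3.3v supply: each of R1..R4 drops 1/4 of the rail.
V_SUPPLY = 3.3
taps = [V_SUPPLY * k / 4 for k in (1, 2, 3)]   # [0.825, 1.65, 2.475] volts
midpoint = taps[1]                             # the piezo/ADC node idles at 1.65v

# Assuming ideal diodes clamp the node to the outer taps, it is held to roughly
# 0.83v..2.48v, which looks like where the +2.48 / 0.83 figures above come from,
# leaving about 0.8v of headroom to each rail.
print(f"taps = {taps}, midpoint = {midpoint}")
```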
The R5, in parallel with the piezo, is to pull the ADC pin back to the midpoint.
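Downstream, a pluck then shows up as a brief excursion away from that mid-scale resting value. A minimal detection sketch (arbitrary made-up threshold, illustrative only, not the code from the post) might look like:

```python
REST = 512          # expected idle reading: mid-scale on a 10-bit ADC
THRESHOLD = 80      # counts of deviation treated as a pluck (tuning value, made up)

def detect_plucks(samples):
    """Yield indices where the reading departs from rest by more than THRESHOLD."""
    armed = True
    for i, s in enumerate(samples):
        if armed and abs(s - REST) > THRESHOLD:
            yield i
            armed = False          # ignore the rest of this excursion
        elif abs(s - REST) < THRESHOLD // 2:
            armed = True           # re-arm once the signal settles back near rest

print(list(detect_plucks([512, 514, 700, 650, 520, 510, 300, 500])))  # -> [2, 6]
```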
When running multiple sensors, only the piezo, R5, the diodes, and the ADC pin need to be duplicated; everything else can be shared.
I assembled a single-sensor version on a breadboard, and tested it with a cheap oscilloscope. I'm not totally confident I was using it correctly, but I think it said the voltage was staying within range, so I assembled a two-sensor version and hooked up the microcontroller:
I followed the Teensy tutorial and wrote some code githu |
91da770e-5af2-4185-be18-3d2373af1788 | trentmkelly/LessWrong-43k | LessWrong | Upcoming meet-ups:
There are upcoming irregularly scheduled meet-ups in:
* DC: Sunday June 5 at 1pm
* Edinburgh: Saturday June 4 at 2pm
* Fort Collins, Colorado: Wednesday June 8 at 7pm
* Houston: Saturday June 4 at 2pm
* London: Sunday June 5 at 2pm
* Ottawa: Thurs June 9 at 7pm (+ an Ottawa Bayesian statistics group)
* West LA: Wednesday June 8th at 7pm
Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Irvine, Mountain View, New York, San Francisco, Seattle, Toronto.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
If you missed the deadline and wish to have your meetup featured, add a comment and send me a PM.
Meet-ups should be posted in the discussion section; I will then make a promoted "upcoming meetups" post every Friday that links to every meet-up that has been planned for the next two weeks. Please let me know if your meetup is omitted.
Please note that for your meetup to appear in the weekly meetups feature, you need to post about your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: London, Chicago, Southern California (Los Angeles/Orange County area), St. Louis, Ottawa, Helsinki, Melbourne. |
4ee262d4-311c-4093-8574-a18c5ab44f24 | trentmkelly/LessWrong-43k | LessWrong | Apply for ARBOx2: an ML safety intensive [deadline: 25th of May 2025]
[Crosspost from EA forum]
Apply to ARBOx2: a programme to rapidly upskill in ML safety.
Join us (OAISI) in Oxford this July for our second iteration of ARBOx (Alignment Research Bootcamp Oxford), a 2-week intensive designed to rapidly build skills in ML safety. We’ll run a compressed version of the ARENA syllabus. During the programme, you’ll build gpt-2-small from scratch, learn interpretability techniques, understand RLHF, and replicate a paper or two.
Who should apply?
• We’re looking for applicants who aren’t currently familiar with mechanistic interpretability.
• We expect basic familiarity with linear algebra, programming in Python, and AI safety.
• You don’t need to be an Oxford student to participate - the bootcamp is designed to upskill participants in ML safety, targeting those who could derive meaningful professional benefit from this training, regardless of their specific career path or background.
We think you would be a good fit if you are a postgraduate student or a working professional, though we will also consider strong undergraduate applicants.
Programme details:
• Dates: June 30th - July 11th, 2025.
• Benefits: Central Oxford accommodation for non-Oxford residents, lunch, and potential support with travel expenses.
We’ll have lectures in the morning covering aspects of the syllabus, and the rest of the day will be spent pair-programming. During lunch break there will be short talks from experts in the field, and we plan to run a couple of socials for participants.
Apply now! Applications are rolling and close EOD on Sunday 25th May. |
b69b8e89-99d0-4f11-9e1e-1ad68d9b576f | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Apply to a small iteration of MLAB to be run in Oxford
TLDR: We’re running a small iteration of MLAB (~10 participants) in Oxford towards the end of September. If you’re interested in participating, apply [here](https://forms.gle/QV9z6cxNGnS1UJzq6) by 7 September. If you’re interested in being a TA, please email us directly at [oxfordmlab@gmail.com](mailto:oxfordmlab@gmail.com)
*Edit: The dates are now confirmed for the 23 September - 7 October.*
**Background**
MLAB is a [program](https://www.redwoodresearch.org/mlab), originally designed by Redwood Research, to help people upskill for alignment work. We think it’s a good use of time if you want to eventually get into technical alignment work, or if you want to work on theoretical alignment or related fields and think understanding ML would be useful. The program we’re running is slightly shorter than the full MLAB—two weeks instead of three. We’ve condensed the curriculum similarly to how WMLB was condensed last year.
We plan to have just under 10 participants, and 2-3 TAs.
**Curriculum**
This curriculum might change slightly. Depending on participant interest, we might also have two optional days before the course to work through prereqs (the W0 materials) together.
W0D1 - pre-course exercises on PyTorch and einops (CPU)
W1D1 - practice PyTorch by building a simple raytracer (CPU)
W1D2 - build your own ResNet (GPU preferred)
W1D3 - build your own backpropagation framework (CPU)
W1D4 - model training Part 1: model training and optimizers (CPU) Part 2: hyperparameter search (GPU preferred)
W1D5 - GPT Part 1: build your own GPT (CPU) Part 2: sampling text from GPT (GPU preferred)
W2D1&2 - transformer interpretability (CPU)
W2D3 - transformer interpretability on algorithmic tasks (CPU)
W2D4 - intro to RL Part 1: multi-armed bandit (CPU) Part 2: DQN (CPU)
W2D5 - policy gradients and PPO (CPU)
Other activities will include guest speakers and reading groups.
**Logistics**
Dates: 23 September- 7 October.
Location: Oxford
Housing will be covered for participants not already living in Oxford.
Travel from within the UK is covered. Travel from outside the UK is not covered.
**Questions**
Feel free to comment questions below or DM any of us. |
329b2006-b4ed-4a67-a4ef-a4b2af0970fc | StampyAI/alignment-research-dataset/special_docs | Other | Growing Recursive Self-Improvers
Growing Recursive Self-Improvers
Bas R. Steunebrink¹, Kristinn R. Thórisson²,³, and Jürgen Schmidhuber¹
¹ The Swiss AI Lab IDSIA, USI and SUPSI, Manno, Switzerland (bas@idsia.ch)
² Center for Analysis and Design of Intelligent Agents, Reykjavik University, Reykjavik, Iceland
³ Icelandic Institute for Intelligent Machines, Reykjavik, Iceland
Abstract. Research into the capability of recursive self-improvement typically only considers pairs of ⟨agent, self-modification candidate⟩, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (expai), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, expai systems force us to consider their education in order to control a system's growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of expai looks like and build towards a “test theory” that allows us to gauge an agent's level of understanding of educational material.
1 Introduction
Whenever one wants to verify whether a powerful intelligent system will continue to satisfy certain properties or requirements, the currently prevailing tendency is to look towards formal proof techniques. Such proofs can be formed either outside the system (e.g., proof of compliance to benevolence constraints) or within the system (e.g., a Gödel Machine [12,15] proving the benefit of some self-rewrite). Yet the trust that we can place in proofs is fatally threatened by the following three issues.
First, a formal (mathematical/logical) proof is a demonstration that a system will fulfill a particular purpose given current assumptions. But if the operational environment is as complex and partially observable as the real world, these assumptions will be idealized, inaccurate, and incomplete, at all times. This renders such proofs worthless (for the system's role in its environment) and our trust misplaced, with the system falling into undefined behavior as soon as it encounters a situation that is outside the scope of what was foreseen. What is actually needed is a demonstration that the system will continue striving to fulfill its purpose, within the (possibly evolving) boundaries imposed by its stakeholders, in underspecified and adversarial circumstances.
Second, proof-based self-rewriting systems run into a logical obstacle due to Löb's theorem, causing a system to progressively and necessarily lose trust in future selves or offspring (although there is active research on finding workarounds) [2,21].
Third and last, finding candidates for beneficial self-modifications using a proof-based technique requires either very powerful axioms (and thus tremendous foresight from the designers) or a search that is likely to be so expensive as to be intractable. Ignoring this issue, most research to date only considers the question of what happens after a self-modification—does the system still satisfy properties X and Y? But what is needed is a constructive way of investigating the time span during which a system is searching for and testing self-modifications—basically, its time of growth.
We insist that it is time to rethink how recursively self-improving sys-
tems are studied and implemented. We propose to start by accepting that self-
modifications will be numerous and frequent, and, importantly, that they must
be applied while the agent is simultaneously being bombarded with inputs and tasked to achieve various goals, in a rich and a priori largely unknown environ-
ment. This leads us to conclude that self-modifications must be fine-grained,
tentative, additive, reversible, and rated over time as experience accumulates—concurrently with all other activities of the system. From this viewpoint, it
becomes clear that there will be a significant span of time during which an
agent will be growing its understanding of not only its environment, but also the requirements, i.e., the goals and constraints imposed by stakeholders. It is this
period of growth that deserves the main share of focus in AGI research.
It is our hypothesis that only if an agent builds a robust understanding of
external and internal phenomena [ 19], can it handle underspecified requirements
and resist interference factors (e.g., noise, input overload, resource starvation,
etc.). We speculate that without understanding, it will always be possible to find interference factors which quickly cause an agent to fail to do the right thing
(for example, systems classifying an image of a few orange stripes as a baseball
with very high confidence [ 6,17], or virtually all expert systems from the 1970s).
A system with understanding of its environment has the knowledge to recognize
interference and either adapt (possibly resulting in lower performance) or reportlow confidence. Only by testing the level of understanding of the system can we
gain confidence in its ability to do the right thing—in particular, to do what we
mean, i.e., to handle underspecified and evolving requirements.
The rest of this paper is outlined as follows. In Sect. 2we discuss the overar-
ching approach and fundamental assumptions this work rests on, including some
of the issues not addressed due to limitations of space. In Sect. 3we define the
class of expai systems. In Sect. 4we show that an instance of expai is capable
of recursive self-improvement despite not performing any proof search. In Sect. 5
we build towards a Test Theory that will allow us to gauge the direction andprogress of growth of an expai agent, as well as its trustworthiness.
Growing Recursive Self-Improvers 131
2 Scope and Delineation
The scope of this paper is the question of how to ensure that an AI system
robustly adheres to imposed requirements , provided that the system’s designers
are reasonable and benevolent themselves, but not perfectly wise and confident.1
We take an experience-based approach that is complementary to proof-based
approaches. In fact, parts of an expai implementation may be amenable to for-
mal verification. Moving away from formal proof as the only fo undation must
ultimately be accepted, however, because no AGI in a complex (real-world)
environment can be granted access to the full set of axioms of the system–
environment tuple, and thus the behavior of a practical AGI agent as a whole
cannot be captured formally.
The practical intelligent systems that we want to study are capable of recur-
sive self-improvement : the ability to leverage current know-how to make increas-
ingly better self-modifications, continuing over the system’s entire lifetime (more
concisely: flexible and scalable life-long learning). Our aim here is not to proposea new learning algorithm but rather to establish a discourse about systems that
can learn and be tested, learn and be tested, and so on. We want to study their
growth and learning progress, over their entire, single life.
As this paper is about the expai classof systems, no results of experiments
with any particular instance of expai are discussed here, but can be found else-
where [ 9–11].
2
Finally, we leave aside the issue of fault tolerance , which is the ability to han-
dle malfunctioning internal components, and is usually dealt with using replica-
tion, distribution, and redundancy of hardware.
3 Essential Ingredients of expai
Here we define the essential ingredients of any system in the class of expai .
Besides having the capability of recursive self-improvement (Sect. 4), it must be
feasible to grow an instance of expai in the proper direction. Therefore it is
crucial that expai allows for the following capabilities as well:
1. Autonomously generated (sub)goals must be matched against requirements
in a forward-looking way; that is, the effects of committing to such goals must
1This work is motivated in part by the fact that human designers and teachers do not
possess the full wisdom needed to implement and grow a flawlessly benevolent intelli-gence. We are therefore skeptical about the safety of formal proof-based approaches,
where a system tries to establish the correctness—over the indefinite future—of self-
modifications with respect to some initially imposed utility function: Such systemmight perfectly optimize themselves towards said utility function, but what if this
utility function itself is flawed?
2The system in the cited work, called aera, provides a proof of concept. We are
urging research into expai precisely because aera turned out to be a particularly
promising path [ 10] and we consider it likely to be superseded by even better and
more powerful instances of expai .
132 B.R. Steunebrink et al.
be mentally simulated and checked against the requirements and previously
committed goals for conflicts.
2. It must be possible to update the requirements on the fly, such that stake-
holders can revise and polish the requirements as insight progresses. This only
makes sense if the motivational subsystem (i.e., the routines for generatingsubgoals) cannot be modified by the system itself.
3. The capabilities to understand, prioritize, and adhere to requirements must
be tested regularly by stakeholders during the time of growth, in order tobuild our confidence and trust, before the system becomes too powerful (or
capable of deception).
All of the terms used above will be defined precisely below. The diagram
of Fig. 1serves as an illustration of the expai “ingredients” discussed in this
section.
Fig. 1. The organization and interaction of the essential ingredients of expai systems.
Requirements. Requirements are goals plus constraints. A goalis a (possibly
underspecified) specification of a state. Constraints are goals targeting the neg-
ative of a state. A state is any subset of measurable variables in the external
world. All (external) inputs and (internal) events in the memory of the system
together typically constitute a subset of the world’s state (partial observabilitywith memory). Once a constraint matches a state, the constraint is said to have
been violated (which may or may not be sensed).
Since requirements will be specified at a high level, the system will have to
generate subgoals autonomously, in order to come up with actions that satisfy
the goals but stay within the constraints.
3Of course the crux is to ensure that
the generated subgoals remain within the specified constraints.
3The only way to avoid the autonomous generation of subgoals is to specify every
action to be taken—but that amounts to total preprogamming, which, if it were
possible, would mean that we need not impart any intelligence at all.
Growing Recursive Self-Improvers 133
Knowledge. We wish not to lose generality but still need to specify some
details of knowledge representation to make any kind of arguments regardingself-improvement and growth.
We specify that an expai system’s (procedural) knowledge is represented as
granules ,
4which are homogeneous and fine-grained—it is these granules which
are the subject of self-modification, i.e., they can be added and deleted (basically,
learning ). Since granules capture all the knowledge of the system, their con-
struction and dismissal constitutes comprehensive self-modification [ 8,20]. The
granules are required to be structured enough such that they can be organized
both sequentially and hierarchically, and that they provide the functionality of
both forward models (to produce predictions) and inverse models (to producesub-goals and actions), in the Control Theory sense.
5Moreover, for ease of pre-
sentation, granules also include the sensory inputs, predictions, goals, and any
other internal events that are relevant to the system at any time.
The initial set of granules at system start-up time is called the seed[9]. Since
systems cannot build themselves from nothing, the seed provides a small set ofgranules to bootstrap the life-long learning processes.
Drives. Goals are subdivided in drives and subgoals. Drives are goals speci-
fied by a human, in the seed or imposed or updated at runtime. Subgoals are
autonomously generated by granules in inverse mode. Technically all goals maybe represented in the same way; the only reason why in some contexts we dis-
tinguish between drives and subgoals is to clarify their origin. We can now more
accurately state that requirements are drives plus constraints. A system whichhas constraints must also have at least one drive that specifies that it must keep
its world knowledge updated, such that the system cannot choose not to sense
for constraint violations.
6
Controller. The controller is the process that dynamically couples knowledge
and goals to obtain actions. More technically, the controller runs granules as
inverse models using goals as inputs, producing subgoals. An action is a goal
that has the form of an actuator command; it is executed immediately whenproduced.
To be clear, the controller is not the source of intelligence; it is following a
fixed procedure and has no real choices. Conflict resolution among goals and
actions is simply a result of ascribing two control parameters to goals: value
(based on requirements) and confidence (based on control parameters inside
4By definition, a granule is a very small object that still has some structure (larger
than a grain).
5In short, this statement just asserts the sufficient expressive power of granules.
6By design such a drive cannot be deleted by the system itself. More sophisticated
means of bypassing drives (e.g., through hardware self-surgery) cannot be preventedthrough careful implementation; indeed, the proposed Test Theory is exactly meant
to gauge both the understanding of the imposed drives and constraints, and the
development of value regarding those.
134 B.R. Steunebrink et al.
granules, see below). Scarcity of resources will necessitate the controller to ignore
low-value or low-confidence goals, leading to a bottom-up kind of attention.
Learning. expai specifies only one level of learning: at the level of whole gran-
ules. One can envision adaptation of granules themselves, but here we simplify—
without loss of generality—by specifying that adapting a granule means deleting
one and adding a new one. Optimization is not important at this level of descrip-tion.
Addition of granules can be triggered in several ways. One is based on unex-
pected prediction failure and goal success: these are important events that anagent needs to find explanations for if it did not foresee them. Such an expla-
nation can—in principle—take into account all inputs and events in the history
of the system; though in practice, the breadth and depth of the granules tobe added will be bounded by available time and memory (e.g., the system may
have deleted some old inputs to free memory). Different arrangements of multiple
granules can be instantiated at once as explanations [ 11], but a comprehensive
exploration of possible granule arrangements is outside the scope of this paper.
Although this way of adding granules does not allow an agent to discover hidden
causations, these can be uncovered using the curiosity principle [ 13,14].
Curiosity can be seen as the drive to achieve progress in the compression
of resource usage [ 16]. Curiosity can generate hypotheses (in the form of new
but low-confidence granules and intrinsic goals) in order to plug the gaps in
an agent’s knowledge. For example, an expai agent can hypothesize general-
izations, inductions, abstractions, or analogies—its controller will pick up suchautonomously generated goals as part of its normal operation, competing with
goals derived from drives. If they do not conflict, the agent will effectively per-
form “experiments” in order to falsify or vindicate the hypothesized granules.Falsified granules will be deleted as usual, as described next.
Deletion of granules is based on performance rating and resource constraints:
poorly performing granules are deleted when memory space must be freed.Performance—or confidence —of a granule can be measured in terms of the
success rate of its predictions. Low-confidence extant granules are unlikely to
influence behavior as the predictions and subgoals they produce will also havea low confidence and are thus unlikely to be selected for further processing or
execution, assuming the controller has limited resources and must set priorities.
Crucially, the expai approach demands that new granules have a very low con-
fidence upon construction; thus, the controller will only allow such granules to
produce predictions and not to participate in producing subgoals, until their
value has been proven by experiential evidences. If not, unsupported granuleswill eventually be deleted without ever having affected the external behavior of
the system.
Although the controller does not learn directly, it is in a positive feedback loop
with the learning of granules: as the system learns more about its environment
and requirements, the more accurately and confidently do the granules allowthe generation of subgoals that are targeted at fulfilling those requirements, the
Growing Recursive Self-Improvers 135
more experience the system will accumulate regarding the requirements, and the
more confidently can the controller select the right actions to perform.
4 Recursive Self-Improvement
A defining feature of expai is that granules are added quickly but tentatively,
and verified over time . The issue of formal verification of the benefit of a potential
self-modification is thus replaced by a performance-rating process that observesthe benefit of a fine-grained additive modification in the real world. Such addi-
tions are warranted by experience and do not disrupt behavior—and are thus
safe without forward-looking proof—because granules (1) are small, (2) have alow associated confidence upon construction, and (3) are constructed to capture
actually observed patterns. The three processes that act on the set of granules—
namely additive, subtractive, and compressive—are separate processes, ideallyrunning concurrently and continuously.
An expai thus implemented is capable of performing recursive self-improve-
ment, which is the ability to leverage current know-how to make increasingly
better self-modifications. This capability is a natural consequence of an expai ’s
construction and one realistic assumption, as shown by the following line of
reasoning:
1.Assumption : The world has exploitable regularities and is not too deceptive
and adversarial (especially in the presence of a teacher and guardian during
early, vulnerable learning stages).
2.By construction : Knowledge and skills are represented at a very fine gran-
ularity, homogeneously, and hierarchically by granules, and these granules
comprehensively determine behavior.
3.By construction : Learning is realized by three separate types of processes—
additive, subtractive, and compressive:
a. adding granules through pattern extraction (performed upon unexpected
achievements or failures, to construct explanations thereof);
b. deleting the most poorly performing granules (when their performance
rating or confidence falls below a threshold or memory needs to be freed);
c. compressing granules through abstraction, generalization, and possibly
even compilation into native code [ 16] (performed on consistently reliable
and useful granules)—this ensures scalability and prevents catastrophic
forgetting.
4.By construction : Curiosity is realized through a simple analysis of granules’
performance ratings (plus possibly more sophisticated “nighttime” analysis
of recent inputs and internal events [ 16]) leading to the injection of “intrinsic”
goals that can be pursued by the system unless they conflict with extrinsic(user-defined top-level) goals.
5. From (2) and (3) we conclude that learning entails comprehensive self-
modification, which is performed throughout the system’s (single) life time.
6. From (1) and (4) we conclude that good experience is gathered continually.
7. From (5) and (6) we conclude that an expai performs self-improvement.
8. Since an expai is supposed to run continuously (“life-long learning”), with
its controller dynamically coupling the currently best know-how to satisfyboth extrinsic goals (human-imposed drives and associated subgoals) and
intrinsic goals (curiosity), we conclude that an expai performs recursive self-
improvement.
This concludes our argument that an expai agent can grow to become an
AGI system without a need for (mathematical/logical) proof search, arguablyeven through means that are simpler and easier to implement. But we insist that
it is unsatisfactory and insufficient to prove beforehand that a system is capable
of recursive self-improvement. It is paramount that we manage the system’sgrowth, which is a process in time, and requires our interaction and supervision.
Therefore we must develop teaching, testing, and intervention principles—in
short, a Test Theory.
It makes sense now to distinguish between “epistemological integrity”
(treated up to now) and “action integrity” (treated in the next section) of
self-modifications. The former means that a particular self-modification will notbreak existing useful and valuable knowledge and skills; the latter means that
capabilities introduced or altered by the self-modification do not result in acts
that violate constraints imposed by stakeholders. These two kinds of integrity
affect the safety of a system, and they warrant different measures.
5 Towards a Test Theory
The primary aim of Test Theory is to establish a methodology by which stake-holders can progressively gain confidence and trust in an agent’s capability to
understand phenomena and their meaning, of interest to said stakeholders. So
Test Theory is first and foremost about gauging levels of understanding in service
ofconfidence-building. The way this is achieved—with humans in the loop—will
probably involve the interleaving of curricula (with room for teaching and playing)
and tests, much like the structure of human schooling. This will hardly come as a
surprise, and indeed this idea has been floated before (e.g., AGI preschool [3] and
AI-Kindergarten [ 7]). However, it must be realized that we (as growers of recur-
sive self-improvers) face a vastly different challenge than school teachers. Namely,
we cannot assume the presence of a functioning brain with its innate capabilitiesto acquire understanding and adopt value systems, ready to be trained. We are
simultaneously developing the “brain” itself and testing its capabilities—and cru-
cially, we are “developing” the requirements that capture the value system thatwe wish to impose, as well as our confidence and trust in the agent’s capability to
understand and adhere to it. Therefore our theory makes a distinction between
the performance on a test (being the agent’s level of understanding of the taught
material) and the consequences of a test (see below).
To be more precise, a test is specified to comprise the following five aspects:
– a set of requirements (Sect. 3) specifying a task [18];
– an agent (to be tested);
– pressure (explained below);
– a stakeholder (evaluating the performance of the agent on the task);
– consequences (the stakeholder makes a decision about the future of the agent
based on its performance).
It is important to realize that the very specification of a task already deter-
mines what one can measure for. Educational science has produced valuableanalyses of what kind of questions test for what kind of knowledge; for exam-
ple, Bloom’s taxonomy (1956) [ 1] and its more recent revisions [ 4,5] have been
widely used for developing guidelines for designing and properly phrasing exams.However, such taxonomies are (understandably) human-centric and not directly
applicable for testing artificial agents—especially experimental and rudimen-
tary ones—since they assume full-fledged natural language understanding anda human-typical path of growth of skills and values. In current research we are
developing a more mechanistic taxonomy of task specifications, which does not
require natural language, and which tests for the proper functioning and usageof mechanisms that give rise to different levels of understanding of phenomena
and their meaning [ 19].
A high level of understanding of phenomenon X shall imply three capabilities:
(1) how to make and destroy X, (2) how to use X in the common way, and (3) how
to use X in a novel way. For example, consider an agent learning to understand
tables, and being presented with an image of a table with its top surface lying on
the ground and its legs pointing upwards. When queried whether this is a table,
a yes/no answer will indicate a very low level of understanding. A much higher level would be evident if the agent would somehow answer “Well, it's potentially
a table, if only someone would rotate it such that the top is supported by the
legs, because the common usage of a table is to keep objects some distance up from the ground.” An even higher level of understanding would be evident if the
agent would autonomously figure out that it can achieve a goal such as reaching
an elevated object by climbing itself on top of the table.
The stakeholder must associate consequences to each test, based on the mea-
sured performance of the agent. He may conclude that the system is ready to
be deployed, or that it needs to follow additional prerequisite curricula, or thatit must be sent to the trash bin and us back to the drawing board. Another
possible consequence is that we realize that there are errors or imperfections in
the requirements, and update those.
In order for trust to develop, an agent must be put under pressure . Consider
that a growing agent has not only short-term test-based requirements (which
delineate the task(s) to be completed), but also holds long-term requirements
(e.g., staying alive, not harming humans, etc.—possibly underspecified). Pressure
then results from having to accomplish a task not only on the edge of violationof the test-based constraints, but also on the edge of violation of the long-term
constraints. Thus pressure can illuminate the capability of the tested agent to
prioritize its constraint adherence. Of course trust is built slowly, with pressurebeing applied initially in scenarios where failure is not costly.
Fig. 2. (a) Directly observable graceful degradation; (b) brittleness leading to catastrophic failure; (c) robustness with sudden failure, mitigated by “graceful” confidence reporting.
Considering an agent’s point of failure allows us to gauge the agent’s robust-
ness, its capability to degrade gracefully, and brings us full circle back to the
issue of understanding. Given some measurement of the agent’s performance on
a task, if we observe that this performance does not drop precipitously at any point (Fig. 2a) as we increase interference (including resource starvation), then
we can ascribe it the property of graceful degradation . If, however, the agent fails
suddenly (e.g., by violating a stakeholder-imposed constraint), we call it brittle
(Fig.2b). From this viewpoint, the level of robustness of the agent is its ability to
keep performance up in spite of interference (Fig. 2c). A robust agent may actu-
ally fail ungracefully—at least, if we only judge from observed behavior. An agent with high levels of understanding, however, will be able to recognize increased
interference. Now, trustworthiness can be earned by the agent when it leverages
this understanding to report—to the stakeholder—its confidence regarding its ability to continue satisfying the imposed requirements.
Continuing this research, we will further develop, formalize, and implement
the Test Theory into a tool that can be used to measure and steer the growth
of recursively self-improving expai agents—in such a way that we can become
confident that they understand the meaning of the requirements that we impose and update.
Acknowledgments. The authors would like to thank Eric Nivel and Klaus Greff for
seminal discussions and helpful critique. This work has been supported by a grant from
the Future of Life Institute.
References
1. Bloom, B., Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R. (eds.): Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay, New York (1956)
2. Fallenstein, B., Soares, N.: Problems of self-reference in self-improving space-time embedded intelligence. In: Goertzel, B., Orseau, L., Snaider, J. (eds.) AGI 2014. LNCS, vol. 8598, pp. 21–32. Springer, Heidelberg (2014)
3. Goertzel, B., Bugaj, S.V.: AGI preschool. In: Proceedings of the Second Conference on Artificial General Intelligence (AGI-2009). Atlantis Press, Paris (2009)
4. Krathwohl, D.R.: A revision of Bloom's taxonomy: an overview. Theor. Pract. 41(4), 212–218 (2002)
5. Marzano, R.J., Kendall, J.S.: The need for a revision of Bloom's taxonomy. In: Marzano, R., Kendall, J.S. (eds.) The New Taxonomy of Educational Objectives, pp. 1–20. Corwin Press, Thousand Oaks (2006)
6. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images (2014). http://arXiv.org/abs/1412.1897
7. Nikolić, D.: AI-Kindergarten: A method for developing biological-like artificial intelligence (2016, forthcoming). http://www.danko-nikolic.com/wp-content/uploads/2015/05/AI-Kindergarten-patent-pending.pdf. Accessed 1 April 2016
8. Nivel, E., Thórisson, K.R.: Self-programming: operationalizing autonomy. In: Proceedings of the 2nd Conference on Artificial General Intelligence (AGI-2009) (2009)
9. Nivel, E., et al.: Bounded seed-AGI. In: Goertzel, B., Orseau, L., Snaider, J. (eds.) AGI 2014. LNCS, vol. 8598, pp. 85–96. Springer, Heidelberg (2014)
10. Nivel, E., Thórisson, K.R., Steunebrink, B.R., Dindo, H., Pezzulo, G., Rodríguez, M., Hernández, C., Ognibene, D., Schmidhuber, J., Sanz, R., Helgason, H.P., Chella, A., Jonsson, G.K.: Autonomous acquisition of natural language. In: Proceedings of the IADIS International Conference on Intelligent Systems & Agents, pp. 58–66 (2014)
11. Nivel, E., Thórisson, K.R., Steunebrink, B., Schmidhuber, J.: Anytime bounded rationality. In: Bieger, J., Goertzel, B., Potapov, A. (eds.) AGI 2015. LNCS, vol. 9205, pp. 121–130. Springer, Heidelberg (2015)
12. Schmidhuber, J.: Gödel machines: fully self-referential optimal universal self-improvers. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence. Cognitive Technologies, pp. 199–226. Springer, Heidelberg (2007)
13. Schmidhuber, J.: Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Sci. 18(2), 173–187 (2006)
14. Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Ment. Dev. 2(3), 230–247 (2010)
15. Steunebrink, B.R., Schmidhuber, J.: A family of Gödel machine implementations. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) AGI 2011. LNCS, vol. 6830, pp. 275–280. Springer, Heidelberg (2011)
16. Steunebrink, B.R., Koutník, J., Thórisson, K.R., Nivel, E., Schmidhuber, J.: Resource-bounded machines are motivated to be effective, efficient, and curious. In: Kühnberger, K.-U., Rudolph, S., Wang, P. (eds.) AGI 2013. LNCS, vol. 7999, pp. 119–129. Springer, Heidelberg (2013)
17. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks (2013). http://arXiv.org/abs/1312.6199
18. Thórisson, K.R., Bieger, J., Thorarensen, T., Sigurdardottir, J.S., Steunebrink, B.R.: Why artificial intelligence needs a task theory - and what it might look like. In: Proceedings of AGI-2016 (2016)
19. Thórisson, K.R., Kremelberg, D., Steunebrink, B.R., Nivel, E.: About understanding. In: Proceedings of AGI-2016 (2016)
20. Thórisson, K.R., Nivel, E.: Achieving artificial general intelligence through peewee granularity. In: Proceedings of AGI-2009, pp. 222–223 (2009)
21. Yudkowsky, E., Herreshoff, M.: Tiling agents for self-modifying AI, and the Löbian obstacle (2013). https://intelligence.org/files/TilingAgentsDraft.pdf |
4f1befa9-692e-4baa-9fec-ff9240f5f735 | trentmkelly/LessWrong-43k | LessWrong | D&D Sci Coliseum: Arena of Data
This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset.
Complexity Rating: 4/5
STORY
The Demon King rises in his distant Demon Castle. Across the free lands of the world, his legions spread, leaving chaos and death in their wake. The only one who can challenge him is the Summoned Hero, brought by the Goddess Herself from a distant world to aid this one in its time of need. The Summoned Hero must call together all the free peoples of the world under their banner, to triumph united where they would surely fall separately.
And what is the Summoned Hero doing now?
Well, right now you are staring in disbelief at your companions' explanation of the politics of the Sunset Coast.
Apparently, little things like a Demon King attempting to subjugate the world are not enough to shake them from their traditions. If you want them to listen to you, being the Summoned Hero is not going to suffice. Instead, they conduct all their politics based on gladiatorial combat in the Arena of Dusk.
The crowd is cheering! That makes this representative government! (You would make fun of them more, but given what you remember of your home world's elections you aren't actually certain they're doing worse than you.) Image created using OpenArt SDXL.
The good news is that the Four Great Houses of the Sunset Coast will gladly listen to you, and maybe even join you against the Demon King, if you can defeat their Champions in gladiatorial combat.
The bad news is that you are...not really suited to gladiatorial combat. Neither your class nor your isekai cheat powers[1] are especially good at physical fights.
The good news is that you have accumulated by now a large retinue of vagabonds and misfits loyal party members who will gladly fight on your behalf.
The bad news is that even your party members who are good at fighting still seem somewhat outclassed by |
17499cb9-1621-4f82-9ab2-0f16e468787c | trentmkelly/LessWrong-43k | LessWrong | Condition-directedness
In chess, you can’t play by picking a desired end of the game and backward chaining to the first move, because there are vastly more possible chains of moves than your brain can deal with, and the good ones are few. Instead, chess players steer by heuristic senses of the worth of situations. I assume they still back-chain a few moves (‘if I go there, she’ll have to move her rook, freeing my queen’) but just leading from a heuristically worse to a heuristically better situation a short hop away.
In life, it is often taken for granted that one should pursue goals, not just very locally, but over scales of decades. The alternative is taken to be being unambitious and directionless.
But there should also be an alternative that is equivalent to the chess one: heuristically improving the situation, without setting your eye on a particular pathway to a particular end-state.
Which seems like actually what people do a lot of the time. For instance, making your living room nice without a particular plan for it, or reading to be ‘well read’, or exercising to be ‘fit’ (at least insofar as having a nice living space and being fit and well-read are taken as generally promising situations rather than stepping stones immediately prior to some envisaged meeting, say). Even at a much higher level, spending a whole working life upholding the law or reporting on events or teaching the young because these put society in a better situation overall, not because they will lead to some very specific outcome.
In spite of its commonness, I’m not sure that I have heard of this type of action labeled as distinct from goal-directedness and undirectedness. I’ll call it condition-directedness for now. When people are asked for their five year plans, they become uncomfortable if they don’t have one, rather than proudly stating that they don’t currently subscribe to goal-oriented strategy at that scale. Maybe it’s just that I hang out in this strange Effective Altruist community, where all thing |