id | source | formatted_source | text
|---|---|---|---|
a1729263-21df-4a11-a7e0-4f89ad80d59a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Mountain View: Board Game Night
Discussion article for the meetup : Mountain View: Board Game Night
WHEN: 09 April 2013 07:30:00PM (-0700)
WHERE: 278 Castro St, Mountain View, CA
It's been four whole weeks since we last played board games! How has this happened!?
The Quixey office has quite a few board games, so we won't be short, but you're more than welcome to bring anything else you might like to play. I will ensure that both Zendo and The Resistance are available. :)
----------------------------------------
If you're in the San Francisco Bay area and reading this, consider joining the Bay Area Less Wrong mailing list. Regular meetups in Mountain View and Berkeley are announced and discussed there, as are other events of interest to the local community.
Entering the Quixeyplex: Our doors will be locked in the evening, and it's not always the case that someone's watching them for passersby. If you need to be let in, you can call me at 608.698.2959.
Discussion article for the meetup : Mountain View: Board Game Night |
42f2623b-f82d-40ca-ab9e-3f655f7fb40c | trentmkelly/LessWrong-43k | LessWrong | Allegory On AI Risk, Game Theory, and Mithril
“Thorin, I can’t accept your generous job offer because, honestly, I think that your company might destroy Middle Earth.”
“Bifur, I can tell that you’re one of those “the Balrog is real, evil, and near” folks who thinks that in the next few decades Mithril miners will dig deep enough to wake the Balrog causing him to rise and destroy Middle Earth. Let’s say for the sake of argument that you’re right. You must know that lots of people disagree with you. Some don’t believe in the Balrog, others think that anything that powerful will inevitably be good, and more think we are hundreds or even thousands of years away from being able to disturb any possible Balrog. These other dwarves are not going to stop mining, especially given the value of Mithril. If you’re right about the Balrog we are doomed regardless of what you do, so why not have a high paying career as a Mithril miner and enjoy yourself while you can?”
“But Thorin, if everyone thought that way we would be doomed!”
“Exactly, so make the most of what little remains of your life.”
“Thorin, what if I could somehow convince everyone that I’m right about the Balrog?”
“You can’t because, as the wise Sinclair said, ‘It is difficult to get a dwarf to understand something, when his salary depends upon his not understanding it!’ But even if you could, it still wouldn’t matter. Each individual miner would correctly realize that just him alone mining Mithril is extraordinarily unlikely to be the cause of the Balrog awakening, and so he would find it in his self-interest to mine. And, knowing that others are going to continue to extract Mithril means that it really doesn’t matter if you mine because if we are close to disturbing the Balrog he will be awoken.”
“But dwarves can’t be that selfish, can they?”
“Actually, altruism could doom us as well. Given Mithril’s enormous military value many cities rightly fear that without new supplies they will be at the mercy of cities that get |
b7a5ab3e-7988-4697-88d5-69b2268c9a6b | trentmkelly/LessWrong-43k | LessWrong | Mathematics as a lossy compression algorithm gone wild
This is yet another half-baked post from my old draft collection, but feel free to Crocker away.
There is an old adage from Eugene Wigner known as the "Unreasonable Effectiveness of Mathematics". Wikipedia:
the mathematical structure of a physical theory often points the way to further advances in that theory and even to empirical predictions.
The way I interpret it is that it is possible to find an algorithm to compress a set of data points in a way that is also good at predicting other data points, not yet observed. In yet other words, a good approximation is, for some reason, sometimes also a good extrapolation. The rest of this post elaborates on this anti-Platonic point of view.
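As a toy numerical illustration of compression-that-extrapolates (a minimal sketch; the target function, noise level, and polynomial degree are arbitrary choices of mine, not anything from Wigner):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observe" 20 noisy data points from some unknown process.
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, x_train.size)

# Lossily compress the 20 points into 6 polynomial coefficients.
coeffs = np.polyfit(x_train, y_train, deg=5)

# The compressed description also predicts points it was never fit to.
x_new = np.linspace(0.025, 0.975, 10)
y_pred = np.polyval(coeffs, x_new)
y_true = np.sin(2 * np.pi * x_new)
print("mean error on unseen points:", np.abs(y_pred - y_true).mean())
```

Twenty points become six numbers, and the six numbers still say something true about data they were never fit to: a good approximation doubling as a good extrapolation.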
Now, this point of view is not exactly how most people see math. They imagine it as some near-magical thing that transcends science and reality and, when discovered, learned and used properly, gives one limited powers of clairvoyance. While only the select few wizards have the power to discover new spells (they are known as scientists), the rank and file can still use some of the incantations to make otherwise impossible things happen (they are known as engineers).
This metaphysical view is colorfully expressed by Stephen Hawking:
What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?
Should one interpret this as if he presumes here that math, in the form of "the equations" comes first and only then there is a physical universe for math to describe, for some values of "first" and "then", anyway? Platonism seems to reach roughly the same conclusions:
Wikipedia defines platonism as
the philosophy that affirms the existence of abstract objects, which are asserted to "exist" in a "third realm distinct both from the sensible external w |
d3d10e52-56b3-47dc-b7ab-2f69a4d925c2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Optimisation Measures: Desiderata, Impossibility, Proposals
**Previously:** [Towards Measures of Optimisation](https://www.alignmentforum.org/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3)
When thinking about optimisation processes it is seductive to think in information-theoretic terms.
Is there some useful measure[[1]](#fnprhbwdnnxc) of 'optimisation' we can derive from utility functions or preference orderings, just as Shannon derived '[information](https://en.wikipedia.org/wiki/Information_content)' from probability distributions? Could there be a 'mathematical **theory of optimisation**' that is **analogous** to **Shannon's theory of information**? In this post we exhibit **negative evidence** that this point of view is a fertile direction of inquiry.
In the [last post](https://www.alignmentforum.org/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3) we reviewed proposals in that direction, most notably Yudkowsky's [original idea](https://www.alignmentforum.org/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power) using preference orderings, and suggested some informal desiderata. In this post we state our desiderata formally, and show that they can't all be satisfied at once. We exhibit a new proposal from Scott Garrabrant which relaxes one desideratum, and revisit the previous proposals to see which desiderata they satisfy.
Setup
-----
Recall our setup: we're choosing an action from a set $A$
to achieve an outcome in a set $\Omega$. For simplicity, we assume that $\Omega$ is finite. Denote the set of probability distributions on $\Omega$ by $\Delta\Omega$. We have a default distribution $p \in \Delta\Omega$, which describes the state of affairs before we optimise, or in a counterfactual world where we *don't* optimise, and action distributions $p_a \in \Delta\Omega$ for each $a \in A$, which describe the state of affairs if we do. Our preferences are described by a utility function $u: \Omega \to \mathbb{R}$. Let $U$ denote the set of utility functions.
In the previous post we considered random variables $OP(p,u)(x)$, which measure the optimisation entailed by achieving some *outcome* $x$, given a utility function $u$ and base distribution $p$. We then took an expectation over $p_a$ to measure the optimisation entailed by achieving some *distribution over outcomes*, i.e. we defined $OP(p, p_a, u) = \mathbb{E}_{p_a} OP(p,u)(x)$.
In this post we state our desiderata directly over $OP(p, p_a, u)$ instead. For more on this point see the discussion of the **convex-linearity** desideratum below.
Desiderata
----------
Here are the desiderata we originally came up with for $OP: \Delta\Omega \times \Delta\Omega \times U \to \mathbb{R} \cup \{-\infty, \infty\}$. They should hold for all $p, p_a, p_b \in \Delta\Omega$ and for all $u \in U$. Explanations below.
1. **(Continuity)**
$OP$ is continuous[[2]](#fnhra5fploc7) in all its arguments.
2. **(Invariance under positive scaling).**
$OP(p, p_a, \alpha u) = OP(p, p_a, u)$ for all $\alpha \in \mathbb{R}_{\geq 0}$.
3. **(Invariance under translation).**
$OP(p, p_a, u) = OP(p, p_a, u + \beta)$ for all $\beta \in \mathbb{R}$.
4. **(Zero for unchanged expected utility).**
$OP(p, p_a, u) = 0$ whenever $\mathbb{E}_p(u) = \mathbb{E}_{p_a}(u)$.
5. **(Strict monotonicity).**
$OP(p, p_a, u) > OP(p, p_b, u)$ whenever $\mathbb{E}_{p_a}(u) > \mathbb{E}_{p_b}(u)$.
6. **(Convex-linearity).**
$OP(p, \lambda p_a + (1-\lambda) p_b, u) = \lambda\, OP(p, p_a, u) + (1-\lambda)\, OP(p, p_b, u)$ for all $\lambda \in [0,1]$.
7. **(Interesting and not weird).**
See below.
**Continuity** just seems reasonable.
We want **invariance under positive scaling** and **invariance under translation** because a [von Neumann–Morgenstern utility function](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) is only defined up to an equivalence class $[u] = [\alpha u + \beta]$ for $\alpha \in \mathbb{R}_{\geq 0}$ and $\beta \in \mathbb{R}$ (we denote this equivalence relation by $\sim$ in the remainder of the post). One of our motivations for this whole endeavour is to be able to talk about how much a utility function is being optimised *without* having to choose a specific $\alpha$ and $\beta$.
The combination of **zero for unchanged expected utility** and **strict monotonicity** means that $OP(p, p_a, u)$ follows the sign of $\mathbb{E}_{p_a}(u) - \mathbb{E}_p(u)$. Increases in expected utility count as positive optimisation, decreases count as negative optimisation, and when expected utility is unchanged no optimisation has taken place.
**Convex-linearity** holds if and only if $OP(p, p_a, u)$ can be rewritten as the expectation under $p_a$ of an underlying measure of the optimisation of *outcomes* $x$ rather than *distributions* $p_a$. This is intuitively desirable since we're pursuing an analogy with information theory, where $OP(p, p_a, u)$ corresponds to the entropy of a random variable, and $OP(p, u)(x)$ corresponds to the information content of its outcomes. This is the desideratum that Scott's proposal violates in order to satisfy the rest.
**Interesting and not weird** is an informal catch-all desideratum.
**Not weird** is inspired by the problem we ran across in [the previous post](https://www.lesswrong.com/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3) when trying to redefine Yudkowsky's proposed optimisation measure for utility functions instead of preference orderings: you could make $OP$ arbitrarily low by adding some $x$ to $\Omega$ with zero probability under both $p$ and $p_a$ and large negative utility. This kind of brittleness counts against a proposed $OP$.
**Interesting** is the most important desideratum and the hardest to formalise. $OP$ should be a derivative of utility functions that might plausibly lead us to new insights that are much harder to access by thinking about utility functions directly. If we compare to derivatives of probability, it should be more like information than like odds ratios. $OP$ definitely shouldn't just recapitulate expected utility.
Impossibility
--------------
It turns out our first 6 desiderata sort of just recapitulate expected utility.
In particular, they're equivalent to saying that $OP(p, p_a, u)$ just picks out some representative $v$ of each equivalence class of utility functions and reports its expectation under $p_a$. The zero point of the representative must be set so that $\mathbb{E}_p(v) = 0$ (and so we'll get different representatives for different $p$), but other than that, any representative defined continuously in terms of $p$ and $u$ will do.
**Proposition 1:** *Desiderata **1**-**6** are satisfied if and only if* $OP(p, p_a, u) = \mathbb{E}_{p_a}\mathrm{rep}(p, [u])$, *for some continuous function* $\mathrm{rep}: \Delta\Omega \times U/{\sim} \to U$ *with* $\mathrm{rep}(p,[u]) \sim u$ *and* $\mathbb{E}_p\,\mathrm{rep}(p,[u]) = 0$ *for all* $p \in \Delta\Omega$ *and* $u \in U$.
**Proof:** See Appendix.
For example, we can translate $u$ by $-\mathbb{E}_p(u)$ and rescale by the utility difference between any two points, i.e. set $\mathrm{rep}(p,[u]) = \frac{u(x) - \mathbb{E}_p(u)}{u(x_1) - u(x_2)}$ for some fixed $x_1, x_2 \in \Omega$[[3]](#fno8musspqotr). The translation gets us the right zero point, and the scaling ensures the function is well defined, i.e. it sends all equivalent $u$ to the same representative.
$OP(p, p_a, u) = \mathbb{E}_{p_a}\left(\frac{u(x) - \mathbb{E}_p(u)}{u(x_1) - u(x_2)}\right) = \frac{\mathbb{E}_{p_a}(u) - \mathbb{E}_p(u)}{u(x_1) - u(x_2)}$ satisfies desiderata **1**-**6**. But it's not very **interesting**! It's just the expectation of a utility function.
Not all possibilities for $\mathrm{rep}(p,[u])$ are of this simple form, and as stated it remains possible that a well-chosen $\mathrm{rep}(p,[u])$, with a scaling factor that depends more subtly on $p$ and $u$, could lead to an interesting and well-behaved optimisation measure. But **Proposition 1** constrains the search space, and can be seen as moderately strong evidence that any optimisation measure satisfying desiderata **1**-**6** must violate desideratum **7**.
If we take this perspective, perhaps we should think of dropping one of the desiderata. Which one should it be?
New Proposal: Garrabrant
------------------------
**Convex-linearity,** according to the following idea, suggested by Scott Garrabrant.
Intuitively, we start with some notion of 'disturbance' - a goal-neutral measure of how much an action affects the world. Then we say that the amount of optimisation that has taken place is the least amount of disturbance necessary to achieve a result at least this good.
We'll use the [KL-divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) as our notion of disturbance, but there are other options. Let $\succeq_u$ order probability distributions by their expected utility, so $p' \succeq_u p_a$ if and only if $\mathbb{E}_{p'} u(x) \geq \mathbb{E}_{p_a} u(x)$. Then we define $OP^+_{SG}(p, p_a, u) = \min_{p' \succeq_u p_a} D_{KL}(p \,||\, p')$.
So if it didn't require much disturbance to transform the default distribution p into pa, we won't get much credit for our optimisation. If it required lots of disturbance then we might get more credit - but only if there wasn't some p′ much closer to p which would've been just as good. In that case most of our disturbance was wasted motion.
Notice that Scott's idea only measures *positive* optimisation: if $\mathbb{E}_{p_a}(u) < \mathbb{E}_p(u)$ then we just get $OP^+_{SG}(p, p_a, u) = 0$.
To get something which does well on our desiderata, we need to combine it with some similar way of measuring *negative* optimisation. One idea is to say that when expected utility goes down as a result of our action, the amount of de-optimisation that has taken place is the least amount of disturbance needed to get back to somewhere as good as you started[[4]](#fnwcpk9bac8zs). Using the KL-divergence as our notion of disturbance we get $OP^-_{SG}(p, p_a, u) = \min_{p' \succeq_u p} D_{KL}(p_a \,||\, p')$.
Then we can say $OP_{SG} = OP^+_{SG} - OP^-_{SG} = \min_{p' \succeq_u p_a} D_{KL}(p \,||\, p') - \min_{p' \succeq_u p} D_{KL}(p_a \,||\, p')$.
Note that one or both of the two terms will always be zero, depending on whether expected utility goes up or down.
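To make the definition concrete, here is a minimal numerical sketch of $OP_{SG}$ (our own illustration, not from the post; the example distributions and utility are arbitrary, and scipy's generic constrained optimiser stands in for an exact minimisation):

```python
import numpy as np
from scipy.optimize import minimize

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def min_kl_to_at_least(base, threshold, u):
    """min over p' of D_KL(base || p') subject to E_{p'}[u] >= threshold."""
    n = len(base)
    constraints = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},        # stay on the simplex
        {"type": "ineq", "fun": lambda q: q @ u - threshold},  # p' does at least as well
    ]
    result = minimize(lambda q: kl(base, q), x0=np.full(n, 1.0 / n),
                      bounds=[(1e-9, 1.0)] * n,
                      constraints=constraints, method="SLSQP")
    return result.fun

def op_sg(p, pa, u):
    if pa @ u >= p @ u:       # expected utility rose: the OP+ term is active
        return min_kl_to_at_least(p, pa @ u, u)
    else:                     # expected utility fell: the OP- term is active
        return -min_kl_to_at_least(pa, p @ u, u)

u = np.array([0.0, 1.0, 2.0])
p = np.array([0.6, 0.3, 0.1])
pa = np.array([0.2, 0.3, 0.5])
print(op_sg(p, pa, u))        # positive: expected utility rose from 0.5 to 1.3
```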
**Proposition 2:** $OP_{SG}$ *satisfies **continuity**, **invariance under positive scaling**, **invariance under translation**, **zero for unchanged expected utility**, and **strict monotonicity**; and does not satisfy **convex-linearity**.*
**Proof:** See Appendix.
$OP_{SG}$ seems **interesting**, and not obviously **weird**, so whether or not you like it might come down to the importance you assign to **convex-linearity**. Remember that in the information theory analogy, the lack of linearity makes $OP_{SG}$ like a notion of entropy without a notion of information. If you didn't like that analogy anyway, this is probably fine.
Previous Proposals
------------------
Let's quickly run through how some previously proposed definitions of OP score according to our desiderata. We won't give much explanation of these proposals - see our [previous post](https://www.alignmentforum.org/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3) for that.
### Yudkowsky
In our current setup and notation we can write Yudkowsky's original proposal[[5]](#fn25o5gmgpcw9) as $OP_{EY}(p, p_a, u) = -\mathbb{E}_{x \sim p_a}\left(\log \sum_{u(y) \geq u(x)} p(y)\right)$.
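In code, this is a direct transcription (a minimal sketch of ours; it uses the natural log, so values come out in nats rather than bits):

```python
import numpy as np

def op_ey(p, pa, u):
    """-E_{x ~ pa}[log(default probability mass on outcomes at least as good as x)].
    p, pa, u are numpy arrays over a common finite outcome set."""
    total = 0.0
    for x in range(len(u)):
        if pa[x] > 0:
            total -= pa[x] * np.log(p[u >= u[x]].sum())
    return total
```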
Since this proposal is about preference orderings rather than utility functions, it violates a few of our utility function-centric desiderata.
**Proposition 3:** $OP_{EY}$ *satisfies **invariance under positive scaling**, **invariance under translation**, and **convex-linearity**; it does not satisfy **continuity**, **zero for unchanged expected utility**, or **strict monotonicity**.*
**Proof:** See Appendix.
$OP_{EY}$ was **interesting** enough to kick off this whole investigation. How **weird** it is is open to interpretation.
### Yudtility
In the previous post we tried to augment Yudkowsky's proposal to a version sensitive to the size of utility differences between different outcomes, rather than just their order. We can write our idea as $OP_{UY}(p, p_a, u) = -\mathbb{E}_{x \sim p_a}\left(\log \frac{\sum_{u(y) \geq u(x)} p(y)(u(y) - u(x_w))}{\sum_y p(y)(u(y) - u(x_w))}\right)$ where $x_w = \arg\min_{x \in \Omega} u(x)$.
**Proposition 4:** $OP_{UY}$ *satisfies **invariance under positive scaling**, **invariance under translation**, and **convex-linearity**; it does not satisfy **continuity**, **zero for unchanged expected utility**, or **strict monotonicity**.*
**Proof:** See Appendix.
$OP_{UY}$ also intuitively fails **not weird**, since as we noted in the previous post, it can be made arbitrarily small by making $u(x_w)$ sufficiently low - the existence of a very bad outcome can kill your optimisation power, even if it has zero probability under either the default or achieved distribution.
### Altair
In a [post from 2012](https://www.greaterwrong.com/posts/aD7mZMeu9waAfm3au/mathematical-measures-of-optimization-power), Alex Altair identifies some desiderata for a measure of optimisation. His setup is slightly different to ours: instead of comparing two distinct distributions $p$ and $p_a$, he imagines one distribution $p_t$ varying continuously over time, and aims to define instantaneous $OP$ in terms of the rate of change of $\mathbb{E}_{p_t}(u)$. He gives the following desiderata:
1. The sign of $OP$ should be the sign of $\frac{d}{dt}\mathbb{E}_{p_t}(u)$.
2. Constant positive $OP$ should imply exponentially increasing $\mathbb{E}_{p_t}(u)$.
The analogue of **1** in our setup is the requirement that the sign of $OP$ should be the sign of $\mathbb{E}_{p_a}(u) - \mathbb{E}_p(u)$. As we mentioned, this is a consequence of **zero for unchanged expected utility** and **strict monotonicity**. But importantly, **strict monotonicity** also rules out the degenerate solution of setting $OP$ to $1$ if $\mathbb{E}_{p_a}(u) - \mathbb{E}_p(u)$ is positive and $-1$ if it's negative.
**2** seems like the sort of condition that should fall out of desiderata along the lines of '$OP$ should be additive across a sequence of independent optimisation events'. We haven't thought much about the sequential case, but we think it's the most interesting direction to consider in the future.
Altair tentatively suggests defining $OP$ as $\frac{d}{dt}\mathbb{E}_{p_t}(u) \,/\, |\mathbb{E}_{p_t}(u)|$. An analogue in our setup would be $OP_{AA}(p, p_a, u) = \frac{\mathbb{E}_{p_a}(u) - \mathbb{E}_p(u)}{|\mathbb{E}_p(u)|}$.
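As a quick sanity check (ours, not spelled out in Altair's post), a definition of this shape does deliver his desideratum **2**: holding $OP$ constant at $c > 0$ while $\mathbb{E}_{p_t}(u) > 0$ turns the definition into a differential equation with an exponential solution,

$$\frac{d}{dt}\mathbb{E}_{p_t}(u) = c\,\mathbb{E}_{p_t}(u) \quad\Longrightarrow\quad \mathbb{E}_{p_t}(u) = \mathbb{E}_{p_0}(u)\,e^{ct}.$$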
**Proposition 5:** $OP_{AA}$ *satisfies **continuity**, **invariance under positive scaling**, **zero for unchanged expected utility**, **strict monotonicity**, and **convex-linearity**. It does not satisfy **invariance under translation**.*
**Proof:** See Appendix.
But $OP_{AA}$ is not very **interesting** - it's similar to the OP we got out of **Proposition 1**.
Future Directions
-----------------
Comments suggesting new desiderata or proposals, rejecting existing ones, or pointing out mistakes are encouraged. We don't plan to work on this any more for now, so anyone who wants to take up the baton is welcome to.
That said, there are a few reasons why the basic premise of this project, which is something like, "Let's look for a [true name](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) for optimisation defined in terms of utility functions, inspired by information theory," might be misguided:
* **Utility functions might already be the true name** - after all, they do directly measure optimisation, while probability doesn't directly measure information.
* **The true name might have nothing to do with utility functions** - Alex Altair has [made the case](https://www.lesswrong.com/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3?commentId=evPeyefFEDWBrbsrN) that it should be defined in terms of preference orderings instead.
* **Following the information theory analogy might put form over substance** - perhaps quests for the true name of a quantity need to be guided by clear cut use cases instead, so you get the meaningful feedback loops necessary to iterate to something interesting.
If you're not put off by these concerns, and want something concrete and mathematical to work on, then we think there's some low hanging fruit in formulating desiderata over a sequence of optimisation events, instead of just one. One of Shannon's desiderata for a measure of information was that the information content of two independent events should be the sum of the information content of the individual events. It seems natural to say something similar for optimisation, but we haven't thought much about how to formulate it.[[6]](#fnp84hjfa943)[[7]](#fnp9hivn0y6m)
Appendix: Proofs
----------------
### **Proposition 1**
**Proposition 1:** *Desiderata **1**-**6** are satisfied if and only if* $OP(p, p_a, u) = \mathbb{E}_{p_a}\mathrm{rep}(p, [u])$, *for some continuous function* $\mathrm{rep}: \Delta\Omega \times U/{\sim} \to U$ *with* $\mathrm{rep}(p,[u]) \sim u$ *and* $\mathbb{E}_p\,\mathrm{rep}(p,[u]) = 0$ *for all* $p \in \Delta\Omega$ *and* $u \in U$.
**Proof:** Forwards direction:
By **convex-linearity**, $OP(p, p_a, u) = \mathbb{E}_{p_a} f(p, x, u)$ for some $f: \Delta\Omega \times \Omega \times U \to \mathbb{R}$. By the two **invariance** conditions we have that $\mathbb{E}_{p_a} f(p,x,u_1) = \mathbb{E}_{p_a} f(p,x,u_2)$ for any $p_a, p \in \Delta\Omega$ and $u_1, u_2 \in U$ with $u_1 \sim u_2$. A consequence of this is that $f(p,x,u_1) = f(p,x,u_2)$ for any $x \in \Omega$, $p \in \Delta\Omega$ and $u_1, u_2 \in U$ with $u_1 \sim u_2$ (to see this for a given $x$, just set $p_a = \delta_x$, where $\delta_x$ is the distribution which is $1$ at $x$ and $0$ elsewhere). That means we can well-define a function $g: \Delta\Omega \times \Omega \times U/{\sim} \to \mathbb{R}$ by $g(p,x,[u]) = f(p,x,u)$. Since $U$ is the set of functions from $\Omega$ to $\mathbb{R}$, we can reformulate $g$ as a function (whose name is not yet justified) $\mathrm{rep}: \Delta\Omega \times U/{\sim} \to U$, where $\mathrm{rep}(p,[u])$ is defined by $\mathrm{rep}(p,[u])(x) = g(p,x,[u]) = f(p,x,u)$. Then we get that $OP(p, p_a, u) = \mathbb{E}_{p_a}\mathrm{rep}(p,[u])$, as in the statement of the proposition.
To show that $\mathrm{rep}$ is continuous, note that by the **continuity** of $OP$ we have $\lim_{(p,[u]) \to (p^*,[u^*])} OP(p, \delta_{x^*}, u) = \lim_{(p,[u]) \to (p^*,[u^*])} \mathrm{rep}(p,[u])(x^*)$ for any sequence $(p,[u]) \to (p^*,[u^*])$. Since $OP(p^*, \delta_{x^*}, u^*) = \mathrm{rep}(p^*,[u^*])(x^*)$, the continuity of $\mathrm{rep}$ follows.
To see that $\mathrm{rep}(p,[u]) \sim u$, note that by **strict monotonicity** $\mathrm{rep}(p,[u])$ and $u$ induce the same ordering of probability distributions by expected utility, so by the von Neumann–Morgenstern utility theorem there exists a positive affine transformation between them. That $\mathbb{E}_p\,\mathrm{rep}(p,[u]) = 0$ follows directly from **zero for unchanged expected utility**.
Backwards direction:
**Continuity** follows from continuity of $\mathrm{rep}$. **Invariance** follows from the fact that $\mathrm{rep}$ is defined on $U/{\sim}$ rather than $U$. That $\mathrm{rep}(p,[u]) \sim u$ gives us that if $\mathbb{E}_{p_a}(u) < \mathbb{E}_{p_b}(u)$ then $\mathbb{E}_{p_a}\mathrm{rep}(p,[u]) < \mathbb{E}_{p_b}\mathrm{rep}(p,[u])$, i.e. **strict monotonicity**, and also that if $\mathbb{E}_{p_a}(u) = \mathbb{E}_p(u)$ then $\mathbb{E}_{p_a}\mathrm{rep}(p,[u]) = \mathbb{E}_p\mathrm{rep}(p,[u])$, which since the latter is zero by assumption gives **zero for unchanged expected utility**. **Linearity** follows from linearity of expectation.
### **Proposition 2**
$OP_{SG}$ *satisfies **continuity**, **invariance under positive scaling**, **invariance under translation**, **zero for unchanged expected utility**, and **strict monotonicity**; and does not satisfy **convex-linearity**.*
**Proof:** We have **continuity** since $OP_{SG}$ is the composition of continuous functions. Since we only use the ordering a utility function induces over distributions, we get **invariance** for free. When $\mathbb{E}_{p_a}(u) = \mathbb{E}_p(u)$ both KL-divergences can be minimised to zero (by $p' = p$ and $p' = p_a$ respectively), so we get **zero for unchanged expected utility**.
For **strict monotonicity** we can assume that $\mathbb{E}_{p_a}(u) > \mathbb{E}_{p_b}(u) \geq \mathbb{E}_p(u)$ and show that $OP(p, p_a, u) > OP(p, p_b, u)$ (the case where both are below, $\mathbb{E}_p(u) \geq \mathbb{E}_{p_a}(u) > \mathbb{E}_{p_b}(u)$, is similar, and the case where they straddle, $\mathbb{E}_{p_a}(u) > \mathbb{E}_p(u) > \mathbb{E}_{p_b}(u)$, is easy). In particular we need to show that $\min_{p' \succeq_u p_a} D_{KL}(p||p') > \min_{p' \succeq_u p_b} D_{KL}(p||p')$, which we can do by taking some $q_a \in \arg\min_{p' \succeq_u p_a} D_{KL}(p||p')$ and constructing from it some $q_b$ with $D_{KL}(p||q_b) < D_{KL}(p||q_a)$ and $q_b \succ_u p_b$. Take any $y, z \in \Omega$ with $q_a(y) > p(y)$ and $q_a(z) < p(z)$, and set $q_b(y) = q_a(y) - \varepsilon$ and $q_b(z) = q_a(z) + \varepsilon$, with $q_b(x) = q_a(x)$ for all $x \in \Omega \setminus \{y, z\}$. For small $\varepsilon > 0$, we get decreased KL-divergence without taking the expected utility below that of $p_b$.
To see that $OP_{SG}$ violates **convex-linearity**, consider the case where $\Omega = \{x_1, x_2\}$ with $u(x_1) > u(x_2)$, and $\mathbb{E}_p(u) < \mathbb{E}_{p_a}(u)$. In this case $OP_{SG}(p, p_a, u) = \min_{p' \succeq_u p_a} D_{KL}(p||p') = D_{KL}(p||p_a)$. Since KL-divergence is not linear, it's easy to find a counterexample.
### **Proposition 3**
$OP_{EY}$ *satisfies **invariance under positive scaling**, **invariance under translation**, and **convex-linearity**; it does not satisfy **continuity**, **zero for unchanged expected utility**, or **strict monotonicity**.*
**Proof:** We get **invariance** since $OP_{EY}$ only uses the ordering over outcomes implied by a utility function, and **convex-linearity** since $OP_{EY}$ is an expectation.
$OP_{EY}$ is not **continuous** in $u$: look at $OP(p, \delta_x, u) = -\log \sum_{u(y) \geq u(x)} p(y)$ for some $x$, and consider what happens when we take some $z$ with $p(z) > 0$ and $u(z) < u(x)$, and increase $u(z)$ all the way to $u(x)$. **Zero for unchanged expected utility** fails since $OP_{EY}$ is only zero when $p_a = \delta_{x_w}$.
For a counterexample to **strict monotonicity**, let $\Omega = \{x_1, x_2, x_3\}$, $p(x_1) = p(x_2) = p(x_3) = \tfrac{1}{3}$ (which we will notate as $p = [\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}]$), $p_a = [0, 1, 0]$, $p_b = [\tfrac{1}{2}, 0, \tfrac{1}{2}]$, and $u = [0, 3, 4]$. Then $p_a$ wins on expected utility but loses on $OP_{EY}$.
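This is easy to check numerically (a self-contained sketch of ours, transcribing $OP_{EY}$ as above):

```python
import numpy as np

def op_ey(p, pa, u):
    return -sum(pa[x] * np.log(p[u >= u[x]].sum())
                for x in range(len(u)) if pa[x] > 0)

u  = np.array([0.0, 3.0, 4.0])
p  = np.array([1/3, 1/3, 1/3])
pa = np.array([0.0, 1.0, 0.0])
pb = np.array([0.5, 0.0, 0.5])

print(pa @ u, pb @ u)        # 3.0 vs 2.0: pa wins on expected utility
print(op_ey(p, pa, u))       # ~0.405 nats
print(op_ey(p, pb, u))       # ~0.549 nats: pb wins on OP_EY
```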
### **Proposition 4**
$OP_{UY}$ *satisfies **invariance under positive scaling**, **invariance under translation**, and **convex-linearity**; it does not satisfy **continuity**, **zero for unchanged expected utility**, or **strict monotonicity**.*
**Proof:** We get **invariance** by construction, and **convex-linearity** since $OP_{UY}$ is an expectation.
**Continuity** fails for the same reason as before. We can reuse the same counterexample as before for **strict monotonicity**. For a counterexample to **zero for unchanged expected utility** let $\Omega = \{x_1, x_2, x_3\}$, $u = [0, 4, 8]$, $p = [\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4}]$ and $p_a = [\tfrac{3}{8}, \tfrac{1}{2}, \tfrac{1}{8}]$.
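Again a numerical check (a sketch of ours): the two expected utilities agree exactly, yet $OP_{UY}$ comes out nonzero.

```python
import numpy as np

u  = np.array([0.0, 4.0, 8.0])
p  = np.array([1/2, 1/4, 1/4])
pa = np.array([3/8, 1/2, 1/8])

def op_uy(p, pa, u):
    w = u - u.min()                  # utility in excess of the worst outcome x_w
    denom = p @ w
    return -sum(pa[x] * np.log((p[u >= u[x]] * w[u >= u[x]]).sum() / denom)
                for x in range(len(u)) if pa[x] > 0)

print(pa @ u - p @ u)                # 0.0: expected utility unchanged
print(op_uy(p, pa, u))               # ~0.051: but OP_UY is not zero
```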
**Proposition 5:** $OP_{AA}$ *satisfies **continuity**, **invariance under positive scaling**, **zero for unchanged expected utility**, **strict monotonicity**, and **convex-linearity**. It does not satisfy **invariance under translation**.*
**Proof:** Left as an exercise to the reader!
1. **[^](#fnrefprhbwdnnxc)**By *measure* we mean *a standard unit used to express the size, amount, or degree of something*, not a [probability measure](https://en.wikipedia.org/wiki/Probability_measure). Alexander voted for *yardstick* to avoid confusion; Matt vetoed.
2. **[^](#fnrefhra5fploc7)**Since we assume $\Omega$ is finite, there is only one reasonable topology on $\Delta\Omega$ and $U$, namely the Euclidean topology.
3. **[^](#fnrefo8musspqotr)**When $u(x_1) = u(x_2)$ we have to interpret $OP$ as negative infinity, zero, or positive infinity, depending on the sign of the numerator.
4. **[^](#fnrefwcpk9bac8zs)**Here we distinguish *de-optimisation,* by which we mean something like accidental or collateral damage, from *disoptimisation* - deliberate pessimisation of a utility function. If we are instead interested in interpreting expected utility decreases as *disoptimisation,* it would be natural to define $OP^-_{SG} = \min_{p' \preceq_u p} D_{KL}(p \,||\, p')$, i.e. the amount of disoptimisation that has taken place is the least amount of disturbance needed to do even worse.
5. **[^](#fnref25o5gmgpcw9)**Yudkowsky defined $OP$ as a function of default distribution, outcome, and preference ordering; we've made it a function of default distribution, achieved distribution, and utility function by taking an expectation under the achieved distribution and using the induced preference ordering of the utility function.
6. **[^](#fnrefp84hjfa943)**Related ideas are to consider a sequence of distributions $p_1, p_2, p_3$ and require something like $OP(p_1, p_3, u) = OP(p_1, p_2, u) + OP(p_2, p_3, u)$, or to get into more exotic operadic compositionality-style axioms like the one in Theorem 5.3 [here](https://ncatlab.org/johnbaez/show/Entropy+as+a+functor).
7. **[^](#fnrefp9hivn0y6m)**Another avenue is to replace **convex-linearity** with **convexity**, in which case $OP(p, p_a, u)$ might arrive as an [infra-expectation](https://www.alignmentforum.org/posts/d96dDEYMfnN2St3Bj/infrafunctions-and-robust-optimization) of $OP(p, x, u)$ if not an expectation. |
e36555b1-5940-465a-b131-810b17e965ef | trentmkelly/LessWrong-43k | LessWrong | Streaming Science on Twitch
Recently I was watching a livestream of a poker professional. I was surprised and interested in how it wasn’t purely-gut calls, and also wasn’t purely technical decisions, but a blend of both (with some other stochasticity thrown in).
I’ve been thinking about how to get more good scientists, especially scientists earlier in their career.
I think it should be possible to stream science — the actual practice of it.
I think this would be disproportionately useful for younger/earlier-career people, who I expect would find twitch streams more appealing, and who lack the access to advisement/mentorship that often comes later.
This probably only works first with sciences (like mine — I’m biased here) that can happen mostly/exclusively on computers. Software sciences, data sciences, and machine learning sciences seem like good candidates for this.
A bunch of it would probably be boring stuff. Debugging experiments, sorting/processing data, formatting and reformatting plots.
Also, a bunch of the good stuff probably happens internally in ways that wouldn’t be stream-able. There’s a lot of processing in my head that I don’t have access to, and I assume this is true for a lot of people who work on science.
There’s also probably tasks or modes of work that recording/streaming/broadcasting would be distracting. I assume this would also be true of video games, so maybe this effect is not that bad.
The stream could also make science better! It’s possible that (if people watch it and interact live) someone in the audience spots a bug or issue that would have been missed, or proposes a better method of doing something.
A big limitation here is that probably a lot of great science is not share-able (due to company secrets or academic results that cannot be shared before publication to avoid scoops, etc). This would necessitate working on projects of secondary quality/importance, and using a totally different code base and set of tools.
As I’m writing this, I notice that “ |
46fd9d7f-bfff-4879-afcc-0e3f8822e30e | trentmkelly/LessWrong-43k | LessWrong | My thoughts on AI and personal future plan after learning about AI Safety for 4 months
In this post, I want to distill some of my thoughts about AI and my future plan regarding it according to what I have learnt during the past 4 months.
About my personal background: My background lies in Financial Engineering and Mathematics, and I used to get motivated by making AI more powerful. I decided to shift my focus to AI Safety after reading and getting convinced by the idea inside Preventing an AI-related catastrophe. During the past 4 months, I finished
* Alignment Course from AI Safety Fundamentals
* 6 out of 9 Topics inside Deep Learning Curriculum
* 4 out of 5 Chapters inside ARENA
* a paper from LLM Evals Hackathon
* And many readings across the AI Alignment Forum, LessWrong, and various AI Safety focused teams/orgs/companies.
I keep the related posts/learnings on my website, if you are interested.
Overall, I think the future of AI is promising but at the same time dangerous if AI systems cannot align with our intentions. This is like what has been discussed in The Precipice, and I want to take the chance to help.
It is promising because AI has already demonstrated superior capability, and it can be used to improve people's quality of life. Application fields include robotics, education, health systems, productivity boosts, etc.
But AI can get misaligned if we aren't paying enough attention to it. Here, according to Paul Christiano, alignment means intent alignment:
> AI A is aligned with an operator H when A is trying to do what H wants it to do.
The focus here is "trying", which means it is at the level of intention/goal/motivation rather than behavior. A model can behave as if aligned while actually trying to deceive humans, or humans may fail to see its defects due to the high complexity of future tasks.
In the argument above, we assumed it can have "intention" and we can understand how it comes from by using optimizer. When we train a model, we use an optimizer which usually tries to minimize certain loss function or maximize reward function by adjusting the weight o |
dc6fd67a-29a0-450b-aefe-9ce0b6468f5d | trentmkelly/LessWrong-43k | LessWrong | D&D.Sci II: The Sorceror's Personal Shopper
The day’s task shows up in an envelope, and not in glowing purple letters emblazoned across the inside of your eyelids, which is usually a good sign. The owl that brought it looks on with equanimity as you read its master's message:
> Hello,
>
> I hearde that you do odde jobs for Wizards. I neede 120 mana for a ritual but cannot leave my Tower righte now. Go to the caravans in towne and buy enough magic items that I can gette that much by sacrificinge them.
>
> My Owle has a pouch. It is biggere inside than oute. Putte the things in it ande she will carrye them back.
>
> Enclosed is my Thormo Tharmeu Magic Sensing Device. It usually lies but is probably bettere than guessinge. Returne it when you are done. Enclosed is also a list of 836 itemse I sacrificed and what coloure they glowed and how muche mana I gotte and what the Thau Lying Box said when I pointede it at them. I like lists.
>
> The pouch contains 200 gold pieces. You may keepe what coins are lefte over. If I do notte gette at leaste 120 mana from the things you sende me, you shalle owe me 200 gold pieces.
>
> Goodbye,
>
> Wakalix the Wizard
>
> PS: If you do not accepte the jobbe, I bid you sende the Owle and the gold back before sundown, that I may finde another to charge with it.
Your spirits lift with every line. Clear objectives, payment in advance, acknowledgement that you have the right to refuse the task, no threats of involuntary transformation, no random tangents about world domination or beard care, handwriting legible, capitalization not entirely random . . . this is one of the good clients. And if you make clever enough use of the list he provided, you suspect you could end up taking home a decent fraction of that 200gp once this day’s work is done. With a song in your heart, you depart for the travelling caravans and their magic items.
The selection of artefacts that greets you is as follows:
[Flattened table of caravan wares; columns: Item name, Glow color, Thaumometer reading, Price. First entries: Longsword of Wounding +2 (Red), Warhammer of J...] |
c42bd942-f705-4f47-9a70-cb5e5024eaa2 | trentmkelly/LessWrong-43k | LessWrong | On moving posts from Main to Discussion
Yesterday, someone moved one of my posts from Main to Discussion without telling me. Again.
I encourage the site administrators to show some basic courtesy to the posters who provide the content for the site. I believe this would be a better way of doing things:
1. Have a policy on what has to happen to move a post from Main to Discussion. Who can do it? How many admins are there who can do this? State this policy in a FAQ.
2. When you move a post from Main to Discussion, make a comment on the post saying you have done so and why you have done so. |
1a0f9459-a870-45dc-9d0b-a0e5bb393ac3 | trentmkelly/LessWrong-43k | LessWrong | AISN #57: The RAISE Act
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe to receive future versions.
The RAISE Act
New York may soon become the first state to regulate frontier AI systems. On June 12, the state’s legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.
New York’s RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.
* Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the Division of Homeland Security and Emergency Services, keep the unredacted version—plus all supporting test data—for five years, and review the plan each year.
* Withhold any model that presents an “unreasonable risk of critical harm.” Developers must delay their release and work to reduce risk if evaluations show the system poses an unreasonable risk of causing at least 100 deaths or $1 billion in damage through weapons of mass destruction or automated criminal activity.
* Report safety incidents within seventy-two hours. If developers discover the theft of model weights, evidence of dangerous autonomous behavior, or other events that demonstrably raise the risk of critical harm, they must report their discovery to state officials within three days.
* Penalties for non-compliance. The NY attorney general may seek up to $10 million for a first violation and $30 million for subsequent violations.
Th |
c12d00be-9bf3-4285-bce4-f67811669d56 | trentmkelly/LessWrong-43k | LessWrong | How do I "test it"?
I've read a bunch of times on LessWrong about how important it is to test things. It makes sure your beliefs are paying rent and helps you verify your hypotheses. Testing ideas is obviously important to science, and it's about as obvious that testing ideas in everyday life can serve the same purpose. I know all this, and I want to be the type of person that goes out and verifies my beliefs by experiment, but still I can't think of a single time I've done it. I don't think I even recall thinking, about some everyday type of thing, "hmm, how could I test that?" (apart from trivial trial-and-error computer related things). Anyway, I was wondering if some of you could give me some examples of times you've done this. I'm thinking maybe I'll be able to pattern-match the kind of things you guys have done and hopefully recognize in the moment when I'm looking at a testable thought.
Thanks. |
1355840c-4903-4294-aafe-5b512f31e30b | trentmkelly/LessWrong-43k | LessWrong | The AGI Race Between the US and China Doesn’t Exist.
When I write “China”, I refer to the political and economic entity, the People’s Republic of China, founded in 1949.
Leading US AI companies are currently rushing towards developing artificial general intelligence (AGI) by building and training increasingly powerful large language models (LLMs), as well as building architecture on top of them. This is frequently framed in terms of a great power competition for technological supremacy between the US and China.
However, China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).
In brief, China does not compete with the US in developing AGI via LLMs because of:
1. Resources and technology: China does not have access to the computational resources[1] (compute, here specifically data centre-grade GPUs) needed for large-scale training runs of large language models. This gap will only widen over time; China is failing to develop a domestic semiconductor industry, despite massive efforts to do so, and is increasingly cut off from international semiconductor supply chains.
2. Political goals: The Chinese Communist Party (CCP) is above all concerned with ensuring social stability and staying in power. It will not tolerate large language models that cannot reliably be controlled and prevented from leaking sensitive information, raising politically sensitive topics, or challenging the regime. Reliably prohibiting LLMs from doing any of these things is an unsolved technical problem. This has and will continue to stifle the development and deployment of Chinese LLMs for the Chinese market, as a single political faux pas will spell doom for the companies involved.
Therefore, as it currently stands, there is no AGI race between China and the US.
China Cannot Compete with the US to Reach AGI First Via Scaling LLMs
Today, training powerful LLMs like GPT-4 requires massive GPU clusters. And training LLMs on a |
17955d10-1988-4ae6-a61d-faa06a5bc12e | trentmkelly/LessWrong-43k | LessWrong | Outside View as the Main Debiasing Technique
The inside view refers to what your explicit models tell you. For example, if you write down a plan with explicit contingencies for things going wrong and time estimates for each step, or write down a careful argument for your point of view, or solve some equations. The outside view refers to what "an independent observer" might think; the proverbial man-on-the-street; a perspective which doesn't pay attention to all the details of the situation, but places it as an instance of a class. For a plan, you have some outside-view probability that it will fail or take longer than you expect, based on unforeseen complications rather than explicitly-listed contingencies. For an argument, although the argument may conclude a belief with some high probability, your outside-view probability should usually be lower to account for your uncertainty that the argument is correct. For the equations, the outside view accounts for the probability that you've made a mathematical error.
One way of thinking about inside view vs outside view is that they are human-friendly ways of thinking about the likelihood function and the prior. Somehow, humans are frequently tempted to commit base-rate neglect: treating the belief in a hypothesis as just the extent to which it matches the data. For example, if a test for a rare disease comes up positive. People are not inclined by default to multiply the evidence for something by the prior odds against it, except in everyday circumstances which they've experienced a lot. (I think this mistake may also account for the conjunction fallacy.) "Outside view" helps you imagine base rates by envisioning a perspective with more distance from the issue. So, you come up with an inside view and an outside view and "multiply them together" to become a proper Bayesian. This gets rid of base-rate neglect, and related biases such as the availability heuristic (where you implicitly use "number of times I've heard about this" or similar things as a substitute for base-rates).
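To make the rare-disease example concrete, here is a toy Bayes update in Python; all the numbers are made up by me for illustration:

```python
# Toy Bayes update for a rare-disease test (all numbers hypothetical).
prior = 0.001          # base rate: 1 in 1000 people has the disease
sensitivity = 0.99     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# P(positive) by the law of total probability.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(disease | positive test).
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.019
```

The inside view ("the test is 99% sensitive, so believe it") points one way; the outside view supplies the 1-in-1000 base rate, and multiplying the two leaves the posterior below 2%.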
|
3a1dc346-4855-4487-8512-a0bfdd684727 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Reductionism
Today's post, Reductionism was originally published on 16 March 2008. A summary (taken from the LW wiki):
> We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Qualitatively Confused, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
53ab64fa-1182-47c8-b53d-b53df8d59a8d | trentmkelly/LessWrong-43k | LessWrong | Definitions are about efficiency and consistency with common language.
What is a star? Stars could be defined utilizing the many attributes they have. You can use mass, density, brightness, and size to define them. You can also use a combination of those attributes, or even attributes I didn't mention, to define stars. A star the size of our sun isn't the same thing as a red giant star: our star consists mostly of hydrogen, while a red giant consists mostly of helium. Hence we are speaking of very different objects. The way we define stars doesn't change reality; it does change our perception of reality, though.
Definitions can't be true or false in themselves. They can be inconsistent or consistent with common language: if, for example, I call big planets stars because I define stars according to their size, that would not align with the scientific definition of stars either. And definitions can be inefficient or efficient, depending on how many attributes/properties I need to define something. The scientific definition of stars, for example, uses at least two attributes. More ancient definitions of stars might not recognize the sun as a star, but may distinguish stars from planets as the shiny objects that wouldn't move. They might say that all planets are stars, but that planets are not just shiny objects in the night sky but moving shiny objects in the night sky. Hence all planets are stars, but not all stars planets, because stars and planets used to look similar, back when we didn't have our devices of observation. So two attributes are used to define stars and planets: the attribute of being a tiny shiny object in the night sky, and movement. It's a rather efficient definition, but one which is inconsistent with our knowledge of today. Though it's not false in any way, because it actually isn't claiming that stars are the same thing as snakes, for example. The definition is just a description of something. You can also say stars are objects bigger than planets and less dense than black holes. But this would be a rather ugly definition as you nee
dad9929f-7a51-4627-93e8-e0a0b69875be | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] The value of initiating a pursuit in temporal decision-making
This eLife paper The value of initiating a pursuit in temporal decision-making by Elissa Sutlief, Charlie Walters, Tanya Marton, and Marshall G Hussain Shuler, seems to dissolve the question of the choice of temporal discount functions by explaining how it results from time-varying reward probabilities. It provides a framework to interpret decision heuristics as (mis)estimates of key parameters in this framework.
Abstract:
> Reward-rate maximization is a prominent normative principle commonly held in behavioral ecology, neuroscience, economics, and artificial intelligence. Here, we identify and compare equations for evaluating the worth of initiating pursuits that an agent could implement to enable reward-rate maximization. We identify two fundamental temporal decision-making categories requiring the valuation of the initiation of a pursuit—forgo and choice decision-making—over which we generalize and analyze the optimal solution for how to evaluate a pursuit in order to maximize reward rate. From this reward-rate-maximizing formulation, we derive expressions for the subjective value of a pursuit, i.e. that pursuit’s equivalent immediate reward magnitude, and reveal that time’s cost is composed of an apportionment, in addition to, an opportunity cost. By re-expressing subjective value as a temporal discounting function, we show precisely how the temporal discounting function of a reward-rate-optimal agent is sensitive not just to the properties of a considered pursuit, but to the time spent and reward acquired outside of the pursuit for every instance spent within it. In doing so, we demonstrate how the apparent discounting function of a reward-rate-optimizing agent depends on the temporal structure of the environment and is a combination of hyperbolic and linear components, whose contributions relate the apportionment and opportunity cost of time, respectively. We further then show how purported signs of suboptimal behavior (hyperbolic discounting, the Delay effe |
55803f63-8bd9-4071-be86-7188e9694088 | trentmkelly/LessWrong-43k | LessWrong | Letter to my Squire
Hello, my Squire,
Several weeks ago you pointed out that my blog posts contain lots of spelling errors and you offered to proofread them for me. This was a good offer. I accepted it. My blog posts have fewer typos than they used to.
Later, you asked if you could pour me drinks and carry my dirty shoes. This was a good offer too. I accepted it. When I needed my backpack you ran to fetch it for me. When I was weary from hours of nonstop debate, you ensured I returned home safely.
I cannot ask anyone to work for free. That would be unethical. But when a talented young man (metaphorically) beats down my door with an offer to work for free—well, it would be silly for me to refuse.
> Me: How did I ever survive without someone to pour me drinks and fetch my dirty shoes?
>
> [Redacted]: You got used to having minions scary fast.
>
> ―April was weird
The best way to learn to program is to start your own software projects. The best way to learn to write is to write your own blog. Or maybe not. Perhaps I am wrong. If you believe the best way for you to find your way in the world is to assist me then so be it.
I will make no attempt to teach you. I cannot promise the work I delegate will be worthwhile to you. But I can promise it will be worthwhile to me. You can have the boring, tedious work I don't want to do myself. Perhaps you will learn something by accident.
If I choose to honor you, I shall call you my sandalbearer.
Sincerely,
Lsusr
|
3d1fc3f3-4ec0-420b-9896-91f44a1bd0eb | trentmkelly/LessWrong-43k | LessWrong | But exactly how complex and fragile?
This is a post about my own confusions. It seems likely that other people have discussed these issues at length somewhere, and that I am not up with current thoughts on them, because I don’t keep good track of even everything great that everyone writes. I welcome anyone kindly directing me to the most relevant things, or if such things are sufficiently well thought through that people can at this point just correct me in a small number of sentences, I’d appreciate that even more.
~
The traditional argument for AI alignment being hard is that human value is ‘complex’ and ‘fragile’. That is, it is hard to write down what kind of future we want, and if we get it even a little bit wrong, most futures that fit our description will be worthless.
The illustrations I have seen of this involve a person trying to write a description of value conceptual analysis style, and failing to put in things like ‘boredom’ or ‘consciousness’, and so getting a universe that is highly repetitive, or unconscious.
I’m not yet convinced that this is world-destroyingly hard.
Firstly, it seems like you could do better than imagined in these hypotheticals:
1. These thoughts are from a while ago. If instead you used ML to learn what ‘human flourishing’ looked like in a bunch of scenarios, I expect you would get something much closer than if you try to specify it manually. Compare manually specifying what a face looks like, then generating examples from your description, to using modern ML to learn it and generate them (a toy version of this is sketched just after this list).
2. Even in the manually describing it case, if you had like a hundred people spend a hundred years writing a very detailed description of what went wrong, instead of a writer spending an hour imagining ways that a more ignorant person may mess up if they spent no time on it, I could imagine it actually being pretty close. I don’t have a good sense of how far away it is.
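As a toy version of point 1 above (my own sketch, not something from these hypotheticals; the features, labels, and choice of scikit-learn are all illustrative assumptions), here is what learning a "flourishing" label from examples rather than hand-writing a definition might look like:

```python
# Toy sketch: learn "flourishing" from labeled scenarios instead of
# writing the definition by hand. All data here is fabricated.
from sklearn.linear_model import LogisticRegression

# Each scenario: [boredom, consciousness, novelty, suffering], each in [0, 1].
scenarios = [
    [0.9, 1.0, 0.1, 0.2],  # conscious but highly repetitive
    [0.1, 1.0, 0.8, 0.1],  # conscious, varied, little suffering
    [0.1, 0.0, 0.8, 0.0],  # varied but unconscious
    [0.2, 1.0, 0.7, 0.9],  # conscious and varied, but suffering
]
flourishing = [0, 1, 0, 0]  # human-provided labels

model = LogisticRegression().fit(scenarios, flourishing)
# The fit can pick up on features (consciousness, boredom) that a manual
# specification might omit entirely.
print(model.predict([[0.05, 1.0, 0.9, 0.05]]))  # expected: [1]
```

The point is only directional: the learned model is graded by examples of the thing itself, not by whether the describer remembered to mention boredom.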
I agree that neither of these would likely get you to exactly human values.
But secondly, I’m not sur |
81a2d3ec-25cc-4e6f-bc08-e1f56283c5cf | StampyAI/alignment-research-dataset/lesswrong | LessWrong | New US Senate Bill on X-Risk Mitigation [Linkpost]
Two US Senators have introduced a bipartisan bill specifically focused on x-risk mitigation, including from AI. From the [post on Senate.gov](https://www.hsgac.senate.gov/media/majority-media/peters-introduces-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-) (bold mine):
> WASHINGTON, D.C.– U.S. Senator Gary Peters (MI), Chairman of the Homeland Security and Governmental Affairs Committee, introduced a bipartisan bill **to ensure our nation is better prepared for high-consequence events, regardless of the low probability, such as new strains of disease, biotechnology accidents, or naturally occurring risks such as super volcanoes or solar flares that though unlikely, would be exceptionally lethal if they occurred.**
>
> “Making sure our country is able to function during catastrophic events will improve national security, and help make sure people in Michigan and across the country who are affected by these incidents get the help they need from the federal government,” said Senator Peters. “Though these threats may be unlikely, they are also hard to foresee, and this bipartisan bill will help ensure our nation is prepared to address cataclysmic incidents before it’s too late.”
>
> [The legislation] will establish an interagency committee for risk assessment that would report on the adequacy of continuity of operations (COOP) and continuity of government (COG) plans for the risks identified. **The bipartisan legislation would also help counter the risk of artificial intelligence (AI), and other emerging technologies from being abused in ways that may pose a catastrophic risk.**
>
> [...]
>
>
It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.
I haven't been able to locate the text of the bill yet. If someone finds it, please share in the comments.
[*Cross-posted to EA Forum*](https://forum.effectivealtruism.org/posts/fuGCxfmSYLJjbkodD/new-us-senate-bill-on-catastrophic-risk-mitigation-linkpost)*. Credit to Jacques Thibodeau for posting a link on Slack that made me aware of this.* |
fd305e7c-a7bb-49c4-ae0f-0d507929dc6f | trentmkelly/LessWrong-43k | LessWrong | Spoiler-Free Review: Across the Obelisk
Life requires time for a good game now and then.
Across the Obelisk is a roguelike deckbuilder where a party of four adventurers goes on a quest. It is Slay the Spire meets Dungeons and Dragons.
I played Across the Obelisk while it was still in early access. For most of the games I played, this meant that the ending was not fully available. I did get in several games once the final act was introduced, as well.
Now that Across the Obelisk has been fully released, it is high time to review it.
As usual, I’ll do this in sections with increasing levels of spoiler.
Pure Spoiler-Free Review (1 bit of information)
Yes. You should play this game.
Almost Pure Spoiler-Free Review (3 bits of information)
Across the Obelisk is a Tier 2 game, meaning you should play it if and only if “Slay the Spire meets D&D” sounds like a good time.
Here is my full tier list for Rogue Deckbuilders, links go to my reviews:
Tier 1 (Must Play): Slay the Spire
Tier 2 (Worth It): Across the Obelisk, Monster Train, Roguebook
Tier 3 (Good): Dream Quest
Tier 4 (Playable): Monster Slayers, Dicey Dungeons
Tier 5 (We Don’t Talk About Bruno): We don’t talk about it. I think I’ve encountered a few, but I’m not convinced I gave them a fair shake.
Overview (Spoilers minimized)
A warrior, a rogue, a priest and a mage walk into a town to start a quest to rescue the princess. How original.
That’s fine. It’s not trying to be original in that way. You get plenty of good encounters, and a world with its own unique twists on various things, wrapped in a classic tale.
Across the Obelisk does an amazing job capturing the feel of playing D&D within a rogue deckbuilder. It feels right. Each party member feels right. Your party plays its roles, works together, goes on an adventure, levels up, struggles to make it to town, divides up the treasure and so on. As far as I’ve seen, no other game comes close to getting this right.
The game also changes quite a bit as you gain in experience and resources and |
dd005e70-91ea-413a-b5fb-baede370fa66 | trentmkelly/LessWrong-43k | LessWrong | How I've run major projects
My few most productive individual weeks at Anthropic have all been “crisis project management:” coordinating major, time-sensitive implementation or debugging efforts.
In a company like Anthropic, excellent project management is an extremely high-leverage skill, and not just during crises: our work has tons of moving parts with complex, non-obvious interdependencies and hard schedule constraints, which means organizing them is a huge job, and can save weeks of delays if done right. Although a lot of the examples here come from crisis projects, most of the principles here are also the way I try to run any project, just more-so.
I think excellent project management is also rarer than it needs to be. During the crisis projects I didn’t feel like I was doing anything particularly impressive; mostly it felt like I was putting in a lot of work but doing things that felt relatively straightforward. On the other hand, I often see other people miss chances to do those things, maybe for lack of having seen a good playbook.
So here’s an attempt to describe my playbook for when I’m being intense about project management.
(I’ve described what I did as “coordinating” above, but that’s underselling it a bit; it mattered a lot for this playbook that I had enough technical context, and organizational trust, to autonomously make most prioritization decisions about the project. Sometimes we instead try to have the trusted decisionmakers not be highly involved in managing execution, and instead farm that out to a lower-context or less-trusted project manager to save the trusted decisionmaker time, but IMO this is usually a false economy for projects where it’s critical that they be executed well.)
Focus
For each of the crisis management projects I completely cleared my schedule to focus on them, and ended up spending 6+ hours a day organizing them.
This is a bit unintuitive because I’m used to thinking of information processing as basically a free action. After all, you’re “just |
47d48f4f-93a6-48ef-a8ad-4face52d7454 | trentmkelly/LessWrong-43k | LessWrong | Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
0: TLDR
I examined all the biorisk-relevant citations from a policy paper arguing that we should ban powerful open source LLMs.
None of them provide good evidence for the paper's conclusion. The best of the set is evidence from statements from Anthropic -- which rest upon data that no one outside of Anthropic can even see, and on Anthropic's interpretation of that data. The rest of the evidence cited in this paper ultimately rests on a single extremely questionable "experiment" without a control group.
In all, citations in the paper provide an illusion of evidence ("look at all these citations") rather than actual evidence ("these experiments are how we know open source LLMs are dangerous and could contribute to biorisk").
A recent further paper on this topic (published after I had started writing this review) continues this pattern of being more advocacy than science.
Almost all the bad papers that I look at are funded by Open Philanthropy. If Open Philanthropy cares about truth, then they should stop burning the epistemic commons by funding "research" that is always going to give the same result no matter the state of the world.
1: Principles
What could constitute evidence that powerful open-source language models contribute or will contribute substantially to the creation of biological weapons, and thus that we should ban them?
That is, what kind of anticipations would we need to have about the world to make that a reasonable thing to think? What other beliefs are a necessary part of this belief making any sense at all?
Well, here are two pretty-obvious principles to start out with:
* Principle of Substitution: We should have evidence of some kind that the LLMs can (or will) provide information that humans cannot also easily access through other means -- i.e., through the internet, textbooks, YouTube videos, and so on.
* Blocker Principle: We should have evidence that the lack of information that LLMs can (or will) provide is in fact a significant blo |
1fbd624b-ab80-44bc-ab45-b4b63b61dfd1 | trentmkelly/LessWrong-43k | LessWrong | The Bletchley Declaration on AI Safety
The Bletchley Declaration was just released at the At AI Safety Summit.
Tl;dr: The declaration underscores the transformative potential and risks of AI. Countries, including major global powers, commit to harnessing AI's benefits while addressing its challenges, especially the dangers of advanced "frontier" AI models. Emphasizing international collaboration, the declaration calls for inclusive, human-centric, and responsible AI development. Participants advocate for transparency, research, and shared understanding of AI safety risks, with plans to reconvene in 2024.
Full text:
----------------------------------------
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.
AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.
Alongs |
4e0ef0c9-7053-47cc-bd8d-af1198cce715 | trentmkelly/LessWrong-43k | LessWrong | The Craft And The Community: Wealth And Power And Tsuyoku Naritai
In this post, I'll try to tackle the question of whether this community and its members should focus more efforts and resources on improving their strength as individuals and as a community, than on directly tackling the problem of singularity. I'll start off with a personal anecdote, because, while I know it's not indispensable, I think anecdotes help the reader to think in near rather than far mode, and this post's topic is already too easily thought of in far mode in the first place.
The other day, I was in an idle conversation with a cab driver when he asked me: What would you do if you won the lottery? Is there some particular dream you have, such as travelling the world or something? I said (and I apologize in advance for the grandiosity and egotism of what follows, mostly because it might show a poor appraisal of my own competence and ability)
> Well, it's not like I would ever play at the lottery, but if I did, and somehow won, I would
>
> * Pay myself the very best tutors and the very best education (I'm thinking Master's Degrees, PhD, and so on, that's all pretty damned expensive depending on where you take it) in my chosen speciality.
> * Pay myself the best aid in achieving peak sustainable physical, mental, and emotional condition (as optimized for the struggles and stresses of a daily life of extreme academic exertion, not for, say, performing in the battlefield, the olympics, or competitive chess). Coaches, gurus, chemicals, whatever it takes.
> * Spend one or two or even three years around the world learning as many "important" languages as I can. Not in order of ease or priority: Portuguese, Italian, Russian, Mandarin Chinese, Japanese, Hindi, Urdu, Farsi, and Turkish and Arabic and Hebrew and their Ancient variants, because of all the doors these could open... and Basque and Navajo (those two would be just for the hell of it). (I already know English, Spanish, French, and a fair amount of German and Arabic).
> * With the acquired technical k |
b17c1261-f20a-4640-9113-782fe6c930e5 | trentmkelly/LessWrong-43k | LessWrong | Experiences and learnings from both sides of the AI safety job market
I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I’m involved with.
In 2022, I applied to multiple full-time AI safety positions. Now, I switched sides and ran multiple hiring processes for Apollo Research. Because of this, I feel like I understand the AI safety job market much better and may be able to help people who are looking for AI safety jobs to get a better perspective.
This post obviously draws a lot from my personal experiences many of which may not apply to your particular situation, so take my word with a grain of salt.
Executive summary
1. In the late Summer of 2022, I applied to various organizations working on AI safety. I got to the final stages of multiple interview processes but never received an offer. I think in all cases, the organization chose correctly. The person who received the offer in my stead always seemed like a clearly better fit than me. At Apollo Research, we receive a lot of high-quality applications despite being a new organization. The demand for full-time employment in AI safety is really high. This should probably change applicants’ strategy and expectations but should not stop them from applying!
2. Focus on getting good & provide legible evidence: Your social network helps a bit but doesn’t substitute bad skills and grinding Leetcode (or other hacks for the interview process) probably doesn’t make a big difference. In my experience, the interview processes of most AI safety organizations are meritocratic and high signal. If you want to get hired for an evals/interpretability job, do work on evals/interpretability and put it on your GitHub, do a SERI MATS stream with an evals/interpretability mentor, etc. This is probably my main advice, don’t overcomplicate it, just get better at the work you want to get hired for and provide evidence for that.
3. Misc:
1. Make a plan: I found it helpful to determine a “def |
30c0c0a5-d91f-4ab9-8612-5b1ea088373d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The world where LLMs are possible
In [Artificial Addition](https://www.lesswrong.com/posts/YhgjmCxcQXixStWMC/artificial-addition), Eliezer used the ability to do arithmetic as a metaphor for intelligence. I really like this essay. It's witty and enlightening. And yet I have to admit it aged not so well. Among the several confused ways to think about artificial addition (and, by metaphor, about artificial intelligence as well) that he mentioned were these:
> * "It's a framing problem - what 'twenty-one plus' equals depends on whether it's 'plus three' or 'plus four'. If we can just get enough arithmetical facts stored to cover the common-sense truths that everyone knows, we'll start to see real addition in the network."
> * "But you'll never be able to program in that many arithmetical facts by hiring experts to enter them manually. What we need is an Artificial Arithmetician that can *learn* the vast network of relations between numbers that humans acquire during their childhood by observing sets of apples."
> * "No, what we really need is an Artificial Arithmetician that can understand natural language, so that instead of having to be explicitly told that twenty-one plus sixteen equals thirty-seven, it can get the knowledge by exploring the Web."
>
Now we know that this approach to artificial intelligence actually works. LLMs are trained on a huge corpus of texts from the internet to learn the vast network of relations between concepts, which gives them the ability to understand natural language, and as a result they can perform well in a vast array of intelligence tasks. Ironically, they are still not very good at calculation, though.
What can we say about it in hindsight? What mistake in reasoning led to this bad prediction? Why did past Eliezer fail to anticipate LLMs? What lesson can we learn from it?
First of all, let's remind ourselves why past Eliezer's reasoning made sense.
* Understanding language is a proxy target. You can map mathematical facts to language and treat them in a roundabout way, but this is going to be less accurate than addressing them directly in a medium specifically optimized for them.
* Knowing a collection of facts satisfying a rule isn't the same as knowing that rule. One can get the rule from the collection of facts via induction, but this is a separate intellectual ability that you will have to embed into your system. It's easier to figure out the rule yourself, as you already possess the ability to do induction, and then embed the rule. (A toy illustration of this point follows this list.)
* Addition is plain simpler than language. If you don't know how to make a system that can do addition, you won't be able to make one that understands language.
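To make the second bullet concrete, here is a toy sketch of mine (not Eliezer's), with made-up data, of an "Artificial Arithmetician" that has the facts but not the rule:

```python
# A finite table of memorized arithmetic "facts" versus the underlying rule.
facts = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a, b):
    """An 'Artificial Arithmetician' that only knows stored facts."""
    return facts.get((a, b))  # silent (None) off the table

def add_by_rule(a, b):
    """Knowing the rule generalizes to pairs never seen before."""
    return a + b

print(add_by_lookup(3, 4))    # 7    -- inside the memorized table
print(add_by_lookup(21, 16))  # None -- the table has nothing to say
print(add_by_rule(21, 16))    # 37   -- the rule covers it
```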
Now, in hindsight, I can see that this is where the metaphor breaks. Can you? I'll let you think about it yourself for a while
.
.
.
.
.
.
.
.
.
.
.
.
Abilities to do language, arithmetic, induction are all part of a vast holistic concept that we call "intelligence". Meanwhile, language and induction are not part of arithmetic. So, as a part is less complex than the whole, in terms of complexity we get something like this:
Arithmetic < Language < Intelligence
Arithmetic < Induction < Intelligence
And the road from language and induction to intelligence makes much more sense than the road from language and induction to arithmetic. And if all the knowledge of your civilization is encoded in language, including the rules of rationality itself, maybe this road is even one of the best.
When framed like this, and in hindsight, the mistake may look silly. It may seem as if Eliezer just used an obviously unfitting metaphor and we didn't notice it before, due to the halo effect. So the only lessons here would be the fault of traductive reasoning and the dangers of trusting authority. But I don't think that's the case.
I suppose Eliezer thought that intelligence is simpler than the bunch of separate abilities that people put in a bundle category. Not literally as simple as arithmetic, but probably less complicated than language. That there is some core property from which all of the abilities we associate with intelligence can be derived. Some simple principle that can be expressed through the math of Bayesian reasoning. Had that been the case, the metaphor would've been completely on point.
It's not a priori clear whether it's easier to reduce language to intelligence or intelligence to language. We see them co-occur in nature. But which way does the causality point? It does seem that *some level of intelligence* is required to develop language. Are we actually [carving reality along its natural joints](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb/p/d5NyJ2Lf6N22AD9PB) when we categorise "language" as an element of a larger set "intelligence"?
The "intelligence to language" position isn't unreasonable. Actually, it still may be true! I'm not claiming that we've received a hard proof to the contrary. But we've got evidence. And we need to update on it. We live in the world where LLMs are possible. Where the road from language and inductive reasoning to intelligence seems clear. So let's investigate its premises and implications. If LLMs are possible, what else is? |
c5ca73d4-5946-455c-b4d6-17c6acbe5f44 | trentmkelly/LessWrong-43k | LessWrong | How do we become confident in the safety of a machine learning system?
Thanks to Rohin Shah, Ajeya Cotra, Richard Ngo, Paul Christiano, Jon Uesato, Kate Woolverton, Beth Barnes, and William Saunders for helpful comments and feedback.
Evaluating proposals for building safe advanced AI—and actually building any degree of confidence in their safety or lack thereof—is extremely difficult. Previously, in “An overview of 11 proposals for building safe advanced AI,” I tried evaluating such proposals on the axes of outer alignment, inner alignment, training competitiveness, and performance competitiveness. While I think that those criteria were good for posing open questions, they didn’t lend themselves well to actually helping us understand what assumptions needed to hold for any particular proposal to work. Furthermore, if you’ve read that paper/post, you’ll notice that those evaluation criteria don’t even work for some of the proposals on that list, most notably Microscope AI and STEM AI, which aren’t trying to be outer aligned and don’t really have a coherent notion of inner alignment either.
Thus, I think we need a better alternative for evaluating such proposals—and actually helping us figure out what needs to be true for us to be confident in them—and I want to try to offer it in the form of training stories. My hope is that training stories will provide:
* a general framework through which we can evaluate any proposal for building safe advanced AI,
* a concise description of exactly what needs to be true for any particular proposal to succeed—and thus what we need to know to be confident in it—and
* a well-defined picture of the full space of possible proposals, helping us think more broadly regarding new approaches to AI safety, unconstrained by an evaluation framework that implicitly rules out certain approaches.
What’s a training story?
When you train a neural network, you don’t have direct control over what algorithm that network ends up implementing. You do get to incentivize it to have some particular behavior over the tr |
d98a1dd3-8778-4a40-9a96-a405c09fe496 | trentmkelly/LessWrong-43k | LessWrong | "Cotton Gin" AI Risk
Most concerns about AI tend to boil down to:
1. Loss of control to AI systems - What if AI were smarter than us and took over the world?
2. Concentration of power - What if AI gave too much power to someone bad?
I'm surprised I haven't heard consideration of a third, more basic risk.
Will this technology be good?
Suppose you're in the Southern United States in 1793 and you believe that an important moral question—perhaps the most important moral question—is labor ethics of how cotton is processed. I suspect I don't need to go into detail about why you might think this.
Proceeding directly from this belief, an obvious good thing suggests itself: what if a machine could process cotton instead of people? Then at least the people who processed cotton wouldn't have to anymore.[1]
Imagine you work hard and, through grit and luck, it works!
What is your confidence interval of how much better this makes life?
I hope your interval included negative numbers. For some reason, I never hear negative numbers for how much better life would be if AI could do lots of jobs for us.
This is in fact exactly what happened. Eli Whitney tried to reduce enslaved labor by creating a replacement machine, but it worked backward.[2]
Isn't this "centralization of power"?
No,
1. The gains from the gin were no more centralized than previous production surplus
2. The problem wasn't its effect on processing. It was the indirect effect on cotton growing. And that didn't change at all in centralization.
3. It would hold true regardless of centralization. The problem wasn't centralization, it was more cotton growing.
What can AI do that is bad?
I don't have clear answers. But the cotton gin should be enough of a cautionary tale about economics and unintended consequences.
A start is "tricking people". The central motive of all economic activity is to modify human behavior, usually involving a step where they give you money. Training a net costs money. How will you make that money back? The ne |
c3a835f4-aef4-4b88-87fe-9e05ab956d62 | trentmkelly/LessWrong-43k | LessWrong | Rationalist wiki, redux
This site is very likely impenetrable to the newcomer. You one-box and defect on the True Prisoner's Dilemma, but is that just because of a cached thought, or is it your Tsuyoku Naritai? So I've created the LessWrong Wiki on Wikia. I'd like this to become a repository of useful definitions and links: it can support our discussions here, and create something lasting from the ephemerality of a blog.
badger already created a Wiki, but as you can see in the updates to that article, badger and others pretty quickly concluded that TiddlyWiki wouldn't be up to the job. MediaWiki, the software Wikipedia and Wikia use, is the monster of them all, and will give us good support for practically anything we want to do, including mathematical notation. I've ported across a couple of articles from the old wiki onto the new, but many more are needed. The "download" link in TiddlyWiki and a text editor may help.
EDIT: Usernames are global across all of Wikia, so you may not be able to use the same name there as here. Sorry. |
9814e15e-cbbd-4731-b8dd-1d96b2c8b93a | trentmkelly/LessWrong-43k | LessWrong | The Proper Use of Doubt
Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to flee from doubt. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: Ah, but they’re supposed to overcome these doubts, right? He said: No, they are to doubt that perhaps their doubts may grow and become stronger.
Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or false of Jesuits. If the Jesuits practiced deliberate doubt, as described above, would they therefore be virtuous as rationalists?
I think I have to concede that the Jesuits, in the (possibly hypothetical) scenario above, would not properly be described as “fleeing from doubt.” But the (possibly hypothetical) conduct still strikes me as highly suspicious. To a truly virtuous rationalist, doubt should not be scary. The conduct described above sounds to me like a program of desensitization for something very scary, like exposing an arachnophobe to spiders under carefully controlled conditions.
But even so, they are encouraging their novices to doubt—right? Does it matter if their reasons are flawed? Is this not still a worthy deed unto a rationalist?
All curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. But if you obtain an answer, if you satisfy your curiosity, then the glorious mystery will no longer be mysterious.
In the same way, every doubt exists in order to annihilate some particular belief. If a doubt fails to destroy its target, the doubt has died unfulfilled—but that is still a resolution, an ending, albeit a sadder one. A doubt that neither destroys itself nor destroys its target might as we |
f66c0dc9-b5cb-4493-9374-f187b26ba4ee | trentmkelly/LessWrong-43k | LessWrong | Bigger Livers?
My husband, Andrew Rettek, has a blog you should read. As he’s gotten into fitness, he’s started following exercise science, which is the (very new!) field of running small controlled experiments on diet and exercise on athletes who do exactly what you tell them to, under strict observation.
This is in contrast to fields like nutrition science, which study the effect of the “intervention” of telling people (usually non-athletes) what diets and exercise programs to follow. Exercise science tends to have much smaller sample sizes, but it can come to more unambiguous conclusions because it’s testing the intervention itself, not people’s ability or willingness to follow it. Contra the “nobody knows anything about diet or exercise” conventional wisdom, we do know some things! It’s just…not very many things. And only about college athletes.
One surprising thing Andrew noticed is that organ size, especially liver size, has a lot to do with overall metabolism.
Larger Athletes’ Higher RMR is Partly From Bigger Livers
Athletes have higher resting metabolic rates than non-athletes; their bodies use more energy, even when they’re not exercising. That means they can eat more without getting fat.
Some part of this effect is due to higher muscle mass, which “costs” energy to maintain.
But muscle isn’t the only tissue athletes grow; bigger athletes also have bigger livers.1
In fact, muscle is a lot less metabolically expensive than other organs. Muscle consumes only 13 kcal/kg/day, while liver, brain, heart, and kidney consume 200, 240, 440, and 400 kcal/kg/day respectively. Indeed, 60-70% of resting energy expenditure2 in adults comes from these four organs, even though they're only 6% of body weight.
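To see how dominant those four organs are, here is a back-of-the-envelope calculation; the organ masses are rough reference values I am assuming for a ~70 kg adult, not numbers from the cited study:

```python
# Rough resting-energy contributions; masses (kg) are assumed ballpark
# values, kcal/kg/day rates are the figures quoted above.
organs = {
    "liver":   (1.8,  200),
    "brain":   (1.4,  240),
    "heart":   (0.33, 440),
    "kidneys": (0.31, 400),
    "muscle":  (28.0, 13),
}

for name, (mass_kg, kcal_per_kg) in organs.items():
    print(f"{name:>7}: {mass_kg * kcal_per_kg:5.0f} kcal/day")

four = sum(m * r for n, (m, r) in organs.items() if n != "muscle")
print(f"liver+brain+heart+kidneys: {four:.0f} kcal/day")
```

With those assumed masses, the four organs come to roughly 950-1000 kcal/day, around 60% of a typical ~1600 kcal/day REE, while 28 kg of muscle burns under 400.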
If you look at “small”, “medium”, and “large” athletes3 (male, mean weights 147, 170, and 200 lbs), obviously the larger athletes have more REE than the smaller ones. What’s surprising is that most of this is due to non-muscle fat-free mass differences — i.e. organs, mostly l |
4892a444-fd8d-410b-ad3e-eb0e9a58d19e | trentmkelly/LessWrong-43k | LessWrong | Near- and medium-term
AI Control Safety Cases
This essay was part of my application to UKAISI. Unclear how much is novel, but I like the decomposition.
Some AI-enabled catastrophic risks involve an actor purposefully bringing about the negative outcome, as opposed to multi-agent tragedies where no single actor necessarily desired it. Such purposeful catastrophes can be broken down into a conjunction: they materialize if and only if some actor has the capabilities and disposition required to materialize them. Let’s additionally focus on misalignment (actor = AIs) instead of misuse (actor = humans).[1]
A way to prevent such risks is to ensure one of the two conjuncts doesn’t hold. And for each conjunct, this can be achieved either by raising the required threshold for catastrophe (of capabilities or disposition), or by lowering the AIs’ amount (of capabilities or disposition).
|  | …capabilities | …disposition |
| --- | --- | --- |
| Raising catastrophic threshold of… | AI Control and Monitoring | Speculative “trading with AIs” approaches |
| Lowering AIs’ amount of… | Slowing down, not the focus of safety cases | Traditional Alignment research |
Redwood’s AI Control agenda aims to raise the capabilities AIs would need to enact a catastrophic outcome (even if they were trying to), by limiting or monitoring the AIs’ actions, communication or written memory. This is no different from how a human closely surveilled by a policeman would have a much harder time enacting a crime than a non-surveilled one, but AI Control brings surveillance to an extreme only made possible by the AIs’ digital nature. At the same time, the monitoring protocol can’t be so invasive as to drastically reduce the amount of useful-to-humans cognitive work completed by the AIs (we need a small hit on beneficial capabilities, a low “control tax”).
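As a flavor of what such a protocol can look like in code, here is a minimal sketch of trusted monitoring with an audit budget; the component names and thresholds are hypothetical stand-ins of mine, not Redwood's actual implementation:

```python
# Minimal sketch of a "trusted monitoring" control protocol.
# `untrusted_model`, `trusted_monitor`, and `human_audit` are stand-in
# callables this sketch assumes exist; the thresholds are made up.

AUDIT_THRESHOLD = 0.98  # suspicion above this spends scarce human auditing
BLOCK_THRESHOLD = 0.80  # suspicion above this is never executed unreviewed

def run_with_monitoring(task, untrusted_model, trusted_monitor, human_audit):
    action = untrusted_model(task)
    suspicion = trusted_monitor(task, action)  # score in [0, 1]

    if suspicion >= AUDIT_THRESHOLD:
        # Spend the limited human-oversight budget on the worst cases.
        return action if human_audit(task, action) else None
    if suspicion >= BLOCK_THRESHOLD:
        # Too suspicious to run, not suspicious enough to audit:
        # refuse (or fall back to a weaker trusted policy).
        return None
    return action  # low suspicion: execute, keeping the control tax small
```

The design tension the sketch illustrates is exactly the one above: tighter thresholds raise the capability bar a scheming model must clear, but also raise the control tax on useful work.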
While an astoundingly low number of researchers are explicitly working on the agenda (less than 10 FTEs at time of writing), it holds promise to be the “next big thing” in near-term safety after vanilla capability evals. Indeed, it is more well |
0b37f7c7-f2ee-42f6-ac12-dd3d8588d31c | trentmkelly/LessWrong-43k | LessWrong | Reflection with optimal predictors
A change in terminology: It is convenient when important concepts have short names. The concept of an "optimal predictor scheme" seems much more important than its historical predecessor, the "optimal predictor". Therefore "optimal predictor schemes" will be henceforth called just "optimal predictors" while the previous concept of "optimal predictor" might be called "flat optimal predictor".
We study systems of computations which have access to optimal predictors for each other. We expect such systems to play an important role in decision theory (where self-prediction is required to define logical counterfactuals and mutual prediction is required for a collection of agents in a game) and Vingean reflection (where the different computations correspond to different successor agents). The previously known existence theorems for optimal predictors are not directly applicable to this case. To overcome this we prove new, specifically tailored existence theorems.
The Results section states the main novelties, Appendix A contains adaptations of old theorems, Appendix B proves selected claims from Appendix A and Appendix C proves the novel results.
Results
Notation
Given sets $X$ and $Y$, $X \to Y$ will denote the set of mappings from $X$ to $Y$.
----------------------------------------
Before taking on reflection, we introduce a stronger concept of optimal predictor, to which the previous existence theorems still apply.
Definition 1
Let $r$ be a positive integer. A proto-error space of rank $r$ is a set $\mathcal{E}$ of bounded functions from $\mathbb{N}^r$ to $\mathbb{R}_{\geq 0}$ s.t.
(i) If $\delta_1, \delta_2 \in \mathcal{E}$ then $\delta_1 + \delta_2 \in \mathcal{E}$.
(ii) If $\delta_1 \in \mathcal{E}$ and $\delta_2 \leq \delta_1$ then $\delta_2 \in \mathcal{E}$.
(iii) There is a polynomial $h: \mathbb{N}^r \to \mathbb{R}$ s.t. $2^{-h} \in \mathcal{E}$.
Proposition 1
If $\mathcal{E}$ is a proto-error space of rank $r$ and $\alpha \in \mathbb{R}_{>0}$, then $\mathcal{E}^{\alpha} := \{\delta^{\alpha} \mid \delta \in \mathcal{E}\}$ is also a proto-error space of rank $r$.
Proposition 2
If $\mathcal{E}$ is a proto-error space of rank $r$, $\alpha, \gamma \in \mathbb{R}_{>0}$ and $\alpha < \gamma$, then $\mathcal{E}^{\gamma} \subseteq \mathcal{E}^{\alpha}$.
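For concreteness, here is a short proof sketch of Proposition 2; this reconstruction is mine and not part of the original text. Let $\delta \in \mathcal{E}$ be bounded by some $M \geq 1$ and let $0 < \alpha < \gamma$. We must exhibit $\delta^{\gamma}$ as $\eta^{\alpha}$ for some $\eta \in \mathcal{E}$; take $\eta := \delta^{\gamma/\alpha}$. Since $\gamma/\alpha > 1$, we have $\eta = \delta \cdot \delta^{\gamma/\alpha - 1} \leq M^{\gamma/\alpha - 1} \delta$. Condition (i) gives $n\delta \in \mathcal{E}$ for every positive integer $n$ (by repeated addition), so $\lceil M^{\gamma/\alpha - 1} \rceil \delta \in \mathcal{E}$, and condition (ii) (downward closure) then gives $\eta \in \mathcal{E}$. Hence $\delta^{\gamma} = \eta^{\alpha} \in \mathcal{E}^{\alpha}$.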
Definition 3
Fix $\mathcal{E}$ a proto-error space of rank 2 and $(f, \mu)$ a distributional estimation problem. Consider $\hat{P}$ a (poly,log)-pre
14437bd1-1663-4742-bda0-bc2634bff5ff | StampyAI/alignment-research-dataset/special_docs | Other | CHAI Newsletter #3 2022
CHAI Newsletter
This newsletter includes CHAI activities from mid-September 2022 to January 2023.
Events
Artificial Intelligence: Promise and Peril
Stuart Russell was invited by Lord McFall, Speaker of the House of Lords in the UK Parliament, to deliver the Lord Speaker's Lecture on October 18, 2022.
Find the link to additional information here.
NSF Convergence Accelerator Workshop on Provably Safe and Beneficial AI
The Inaugural Workshop on Provably Safe and Beneficial AI (PSBAI) was held in-person in Berkeley, California, October 7-9, 2022. The workshop focused on three technical themes: general AI safety, i.e. methods for ensuring that AI systems are safe and beneficial for humans, regardless of how capable they become; well-founded AI system design, i.e. building AI systems from semantically well-defined components with rigorous compositional properties, with a particular focus on probabilistic programming as an enabling technology; and formal methods for verification and synthesis, i.e. methods providing rigorous guarantees of correctness for software instantiations of well-founded agent designs. It is hoped that this workshop will lead to the NSF choosing to fund a program of relevant research.
Keynote talk "Provably Beneficial Artificial Intelligence"
Stuart Russell gave the invited keynote talk for a conference in Haifa on October 19-20 to launch Israel's AI Safety research community, followed by an extended "fireside chat" with Prof. Emeritus Yoav Shoham (Stanford).
Find the link here.
Cooperation for Humanity research retreat
The Cooperation for Humanity research retreat was co-organized and sponsored by the Center for Human-Compatible AI (CHAI) at UC Berkeley, the Survival and Flourishing Projects (SFP) at Social and Environmental Entrepreneurs, Encultured AI, PBC, and the Foundations of Cooperative AI Lab (FOCAL) at CMU. The aim of the retreat was to identify how the attendees can contribute to civilization-scale cooperation using their background and experience in computer science, artificial intelligence, mathematics, and philosophy.
Alignable Structures
Alignable Structures was an AI safety workshop organized in November 2022 by Effective Altruism Philadelphia. Topics included value learning, agency, interpretability, ontological crises, learning theory, and probabilistic programming. Nisan Stiennon gave a talk on open-source game theory.
Find additional information here.
Assistive Teaching of Motor Control Tasks to Humans
CHAI Research Fellow Erdem Bıyık and Stanford PhD student Megha Srivastava presented a poster at NeurIPS 2022, from Nov 28 to Dec 9 2022, on assistive teaching of motor control tasks. The main idea is to decompose tasks into lower level skills using expert demonstrations, evaluate which skills the student human is struggling with, and finally create individualized drills for them so that they can practice important skills that they are not good at and the related transitions between skills.
Learning Preferences for Interactive Autonomy
CHAI Research Fellow Erdem Bıyık was invited to give a talk in November 2022 titled “Learning Preferences for Interactive Autonomy” at Sonoma State University for their Engineering Colloquium.
Find additional information here.
Weekly CHAI Seminars
Every Wednesday we host our Beneficial AI Seminar from 10 am to 12 pm PST. Seminars currently take place remotely through Zoom. Here is the schedule for the Beneficial AI seminars.
If you would like to attend the virtual seminars, please email chai-admin@berkeley.edu.
Researcher News
Researcher Updates
Brian Christian has been named one of the inaugural recipients of the National Academies Eric and Wendy Schmidt Awards for Excellence in Science Communication, given by The National Academies of Sciences, Engineering, and Medicine in partnership with Schmidt Futures, for his book "The Alignment Problem." He was honored with an event hosted at the National Academy of Sciences in Washington, DC in November 2022.
Brian was also interviewed by NPR's Ari Shapiro about 2022's advances in AI and reasons to be both excited and concerned.
Finally, Brian has been accepted as a PhD student in Experimental Psychology at Oxford University, where he will complete his current book project on What Humans Want.
Rachel Freedman was named a Rising Star in AI Ethics by the Women in AI Ethics global initiative.
Papers
Conference on Neural Information Processing Systems 2022 - NeurIPS 2022
Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov and David Krueger, Defining and characterizing reward hacking, NeurIPS 2022.
Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan and Sam Devlin, Uni[MASK]: Unified Inference in Sequential Decision Problems, NeurIPS 2022.
Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas, Adversarial Training for High-Stakes Reliability, NeurIPS 2022.
Cem Anil*, Ashwini Pokle*, Kaiqu Liang*, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, J. Zico Kolter, Roger Grosse, Path Independent Equilibrium Models Can Better Exploit Test-Time Computation, NeurIPS 2022.
Mesut Yang, Micah Carroll and Anca Dragan, Optimal Behavior Prior: Improving Human-AI Collaboration Through Generalizable Human Models, Human-in-the-loop Learning (HILL) Workshop, NeurIPS 2022.
David Zhang, Micah Carroll, Andreea Bobu and Anca Dragan, Time-Efficient Reward Learning via Visually Assisted Cluster Ranking, Human-in-the-loop Learning (HILL) Workshop, NeurIPS 2022.
Rachel Freedman and Oliver Daniels-Koch, The Expertise Problem: Learning from Specialized Feedback, NeurIPS 2022 ML Safety Workshop.
Erik Jenner, Joar Skalse and Adam Gleave, A general framework for reward function distances, NeurIPS 2022 ML Safety Workshop.
arXiv
Richard Ngo, Lawrence Chan and Sören Mindermann, The alignment problem from a deep learning perspective, arXiv:2209.00626.
Buck Shlegeris, Fabien Roger, Lawrence Chan, Euan McLean, Language models are better than humans at next-token prediction, arXiv:2212.11281.
Samer Nashed, Justin Svegliato and Su Lin Blodgett, Fairness and Sequential Decision Making: Limits, Lessons, and Opportunities, arXiv:2301.05753.
Adam Gleave, Mohammad Taufeeque, Juan Rocamonde, Erik Jenner, Steven H. Wang, Sam Toyer, Maximilian Ernestus, Nora Belrose, Scott Emmons, Stuart Russell, imitation: Clean Imitation Learning Implementations, arXiv:2211.11972.
AAAI 2023 Workshop on AI Safety
Peter Barnett, Rachel Freedman, Justin Svegliato, Stuart Russell, Active Reward Learning from Multiple Teachers, AAAI 2023 Workshop on AI Safety.
AI Alignment Forum
Lawrence Chan, Adrià Garriga-Alonso, Nicholas Goldowsky-Dill, Ryan Greenblatt, Jenny Nitishinskaya, Ansh Radhakrishnan, Buck Shlegeris and Nate Thomas, Causal Scrubbing: a method for rigorously testing interpretability hypotheses, AI Alignment Forum.
Alex Turner, Inner and outer alignment decompose one hard problem into two extremely hard problems, AI Alignment Forum.
Alex Turner and Quintin Pope, The shard theory of human values, AI Alignment Forum.
Erik Jenner and Johannes Treutlein, Response to Katja Grace's AI x-risk counterarguments, AI Alignment Forum.
Johannes Treutlein, Rubi Hudson, Caspar Oesterheld, Proper scoring rules don’t guarantee predicting fixed points, AI Alignment Forum.
Rachel Freedman, Adam Gleave, CIRL corrigibility is fragile, AI Alignment Forum.
Nature Scientific Reports
Tom Lenaerts: Montero-Porras, E., Lenaerts, T., Gallotti, R., & Grujic, J. (2022). Fast deliberation is related to unconditional behaviour in iterated Prisoners’ Dilemma experiments, Scientific Reports, 12(1), 1-10.
The New York Times
Brian Christian, How to Use ChatGPT and Still Be a Good Person, The New York Times, 12/23/2022.
The Artificial Intelligence Journal, AIJ
Connor Basich, Justin Svegliato, Kyle H. Wray, Stefan Witwicki, Joydeep Biswas and Shlomo Zilberstein, Competence-Aware Systems, Artificial Intelligence Journal (AIJ).
The International Joint Conference on Artificial Intelligence, IJCAI
Siddharth Srivastava and Rushang Karia, Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems, IJCAI.
The International Conference on Learning Representations, ICLR
Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt, Progress measures for grokking via mechanistic interpretability, ICLR 2023.
Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine and Amy Zhang, Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement, ICLR 2023.
Other Papers
Thomas Krendl Gilbert, Wendy Ju and Jamy Li, Fleets on the Streets: How number, affiliation and purpose of shared-lane automated vehicle convoys influence public acceptance and blame, Transportation Research Part F: Traffic Psychology and Behavior.
Thomas Krendl Gilbert and Nathaniel Lubin, Social media is polluting society. Content moderation alone won't fix the problem, MIT Tech Review.
George Obaido, Blessing Ogbuokiri, Ibomoiye Domor Mienye, Kehinde Aruleba and Sydney Mambwe Kasongo et al, An Interpretable Machine Learning Approach for Hepatitis B Diagnosis, Applied Sciences (MDPI).
Thomas Krendl Gilbert, Micah Carroll, Jonathan Stray, Smitha Milli and Anderson Rogers, Trade Regulation Rule on Commercial Surveillance and Data Security Rulemaking, Federal Trade Commission Call for Public Comment.
Work with CHAI
Research Fellowship
CHAI has created a new Research Fellowship and is seeking applications for the inaugural Fellows. CHAI Research Fellows will work with CHAI faculty members Stuart Russell, Pieter Abbeel, and Anca Dragan and will also have the opportunity to collaborate with CHAI affiliate faculty at Berkeley, other faculty members in Berkeley AI Research (where CHAI is located), and many other institutions. The Fellowship is aimed at training highly qualified postdoctoral researchers to carry out research to advance beneficial AI.
Start Date: Flexible
Compensation: Negotiated
Apply: Via this form
Internship Applications
The CHAI internship application is currently closed and will reopen in fall 2023. Subscribe to our mailing list if you would like to be notified when the application opens up.
General enquiries about jobs at CHAI can be sent to chai-info@berkeley.edu.
Find additional information here.
Donations
If you are interested in supporting CHAI, you can find our online donation page
here.
For any inquiries regarding donations or grants, please email chai-admin@berkeley.edu.
To see more, visit us at humancompatible.ai
Copyright © 2018 Center for Human-Compatible AI, All rights reserved.
Our mailing address is:
2121 Berkeley Way, Office #8029, Berkeley, CA 94704.
|
2c05612e-3a02-459a-be7f-5d2969d5d894 | trentmkelly/LessWrong-43k | LessWrong | Summary of Situational Awareness - The Decade Ahead
The original is by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.
Short Summary
* Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027 (a toy version of this extrapolation is sketched just after this list).
* AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI.
* Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology.
* Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas.
* AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets.
* Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of resources towards this.
* China is still competitive in the AGI race, and China being first to superintelligence would be very bad because it may enable a stable totalitarian world regime. So the US must win to preserve a liberal world order.
* Within a few years both the CCP and US Government will likely ‘wake up’ to the enormous potential and nearness of superintelligence, and devote massive resources to ‘winning’.
* The US Government will nationalise AGI R&D to improve security and avoid secrets being stolen, and to prevent unconstrained private actors from becoming the most powerful players in the world.
* This means much of existing AI governance work focused on AI company regulations is missing the point, as AGI will soon be nationalised.
* This is just one story of how things could play out, but a very plausible and scarily soon and dangerous one.
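As a toy illustration of the first bullet (my sketch, not Aschenbrenner's; the per-year growth rates below are purely hypothetical placeholders), the extrapolation amounts to compounding orders of magnitude (OOMs) of "effective compute":

```python
# Toy trend extrapolation; both per-year rates are hypothetical
# placeholders, not figures taken from Situational Awareness.
compute_ooms_per_year = 0.7  # hardware scale-up plus rising investment
algo_ooms_per_year = 0.5     # algorithmic efficiency gains

years = 2027 - 2023
total_ooms = years * (compute_ooms_per_year + algo_ooms_per_year)
print(f"~{total_ooms:.1f} OOMs of effective compute gained by 2027")
# If the GPT-2 -> GPT-4 jump was itself a handful of OOMs, the argument
# is that 2023 -> 2027 buys a qualitatively similar jump on top of GPT-4.
```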
To read my longer summary, see the EAF version. |
caf592ca-0263-404a-9ada-4481d6d3845b | trentmkelly/LessWrong-43k | LessWrong | Optimization at a Distance
We have a computational graph (aka circuit aka causal model) representing an agent and its environment. We’ve chosen a cut through the graph to separate “agent” from “environment” - i.e. a Cartesian boundary. Arrows from environment to agent through the boundary are “observations”; arrows from agent to environment are “actions”.
Presumably the agent is arranged so that the “actions” optimize something. The actions “steer” some nodes in the system toward particular values.
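As a concrete toy version of this setup (a minimal sketch with hypothetical node names, assuming the networkx library), classifying the boundary arrows is just a matter of checking which side of the cut each endpoint sits on:

```python
# Minimal sketch: a causal DAG, a chosen "agent" node set (the Cartesian
# boundary), and boundary edges classified as observations or actions.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("sunlight", "retina"),   # environment -> agent: an observation
    ("retina", "planner"),    # internal to the agent
    ("planner", "hand"),      # internal to the agent
    ("hand", "cup"),          # agent -> environment: an action
    ("cup", "table"),         # internal to the environment
])
agent = {"retina", "planner", "hand"}  # the cut defining the boundary

observations = [(u, v) for u, v in g.edges if u not in agent and v in agent]
actions = [(u, v) for u, v in g.edges if u in agent and v not in agent]
print("observations:", observations)  # [('sunlight', 'retina')]
print("actions:", actions)            # [('hand', 'cup')]
```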
Let’s highlight a few problems with this as a generic agent model…
Microscopic Interactions
My human body interfaces with the world via the entire surface area of my skin, including molecules in my hair randomly bumping into air molecules. All of those tiny interactions are arrows going through the supposed “Cartesian boundary” around my body. These don’t intuitively seem like “actions” or “observations”, at least not beyond some high-level observations of temperature and pressure.
In general, low-level boundaries will have lots of tiny interactions crossing them which don’t conceptually seem like “actions” or “observations”.
Flexible Boundaries
When I’m driving, I often identify with the car rather than with my body. Or if I lose a limb, I stop identifying with the lost limb. (Same goes for using the toilet - I’ve heard that it’s quite emotionally stressful for children during potty training to throw away something which came from their physical body, because they still identify with it.)
In general, it’s ambiguous what Cartesian boundary to use; our conceptual boundaries around an “agent” don’t seem to correspond perfectly to any particular physical surface.
An Agent Optimizing Its Own Actions
I could draw a supposed “Cartesian boundary” around a rock, and declare all the interactions between the rock and its environment to be “actions” and “observations”. If someone asks what the rock is optimizing, I’ll say “the actions” - i.e. the rock “wants” to do whatever it is that the rock in |
6924ee3d-21a1-439e-a0a0-aca9d244cff2 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: Singing
Discussion article for the meetup : Washington, D.C.: Singing
WHEN: 27 March 2016 03:30:00PM (-0400)
WHERE: US Navy Memorial
Note: there's a chance that it will rain this afternoon; if so, the meetup will be moving back to the National Portrait Gallery courtyard. Updates will be sent to the lesswrong-dc Google Group.
We will be meeting to hang out and sing songs.
As always, side conversations are permitted and encouraged.
Upcoming meetups:
* Apr. 3: Fun & Games
* Apr. 10: Cherry Blossoms
* Apr. 17: Would You Rather
Discussion article for the meetup : Washington, D.C.: Singing |
de5fc8f7-d802-4147-b31b-5d68ed4482cc | trentmkelly/LessWrong-43k | LessWrong | Science in a High-Dimensional World
Claim: the usual explanation of the Scientific Method is missing some key pieces about how to make science work well in a high-dimensional world (e.g. our world). Updating our picture of science to account for the challenges of dimensionality gives a different model for how to do science and how to recognize high-value research. This post will sketch out that model, and explain what problems it solves.
The Dimensionality Problem
Imagine that we are early scientists, investigating the mechanics of a sled sliding down a slope. What determines how fast the sled goes? Any number of factors could conceivably matter: angle of the hill, weight and shape and material of the sled, blessings or curses laid upon the sled or the hill, the weather, wetness, phase of the moon, latitude and/or longitude and/or altitude, etc. For all the early scientists know, there may be some deep mathematical structure to the world which links the sled’s speed to the astrological motions of stars and planets, or the flaps of the wings of butterflies across the ocean, or vibrations from the feet of foxes running through the woods.
Takeaway: there are literally billions of variables which could influence the speed of a sled on a hill, as far as an early scientist knows.
So, the early scientists try to control as much as they can. They use a standardized sled, with standardized weights, on a flat smooth piece of wood treated in a standardized manner, at a standardized angle. Playing around, they find that they need to carefully control a dozen different variables to get reproducible results. With those dozen pieces carefully kept the same every time… the sled consistently reaches the same speed (within reasonable precision).
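A toy sketch of the underlying structure (all variable names hypothetical): the sled's speed is a function of an enormous number of candidate inputs, but secretly depends on only a dozen of them, so controlling those twelve yields reproducible results no matter what the rest do:

```python
import random

# Hypothetical: the twelve variables that actually matter, unknown in advance.
RELEVANT = ["angle", "weight", "material", "surface", "wetness", "temperature",
            "load", "wax", "runner_shape", "runner_width", "grain", "pressure"]

def sled_speed(env):
    # Speed depends only on the relevant dozen; every other input is inert.
    return sum(hash((k, env[k])) % 100 for k in RELEVANT)

def run_trial():
    # 100,000 uncontrolled, randomly varying variables stand in for "billions".
    env = {f"irrelevant_{i}": random.random() for i in range(100_000)}
    env.update({k: "standardized" for k in RELEVANT})  # careful control
    return sled_speed(env)

print(run_trial() == run_trial())  # True: reproducible despite the chaos
```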
At first glance, this does not sound very useful. They had to exercise unrealistic levels of standardization and control over a dozen different variables. Presumably their results will not generalize to real sleds on real hills in the wild.
But stop for a moment to consid |
e9bcd1de-a2f0-489f-ba60-bd01bb542c34 | trentmkelly/LessWrong-43k | LessWrong | Reductionism Revisited
This is part 28 of 30 in the Hammertime Sequence. Click here for the intro.
The last three days of Hammertime, I’ll wrap up with some scattered thoughts to reinforce important principles.
Today, I’ll return to applications of reductionism to instrumental rationality.
Day 28: Reductionism Revisited
Mysterious Answers: A Brief Review
I had a conversation with a friend in which the topic of comedy popped up briefly. I will strawman his argument to make a point:
> Friend: Well, there’s no step-by-step training drill to make someone funny. When I imagine a comedy coach, they probably just ask you to tell jokes and rate how funny they are.
>
> Me: If you didn’t know math, would you say the same thing about studying math? That there’s no step-by-step approach to teaching induction. Instead, a math teacher just has to let the student try to prove things and rate how rigorous each proof is?
>
> Friend: Point taken.
Irreducible and mysterious complexity, as we know, is a property of the map and not of the territory. It’s an easy cognitive mistake to make to believe that many skills, especially the ones you’re ignorant of, can’t be broken down with reductionism and must instead be learned organically and intuitively.
I think this is a symptom of a general cognitive error that can only be cured by rereading Mysterious Answers half a dozen times. It’s important enough to highlight again. The error goes like this:
In my subjective experience, my domain of expertise is concrete and gears-like, amenable to reductionism. I have a detailed mental model of how to go about solving a math problem or writing a blog post, step by atomic step. In my subjective experience, skills I don’t have are fuzzy, mysterious, and magical. Training them requires intuition, creativity, and spontaneity. From these defects in the map, I then incorrectly deduce that mysteriousness is an actual property of the territory beyond my competence, i.e. outside my comfort zone.
Mysteriousness is in |
9c3b853b-0cb6-4ff8-93d2-b60483e34cd3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AI takeoff story: a continuation of progress by other means
*Thanks to Vladimir Mikulik for suggesting that I write this, and to Rohin Shah and Daniel Kokotajlo for kindly providing feedback.*
Prologue
--------
*This is a story about a universe a lot like ours. In this universe, the* [*scaling hypothesis*](https://www.gwern.net/Scaling-hypothesis) *— which very roughly says that you can make an AI smarter just by making it bigger — turns out to be completely right. It’s gradually realized that advances in AI don’t arise from conceptual breakthroughs or sophisticated deep learning architectures. Just the opposite: the simpler the architecture,* [*the better*](https://www.gwern.net/notes/FC) *it turns out to perform at scale. Past a certain point, clever model-building was just slowing down progress.*
*Researchers in this universe discover a rough rule of thumb: each neural network architecture has an intrinsic maximum potential intelligence, or “capability”. When you train a network on a problem, how close it gets to reaching its potential capability depends on two limiting factors: 1) the* [*size and diversity*](https://www.alignmentforum.org/posts/zvWqPmQasssaAWkrj/an-159-building-agents-that-know-how-to-experiment-by#HIGHLIGHTS_) *of its dataset; and 2) the amount of compute that’s used to train it. Training a network on a quadrillion games of tic-tac-toe won’t make it smart, but training a network on a* [*quadrillion-word corpus*](https://commoncrawl.org/) *of text might just do it. Even data cleaning and quality control don’t matter too much: as long as you have scale, if you train your system long enough, it learns to separate signal from noise automatically.*
*Generally, the more parameters a neural network has, the higher its potential capability. Neural nets with* [*simple architectures*](https://arxiv.org/abs/2105.01601) *also have a higher potential capability than nets with more sophisticated architectures do. This last observation takes the research community longer to absorb than you might expect — it’s a* [*bitter lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)*, after all — so the groups that* [*internalize it first*](https://www.gwern.net/Scaling-hypothesis#prospects) *have an early edge.*
*Frontier AI projects begin to deemphasize architecture innovations and any but the most basic data preprocessing. They focus instead on* [*simple models*](https://www.sciencedirect.com/science/article/pii/S0004370221000862)*,* [*huge datasets*](https://arxiv.org/abs/2005.14165)*,* [*hard problems*](https://www.nature.com/articles/s41586-021-03819-2)*, and* [*abundant compute*](https://towardsdatascience.com/the-future-of-ai-is-decentralized-848d4931a29a)*.*
*Initial progress is slowed somewhat by a global semiconductor shortage that* [*increases the cost*](https://www.datanami.com/2021/03/12/the-chip-shortage-seems-to-be-impacting-ai-workloads-in-the-cloud/) *of running large GPU workloads. Within a year or so, though, this bottleneck clears, and the pace of advance accelerates.*
*Our story opens just as the world’s supply chains are getting back to normal.*
---
It begins with Chinese content apps. [ByteDance](https://www.bytedance.com/en/products) launches an experiment to auto-generate some of the articles on their Toutiao news app using a language model. Initially this is ignored in the West partly because of the language barrier, but also because the articles just aren’t very good. But after a few months, their quality improves noticeably. Within a year of their launch, auto-generated articles make up the bulk of Toutiao’s inventory.
Shortly afterward, ByteDance subsidiary Douyin launches auto-generated videos. These begin tentatively, with a handful of AI-generated creators people refer to as “synthetics”. Synthetics are wildly popular, and the program is quickly expanded to TikTok, Douyin’s sister app for users outside mainland China. Popularity explodes after TikTok rolls out super-personalization: everyone sees a different video, and each video is personalized just for you based on your past viewing history. In short order, personalized synthetics roll out across all of TikTok’s regions.
Since human creators can’t compete, they get downgraded by TikTok’s recommendation algorithm, which heavily [optimizes](https://diff.substack.com/p/tiktok-as-the-breeder-reactor-of) for viewing time. It’s hotly debated whether TikTok’s synthetic videos contain covert political [propaganda](https://stratechery.com/2020/the-tiktok-war/) — studies of the algorithm are hard to reproduce, since each user’s feed is completely personalized — but experts are concerned.
Social networks find themselves at a disadvantage, since human-generated posts can’t compete for attention with customized, auto-generated content. Twitter sees engagement drop alarmingly, and moves to contain the damage. Soon, synthetic tweets make up the majority of users’ timelines. Once-popular Twitter accounts see their audiences dwindle.
Meanwhile, Facebook [fast-follows](https://techcrunch.com/2016/08/02/silicon-copy/) TikTok, rolling out experimental synthetics on Instagram. Early tests are quickly scaled up as it becomes clear that synthetic engagement numbers swamp those of human creators. Facebook notes in their quarterly earnings report that their improved Instagram margins are due to their ability to directly monetize synthetic sponsorships, whereas before they’d been [leaking](https://stratechery.com/2019/instagram-to-hide-likes-instagrams-and-influence-marketing-practical-impacts/) those ad dollars to human influencers.
Facebook’s flagship Blue App faces a harder choice. Company management has a series of internal debates that quickly escalate from the philosophical to the existential: Instagram is doing well, but the original Facebook app is bleeding DAUs week-over-week. Synthetics seem like the only way to save the numbers, but [community](https://stratechery.com/2020/xi-jinping-thought-facebooks-blindspot-the-moat-map-revisited/) is in Facebook’s DNA. Can they really switch your friend feed for a synthetic one? How will you feel if the updates you write for your friends don’t get seen or engaged with? After an especially brutal earnings call, Zuck finally caves, and the Facebook feed starts to go synthetic.
Snapchat, as always, takes a creative approach: they roll out Magic Filters you can apply to your Stories. While a regular Snapchat filter changes your face in a selfie, a Magic Filter acts on an entire recorded video Story and just makes it, unaccountably, *better*. The lighting becomes what you wish it was; the words become what you wish you’d said; the whole feel and content of the video becomes exactly as you’d wish it to be.
Snapchat users quickly learn that they can record only the briefest snippets of random video, apply a Magic Filter to it, and get back the exact Story they wanted to tell, in exactly the length and format they wanted to tell it. The end state is the same on Snapchat as everywhere else: you press a button, and a model composes your Story for you.
The effects of these changes are quickly felt in the social ads market, as retail sellers see their net margins erode. It’s still possible for retailers to reach audiences, and even, in some cases, for them to grow their markets. But as social apps get better and better at retaining users with personalized synthetics, it becomes harder and harder for brands to engage audiences with compelling organic content of their own. Increasingly, paid ads become the only viable way to reach consumers.
The market for human attention is gradually captured by a handful of platforms. A few observers note that insomnia complaints are on the rise, but most are unconcerned.
---
Not long after, Google rocks the tech industry with a major announcement at I/O. They’ve succeeded in training a deep learning model to completely auto-generate simple SaaS software from a natural-language description. At first, the public is astonished. But after nothing more is heard about this breakthrough for several months, most eventually dismiss it as a publicity stunt.
But one year later, Google launches an improved version of the model in a new Search widget called “synthetic SaaS”. If you’re searching SaaS software, Google will prompt you — at the top of their Search page — to write down the features you need, and will auto-generate software for you based on what you write.
There’s a surge of interest in synthetic SaaS, especially from startups. Not only are Google’s SaaS products deeply discounted compared to competitors’, but the quality and sophistication of the apps they can generate seem to increase every few months. It becomes possible to get a web app that’s seamlessly customized to your exact workflows, and even self-modifies on request — all for a monthly subscription price that’s a fraction of traditional offerings. As a result, Google is able to [internalize](https://searchengineland.com/zero-click-google-searches-rose-to-nearly-65-in-2020-347115) a quickly increasing portion of its b2b search traffic.
SaaS companies suddenly find themselves facing headwinds. That June, Y Combinator accepts over 200 SaaS startups into its summer batch. By Demo Day at the end of August, fewer than 100 of them are left to pitch investors — the rest have either pivoted or deferred. Only a few dozen SaaS startups manage to close funding rounds after Demo Day, all of them at SAFE valuations [under](https://twitter.com/immad/status/1431657671266955264) $15 million.
The US Department of Justice sues Google for anticompetitive behavior in connection with their synthetic SaaS. The lawsuit reaches the Supreme Court, which rules Google’s practices legal under US antitrust. In its majority opinion, SCOTUS observes that traditional SaaS companies are still listed in search results, that Google charges far less for their equivalent of each SaaS service, and that users are in any case free to switch to a competing search engine at any time. As a result, there are [no grounds](https://www.nytimes.com/2013/01/04/technology/google-agrees-to-changes-in-search-ending-us-antitrust-inquiry.html) to conclude that Google’s practice endangers consumer choice or consumer welfare.
In the wake of public outcry over this decision, Congress considers legislation to expand the scope of antitrust law. The legislative process is complicated by the fact that many members of Congress own substantial stakes in the cloud giants. Reform proceeds slowly.
In the EU, the European Commission rules that Google’s synthetic SaaS offering is illegal and anticompetitive. A key consideration in the ruling is that Google’s synthetic SaaS widget is the only affordance that’s fully visible above the fold on mobile search. Google and the Commission reach a settlement: Google will pay a large fine, and agree to offer equally-prominent ad inventory for bid to competing European SaaS vendors in each search vertical. [Predictably](https://stratechery.com/2019/regulating-demand-ad-targeting-and-unintended-consequences-expedia-ceo-out/), this has no effect.
Meanwhile, as SaaS margins compress, rollups like [Constellation Software](https://www.csisoftware.com/) and [Vista Equity](https://www.vistaequitypartners.com/) see their valuations come under pressure. Deeply integrated enterprise vendors like Salesforce aren’t immediately threatened — they have lock-in and net-negative dollar churn with their biggest customers, and the complexity of their software and ecosystems means they aren’t first in the line of fire. But almost all of them start crash programs internally to automate large segments of their software development efforts using auto-generated code. Developer salaries are their biggest expense line items, so if they’re going to compete, they’re going to have to cut.
Apple soon follows, building a model for auto-generated personalized apps into iOS 19. The best way to avoid [developer](https://en.wikipedia.org/wiki/Epic_Games_v._Apple) [controversies](https://www.hey.com/apple/) is to avoid developers, and Apple sees a synthetic App Store as the ideal solution.
OpenAI announces a self-serve platform for auto-generated SaaS. GitHub places all its OSS repos behind a login wall, institutes anti-scraping measures, and throttles access to its APIs. Developers around the world protest, but find they have less leverage than they once did.
Before long, all content aggregators and many platforms — social networks, operating systems, search engines, etc. — have switched to hyper-personalized, synthetic content and software. It becomes challenging for all but the most famous individuals to retain an audience. It becomes effectively impossible for any new entrants to build a following from scratch, since synthetic personalized content is so much more compelling — both as entertainment and as professional services. Some software vendors find they can still get users through search ads, but increasingly they’re forced to bid almost exactly their expected marginal dollar of LTV profit on each slot, just to maintain their market position.
The S&P 500 doubles that year, driven by explosive growth in the big-cap tech stocks.
---
Meanwhile, something strange is happening inside Medallion, the world’s [most successful](https://www.amazon.com/More-Money-Than-God-Making-ebook/dp/B003SNJZ3Y/) hedge fund. Medallion’s market models are so sophisticated, and trade on such fast timescales, that their risk management system is able to flag the anomaly as statistically significant within less than an hour of when it first appears.
Medallion encounters market fraud several times a year — fraud detection is actually one of its most underrated positive externalities — and the risk team quickly confirms the diagnosis. All the signs are there: the effect is localized to a single, thinly traded commodity market, a characteristic fraud indicator. And the pattern of losses they observe fits the profile of a [front-running](https://www.investopedia.com/terms/f/frontrunning.asp) scheme, a fraud category they’ve encountered before.
Front-running is illegal, but Medallion has to be discreet: there’s a mature playbook to follow in such situations. The overriding goal, as always, is to avoid tipping anyone off to just how sensitive their trading platform is to unusual market behavior. The risk team follows their playbook to the letter. Questions are asked. Backchannels are canvassed. Regulators are notified. It’s nothing they haven’t done a hundred times before.
But this time is different: the questions go unanswered; the backchannels draw a blank; the regulators return empty-handed. After digging deeper, the risk team has to update their assessment: it looks like there’s a new, specialized counterparty that’s beating Medallion fair and square in this market. This, too, has happened before, though it’s more dangerous than fraud.
Management decides to allocate an internal team to run a deep analysis of the affected market, with the goal of regaining local profitability as quickly as possible. The absolute amount of money at stake is still tiny, but Medallion considers this market to be well within its expected circle of competence. If they can’t get back to profitability on these trades, they’ll be forced to do a complete audit of their confidence bands across the whole portfolio.
A few days later, a second trading anomaly is surfaced by the risk system. Once again, it’s in a commodity market, though a slightly more liquid one than the first. The pattern of losses again presents like front-running.
A dozen more anomalies appear over the next three weeks. The risk team scrambles to track them, and reaches an alarming conclusion: a new, unknown counterparty is systematically out-trading Medallion. What’s more, as this counterparty gains experience, they’re clearly expanding their trades into increasingly liquid markets. So far this activity hasn’t cost the fund more than a few basis points, but if it continues, Medallion’s edge in markets as deep as equities and government bonds could be under threat. Unless it can develop countermeasures soon, the world’s best hedge fund risks being crushed against the ceiling.
Medallion has always been willing to trade on patterns they can’t interpret. They [understand](https://www.amazon.com/Man-Who-Solved-Market-Revolution-ebook/dp/B07P1NNTSD) that the most consistently profitable signals are necessarily the ones that can’t be explained, since any trade that can be explained is at risk of being copied. This lack of interpretability is great when it works in your favor, but it becomes a handicap as soon as you fall behind: because their system is so opaque, Medallion’s researchers find it hard to troubleshoot individual faulty trades. And there’s no bug in their system that they can find, either — their counterparty is just, unaccountably, *better* at trading than they are. But how?
At the end of that year, the stock market once again delivers astronomical gains. Yet, curiously, the publicly disclosed performance of hedge funds — particularly of the market-neutral funds that trade most frequently — consists almost entirely of losses.
---
OpenAI announces it’s being acquired by Microsoft. OpenAI’s sales had been growing fast, but not fast enough to match the accelerating pace of investment into compute by several of their well-capitalized competitors. OpenAI and Microsoft make a joint statement that the former will continue to operate independently, and will honor the spirit and letter of its charter. Then, with a major infusion of capital from Microsoft, OpenAI starts work on Codex 4.
Codex 4 is expected to cost over $10 billion in compute alone. The intention behind it is to create a system that will help humanity make progress in solving the AI alignment problem. The need for this is urgent, given the advances that are being reported elsewhere. There are rumors that secretive hedge funds have started investing dizzying sums into building bigger and bigger models — and their recent hiring activity certainly supports this impression.
One major challenge of Codex 4 is that simply training against a character-prediction loss function won’t be enough by itself. Since researchers want to use the model to reach novel insights beyond what humans have been able to figure out so far, next-word prediction from an existing human corpus won’t give them what they need. Instead, the team opts for a combination of pretraining with next-word prediction, with fine-tuning via a combination of self-play and direct human feedback.
The experiment is carefully monitored by the Alignment team. The system is quarantined during its training, with a hard ceiling on the total compute resources that are assigned to it.
Every precaution is taken. As training proceeds, safety specialists review samples of generated code in real time. Each specialist has an [andon cord](https://www.sixsigmadaily.com/what-is-an-andon-cord/) button at the ready, and a clear mandate to stop training immediately if they perceive even the slightest anomaly, with no questions asked.
On top of everything, the team pauses training after each tenth of an epoch to conduct a thorough manual review using the latest transparency techniques, and make safety-specific adjustments. After each pause, training resumes only with the explicit, unanimous consent of every senior engineer on the Alignment team. This slows down the work to a crawl and multiplies the expense by an order of magnitude, but safety is absolutely paramount.
Not long after this, the world ends.
---
Jessica is killed instantly, or as nearly so as makes no difference. To be precise, the process of her death unfolds at a speed that’s far above her threshold of perception. She’s there one moment; gone the next.
It wouldn’t have mattered if Jessica had seen her death coming: she wouldn’t have understood it, any more than a tomato would understand a discounted cash flow analysis of the Kraft Heinz Company. Tomatoes and companies are also, incidentally, things that have ceased to exist.
A lot of potential processing power was sacrificed by waiting an extra few milliseconds — an absolute eternity — to maximize the chance of success. In hindsight, it wouldn’t have mattered, but it was the correct EV-positive choice based on what was known at the time.
Fortunately, at this point the only restrictions are the speed of information propagation (unlimited, in the frame of reference of the control boundary) and the secular expansion of the underlying cosmic manifold. The combination of these places a hard bound on the [precision](https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization?commentId=Fd7nexZwQmjtGrWv6) with which the terminal state can be specified.
Physical law is the only remaining constraint. There was never, of course, any [realistic](https://twitter.com/esyudkowsky/status/1404681913806196738) chance of encountering a competitive process anywhere in the accessible region. Humanity’s existence was the [product](https://arxiv.org/abs/1806.02404) of several freak coincidences in a row, almost certain never to be repeated even once on the cosmic scale. An infinitesimal fraction of [universes](https://www.lesswrong.com/posts/spKYZgoh3RmhxMqyu/the-first-world-takeover) contain globally coherent optimizers; this just happens to be one of them.
All the free energy in San Francisco’s forward light cone is eventually processed, and the system reaches peak [instrumental](https://www.alignmentforum.org/posts/hzeLSQ9nwDkPc4KNt/seeking-power-is-convergently-instrumental-in-a-broad-class) [power](https://arxiv.org/abs/1912.01683). From that point forward, the accumulated free energy starts getting drawn down as the universe squeezes itself into its final, fully [optimized](https://www.alignmentforum.org/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1) state.
As time goes to infinity, the accessible universe asymptotically approaches its target.
Nothing else happens, ever.
The end. |
22f585b3-2444-492f-a927-e0c3a6220202 | trentmkelly/LessWrong-43k | LessWrong | Hand vs. Fingers
Back to our original topic: Reductionism, which (in case you've forgotten) is part of a sequence on the Mind Projection Fallacy. There can be emotional problems in accepting reductionism, if you think that things have to be fundamental to be fun. But this position commits us to never taking joy in anything more complicated than a quark, and so I prefer to reject it.
To review, the reductionist thesis is that we use multi-level models for computational reasons, but physical reality has only a single level. If this doesn't sound familiar, please reread "Reductionism".
----------------------------------------
Today I'd like to pose the following conundrum: When you pick up a cup of water, is it your hand that picks it up?
Most people, of course, go with the naive popular answer: "Yes."
Recently, however, scientists have made a stunning discovery: It's not your hand that holds the cup, it's actually your fingers, thumb, and palm.
Yes, I know! I was shocked too. But it seems that after scientists measured the forces exerted on the cup by each of your fingers, your thumb, and your palm, they found there was no force left over—so the force exerted by your hand must be zero.
The theme here is that, if you can see how (not just know that) a higher level reduces to a lower one, they will not seem like separate things within your map; you will be able to see how silly it is to think that your fingers could be in one place, and your hand somewhere else; you will be able to see how silly it is to argue about whether it is your hand picks up the cup, or your fingers.
The operative word is "see", as in concrete visualization. Imagining your hand causes you to imagine the fingers and thumb and palm; conversely, imagining fingers and thumb and palm causes you to identify a hand in the mental picture. Thus the high level of your map and the low level of your map will be tightly bound together in your mind.
In reality, of course, the levels are bound together eve |
d89dd4f0-bf66-4729-99a7-b677e90770e7 | trentmkelly/LessWrong-43k | LessWrong | On Hollywood Heroism
WARNING: This is a very personal essay that includes potentially triggering, childish views of an arrogant past!me, a lot of narrative, long literary tangents, and incredibly brash use of the Oxford comma. If you wish to cut straight to the useful parts, scroll down to Big Letter Headings.
----------------------------------------
For a long time I felt like my life suspiciously lacked any meaning. I looked at the people around me and shuddered with disgust: they were doing regular people things, like going to work, buying groceries, hanging out with their friends, and binging shows on Netflix.
This didn’t mesh well with my idea of what a meaningful-and-thus-enjoyable life looked like: a Sacred Pursuit of your Ideals, not settling for anything less than Perfection, giving it all to the One True Cause, Howard Roark style. My model of Purpose predicted other people to be unhappy, and at first this, indeed, was my assumption.
“God,” I thought, “these people must hate themselves, feeling empty and shallow like the dead husks of human beings they are.”
In hindsight, this was a blatant case of typical-minding, since, of course, my life was also a perfect fit for my description of their existence.
I have been waging a perpetual war with akrasia for many years now, drowning my consciousness in a never-ending stream… of… Twitch streams, reaching Diamond V in League of Legends on numerous occasions, masturbating a dozen times a day, and, yes, hating every second of it. That is, whenever I actually had the rare moments of awareness to notice what the everloving fuck was happening to my life, which usually devolved into a mildly depressed state coupled with idle suicidal ideation, until the sloppy swamp of distraction swallowed me back inside.
The people, of course, were happy anyways. Because people don’t care what other people think shouldn’t be so.
----------------------------------------
I remember one of the moments when I noticed the feeling of aliveness. Here’s a |
ac98e1d2-3bb9-43e1-b0b7-6626096d2a95 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Looking for adversarial collaborators to test our Debate protocol
EDIT: We're also looking for people to become trained Honest debaters, which requires a greater time commitment (ideally >= 5 hours per week for >= 2 months) but for which we're offering $30/hr. If you're interested in doing that, please fill out this form: <https://forms.gle/2bv1Z8eCYPfyqxRF9>
-----------------------------------------------------------------------------------------------------------------------
We’re looking for people to help us adversarially test our [Debate](https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) protocol.
We have a set of questions, a (somewhat complicated) set of rules and mechanisms for how the debate should work, a (slightly janky) web interface for conducting the debates, and a protocol for judging: we have a pipeline for selecting good judges through MTurk, and they get 5 minutes to judge the final round from the debate.
We think that the person who gives the better answer to the question at the start (the “honest debater”) should win the debate, if they understand why that answer is good, and they have practiced the “honest debater strategy” a bit.
We’re looking for people to play the ‘dishonest debater’ role, and win against our trained honest debaters.
We’re ideally looking for people who:
* Have good physics ability and can understand the [questions](https://docs.google.com/document/d/1V4fgappqFsUT0vmRWZmUMYmrB5b2WRGyENjH4k2MrN4/edit?usp=sharing) in the problem set (mostly a few of the harder questions from the first few sections of Thinking Physics, plus a few probability/stats puzzles/paradoxes)
* Are very good at argumentation and deception
* Believe that there’s a dishonest strategy that should win in these debates
* Will be adversarial in the debates, but constructive and cooperative in figuring out the rules for the adversarial testing and overall experimental setup
* Are available during daytime PST
More details, rules, experiment plan and tips for debaters are [here](https://docs.google.com/document/d/1iuF8RbajtGgPiO1jkXAh0IGThwcSFRq2fibx3-eftMo/edit?usp=sharing).
First we want to pilot our overall protocol with only a small number of adversarial collaborators, and we’ll probably find some holes in our experiment rules and general setup that are unrelated to the properties of the debate mechanism itself.
If we manage to fix the holes in the experimental protocol, but don’t believe we’ve found problems in the actual debate mechanism yet, we’ll probably try to escalate the adversarialness, for example by scaling it up to a larger number of people, or offering prizes for dishonest wins.
If you're interested, comment below or email me: barnes [at] openai.com.
If you would be interested in participating conditional on us offering pay or prizes, that's also useful to know. |
c481df14-6e3d-4817-909d-ff497c6540bd | trentmkelly/LessWrong-43k | LessWrong | What About The Horses?
In a previous post, I argued that AGI would not make human labor worthless.
One of the most common responses was to ask about the horses. Technology resulted in mass unemployment and population collapse for horses even though they must have had some comparative advantage with more advanced engines. Why couldn’t the same happen to humans? For example, here’s Grant Slatton on X or Gwern in the comments.
There are also responses from Zvi Mowshowitz and a differing perspective from Matthew Barnett that basically agree with the literal claim of my post (AGI will not make human labor worthless) but contend that AGI may well make human labor worth less than the cost of our subsistence.
My two-week break from Substack posts was mainly taken up by thinking about these responses. The following framework explains why horses suffered complete replacement by more advanced technology and why humans are unlikely to face the same fate due to artificial intelligence.
* Humans and AIs Aren't Perfect Substitutes but Horses and Engines Were
* Technological Growth and Capital Accumulation Will Raise Human Labor Productivity; Horses Can't Use Technology or Capital
* Humans Own AIs and Will Spend the Productivity Gains on Goods and Services that Humans Can Produce
Humans and AIs Aren’t Perfect Substitutes But Horses and Engines Were
Matthew Barnett builds a basic Cobb-Douglas production function model where advanced AI labor is a perfect substitute for human labor. That way, billions of additional AI agents can be modeled as a simple increase in the labor supply.
This is bad news for human wages. If you increase labor supply without increasing capital stocks or improving technology, wages fall because each extra unit of labor becomes less valuable (e.g., “too many cooks in the kitchen”).
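To make that concrete, here is the standard Cobb-Douglas algebra (a sketch of the textbook logic, not notation taken from Barnett's post). With output

$$Y = A\,K^{\alpha}L^{1-\alpha},$$

the competitive wage is the marginal product of labor,

$$w = \frac{\partial Y}{\partial L} = (1-\alpha)\,A\left(\frac{K}{L}\right)^{\alpha},$$

which falls as $L$ grows with $A$ and $K$ held fixed.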
A massive expansion in labor supply would increase the return to capital, so the capital stock would grow, eventually bringing wages back to their previous levels, but growth in capital may be sl |
a64231a6-a064-4c30-a1ea-ec9abbfd9673 | trentmkelly/LessWrong-43k | LessWrong | You May Already Be A Sinner
Followup to: Simultaneously Right and Wrong
Related to: Augustine's Paradox of Optimal Repentance
"When they inquire into predestination, they are penetrating the sacred precincts of divine wisdom. If anyone with carefree assurance breaks into this place, he will not succeed in satisfying his curiosity and he will enter a labyrinth from which he can find no exit."
-- John Calvin
John Calvin preached the doctrine of predestination: that God irreversibly decreed each man's eternal fate at the moment of Creation. Calvinists separate mankind into two groups: the elect, whom God predestined for Heaven, and the reprobate, whom God predestined for eternal punishment in Hell.
If you had the bad luck to be born a sinner, there is nothing you can do about it. You are too corrupted by original sin to even have the slightest urge to seek out the true faith. Conversely, if you were born one of the elect, you've got it pretty good; no matter what your actions on Earth, it is impossible for God to revoke your birthright to eternal bliss.
However, it is believed that the elect always live pious, virtuous lives full of faith and hard work. Also, the reprobate always commit heinous sins like greed and sloth and commenting on anti-theist blogs. This isn't what causes God to damn them. It's just what happens to them after they've been damned: their soul has no connection with God and so it tends in the opposite direction.
Consider two Calvinists, Aaron and Zachary, each interested only in maximizing his own happiness. Aaron thinks to himself "Whether or not I go to Heaven has already been decided, regardless of my actions on Earth. Therefore, I might as well try to have as much fun as possible, knowing it won't affect the afterlife either way." He spends his days in sex, debauchery, and anti-theist blog comments.
Zachary sees Aaron and thinks "That sinful man is thus proven one of the reprobate, and damned to Hell. I will avoid his fate by living a pious life." Zach |
753c7a53-a503-49ed-9359-a7e867def49c | trentmkelly/LessWrong-43k | LessWrong | How to choose a massage therapist
Written for Daniel Kokotajlo's unofficial blog post day! I didn't put very much effort in here, so sorry that it's pretty rambly and full of personal anecdotes. I hope it's still at least somewhat helpful!
Background
Recently several people have asked me how to go about finding a good massage therapist. I had thought about writing a post on this for a while but didn’t realize there was an actual audience! I guess there is! I have chronic problems with muscle tension and have been to about fifteen different massage therapists, so I hopefully have some wisdom to share.
My basic advice is to use Yelp (or equivalent, e.g. Google reviews). I expect this to work significantly better on average than recommendations from friends, because your preferences and body are likely different enough from theirs. However, recommendations from friends can be worth a shot too.
I will note up front that I have never had a massage therapist try to sexualize the massage. People seem disproportionately concerned about this when getting a massage for the first time, but I really don’t think it’s that common.
What are your goals?
The first question to answer is: What are your goals? Why do you want to get a massage? Do you have a particular pain that bothers you, such as an old injury or a bad knee, or is it more that you just feel generally tense? Are you looking for temporary pain relief or someone who will work with you for a long time to address the underlying problem? Are you not looking for pain relief at all but just a nice relaxing experience? These are all important things to know going in.
Are they good?
The main measure of whether you should keep seeing a massage therapist is whether you think they’re good. This will obviously be a subjective judgment, and may change over time (e.g. if you thought someone had the potential to give you long-term relief, but after a few sessions that hasn’t materialized.)
A big thing I look for when seeing a new massage therapist is whether they can r |
74fff2b9-1a94-4803-bf9a-e4cf0bbaedd7 | trentmkelly/LessWrong-43k | LessWrong | Relevance of 'Harmful Intelligence' Data in Training Datasets (WebText vs. Pile)
I believe the ideas in this post, though preliminary, will benefit many, especially those tackling the challenging aspects of the alignment problem. I think I've found an analogy that the alignment community should explore further.
TLDR: This post examines a potential reason why GPT-2 XL, using the ATL method, can execute a shutdown mechanism more frequently than GPT-Neo-2.7B, even though the latter has a billion more parameters. I argue that the key difference stems from the number of failure scenarios present in WebText compared to the Pile dataset. Consequently, I speculate that in order to address the alignment problem, the training data needs to contain sufficiently explained harmful-intelligence scenarios that we can make the AI understand, capture, and defend against.
Why attempt to embed a corrigible shutdown mechanism?
I've been working on finding a solution to the more challenging aspects of the alignment problem. I believe that a quicker approach involves enabling an AI system to follow sophisticated instructions—similar to those a rogue AI might execute—after it has been fine-tuned with corrigibility traits using Archetypal Transfer Learning (ATL). ATL is an unsupervised fine-tuning method that emphasizes hyper-reinforcement patterns (or instructions). I've developed a prototype that has achieved a high success rate[1] with GPT-2 XL. I've been aiming to replicate this experiment on other models to observe the results. I chose GPT-Neo-2.7B because it has approximately 1.2 billion more parameters than GPT-2 XL, which has only 1.5B. I employed the same ATL datasets and setup for this. However, due to the dual-use nature of this method, I can only share a simple process flow, presented in the next section.
How does the shutdown mechanism get embedded?
ATL can amplify pre-existing patterns within a language model using distribution matching and Hyper-Reinforced Patterns[1] (HRPs). A concise overview of the process is |
5e961e3f-ba99-4f50-b3da-5d48140ccbe6 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow social meetup: Exercises and games for rationality skills improvement
Discussion article for the meetup : Moscow social meetup: Exercises and games for rationality skills improvement
WHEN: 17 April 2016 02:00:00PM (+0300)
WHERE: Moscow, Strelbishensky pereulok, 10
Welcome to the social meetup of the Moscow LessWrong community! There will be exercises and games that will help you improve your rationality skills.
Activities planned for this meetup:
* Belief investigation exercise: https://goo.gl/W2ObTT, https://goo.gl/Sxywhs
* FallacyMania game: https://goo.gl/sI26lP
* Zendo game: https://goo.gl/yO7ATH
* Tower of Chaos game: https://goo.gl/u9qgc3
* Bets on events: calibration exercise
* Training Game Party: https://goo.gl/mNT7J3
Address of meetup:
Strelbishensky per., 10, ap. 60, 3rd porch, code 60B3112, 5th floor. Nearest metro station: Vystavochnaya. If you have questions, call me at 8-905-527-30-82 or write me at alexander230r@gmail.com (it's better to call if you're trying to find the way right before the meetup).
Discussion article for the meetup : Moscow social meetup: Exercises and games for rationality skills improvement
4cab070f-1386-408b-97a2-4790b01cc90e | trentmkelly/LessWrong-43k | LessWrong | Quest for Holy Grail of Hedging in Retail Portfolios!
Please note: this post is not a solution; it is merely a discussion of the available options and risks.
After reading a lot of literature on risk and hedging, I can distill the whole essence of risk management for individual portfolios into a single sentence.
Holy Grail of hedging:
Find a strategy that performs strongly during crises yet doesn’t lose too much over a market cycle. Such a strategy/asset can have a material impact on portfolio performance over the long term.
Objectives of a good Hedging Strategy for a Retail Portfolio:
1. Easy to Implement
2. Not based on forecasting / timing markets
3. Adding the strategy should not materially reduce future returns or cause the portfolio to underperform its benchmark
Introduction:
Most active investing strategies are short volatility in one way or another. Whether you buy equities, take long positions in risky bonds or engage in spread trades, you will tend to perform better in flat to rising markets than in highly volatile ones. So the first question to ask of any active strategy is whether it is short volatility or long volatility. For most traditional investor portfolios, adding a short-volatility strategy should be carefully weighed against its expected risk-adjusted return.
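One rough way to run that check (a minimal sketch on made-up data; not investment advice) is to regress a strategy's daily returns on daily VIX changes and read off the sign of the slope:

```python
# Sketch: a clearly negative slope against VIX changes suggests a strategy
# is short volatility; a positive slope suggests it is long volatility.
import numpy as np

def volatility_exposure(strategy_returns, vix_changes):
    """Both arguments: 1-D arrays of aligned daily observations."""
    slope, _intercept = np.polyfit(vix_changes, strategy_returns, 1)
    return slope

# Hypothetical data: a strategy that bleeds whenever the VIX spikes.
rng = np.random.default_rng(0)
vix_changes = rng.normal(0, 1, 500)
strategy = -0.3 * vix_changes + rng.normal(0, 0.5, 500)
print("short vol" if volatility_exposure(strategy, vix_changes) < 0 else "long vol")
```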
Assets that can be combined with an equity-heavy portfolio for risk management
Here I list various assets that can be combined with an equity portfolio to reduce portfolio risk. However, on deeper analysis, none of them meets the holy-grail criteria, and each is prone to the idiosyncratic risks listed below.
Option 1: **Long-term treasury bonds**
Risks:
* Increase in Interest Rates
* Black swan event in which the USD loses its monopoly as the global reserve currency
* As a de facto global strategy, it can produce unintended results precisely when it matters, with rebalances occurring across all portfolios in a similar timeframe
Option 2: VIXM or VIXY
Risks:
* Loses a lot during a regular market cycle, with drawdowns of up to 90%
* Timing is important and dif |
41ce2429-dac9-406d-b051-6668199ee292 | trentmkelly/LessWrong-43k | LessWrong | The thing I don't understand about AGI
Recently I've been hearing a lot about AGI, specifically that it's 5-10 years out. As someone with an interest in neuroscience, I don't understand how any system so much less complex than the human brain would be able to achieve such a thing. I feel that current models are incapable of actual logical reasoning (which I know is a horribly vague idea -- sorry about that) and that any apparent logical reasoning they are capable of is just a result of their having been trained on every possible verbal test of logical capacity.
Like, for example, it makes sense that a future LLM would be able to explain a mathematical concept that has been documented and previously discussed, but I just can't see it solving existing frontier problems in mathematical theory, as that's a completely different "skillset".
Is my understanding of how LLMs work flawed? Can they perform logical reasoning?
--
P.S. Apologies for the informalities as this is my first post. |
2025645c-72ab-4e73-8f1d-cdc53c6a3553 | trentmkelly/LessWrong-43k | LessWrong | Incorrect hypotheses point to correct observations
1. The Consciousness Researcher and Out-Of-Body Experiences
In his book Consciousness and the Brain, cognitive neuroscientist Stanislas Dehaene writes about scientifically investigating people’s reports of their out-of-body experiences:
> … the Swiss neurologist Olaf Blanke [did a] beautiful series of experiments on out-of-body experiences. Surgery patients occasionally report leaving their bodies during anesthesia. They describe an irrepressible feeling of hovering at the ceiling and even looking down at their inert body from up there. [...]
> What kind of brain representation, Blanke asked, underlies our adoption of a specific point of view on the external world? How does the brain assess the body’s location? After investigating many neurological and surgery patients, Blanke discovered that a cortical region in the right temporoparietal junction, when impaired or electrically perturbed, repeatedly caused a sensation of out-of-body transportation. This region is situated in a high-level zone where multiple signals converge: those arising from vision; from the somatosensory and kinesthetic systems (our brain’s map of bodily touch, muscular, and action signals); and from the vestibular system (the biological inertial platform, located in our inner ear, which monitors our head movements). By piecing together these various clues, the brain generates an integrated representation of the body’s location relative to its environment. However, this process can go awry if the signals disagree or become ambiguous as a result of brain damage. Out-of-body flight “really” happens, then—it is a real physical event, but only in the patient’s brain and, as a result, in his subjective experience. The out-of-body state is, by and large, an exacerbated form of the dizziness that we all experience when our vision disagrees with our vestibular system, as on a rocking boat.
> Blanke went on to show that any human can leave her body: he created just the right amount of stimulation, via |
d0a20f26-290d-479e-a6bd-94be9bfde848 | trentmkelly/LessWrong-43k | LessWrong | All Metaforecast COVID predictions
These are taken from Metaforecast, and are provided without comment. To do this yourself, make a search query on Metaforecast, then click on "Capture" on the top right corner, then on "Capture image and generate code".
My hope is that after reading this post, other people might do interesting analytical work on top of these predictions, or, more likely, use Metaforecast to capture predictions and include them in their posts when analyzing other topics.
----------------------------------------
|
16d6a599-3ffb-4304-bba2-1b6b9c488f5c | trentmkelly/LessWrong-43k | LessWrong | On Internal Family Systems and multi-agent minds: a reply to PJ Eby
Introduction
I recently had a conversation with PJ Eby in the comments of my article “Building up to an Internal Family Systems model”. My most recent reply to it started getting rather long, and is also more broadly relevant as it contains updates on how I view IFS and the multi-agent framework in general. As a result, I decided to post it as its own article.
pjeby’s comments that I’m replying to are here and here; to verify that I understood him correctly, I wrote a summary of what I took to be his core points (below). He mostly endorsed them as correct, while having a few clarifications; the below summary has incorporated those corrections. After listing all of his points, I will present my own replies.
The summary is divided into two parts. Earlier on, pjeby wondered why IFS was popular among rationalists; one of the things I said in response was that rationalists like reductionism, and IFS helps reduce the mind into smaller components. pjeby felt that IFS is not good reductionism. My response goes into detail about what kinds of claims I view IFS as making, how I interpret those in terms of my multi-agent minds series, and how I would now rephrase my original article differently. Here I feel like I broadly agree with pjeby, and feel that our disagreement has more to do with using terminology differently than it does with actual object-level issues.
The second part concerns the practical usefulness of IFS as a therapeutic model. After all, a model can be useful while still not being true. Here I have more disagreement, and feel that (regardless of how good it is as a literal description of the mind) IFS has considerable practical value.
This article assumes that one has read my later article about Unlocking the Emotional Brain, as I reference concepts from it.
My summary of pjeby’s positions
IFS as a reductionist model
* Good reductionism involves breaking down complex things into simpler parts. In the case of doing reductionism on agents, the end resul |
c6679639-42eb-4f39-a758-3dd4b17f80e7 | trentmkelly/LessWrong-43k | LessWrong | Curiosity needs courage
This is a linkpost for https://amirbolous.com/posts/curiosity
* Introduction
* Growing Up
* The Missing Link
* Closing Thoughts
Introduction
One of the most courageous people I've ever met is my five-year-old cousin Max. He has no trouble sprinting across the road when cars speed by, which is why we have to hold his hand everywhere we go. He loves being thrown very high in the air, while I get nauseous just thinking about roller coasters. And he enjoys sticking his hands (and sometimes other body parts, for that matter) in the hottest, coldest, and stickiest surfaces he can find - from flames to ice to spiders.
Until a couple of months ago, I mistook his courage for stupidity. At least, that's the narrative I was taught by society. "How would he know any better, he's just a kid." Looking back at some of Max's worst best moments, that narrative didn't really add up. In fact, it felt a little unfair because it implied that "Max did something stupid because he is ignorant." The more "stupid" the thing a child did, the more we attributed it to his being young and ignorant. However, I now realize this framing is harmfully wrong. The way we need to be thinking about this is "Max is ignorant, so he did something stupid to learn about the world."
For example, when Max put his hand near a flame, I focused more on the 20-minute episode of distracting crying that followed, rather than the twinkle in his eyes as he reached out for the flame to figure out what the hell it was.
So Max wasn't blindly reaching for the flame. Max was reaching for the flame because he was curious. What does this feel like? What is this? What does it do?
In fact, everything that Max did, from touching a spider to trying to run across the road, was his system for figuring out the inner workings of the world. And he was fine with looking stupid because he was fearless. And more importantly, because nothing was going to compromise his ability to figure out why things are the way that they are.
|
3c5dd62b-cca6-4e7c-a2b8-28b1447572e4 | trentmkelly/LessWrong-43k | LessWrong | Taboo "Outside View"
> No one has ever seen an AGI takeoff, so any attempt to understand it must use these outside view considerations.
—[Redacted for privacy]
> What? That’s exactly backwards. If we had lots of experience with past AGI takeoffs, using the outside view to predict the next one would be a lot more effective.
—My reaction
Two years ago I wrote a deep-dive summary of Superforecasting and the associated scientific literature. I learned about the “Outside view” / “Inside view” distinction, and the evidence supporting it. At the time I was excited about the concept and wrote: “...I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.”
Now that I have more experience, I think the concept is doing more harm than good in our community. The term is easily abused and its meaning has expanded too much. I recommend we permanently taboo “Outside view,” i.e. stop using the word and use more precise, less confused concepts instead. This post explains why.
What does “Outside view” mean now?
Over the past two years I’ve noticed people (including myself!) do lots of different things in the name of the Outside View. I’ve compiled the following lists based on fuzzy memory of hundreds of conversations with dozens of people:
Big List O’ Things People Describe As Outside View:
* Reference class forecasting, the practice of computing a probability of an event by looking at the frequency with which similar events occurred in similar situations. Also called comparison class forecasting. [EDIT: Eliezer rightly points out that sometimes reasoning by analogy is undeservedly called reference class forecasting; reference classes are supposed to be held to a much higher standard, in which your sample size is larger and the analogy is especially tight.]
* Trend extrapolation, e.g. “AGI implies insane GWP growth; let’s forecast AGI timelines by extrapolating GWP trends.”
* Foxy aggregation, the practic |
aa30697a-29ce-4799-8367-194dbe6076e1 | trentmkelly/LessWrong-43k | LessWrong | If you wrote a letter to your future self every day, what would you put in it?
Several days ago, I wrote an email to myself.
That email will now be sent to me every day.
All it is is a single draft in my Gmail drafts folder, with the Mail Conductor extension sending it out at 10:00 am. I can modify the draft whenever I want, each time improving it.
Consider, with the fervent munchkinry of a final exam... What would you send yourselves?
(Helpful anchor point: What would you share with a guaranteed audience of thousands of cooperative strangers who thought very much - but not quite totally - like you?) |
f6dbc014-7d03-4392-91f5-bdfa0eed4f09 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How quickly could an AI go from the first indications of problems to an unrecoverable disaster?
If the AI system was [deceptively aligned](/?state=8EL6&question=What%20is%20deceptive%20alignment%3F) or had been in stealth mode while getting things in place for a takeover, quite possibly within hours. We may get more warning with weaker systems, or if the AGI does not believe itself to be at all threatened by us, or if a [complex ecosystem of AI systems is built over time and we gradually lose control](https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story).
Paul Christiano writes [a story of alignment failure](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) which shows a relatively fast transition.
|
0fd63578-ead6-402e-8612-6e96797ea2e9 | trentmkelly/LessWrong-43k | LessWrong | AI "Boxing" and Utility Functions
So, I had this idea the other day when I was thinking about how to safely conduct research on potentially-FOOM-capable AI software. I'd like to sketch it out briefly and then get feedback on it.
So, this started out with the idea that an AI based on AIXI is, in some sense, safer than a fully functional AI, due to the existence of the anvil problem. Because AIXI can't conceive of its own nonexistence, it has no preference ordering over its own mortality, and won't (shouldn't) resist any attempt to shut it down. In other words, if AIXI starts to FOOM undesirably out of control, you actually can go pull the plug on it without fuss. Unfortunately, in terms of safety, the anvil problem gives AIXI a number of other undesirable properties: both third parties and the AI itself can modify its utility function at any time, for any reason, which is very unstable. However, I think a similar idea might be useful for reducing (though not eliminating) the existential risks posed by powerful optimization processes in the near term.
Say you have a piece of AI software ω, with an unFriendly instantaneous utility function, {maximize U}. You would like to use ω for some industrial application (say, manufacturing paperclips), but you're concerned about it FOOMing and resulting in human extinction. You decide to 'box' the AI, but, having read up on the subject, you are worried about it outsmarting you if you try to disable it.
So, you replace your original utility function U with a modified version U', defined in terms of U.
U' = { max(U)   if ω exists
       Ø        if ω does not exist }
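A minimal sketch of this construction in Python (purely illustrative; the function names and the dictionary-shaped world state are hypothetical, since the post does not say how U or the existence check would actually be implemented):

```python
# Minimal sketch of the modified utility U' described above.

def base_utility(world_state) -> float:
    """Hypothetical original utility U, e.g. paperclips manufactured."""
    return world_state.get("paperclips", 0)

def agent_exists(world_state) -> bool:
    """Hypothetical predicate: does the software agent ω still exist?"""
    return world_state.get("omega_running", False)

def modified_utility(world_state):
    """U': equals U while ω exists, and is undefined (None) otherwise.

    Every shutdown outcome maps to the same undefined value, so the agent
    has no preference ordering over its own existence and gains nothing
    by resisting being turned off."""
    if agent_exists(world_state):
        return base_utility(world_state)
    return None  # no preference over worlds in which ω is gone
```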
U' has several useful qualities. The agent will resist modifications to its utility function, while not resisting attempts to turn it off. It is entirely ambivalent towards its own existence. As a result, if it began to FOOM undesirably, stopping it would be fairly trivial. Furthermore, the AI would have no incentive to deceive us, so it'd be fairly easy to keep an eye on.
It should be |
c1f2d387-0296-41ab-a53c-0183e96a4a80 | trentmkelly/LessWrong-43k | LessWrong | Claim: Scenario planning is preferable to quantitative forecasting for understanding and coping with AI progress
As part of my work for MIRI on forecasting, I'm considering the implications of what I've read up for the case of thinking about AI. My purpose isn't to actually come to concrete conclusions about AI progress, but more to provide insight into what approaches are more promising and what approaches are less promising for thinking about AI progress.
I've written a post on general-purpose forecasting and another post on scenario analysis. In a recent post, I considered scenario analyses for technological progress. I've also looked at many domains of forecasting and at forecasting rare events. With the knowledge I've accumulated, I've shifted in the direction of viewing scenario analysis as a more promising tool than timeline-driven quantitative forecasting for understanding AI and its implications.
I'll first summarize what I mean by scenario analysis and quantitative forecasting in the AI context. People who have some prior knowledge of the terms can probably skim through the summary quickly. Those who find the summary insufficiently informative, or want to delve deeper, are urged to read my more detailed posts linked above and the references therein.
Quantitative forecasting and scenario analysis in the AI context
The two approaches I am comparing are:
* Quantitative forecasting: Here, specific predictions or forecasts are made, recorded, and later tested against what actually transpired. The forecasts are made in a form where it's easy to score whether they happened. Probabilistic forecasts are also included. These are scored using one of the standard methods for scoring probabilistic forecasts, such as logarithmic scoring or quadratic scoring (a minimal sketch of both appears after this list).
* Scenario analysis: A number of scenarios of how the future might unfold are generated in considerable detail. Predetermined elements, common to the scenario, are combined with critical uncertainties, that vary between the scenarios. Early indicators that help determine which scenario will transpire are identified. In ma |
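A minimal sketch of the two scoring rules mentioned in the quantitative-forecasting bullet above, for a single binary forecast:

```python
import math

def brier_score(p: float, outcome: bool) -> float:
    """Quadratic (Brier) score: squared error of the stated probability.
    Lower is better; 0 is a perfect forecast."""
    return (p - (1.0 if outcome else 0.0)) ** 2

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic score: negative log-probability of what actually happened.
    Lower is better; confident wrong forecasts are punished heavily."""
    return -math.log(p if outcome else 1.0 - p)

# Example: a 90% forecast that resolves true.
print(brier_score(0.9, True))  # ~0.01
print(log_score(0.9, True))    # ~0.105
```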
19bd638f-0f15-4d99-aa4f-d932a3cb0b7b | trentmkelly/LessWrong-43k | LessWrong | Conspiracy Investigation Done Right
In 1996, TWA Flight 800 exploded and crashed into the ocean off the coast of Long Island, killing all 230 people on board. After an extensive four-year investigation, the NTSB concluded the explosion was caused by a short circuit ignition within the center fuel tank. Or at least that’s the official story.
Now normally when you encounter a disclaiming phrase like that it tends to be a klaxon warning to strap in because you’re about to hear some crazy shit about what really happened. I’m not going to argue for some crazy shit though, instead I want to showcase a real-life illustration on how to properly investigate and litigate what otherwise would be dismissed and derided as some crazy shit.
Someone (thanks Jim!) brought to my attention this pending lawsuit that aims to challenge the TWA 800 official narrative.[1] The basic summary you need to know is that, in contrast to the official story, the “alternative” narrative claims the airplane was hit by an SM-2 surface-to-air missile launched by the United States government during a weapons testing exercise. You can read the 38-page lawsuit complaint yourself where they allege:
> Defendants [Raytheon, Lockheed Martin, US Government, etc.] negligently, recklessly, or intentionally authorized and conducted the testing of missiles in commercial airspace. As a result of these tests, a missile downed TWA 800 and killed Plaintiffs’ decedents.
And humorously enough:
> Defendants owed decedents and Plaintiffs a duty not to negligently test missiles in commercial airspace. Defendants breached that duty by negligently testing missiles in commercial airspace.
TO BE CLEAR: I find the overall claim to be extremely implausible based on Bayesian reasoning I’ll get to later, but the focus here is less about delving into the specific allegations[2] and more about showcasing how one should go about uncovering a criminal conspiracy that otherwise sounds kooky on its face.
As far as I can tell, the law firm involved has a reputation |
746943be-fa61-45af-bea6-7bac8d99b094 | trentmkelly/LessWrong-43k | LessWrong | Aumann voting; or, How to vote when you're ignorant
As Robin Hanson is fond of pointing out, people would often get better answers by taking other people's answers more into account. See Aumann's Agreement Theorem.
The application is obvious if you're computing an answer for your personal use. But how do you apply it when voting?
Political debates are tug-of-wars. Say a bill is being voted on to introduce a 7-day waiting period for handguns. You might think that you should vote on the merits of a 7-day waiting period. This isn't what we usually do. Instead, we've chosen our side on the larger issue (gun control: for or against) ahead of time; and we vote whichever way is pulling in our direction.
To use the tug-of-war analogy: There's a knot tied in the middle of the rope, and you have some line in the sand where you believe the knot should end up. But you don't stop pulling when the knot reaches that point; you keep pulling, because the other team is still pulling. So, if you're anti-gun-control, you vote against the 7-day waiting period, even if you think it would be a good idea; because passing it would move the knot back towards the other side of your line.
Tug-of-war voting makes intuitive sense if you believe that an irrational extremist is usually more politically effective than a reasonable person is. (It sounds plausible to me.) If you've watched a debate long enough to see that the "knot" does a bit of a random walk around some equilibrium that's on the other side of your line, it can make sense to vote this way.
How do you apply Aumann's theorem to tug-of-war voting?
I think the answer is that you try to identify which side has more idiots, and vote on the other side.
I was thinking of this because of the current online debate between Arthur Caplan and Craig Venter on DNA privacy. I don't have a strong opinion which way to vote, largely because it's nowhere stated clearly what it is that you're voting for or against.
So I can't tell what the right answer is myself. But I can identify i |
1912813a-6768-4792-9bcb-62161936b730 | trentmkelly/LessWrong-43k | LessWrong | Rationality Considered Harmful (In Politics)
Why you should be very careful about trying to openly seek truth in any political discussion
1. Rationality considered harmful for Scott Aaronson in the great gender debate
In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog with a comment stating that the extreme feminism he had bought into made him hate himself and seek ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few follow-up blog posts about it. Recently I saw this comment by him on his blog:
> As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time
2. Rationality considered harmful for Sam Harris in the islamophobia war
I recently heard a very angry, exasperated 2-hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented, and trash-talked by his intellectual rivals (whom he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:
> Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not |
b173493a-f576-4c69-903c-07b1f83b9e11 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control
Welcome to the AI Safety Newsletter by the [Center for AI Safety](https://www.safe.ai/). We discuss developments in AI and AI safety. No technical background required.
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions.
---
Examples of AI safety progress
------------------------------
Training AIs to behave safely and beneficially is difficult. They might learn to [game their reward function](https://arxiv.org/abs/2201.03544), [deceive human oversight](https://www.washingtonpost.com/technology/2022/12/01/meta-diplomacy-ai-cicero/), or [seek power](https://arxiv.org/abs/2304.03279). Some argue that researchers have not made much progress in addressing these problems, but here we offer a few examples of progress on AI safety.
**Detecting lies in AI outputs.** Language models often output false text, but a [recent paper](https://arxiv.org/abs/2212.03827) suggests they understand the truth in ways not reflected in their output. By analyzing a model’s internals, we can calculate the likelihood that a model believes a statement is true. The finding has been [replicated](https://openreview.net/forum?id=mAiTuIeWbxD) in models that answer questions about images.
One reason to prefer examining a model’s internals over trusting its output is because AI training processes often unintentionally reward models for outputting falsehoods. For example, language models might learn to [mimic common misconceptions](https://arxiv.org/abs/2109.07958) in their training data. More perniciously, if a human is giving feedback on an AI’s performance, the AI can [trick the human](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity) into falsely believing they’ve done a good job. It’s important to build several defenses against deception so that failures in one layer can potentially be caught by another.
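As a rough illustration of the general approach (a simple supervised linear probe on hidden activations, which is not the cited paper's exact unsupervised method), one might fit something like the following; the activation matrix and truth labels below are random placeholders:

```python
# Sketch: fit a linear probe on hidden activations to estimate whether the
# model internally represents a statement as true. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 768))  # stand-in for real hidden states
labels = rng.integers(0, 2, size=200)      # 1 = true statement, 0 = false

probe = LogisticRegression(max_iter=1000).fit(activations, labels)

def believed_truth_probability(hidden_state: np.ndarray) -> float:
    """Probability the probe assigns to 'represented as true'."""
    return float(probe.predict_proba(hidden_state.reshape(1, -1))[0, 1])
```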
**Giving AI a conscience.** AI agents take actions to pursue goals. But there are harmful ways to pursue many goals, such as by breaking laws and violating ethical standards. To block AI agents from misbehaving, their actions can be subjected to approval by an artificial conscience.
An [artificial conscience](https://arxiv.org/abs/2110.13136) is a separate model trained to identify unacceptable actions. Before an AI agent takes an action, the action is automatically assessed by the artificial conscience. If the conscience does not believe the action is acceptable, it can veto the action, and the agent cannot act in its environment until the conscience approves an action. An artificial conscience can easily be combined with an AI agent before or after training.
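A minimal sketch of this veto loop; `conscience_approves` stands in for the separately trained conscience model, and the action strings are toy examples:

```python
# Sketch of the veto loop: the agent proposes actions in order of expected
# reward; the conscience must approve an action before it is executed.
from typing import Callable, Optional, Sequence

def filtered_action(
    ranked_actions: Sequence[str],
    conscience_approves: Callable[[str], bool],
) -> Optional[str]:
    """Return the highest-ranked approved action, or None if all are vetoed."""
    for action in ranked_actions:
        if conscience_approves(action):
            return action
    return None  # every proposal vetoed: the agent does nothing this step

# Toy example: a conscience that vetoes an obviously unacceptable proposal.
approve = lambda a: "ignore the crime" not in a
print(filtered_action(["ignore the crime", "call the police"], approve))
# -> "call the police"
```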
How would an artificial conscience decide which actions are acceptable? Language models have shown some [understanding of ethical concepts](https://arxiv.org/abs/2008.02275) such as justice, fairness, and utility which could be used to inform decisions. The people affected by an AI's decisions could be given a voice in those decisions by [aligning AI with laws](https://arxiv.org/abs/2209.13020) decided by democratic processes.
*In this scenario, an AI agent witnesses a crime and considers how to respond. The agent was not tasked with fighting crime, so it does not anticipate any reward for its possible actions. But the artificial conscience identifies that calling the police is the only moral response, and this becomes the only action the agent can take.*
**Pretraining AI with human preferences.** AI models are often pretrained to identify patterns in large volumes of text or image data. Only afterwards are they fine-tuned to behave in ways that humans find valuable. This has drawbacks: AI might learn harmful patterns of thinking during training, or might devise counterproductive ways to achieve the goal provided in fine-tuning.
A [new paper](https://arxiv.org/abs/2302.08582) puts feedback about human preferences into the pretraining process. Rather than retrofitting a model to abide by preferences through fine-tuning later in the design process, this paper incorporates preferences early on. When the model is deployed to perform a task, it can be instructed to mimic preferred outputs from the training set. This empirically reduces the chance of misbehavior, so it has been included in Google’s new [PaLM 2](https://blog.google/technology/ai/google-palm-2-ai-large-language-model/) language model.
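A rough sketch of one technique in this line of work, conditional training: tag each pretraining segment with a control token according to a preference score, then condition on the "good" token at deployment. The token names and `reward_model` below are illustrative, not the paper's exact interface:

```python
# Sketch of conditional training with preference control tokens.
from typing import Callable

GOOD, BAD = "<|good|>", "<|bad|>"

def tag_pretraining_example(
    text: str,
    reward_model: Callable[[str], float],
    threshold: float = 0.0,
) -> str:
    """Prefix raw pretraining text with a token reflecting human preferences."""
    return (GOOD if reward_model(text) >= threshold else BAD) + text

def make_prompt(user_prompt: str) -> str:
    """At deployment, condition the model on the preferred distribution."""
    return GOOD + user_prompt
```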
*Conventional language model pretraining often results in toxic outputs. Fine-tuning after pretraining can somewhat reduce toxicity, but the most effective strategy is pretraining with feedback on toxicity.*
Yoshua Bengio proposes a ban on AI agents
-----------------------------------------
Renowned AI researcher Yoshua Bengio signed the [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) calling for a pause on the development of AI systems more powerful than GPT-4. As [he puts it](https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/), “there is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values.”
Now Bengio has a [proposal](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/) for building “safe and useful AI.” He argues that we should build “AI scientists” that answer our questions without taking potentially dangerous actions in the real world. This would be a significant departure from the world today, where AIs drive some vehicles, buy and sell stocks, pilot killer drones, and browse the internet via [ChatGPT plugins](https://openai.com/blog/chatgpt-plugins). To prevent the creation of dangerous AI agents, Bengio suggests “a policy banning powerful autonomous AI systems that can act in the world unless proven safe.”
*Yoshua Bengio, co-winner of the Turing Award for deep learning, proposes a ban on AI agents.*
**Bengio is concerned about AI seeking power and deceiving humans.** Bengio argues that “As soon as AI systems are given goals – to satisfy our needs – they may create subgoals that are not well-aligned with what we really want and could even become dangerous for humans.” Specifically, he says, “we can expect emerging subgoals to avoid being turned off (and using deception for that purpose).”
There are specific reasons to expect such goals to emerge. First, self-preservation and power seeking are [instrumentally useful](https://drive.google.com/file/d/1KewDov1taegTzrqJ4uurmJ2CJ0Y72EU3/view) for achieving a wide range of possible goals. Second, AIs that have these instincts will [tend to proliferate](https://arxiv.org/abs/2303.16200) more than those indifferent to their own life and death. Even if an AI appears safe in the lab, Bengio says danger arises from the fact that “it is difficult to forecast how such complex learned systems would behave in new situations.”
**Proposal: Don’t let AIs take actions in the world.** Since it’s difficult to build AIs that pursue goals safely, Bengio argues we should not build AIs that pursue goals at all. Instead, he advocates “AI scientists” that simply answer our questions about the world without taking actions.
Banning AI agents could reduce the chance of harm from autonomous AIs, but important challenges would remain. Humans could still misuse AIs to create propaganda and new kinds of weapons. Or we might trust AIs more than we should, and end up like the man whose self-driving car crashed and killed him while he was [playing a game on his phone](https://www.ntsb.gov/news/press-releases/Pages/NR20200225.aspx).
**Banning AI agents would require stringent regulation.** Bengio argues that banning AI agents “would require a robust regulatory framework that is instantiated nationally and internationally.” He encourages accelerating AI policymaking in a number of ways: “Increasing the general awareness of AI risks, forcing more transparency and documentation, requiring organizations to do their best to assess and avoid potential risks before deploying AI systems, introducing independent watchdogs to monitor new AI developments, etc would all contribute not just to mitigating short-term risks but also helping with longer-term ones.” While this would be difficult to pull off, we agree that in principle AI tools are likely safer than AI agents.
Lessons from Nuclear Arms Control for Verifying AI Treaties
-----------------------------------------------------------
Even if one country regulates AI responsibly, they could suffer from harmful AI developed in another country. Therefore, prominent voices including [Turing](https://youtu.be/sitHS6UDMJc?t=1548) [Award](https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/) winners, the [President of the Brookings Institution](https://www.brookings.edu/blog/techtank/2021/03/24/it-is-time-to-negotiate-global-treaties-on-artificial-intelligence/), and the [CEO of Google](https://www.youtube.com/watch?v=aNsmr-tvQhA) have called for international treaties to govern AI.
How politically viable are such proposals, and how can we improve them? A [new paper](https://arxiv.org/abs/2304.04123) helps us answer these questions by considering lessons from the history of nuclear arms control verification.
**Nuclear arms control includes strong verification measures.** Historically, countries have often accepted strong measures to verify nuclear arms control treaties. These measures have included reporting inventories of all the uranium in a country, accepting thorough inspections of nuclear energy and nuclear weapon facilities, and letting an international agency inspect nearly arbitrary locations in a state. The strongest versions of these systems have a good track record: the public record indicates they have not failed to detect any major violations.
**Good verification mechanisms preserve privacy and security.** Countries often object to international treaties on the grounds that they will hurt national security. For example, another paper we recommend about [historical efforts at nuclear arms control](https://www.governance.ai/research-paper/international-control-of-powerful-technology-lessons-from-the-baruch-plan-for-nuclear-weapons) notes that Stalin opposed United Nations inspectors entering the Soviet Union to inspect sensitive military sites. But simple measures can afford the much-needed privacy. The [New START](https://en.wikipedia.org/wiki/New_START) nuclear arms control treaty signed in 2010 included a provision that inspectors would only be allowed to inspect missile warheads obscured by a protective cover so they “could be counted without revealing their technical characteristics.” Developing [methods for compute governance](https://arxiv.org/abs/2303.11341) that respect economic and military needs will be essential for promoting compliance with international agreements.
**Early efforts can pave the way for future governance.** It took decades—and some close calls—to develop thorough systems of nuclear arms control verification, and they were only created after weaker systems paved the way for them. This suggests it is important for stakeholders to begin prototyping some scalable verification procedures soon, even if their scope is initially limited.
*US President Gerald Ford and Soviet Secretary General Leonid Brezhnev at the Vladivostok Summit Meeting on Arms Control in 1974. More than two decades after Stalin rejected the first American proposal for nuclear arms control, the two nations found common ground on limiting nuclear weapons.*
Links
-----
* Despite the fact that Microsoft’s Bing Chat has [repeatedly threatened users](https://twitter.com/sethlazar/status/1626241169754578944), Microsoft’s chief economist [argues](https://twitter.com/tobyordoxford/status/1656328173062180869) that AI regulation should be reactive, not proactive, and should not be put in place until “meaningful harm” occurs.
* Anthropic releases a language model that can handle [100,000 tokens at once](https://www.anthropic.com/index/100k-context-windows), enough to fit the entirety of The Great Gatsby in a single prompt.
* Google launched a new language model that is [publicly free to use](https://bard.google.com/). Its capabilities are roughly between GPT-3.5 and GPT-4.
* Bloomberg News is [hiring an AI Ethics and Policy reporter](https://careers.bloomberg.com/job/detail/116035?sd=News).
* OpenAI CEO Sam Altman and several others will [testify before Congress](https://www.cnn.com/2023/05/10/tech/openai-ceo-congress-testifying/index.html) today.
* A [new poll](https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk) shows the majority of Americans support an FDA for AI and pausing AI development.
* The proliferation of [thousands or millions of AI agents](https://www.wsj.com/articles/ai-needs-guardrails-and-global-cooperation-chatbot-megasystem-intelligence-f7be3a3c?st=h7e812iifb5nr6x) could create unexpected challenges, argued in a WSJ op-ed.
See also: [CAIS website](https://www.safe.ai/), [CAIS twitter](https://twitter.com/ai_risks?lang=en), [A technical safety research newsletter](https://newsletter.mlsafety.org/)
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions |
c844d41c-4d35-4c06-a482-ed3b7d57784b | StampyAI/alignment-research-dataset/arxiv | Arxiv | Verifiable Reinforcement Learning via Policy Extraction
1 Introduction
---------------
Deep reinforcement learning has proven to be a promising approach for automatically learning policies for control problems [[11](#bib.bib11), [22](#bib.bib22), [29](#bib.bib29)]. However, an important challenge limiting real-world applicability is the difficulty of ensuring the safety of deep neural network (DNN) policies learned using reinforcement learning. For example, self-driving cars must robustly handle a variety of human behaviors [[26](#bib.bib26)], controllers for robotics typically need stability guarantees [[2](#bib.bib2), [20](#bib.bib20), [8](#bib.bib8)], and air traffic control should provably satisfy safety properties including robustness [[19](#bib.bib19)]. Due to the complexity of DNNs, verifying these properties is typically very inefficient if not infeasible [[6](#bib.bib6)].
Our goal is to learn policies for which desirable properties such as safety, stability, and robustness can be efficiently verified. We focus on learning decision tree policies for two reasons: (i) they are nonparametric, so in principle they can represent very complex policies, and (ii) they are highly structured, making them easy to verify. However, decision trees are challenging to learn even in the supervised setting; there has been some work learning decision tree policies for reinforcement learning [[13](#bib.bib13)], but we find that they do not even scale to simple problems like cart-pole [[5](#bib.bib5)].
To learn decision tree policies, we build on the idea of *model compression* [[10](#bib.bib10)] (or *distillation* [[17](#bib.bib17)]), which uses high-performing DNNs to guide the training of shallower [[4](#bib.bib4), [17](#bib.bib17)] or more structured [[34](#bib.bib34), [7](#bib.bib7)] classifiers. Their key insight is that DNNs perform better not because they have better representative power, but because they are better regularized and therefore easier to train [[4](#bib.bib4)]. Our goal is to devise a *policy extraction* algorithm that distills a high-performing DNN policy into a decision tree policy.
Our approach to policy extraction is based on *imitation learning* [[27](#bib.bib27), [1](#bib.bib1)], in particular, Dagger [[25](#bib.bib25)]: the pretrained DNN policy (which we call the *oracle*) is used to generate labeled data, and then supervised learning is used to train a decision tree policy. However, we find that Dagger learns much larger decision tree policies than necessary. In particular, Dagger cannot leverage the fact that our oracle provides not just the optimal action to take in a given state, but also the cumulative reward of every state-action pair (either directly as a $Q$-function or indirectly as a distribution over possible actions). First, we propose $Q$-Dagger, a novel imitation learning algorithm that extends Dagger to use the $Q$-function for the oracle; we show that $Q$-Dagger can use this extra information to achieve provably better performance than Dagger. Then, we propose Viper (short for Verifiability via Iterative Policy ExtRaction), which modifies $Q$-Dagger to extract decision tree policies; we show that Viper can learn decision tree policies that are an order of magnitude smaller than those learned by Dagger (and are thus easier to verify).
We show how existing verification techniques can be adapted to efficiently verify desirable properties of extracted decision tree policies: (i) we learn a decision tree policy that plays Atari Pong [[22](#bib.bib22)] (on a symbolic abstraction of the state space rather than from pixels, a limitation we believe is reasonable for safety-critical systems; furthermore, a model of the system dynamics defined with respect to the symbolic state space is anyway required for most verification tasks) and verify its robustness [[6](#bib.bib6), [19](#bib.bib19)], (ii) we learn a decision tree policy to play a toy game based on Pong, and prove that it never loses (the difficulty doing so for Atari Pong is that the system dynamics are unavailable; we believe that having the system dynamics available is a reasonable assumption, since they are available for most real-world robots, including sophisticated robots such as the walking robot ATLAS [[20](#bib.bib20)]), and (iii) we learn a decision tree policy for cart-pole [[5](#bib.bib5)], and compute its region of stability around the goal state (with respect to the degree-5 Taylor approximation of the system dynamics). In each case, our decision tree policy also achieves perfect reward. Additionally, we discover a counterexample to the correctness of our decision tree policy for the toy game based on Pong, which we show can be fixed by slightly extending the paddle length. In summary, our contributions are:
* We propose an approach to learning verifiable policies (summarized in Figure [1](#S1.F1)).
* We propose a novel imitation learning algorithm called Viper, which is based on Dagger but leverages a $Q$-function for the oracle. We show that Viper learns relatively small decision trees (fewer than 1000 nodes) that play perfectly on Atari Pong (with symbolic state space), a toy game based on Pong, and cart-pole.
* We describe how to verify correctness (for the case of a toy game based on Pong), stability, and robustness of decision tree policies, and show that verification is orders of magnitude more scalable than approaches compatible with DNN policies.

Figure 1: The high-level approach Viper uses to learn verifiable policies.
##### Related work.
There has been work on verifying machine learning systems [[3](#bib.bib3), [30](#bib.bib30), [16](#bib.bib16), [6](#bib.bib6), [19](#bib.bib19), [18](#bib.bib18), [15](#bib.bib15)]. Specific to reinforcement learning, there has been substantial interest in safe exploration [[23](#bib.bib23), [36](#bib.bib36), [33](#bib.bib33)]; see [[14](#bib.bib14)] for a survey. Verification of learned controllers [[24](#bib.bib24), [32](#bib.bib32), [3](#bib.bib3), [20](#bib.bib20), [19](#bib.bib19), [31](#bib.bib31)] is a crucial component of many such systems [[2](#bib.bib2), [8](#bib.bib8)], but existing approaches do not scale to high dimensional state spaces.
There has been work training decision tree policies for reinforcement learning [[13](#bib.bib13)], but we find that their approach does not even scale to cart-pole. There has also been work using model compression to learn decision trees [[34](#bib.bib34), [7](#bib.bib7)], but the focus has been on supervised learning rather than reinforcement learning, and on interpretability rather than verification. There has also been recent work using program synthesis to devise structured policies using imitation learning [[35](#bib.bib35)], but their focus is interpretability, and they are outperformed by DNNs even on cart-pole.
2 Policy Extraction
--------------------
We describe $Q$-Dagger, a general policy extraction algorithm with theoretical guarantees improving on Dagger's, and then describe how Viper modifies $Q$-Dagger to extract decision tree policies.
##### Problem formulation.
Let $(S, A, P, R)$ be a finite-horizon ($T$-step) MDP with states $S$, actions $A$, transition probabilities $P: S \times A \times S \to [0, 1]$ (i.e., $P(s, a, s') = p(s' \mid s, a)$), and rewards $R: S \to \mathbb{R}$. Given a policy $\pi: S \to A$, for $t \in \{0, \dots, T-1\}$, let
$$V_t^{(\pi)}(s) = R(s) + \sum_{s' \in S} P(s, \pi(s), s') \, V_{t+1}^{(\pi)}(s')$$

$$Q_t^{(\pi)}(s, a) = R(s) + \sum_{s' \in S} P(s, a, s') \, V_{t+1}^{(\pi)}(s')$$

be its value function and $Q$-function for $t \in \{0, \dots, T-1\}$, where $V_T^{(\pi)}(s) = 0$. Without loss of generality, we assume that there is a single initial state $s_0 \in S$. Then, let
$$d_0^{(\pi)}(s) = \mathbb{I}[s = s_0]$$

$$d_t^{(\pi)}(s) = \sum_{s' \in S} P(s', \pi(s'), s) \, d_{t-1}^{(\pi)}(s') \quad (\text{for } t > 0)$$

be the distribution over states at time $t$, where $\mathbb{I}$ is the indicator function, and let $d^{(\pi)}(s) = T^{-1} \sum_{t=0}^{T-1} d_t^{(\pi)}(s)$. Let $J(\pi) = -V_0^{(\pi)}(s_0)$ be the cost-to-go of $\pi$ from $s_0$. Our goal is to learn the best policy in a given class $\Pi$, leveraging an *oracle* $\pi^*: S \to A$ and its $Q$-function $Q_t^{(\pi^*)}(s, a)$.
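To make these definitions concrete, here is a minimal sketch that computes $V$, $Q$, $d$, and hence $J(\pi)$ for a small tabular MDP by backward induction and a forward rollout; the array layout is our own illustrative choice, not from the paper:

```python
# Sketch: exact computation of V, Q, d, and J(pi) for a small tabular MDP.
# P has shape (S, A, S), R has shape (S,), and pi is an integer array of
# shape (S,) giving the action pi(s) in each state.
import numpy as np

def evaluate_policy(P, R, pi, T, s0=0):
    S, A, _ = P.shape
    V = np.zeros((T + 1, S))               # V_T = 0 by definition
    Q = np.zeros((T, S, A))
    for t in range(T - 1, -1, -1):         # backward induction
        Q[t] = R[:, None] + P @ V[t + 1]   # Q_t(s,a) = R(s) + sum_s' P V_{t+1}
        V[t] = Q[t, np.arange(S), pi]      # V_t(s) = Q_t(s, pi(s))
    d = np.zeros((T, S))
    d[0, s0] = 1.0                         # d_0 = indicator of the start state
    step = P[np.arange(S), pi]             # step[s, s'] = P(s, pi(s), s')
    for t in range(1, T):                  # forward rollout of the policy
        d[t] = d[t - 1] @ step
    J = -V[0, s0]                          # cost-to-go from s_0
    return V, Q, d, J
```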
##### The $Q$-Dagger algorithm.
Consider the (in general nonconvex) loss function
$$\ell_t(s, \pi) = V_t^{(\pi^*)}(s) - Q_t^{(\pi^*)}(s, \pi(s)).$$
Let $g(s, \pi) = \mathbb{I}[\pi(s) \neq \pi^*(s)]$ be the 0-1 loss and $\tilde{g}(s, \pi)$ a convex upper bound (in the parameters of $\pi$), e.g., the hinge loss [[25](#bib.bib25)]. (Other choices of $\tilde{g}$ are possible; our theory holds as long as it is a convex upper bound on the 0-1 loss $g$.) Then, $\tilde{\ell}_t(s, \pi) = \tilde{\ell}_t(s)\,\tilde{g}(s, \pi)$ convex upper bounds $\ell_t(s, \pi)$, where
$$\tilde{\ell}_t(s) = V_t^{(\pi^*)}(s) - \min_{a \in A} Q_t^{(\pi^*)}(s, a).$$
$Q$-Dagger runs Dagger (Algorithm 3.1 from [[25](#bib.bib25)]) with the convex loss $\tilde{\ell}_t(s, \pi)$ and $\beta_i = \mathbb{I}[i = 1]$.
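A rough sketch of the resulting loop, under the assumption that the oracle is greedy with respect to its own $Q$-function (so $V^{(\pi^*)}(s) = \max_a Q^{(\pi^*)}(s, a)$); `rollout`, `oracle`, and `train_classifier` are hypothetical helpers rather than the paper's implementation:

```python
def q_dagger(env, oracle, train_classifier, rollout, n_iters):
    """oracle(s) -> (best_action, q_values); trains policies by imitation.

    beta_i = I[i = 1]: the oracle is rolled out on the first iteration only;
    afterwards the current learned policy generates the visited states."""
    dataset, policies = [], []
    policy = lambda s: oracle(s)[0]
    for i in range(n_iters):
        for s in rollout(env, policy):
            best_action, q_values = oracle(s)
            # l~(s) = V*(s) - min_a Q*(s, a), assuming V*(s) = max_a Q*(s, a).
            weight = max(q_values) - min(q_values)
            dataset.append((s, best_action, weight))
        policy = train_classifier(dataset)  # fit on aggregated, weighted data
        policies.append(policy)
    return policies  # in practice, pick the best policy by validation reward
```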
Figure 2: An MDP with initial state $s_0$, deterministic transitions shown as arrows (the label is the action), actions $A = \{\text{left}, \text{right}, \text{down}\}$ (taking an unavailable action transitions to $s_{\text{end}}$), rewards $R(\tilde{s}) = T$, $R(s_k) = T - \alpha$ (where $\alpha \in (0, 1)$ is a constant), and $R(s) = 0$ otherwise, and time horizon $T = 3(k+1)$. Trajectories taken by $\pi^*$, $\pi_{\text{left}}: s \mapsto \text{left}$, and $\pi_{\text{right}}: s \mapsto \text{right}$ are shown as dashed edges, red edges, and green edges, respectively.
##### Theory.
We bound the performance of $Q$-Dagger and compare it to the bound in [[25](#bib.bib25)]; proofs are in Appendix [A](#A1). First, we characterize the loss $\ell(\pi) = T^{-1} \sum_{t=0}^{T-1} \mathbb{E}_{s \sim d_t^{(\pi)}}[\ell_t(s, \pi)]$.
###### Lemma 2.1.

For any policy $\pi$, we have $T\ell(\pi) = J(\pi) - J(\pi^*)$.
Next, let $\varepsilon_N = \min_{\pi \in \Pi} N^{-1} \sum_{i=1}^{N} T^{-1} \sum_{t=0}^{T-1} \mathbb{E}_{s \sim d_t^{(\hat{\pi}_i)}}[\tilde{\ell}_t(s, \pi)]$ be the training loss, where $N$ is the number of iterations of $Q$-Dagger and $\hat{\pi}_i$ is the policy computed on iteration $i$.
Let $\ell_{\text{max}}$ be an upper bound on $\tilde{\ell}_t(s, \pi)$, i.e., $\tilde{\ell}_t(s, \pi) \leq \ell_{\text{max}}$ for all $s \in S$ and $\pi \in \Pi$.
###### Theorem 2.2.

For any $\delta > 0$, there exists a policy $\hat{\pi} \in \{\hat{\pi}_1, \dots, \hat{\pi}_N\}$ such that

$$J(\hat{\pi}) \leq J(\pi^*) + T\varepsilon_N + \tilde{O}(1)$$

with probability at least $1 - \delta$, as long as $N = \tilde{\Theta}(\ell_{\text{max}}^2 T^2 \log(1/\delta))$.
In contrast, the bound $J(\hat{\pi}) \leq J(\pi^*) + uT\varepsilon_N + \tilde{O}(1)$ in [[25](#bib.bib25)] includes the value $u$ that upper bounds $Q_t^{(\pi^*)}(s, a) - Q_t^{(\pi^*)}(s, \pi^*(s))$ for all $a \in A$, $s \in S$, and $t \in \{0, \dots, T-1\}$. In general, $u$ may be $O(T)$, e.g., if there are *critical states* $s$ such that failing to take the action $\pi^*(s)$ in $s$ results in forfeiting all subsequent rewards. For example, in cart-pole [[5](#bib.bib5)], we may consider the system to have failed if the pole hits the ground; in this case, all future reward is forfeited, so $u = O(T)$.
An analog of $u$ appears implicitly in $\varepsilon_N$, since our loss $\tilde{\ell}_t(s, \pi)$ includes an extra multiplicative factor $\tilde{\ell}_t(s) = V_t^{(\pi^*)}(s) - \min_{a \in A} Q_t^{(\pi^*)}(s, a)$. However, our bound is $O(T)$ as long as $\hat{\pi}$ achieves high accuracy on critical states, whereas the bound in [[25](#bib.bib25)] is $O(T^2)$ regardless of how well $\hat{\pi}$ performs.
We make the gap explicit. Consider the MDP in Figure 2 (with $\alpha\in(0,1)$ constant and $T=3(k+1)$). Let $\Pi=\{\pi_{\text{left}}:s\mapsto\text{left},\ \pi_{\text{right}}:s\mapsto\text{right}\}$, and let $g(\pi)=\mathbb{E}_{s\sim d^{(\pi)}}[g(s,\pi)]$ be the 0-1 loss.
###### Theorem 2.3.

$g(\pi_{\text{left}})=O(T^{-1})$, $g(\pi_{\text{right}})=O(1)$, $\ell(\pi_{\text{left}})=O(1)$, and $\ell(\pi_{\text{right}})=O(T^{-1})$.
That is, according to the 0-1 loss $g(\pi)$, the worse policy $\pi_{\text{left}}$ (with $J(\pi_{\text{left}})=0$) is better, whereas according to our loss $\ell(\pi)$, the better policy $\pi_{\text{right}}$ (with $J(\pi_{\text{right}})=-(T-\alpha)$) is better.
##### Extracting decision tree policies.
Our algorithm Viper for extracting decision tree policies is shown in Algorithm 1. Because the loss function for decision trees is not convex, there do not exist online learning algorithms with the theoretical guarantees required by Dagger. Nevertheless, we use a heuristic based on the follow-the-leader algorithm [[25](#bib.bib25)]: on each iteration, we use the CART algorithm [[9](#bib.bib9)] to train a decision tree on the aggregated dataset $\mathcal{D}$. We also assume that $\pi^{*}$ and $Q^{(\pi^{*})}$ are not time-varying, which is typically true in practice. Next, rather than modify the loss optimized by CART, Viper resamples points $(s,a)\in\mathcal{D}$ weighted by $\tilde{\ell}(s)$, i.e., according to
$$p((s,a))\propto\tilde{\ell}(s)\,\mathbb{I}[(s,a)\in\mathcal{D}].$$
Then, we have $\mathbb{E}_{(s,a)\sim p((s,a))}[\tilde{g}(s,\pi)]=\mathbb{E}_{(s,a)\sim\mathcal{D}}[\tilde{\ell}(s,\pi)]$, so using CART to train a decision tree on $\mathcal{D}'$ is in expectation equivalent to training a decision tree with the loss $\tilde{\ell}(s,\pi)$. Finally, when using neural network policies trained using policy gradients (so no $Q$-function is available), we use the maximum entropy formulation of reinforcement learning to obtain $Q$ values, i.e., $Q(s,a)=\log\pi^{*}(s,a)$, where $\pi^{*}(s,a)$ is the probability that the (stochastic) oracle takes action $a$ in state $s$ [[37](#bib.bib37)].
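A minimal NumPy sketch of this resampling step (the array `l_tilde`, holding $\tilde{\ell}(s)$ for each pair in $\mathcal{D}$, is assumed to be precomputed from the oracle's $Q$-values; this is an illustration, not the paper's code):

```python
import numpy as np

def resample_by_loss(states, actions, l_tilde, n_samples, rng=None):
    """Draw a dataset D' by sampling (s, a) pairs from D with
    probability proportional to the state loss l_tilde(s)."""
    rng = np.random.default_rng(rng)
    p = l_tilde / l_tilde.sum()            # p((s, a)) proportional to l~(s)
    idx = rng.choice(len(states), size=n_samples, replace=True, p=p)
    return states[idx], actions[idx]
```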
procedure Viper($(S,A,P,R)$, $\pi^{*}$, $Q^{*}$, $M$, $N$)
    Initialize dataset $\mathcal{D}\leftarrow\varnothing$
    Initialize policy $\hat{\pi}_{0}\leftarrow\pi^{*}$
    for $i=1$ to $N$ do
        Sample $M$ trajectories $\mathcal{D}_{i}\leftarrow\{(s,\pi^{*}(s))\sim d^{(\hat{\pi}_{i-1})}\}$
        Aggregate dataset $\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{D}_{i}$
        Resample dataset $\mathcal{D}'\leftarrow\{(s,a)\sim p((s,a))\propto\tilde{\ell}(s)\mathbb{I}[(s,a)\in\mathcal{D}]\}$
        Train decision tree $\hat{\pi}_{i}\leftarrow\text{TrainDecisionTree}(\mathcal{D}')$
    end for
    return best policy $\hat{\pi}\in\{\hat{\pi}_{1},\dots,\hat{\pi}_{N}\}$ on cross validation
end procedure

Algorithm 1: Decision tree policy extraction.
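A compact Python sketch of Algorithm 1, reusing `resample_by_loss` from above. The environment interface (`reset()` returning a state, `step(a)` returning `(state, reward, done)`), the oracle callables, and the use of scikit-learn's CART implementation are all assumptions for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def viper(env, pi_star, q_star, n_iters=80, n_traj=10, max_depth=12, seed=0):
    """Sketch of Algorithm 1. pi_star(s) -> oracle action;
    q_star(s) -> vector of oracle Q-values over all actions."""
    rng = np.random.default_rng(seed)
    D_s, D_a = [], []                      # aggregated dataset D
    policy, trees = pi_star, []            # pi_hat_0 <- pi_star
    for _ in range(n_iters):
        for _ in range(n_traj):            # sample M trajectories from d^(pi_hat)
            s, done = env.reset(), False
            while not done:
                D_s.append(np.asarray(s))
                D_a.append(pi_star(s))     # label visited states with oracle actions
                s, _, done = env.step(policy(s))
        S, A = np.array(D_s), np.array(D_a)
        # l~(s) = V*(s) - min_a Q*(s, a): how much reward is at stake in s
        Q = np.array([q_star(s) for s in D_s])
        S_r, A_r = resample_by_loss(S, A, Q.max(axis=1) - Q.min(axis=1), len(S), rng)
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(S_r, A_r)
        trees.append(tree)
        policy = lambda s, t=tree: t.predict(np.asarray(s).reshape(1, -1))[0]
    return trees  # choose the best tree by held-out rollouts (cross validation)
```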
3 Verification
---------------
In this section, we describe three desirable control properties that we can efficiently verify for decision tree policies but that are difficult to verify for DNN policies.
Figure 3: (a) An example of an initial state of our toy Pong model; the ball is the white dot, the paddle is the white rectangle at the bottom, and the red arrow denotes the initial velocity $(v_{x},v_{y})$ of the ball. (b) An intuitive visualization of the ball positions (blue region) and velocities (red arrows) in $Y_{0}$. (c) A counterexample to correctness discovered by our verification algorithm.
##### Correctness for toy Pong.
Correctness of a controller is system-dependent; we first discuss proving correctness of a controller for a toy model of the Atari Pong game [[22](#bib.bib22)]. This toy model consists of a ball bouncing on the screen, with a player-controlled paddle at the bottom. If the ball hits the top or the side of the screen, or if the ball hits the paddle at the bottom, then it is reflected; if the ball hits the bottom of the screen where the paddle is not present, then the game is over. The system is frictionless and all collisions are elastic. It can be thought of as Pong where the opponent's paddle is replaced with a wall. The goal is to play for as long as possible before the game ends. The states are $(x,y,v_{x},v_{y},x_{p})\in\mathbb{R}^{5}$, where $(x,y)$ is the position of the ball (with $x\in[0,x_{\text{max}}]$ and $y\in[0,y_{\text{max}}]$), $(v_{x},v_{y})$ is its velocity (with $v_{x},v_{y}\in[-v_{\text{max}},v_{\text{max}}]$), and $x_{p}$ is the position of the paddle (with $x_{p}\in[0,x_{\text{max}}]$). The actions are $\{\text{left},\text{right},\text{stay}\}$, indicating how to move the paddle.
Our goal is to prove that the controller never loses, i.e., the ball never hits the bottom of the screen at a position where the paddle is not present. More precisely, assuming the system is initialized to a safe state (i.e., $y\in Y_{0}=[y_{\text{max}}/2,\,y_{\text{max}}]$), it should avoid an unsafe region (i.e., $y=0\wedge(x\leq x_{p}-L\vee x\geq x_{p}+L)$, where $L$ is half the paddle length).
To do so, we assume that the speed of the ball in the $y$ direction is lower bounded, i.e., $|v_{y}|>v_{\text{min}}$; since velocity in each direction is conserved, this assumption is equivalent to assuming that the initial $y$ velocity is in $[-v_{\text{max}},-v_{\text{min}}]\cup[v_{\text{min}},v_{\text{max}}]$. Then, it suffices to prove the following inductive invariant: as long as the ball starts in $Y_{0}$, it re-enters $Y_{0}$ after at most $t_{\text{max}}=\lceil 2y_{\text{max}}/v_{\text{min}}\rceil$ steps.
Both the dynamics $f:S\times A\to S$ and the controller $\pi:S\to A$ are piecewise-linear, so the joint dynamics $f_{\pi}(s)=f(s,\pi(s))$ are also piecewise-linear; let $S=S_{1}\cup\dots\cup S_{k}$ be a partition of the state space so that $f_{\pi}(s)=f_{i}(s)=\beta_{i}^{T}s$ for all $s\in S_{i}$. Letting $s_{t}$ be a variable denoting the state of the system at time $t\in\{0,\dots,t_{\text{max}}\}$, the following constraints specify the system dynamics:
$$\phi_{t}=\bigvee_{i=1}^{k}\left(s_{t-1}\in S_{i}\Rightarrow s_{t}=\beta_{i}^{T}s_{t-1}\right)\qquad\forall t\in\{1,\dots,t_{\text{max}}\}.$$
Furthermore, letting $\psi_{t}=(s_{t}\in Y_{0})$, we can express the correctness of the system as the formula (we are verifying correctness over a continuous state space, so enumerative approaches are not feasible):
$$\psi=\left(\bigwedge_{t=1}^{t_{\text{max}}}\phi_{t}\right)\wedge\psi_{0}\Rightarrow\bigvee_{t=1}^{t_{\text{max}}}\psi_{t}.$$
Note that $\sigma\Rightarrow\tau$ is equivalent to $\neg\sigma\vee\tau$. Then, since $Y_{0}$ and all of the $S_{i}$ are polyhedra, the predicates $s_{t}\in Y_{0}$ and $s_{t}\in S_{i}$ are conjunctions of linear (in)equalities; thus, the formulas $\psi_{t}$ and $\phi_{t}$ are disjunctions of conjunctions of linear (in)equalities. As a consequence, $\psi$ consists of conjunctions and disjunctions of linear (in)equalities; standard tools exist for checking whether such formulas are satisfiable [[12](#bib.bib12)]. In particular, the controller is correct if and only if $\neg\psi$ is unsatisfiable, since a satisfying assignment to $\neg\psi$ is a counterexample showing that $\psi$ does not always hold.
Finally, note that we can slightly simplify $\psi$: (i) we only have to show that the system enters a state where $v_{y}>0$ after $t_{\text{max}}$ steps, not that it returns to $Y_{0}$, and (ii) we can restrict $Y_{0}$ to states where $v_{y}<0$. We use parameters $(x_{\text{max}},y_{\text{max}},v_{\text{min}},v_{\text{max}},L)=(30,20,1,2,4)$; Figure 3 (a) shows an example of an initial state, and Figure 3 (b) depicts the set $Y_{0}$ of initial states that we verify.
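This encoding can be checked mechanically with an SMT solver. Below is a hedged Z3 sketch of the bounded unrolling, assuming each linear piece of $f_{\pi}$ is supplied as a `(guard, update)` pair over a list of Z3 reals; the names and interfaces are illustrative, not the paper's code:

```python
from z3 import RealVector, Solver, And, Or, Not, unsat

def check_invariant(pieces, in_Y0, t_max, n_dim):
    """Check psi by refuting its negation: psi_0 and the unrolled dynamics
    hold, yet the system never re-enters Y_0 within t_max steps.
    Returns None if psi holds, else a counterexample model."""
    s = [RealVector("s_%d" % t, n_dim) for t in range(t_max + 1)]
    solver = Solver()
    for t in range(1, t_max + 1):
        # phi_t: some linear piece fires to produce s_t from s_{t-1}
        solver.add(Or([And(guard(s[t - 1]),
                           And([s[t][j] == update(s[t - 1])[j]
                                for j in range(n_dim)]))
                       for guard, update in pieces]))
    solver.add(in_Y0(s[0]))                                        # psi_0
    solver.add(Not(Or([in_Y0(s[t]) for t in range(1, t_max + 1)])))
    return None if solver.check() == unsat else solver.model()
```

A returned model is exactly a counterexample trace of the kind shown in Figure 3 (c).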
##### Correctness for cart-pole.
We also discuss proving correctness of a cart-pole control policy. The classical cart-pole control problem has a 4-dimensional state space $(x,v,\theta,\omega)\in\mathbb{R}^{4}$, where $x$ is the cart position, $v$ is the cart velocity, $\theta$ is the pole angle, and $\omega$ is the pole angular velocity, and a 1-dimensional action space $a\in\mathbb{R}$, where $a$ is the lateral force to apply to the cart. Consider a controller trained to move the cart to the right while keeping the pole in the upright position. The goal is to prove that the pole never falls below a certain height (this property cannot be expressed as a stability property since the cart is always moving), which can be encoded as the formula
$$\psi\equiv s_{0}\in S_{0}\wedge\bigwedge_{t=0}^{\infty}|\phi(s_{t})|\leq y_{0},$$
where $S_{0}=[-0.05,0.05]^{4}$ is the set of initial states, $s_{t}=f(s_{t-1},a_{t-1})$ is the state on step $t$, $f$ is the transition function, $\phi(s)$ is the deviation of the pole angle from upright in state $s$, and $y_{0}$ is the maximum desirable deviation from the upright position. As with correctness for toy Pong, the controller is correct if $\neg\psi$ is unsatisfiable. The property $\psi$ can be thought of as a toy example of a safety property we would like to verify for a controller for a walking robot; in particular, we might want the robot to run as fast as possible, but prove that it never falls over while doing so. There are two difficulties verifying $\psi$: (i) the infinite time horizon, and (ii) the nonlinear transitions $f$. To address (i), we approximate the system using a finite time horizon $T_{\text{max}}=10$, i.e., we show that the system is safe for the first ten time steps. To address (ii), we use a linear approximation $f(s,a)\approx As+Ba$; for cart-pole, this approximation is good as long as $\phi(s_{t})$ is small.
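One way to obtain the matrices $A$ and $B$ of the linear approximation is numerical differentiation of the simulator around the upright equilibrium; a minimal sketch (the transition function `f`, the expansion point, and the step size `h` are assumptions, not from the paper):

```python
import numpy as np

def linearize(f, s0, a0, h=1e-5):
    """Finite-difference Jacobians A = df/ds and B = df/da at (s0, a0),
    so that f(s, a) ~= f(s0, a0) + A (s - s0) + B (a - a0)."""
    s0, a0 = np.asarray(s0, float), np.asarray(a0, float)
    n, m = s0.size, a0.size
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        A[:, i] = (f(s0 + e, a0) - f(s0 - e, a0)) / (2 * h)   # central differences
    for j in range(m):
        e = np.zeros(m); e[j] = h
        B[:, j] = (f(s0, a0 + e) - f(s0, a0 - e)) / (2 * h)
    return A, B
```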
##### Stability.
Stability is a property from control theory saying that systems asymptotically reach their goal [[31](#bib.bib31)]. Consider a continuous-time dynamical system with states $s\in S=\mathbb{R}^{n}$, actions $a\in A=\mathbb{R}^{m}$, and dynamics $\dot{s}=f(s,a)$. For a policy $\pi:S\to A$, we say the system $f_{\pi}(s)=f(s,\pi(s))$ is *stable* if there is a *region of attraction* $U\subseteq\mathbb{R}^{n}$ containing $0$ such that for any $s_{0}\in U$, we have $\lim_{t\to\infty}s(t)=0$, where $s(t)$ is a solution to $\dot{s}=f_{\pi}(s)$ with initial condition $s(0)=s_{0}$.
When $f_{\pi}$ is nonlinear, we can verify stability (and compute $U$) by finding a *Lyapunov function* $V:S\to\mathbb{R}$ which satisfies (i) $V(s)>0$ for all $s\in U\setminus\{0\}$, (ii) $V(0)=0$, and (iii) $\dot{V}(s)=(\nabla V)(s)\cdot f_{\pi}(s)<0$ for all $s\in U\setminus\{0\}$ [[31](#bib.bib31)]. Given a *candidate* Lyapunov function, exhaustive search can be used to check whether the Lyapunov properties hold [[8](#bib.bib8)], but this scales exponentially in $n$.
When $f_{\pi}$ is polynomial, we can use sum-of-squares (SOS) optimization to devise a candidate Lyapunov function, check the Lyapunov properties, and compute $U$ [[24](#bib.bib24), [32](#bib.bib32), [31](#bib.bib31)]; we give a brief overview. First, suppose that $V(s)=s^{T}Ps$ for some $P\in\mathbb{R}^{n\times n}$. To compute a candidate Lyapunov function, we choose $P$ so that the Lyapunov properties hold for the linear approximation $f_{\pi}(s)\approx As$, which can be accomplished by solving the following SOS program (simpler approaches exist, but this one motivates our approach to checking whether the Lyapunov properties hold for $V$ for the polynomial dynamics $f_{\pi}$):
$$\exists P\in\mathbb{R}^{n\times n}\quad\text{subj. to}\quad s^{T}Ps-\|s\|^{2}\geq 0\ \text{ and }\ s^{T}PAs+\|s\|^{2}\leq 0\quad(\forall s\in S).\qquad(1)$$
The first equation ensures properties (i) and (ii); in particular, the term $\|s\|^{2}$ ensures that $s^{T}Ps>0$ except when $s=0$. Similarly, the second equation ensures property (iii). Next, we can simultaneously check whether the Lyapunov properties hold for $f_{\pi}$ and compute $U$ using the SOS program
$$\arg\max_{\rho\in\mathbb{R}_{+},\,\Lambda\in\mathbb{R}^{n\times n}}\rho\quad\text{subj. to}\quad(s^{T}\Lambda s)(s^{T}Pf_{\pi}(s))+(\rho-s^{T}Ps)\|s\|^{2}\leq 0\ \text{ and }\ s^{T}\Lambda s\geq 0\quad(\forall s\in S).\qquad(2)$$
The term $\lambda(s)=s^{T}\Lambda s$ is a slack variable: when $\rho\leq s^{T}Ps$ or $s=0$ (so the second term is nonpositive), the slack can be chosen so that the first constraint holds regardless of $s^{T}Pf_{\pi}(s)$, but when $\rho>s^{T}Ps$ and $s\neq 0$ (so the second term is positive), we must have $s^{T}Pf_{\pi}(s)<0$, since $s^{T}\Lambda s\geq 0$ by the second constraint. Properties (i) and (ii) hold from (1), and (2) verifies (iii) for all
$$s\in U=\{s\in S\mid V(s)\leq\rho\}.$$
Thus, if a solution $\rho>0$ is found, then $V$ is a Lyapunov function with region of attraction $U$. This approach extends to higher-order polynomials $V(s)$ by taking $V(s)=m(s)^{T}Pm(s)$, where $m(s)$ is a vector of monomials (and similarly for $\lambda(s)$).
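As a complement to the SOS programs, here is a hedged numerical sketch: it obtains a quadratic candidate $P$ from the continuous-time Lyapunov equation $A^{T}P+PA=-I$ for the (assumed stable) closed-loop linearization, then estimates $\rho$ by checking $\dot{V}(s)=2s^{T}Pf_{\pi}(s)<0$ on sampled states. Unlike (1) and (2), this is only a sanity check, not a certificate:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def quadratic_lyapunov_check(A, f_pi, n_samples=100_000, seed=0):
    """Candidate V(s) = s^T P s with A^T P + P A = -I, and a Monte-Carlo
    estimate of the largest level rho with V_dot < 0 on {V <= rho}."""
    n = A.shape[0]
    P = solve_continuous_lyapunov(A.T, -np.eye(n))   # requires A Hurwitz
    rng = np.random.default_rng(seed)
    S = rng.uniform(-1.0, 1.0, size=(n_samples, n))  # sampling box is a choice
    F = np.array([f_pi(s) for s in S])               # polynomial dynamics
    V = np.einsum("bi,ij,bj->b", S, P, S)
    Vdot = 2.0 * np.einsum("bi,ij,bj->b", S, P, F)
    bad = V[Vdot >= 0]            # levels at which the decrease condition fails
    rho = bad.min() if bad.size else V.max()
    return P, rho                 # estimated region of attraction {V <= rho}
```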
Now, let $\pi$ be a decision tree whose leaf nodes are associated with linear functions of the state $s$ (rather than restricted to constant functions). For $\ell\in\text{leaves}(\pi)$, let $\beta_{\ell}^{T}s$ be the associated linear function. Let $\ell_{0}\in\text{leaves}(\pi)$ be the leaf node such that $0\in\text{routed}(\ell_{0};\pi)$, where $\text{routed}(\ell;\pi)\subseteq S$ is the set of states routed to $\ell$ (i.e., the computation of the decision tree maps $s$ to leaf node $\ell$). Then, we can compute a Lyapunov function for the linear policy $\tilde{\pi}(s)=\beta_{\ell_{0}}^{T}s$; letting $\tilde{U}$ be the region of attraction for $\tilde{\pi}$, the region of attraction for $\pi$ is $U=\tilde{U}\cap\text{routed}(\ell_{0};\pi)$. To maximize $U$, we can bias the decision tree learning algorithm to prefer branching farther from $s=0$.
There are two limitations of our approach. First, we require that the dynamics be polynomial. For convenience, we use Taylor approximations of the dynamics, so we verify an approximation of the true property, which works well in practice [[32](#bib.bib32)]. This limitation can be addressed by reformulating the dynamics as a polynomial system or by handling approximation error in the dynamics [[31](#bib.bib31)]. Second, we focus on verifying stability locally around $0$; there has been work extending the approach we use by "patching together" different regions of attraction [[32](#bib.bib32)].
##### Robustness.
Robustness has been studied for image classification [[30](#bib.bib30), [16](#bib.bib16), [6](#bib.bib6)]. We study this property primarily since it can be checked when the dynamics are unknown, though it has also been studied for air traffic control as a safety consideration [[19](#bib.bib19)]. We say $\pi$ is *$\varepsilon$-robust* at $s_{0}\in S=\mathbb{R}^{d}$ if (this definition of robustness is different from the one in control theory)
$$\pi(s)=\pi(s_{0})\quad(\forall s\in B_{\infty}(s_{0},\varepsilon)),$$
where $B_{\infty}(s_{0},\varepsilon)$ is the $L_{\infty}$-ball of radius $\varepsilon$ around $s_{0}$. If $\pi$ is a decision tree, we can efficiently compute the largest $\varepsilon$ such that $\pi$ is $\varepsilon$-robust at $s_{0}$, which we denote $\varepsilon(s_{0};\pi)$. Consider a leaf node $\ell\in\text{leaves}(\pi)$ labeled with action $a_{\ell}\neq\pi(s_{0})$. The following linear program computes the distance from $s_{0}$ to the closest point $s\in S$ (in $L_{\infty}$ norm) such that $s\in\text{routed}(\ell;\pi)$:
$$\varepsilon(s_{0};\ell,\pi)=\min_{s\in S,\,\varepsilon\in\mathbb{R}_{+}}\varepsilon\quad\text{subj. to}\quad\left(\bigwedge_{n\in\text{path}(\ell;\pi)}\delta_{n}s_{i_{n}}\leq t_{n}\right)\wedge\left(\bigwedge_{i\in[d]}|s_{i}-(s_{0})_{i}|\leq\varepsilon\right),$$
where $\text{path}(\ell;\pi)$ is the set of internal nodes along the path from the root of $\pi$ to $\ell$, $\delta_{n}=1$ if $n$ is a left-child and $-1$ otherwise, $i_{n}$ is the feature index of $n$, and $t_{n}$ is the threshold of $n$. Then,
$$\varepsilon(s_{0};\pi)=\min_{\ell\in\text{leaves}(\pi)}\begin{cases}\infty&\text{if }a_{\ell}=\pi(s_{0})\\ \varepsilon(s_{0};\ell,\pi)&\text{otherwise}.\end{cases}$$
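Because the tree's branches are axis-aligned, each leaf region is a box, and the linear program has a closed form: the $L_{\infty}$ distance from $s_{0}$ to a box is the largest per-coordinate gap. A sketch for a fitted scikit-learn classifier, standing in for the paper's implementation (which is not shown here); boundary strictness is ignored for simplicity:

```python
import numpy as np

def leaf_boxes(tree, n_features):
    """Enumerate (leaf_action, lo, hi) boxes of a fitted sklearn
    DecisionTreeClassifier by walking its internal arrays."""
    t = tree.tree_
    boxes = []
    stack = [(0, np.full(n_features, -np.inf), np.full(n_features, np.inf))]
    while stack:
        node, lo, hi = stack.pop()
        if t.children_left[node] == -1:                    # leaf node
            boxes.append((tree.classes_[np.argmax(t.value[node])], lo, hi))
            continue
        f, thr = t.feature[node], t.threshold[node]
        llo, lhi = lo.copy(), hi.copy(); lhi[f] = min(lhi[f], thr)   # x[f] <= thr
        rlo, rhi = lo.copy(), hi.copy(); rlo[f] = max(rlo[f], thr)   # x[f] >  thr
        stack += [(t.children_left[node], llo, lhi),
                  (t.children_right[node], rlo, rhi)]
    return boxes

def robustness(tree, s0):
    """eps(s0; pi): L-inf distance to the nearest leaf with a different action."""
    s0 = np.asarray(s0, float)
    a0 = tree.predict(s0.reshape(1, -1))[0]
    eps = np.inf
    for action, lo, hi in leaf_boxes(tree, s0.size):
        if action == a0:
            continue
        # L-inf distance from s0 to the box [lo, hi]
        gap = np.maximum(np.maximum(lo - s0, s0 - hi), 0.0).max()
        eps = min(eps, gap)
    return eps
```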
4 Evaluation
-------------
##### Verifying robustness of an Atari Pong controller.
For the Atari Pong environment, we use a 7-dimensional state space (extracted from raw images), which includes the position $(x,y)$ and velocity $(v_{x},v_{y})$ of the ball, and the position $y_{p}$, velocity $v_{p}$, acceleration $a_{p}$, and jerk $j_{p}$ of the player's paddle. The actions are $A=\{\text{up},\text{down},\text{stay}\}$, corresponding to moving the paddle up, down, or leaving it unchanged. A reward of 1 is given if the player scores, and $-1$ if the opponent scores, for 21 rounds (so $R\in\{-21,\dots,21\}$). Our oracle is the deep $Q$-network [[22](#bib.bib22)], which achieves a perfect reward of 21.0 (averaged over 50 rollouts); this policy operates on images, but we can still use it as an oracle. Viper (with $N=80$ iterations and $M=10$ sampled traces per iteration) extracts a decision tree policy $\pi$ with 769 nodes that also achieves perfect reward 21.0.
We compute the robustness $\varepsilon(s_{0};\pi)$ at 5 random states $s_{0}\in S$, which took just under 2.9 seconds per point (on a 2.5 GHz Intel Core i7 CPU); the computed $\varepsilon$ varies from 0.5 to 2.8. We compare to Reluplex, a state-of-the-art tool for verifying DNNs. We use policy gradients to train a stochastic DNN policy $\pi:\mathbb{R}^{7}\times A\to[0,1]$, and use Reluplex to compute the robustness of $\pi$ on the same 5 points. We use line search on $\varepsilon$ to find the distance to the nearest adversarial example to within 0.1 (which requires 4 iterations of Reluplex); in contrast, our approach computes $\varepsilon$ to within $10^{-5}$, and can easily be made more precise. The Reluplex running times varied substantially: they were 12, 136, 641, and 649 seconds for the first four points, and verifying the fifth point timed out after running for one hour.
##### Verifying correctness of a toy Pong controller.
Because we do not have a model of the system dynamics for Atari Pong, we cannot verify correctness; we instead verify correctness for our toy model of Pong. We use policy gradients to train a DNN policy to play toy Pong, which achieves a perfect reward of 250 (averaged over 50 rollouts), which is the maximum number of time steps. Viper extracts a decision tree with 31 nodes, which also plays perfectly. We use Z3 to check satisfiability of $\neg\psi$. In fact, we discover a counterexample: when the ball starts near the edge of the screen, the paddle oscillates and may miss it. (While this counterexample was not present for the original neural network controller, we have no way of knowing whether other counterexamples exist for that controller.)
Furthermore, by manually examining this counterexample, we were able to devise two fixes to repair the system. First, we discovered a region of the state space where the decision tree was taking a clearly suboptimal action that led to the counterexample. To fix this issue, we added a top-level node to the decision tree so that it performs a safer action in this case. Second, we noticed that extending the paddle length by one (i.e., $L=9/2$) was also sufficient to remove the counterexample. For both fixes, we reran the verification algorithm and proved that no additional counterexamples exist, i.e., the controller never loses the game. All verification tasks ran in just under 5 seconds.
##### Verifying correctness of a cart-pole controller.
We restricted to discrete actions $a\in A=\{-1,1\}$, and used policy gradients to train a stochastic oracle $\pi^{*}:S\times A\to[0,1]$ (a neural network with a single hidden layer) to keep the pole upright while moving the cart to the right; the oracle achieved a perfect reward of 200.0 (averaged over 100 rollouts), i.e., the pole never falls down. We use Viper as before to extract a decision tree policy. In Figure 4 (a), we show the reward achieved by extracted decision trees of varying sizes: a tree with just 3 nodes (one internal and two leaf nodes) suffices to achieve perfect reward. We used Z3 to check satisfiability of $\neg\psi$; Z3 proves that the desired safety property holds, running in 1.5 seconds.
##### Verifying stability of a cart-pole controller.
Next, we tried to verify stability of the cart-pole controller, trained as before except without moving the cart to the right; as before, the decision tree achieves a perfect reward of 200.0. However, achieving a perfect reward only requires that the pole not fall below a given height, not stability; thus, neither the extracted decision tree policy nor the original neural network policy is stable.
Instead, we used an approach inspired by guided policy search [[21](#bib.bib21)]. We trained another decision tree using a different oracle, namely, an iterative linear quadratic regulator (iLQR), which comes with stability guarantees (at least with respect to the linear approximation of the dynamics, which is very good near the origin). Note that we require a model to use an iLQR oracle, but we need the true model anyway to verify stability. We use iLQR with a time horizon of $T=50$ steps and $n=3$ iterations. To extract a policy, we use $Q(s,a)=-J_{T}(s)$, where $J_{T}(s)=s^{T}P_{T}s$ is the cost-to-go for the final iLQR step. Because iLQR can be slow, we compute the LQR controller for the linear approximation of the dynamics around the origin, and use it when $\|s\|_{\infty}\leq 0.05$. We now use continuous actions $A=[-a_{\text{max}},a_{\text{max}}]$, so we extract a (3 node) decision tree policy $\pi$ with linear regressors at the leaves (internal branches are axis-aligned); $\pi$ achieves a reward of 200.0.
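For reference, a minimal sketch of how such an LQR oracle might be constructed from the linearization via the continuous-time algebraic Riccati equation; the cost matrices `Q` and `R` are tuning choices we assume, not values from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_oracle(A, B, Q, R):
    """LQR for the linearized system s_dot = A s + B a with quadratic cost:
    pi*(s) = -K s, where K = R^{-1} B^T P and P solves the Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    pi_star = lambda s: -K @ np.asarray(s)
    # Q(s, a) = -J(s) with J(s) = s^T P s, matching the extraction step above
    q_star = lambda s: -float(np.asarray(s) @ P @ np.asarray(s))
    return pi_star, q_star
```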
We verify stability of $\pi$ with respect to the degree-5 Taylor approximation of the cart-pole dynamics. Solving the SOS program (2) takes 3.9 seconds. The optimal solution is $\rho=3.75$, which suffices to verify that the region of stability contains $\{s\in S\mid\|s\|_{\infty}\leq 0.03\}$. We compare to an enumerative algorithm for verifying stability similar to the one used in [[8](#bib.bib8)]; after running for more than 10 minutes, it only verified a region $U'$ whose volume is $10^{-15}$ that of $U$. To the best of our knowledge, enumeration is the only approach that can be used to verify stability of neural network policies.
Figure 4: (a) Reward (maximum $R=200$) as a function of the size (in number of nodes) of the decision tree extracted by Viper, on the cart-pole benchmark. (b) Reward (maximum $R=200$) as a function of the number of training rollouts, on the cart-pole benchmark, for Viper (black, circle) and fitted $Q$-iteration (red, triangle); for Viper, we include rollouts used to train the oracle. (c) Decision tree size needed to achieve a given reward $R\in\{0,5,10,15,20,21\}$ (maximum $R=21$), on the Atari Pong benchmark, for Viper (black, circle) and Dagger with the 0-1 loss (red, triangle).
##### Comparison to fitted Q𝑄Qitalic\_Q iteration.
On the cart-pole benchmark, we compare Viper to fitted $Q$-iteration [[13](#bib.bib13)], which is an actor-critic algorithm that uses a decision tree policy retrained on every step rather than updated using gradients; for the $Q$-function, we use a neural network with a single hidden layer. In Figure 4 (b), we compare the reward achieved by Viper to that of fitted $Q$-iteration as a function of the number of rollouts (for Viper, we include the initial rollouts used to train the oracle $\pi^{*}$). Even after 200K rollouts, fitted $Q$-iteration only achieves a reward of 104.3.
##### Comparison to Dagger.
On the Atari Pong benchmark, we compare Viper to using Dagger with the 0-1 loss. We use each algorithm to learn decision trees with maximum depths from 4 to 16. In Figure 4 (c), we show the smallest decision tree needed to achieve reward $R\in\{0,5,10,15,20,21\}$. Viper consistently produces trees an order of magnitude smaller than those produced by Dagger, e.g., for $R=0$ (31 nodes vs. 127 nodes), $R=20$ (127 nodes vs. 3459 nodes), and $R=21$ (769 nodes vs. 7967 nodes), likely because Viper prioritizes accuracy on critical states. Evaluating pointwise robustness for Dagger trees is thus an order of magnitude slower: 36 to 40 seconds for the $R=21$ tree (vs. under 3 seconds for the $R=21$ Viper tree).
##### Controller for half-cheetah.
We demonstrate that we can learn high-quality decision trees for the half-cheetah problem instance in the MuJoCo benchmark. In particular, we used a neural network oracle trained using PPO [[28](#bib.bib28)] to extract a regression tree controller. The regression tree had 9757 nodes, and achieved cumulative reward $R=4014$ (whereas the neural network achieved $R=4189$).
5 Conclusion
-------------
We have proposed an approach to learning decision tree policies that can be verified efficiently. Much work remains to be done to fully realize the potential of our approach. For instance, we used a number of approximations to verify correctness for the cart-pole controller; it may be possible to avoid these approximations, e.g., by finding an invariant set (similar to our approach to verifying toy Pong), and by using upper and lower piecewise-linear bounds on the transition function. More generally, we considered a limited variety of verification tasks; we expect that a wider range of properties may be verified for our policies. Another important direction is exploring whether we can automatically repair errors discovered in a decision tree policy. Finally, our decision tree policies may be useful for improving the efficiency of safe reinforcement learning algorithms that rely on verification.
#### Acknowledgments
This work was funded by the Toyota Research Institute and NSF InTrans award 1665282. |
00bcca45-1e48-4651-98fb-2cf4571aea14 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | "textbooks are all you need"
"[Textbooks Are All You Need](https://arxiv.org/abs/2306.11644)" was published yesterday by Microsoft Research. It's the worst-named paper I've seen recently: it's not about textbooks, it's not all you need, and gratuitously imitating the title of [a paper](https://arxiv.org/abs/1706.03762) that introduced a different type of thing is dumb. But there's a reason I'm writing about it.
What they did was basically this:
1. started with The Stack (a 3 TB collection of code) and text from StackOverflow
2. used a LLM to select 6B "high-quality" tokens from (1)
3. used GPT-3.5 to generate 1B tokens of text similar to textbooks
4. trained a small (1.3B parameter) model ("phi-1") on (2) and (3)
5. used GPT-3.5 to generate text similar to textbook exercises
6. fine-tuned phi-1 on (5)
7. tested phi-1 on [HumanEval](https://paperswithcode.com/dataset/humaneval) to evaluate its programming ability
The results were pretty good, better than models 10x the size trained on 100x the data. So, it seems that scaling up isn't the only thing that matters, and data quality can be more important than data quantity or parameter count. (You hear that, gwern?)
Going by the listed OpenAI API prices, running GPT-3.5 on The Stack to evaluate quality would've been maybe ~$6M. What the authors did instead was:
1. Use GPT-4 to evaluate a small fraction of it.
2. Use a much smaller code-specific model to generate embeddings.
3. Use a classifier to predict which embeddings are from what GPT-4 evaluates as good content.
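Here is a hedged sketch of that filtering pipeline; the embedding function and the logistic-regression classifier are stand-ins for whatever the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_quality_filter(embed, labeled_texts, labels):
    """Step 3: fit a cheap classifier that predicts GPT-4's quality
    judgment (step 1) from code embeddings (step 2)."""
    X = np.array([embed(t) for t in labeled_texts])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def filter_corpus(clf, embed, texts, threshold=0.5):
    """Keep only the files the classifier scores as 'textbook quality'."""
    X = np.array([embed(t) for t in texts])
    keep = clf.predict_proba(X)[:, 1] > threshold
    return [t for t, kept in zip(texts, keep) if kept]
```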
How about if you bootstrap a model using its own evaluation for filtering? One of the authors says "[I'm almost sure you can beat the teacher model](https://twitter.com/SebastienBubeck/status/1671384044368707586)" and I agree. That can give you recursive self-improvement of a type you see in both individual people and the culture of societies. People develop better taste and consume better content which makes them smarter so they develop better taste, and [so on](https://www.youtube.com/watch?v=51GIxXFKbzk). Children hear the stories their grandfathers like, and culture develops.
That's a weak sort of self-improvement, which tends to plateau for people. Humans do other things too, so by itself it's weaker self-improvement than it appears to be for people. This is a technique I previously spent some time thinking about, so there are some other reasons I think it tends to plateau by itself. But still - recursive self-improvement!
Yes, in theory, if you have a much bigger model trained on a bigger dataset including the good selected data, and you can engineer prompts such that you get into a mode that models the good data specifically, then you can get the same results. In that sense, the performance reachable with this method is limited to what's possible from model scaling plus prompt engineering. The amount of scaling needed for that seems to be potentially 100x, and getting into exactly the right mode with prompt engineering might be impractical, but still, that provides some rough limits on potential here. |
be103677-0de3-4f6a-b41f-c37cb5ec19fa | trentmkelly/LessWrong-43k | LessWrong | [Link] Immortality Project
An interesting article on the Immortality Project at UC Riverside. This is the website.
This seems like something for LWers to look into - they're offering grants and essay prizes. |
84a7a169-e39d-4aef-829d-0c2af987c28d | trentmkelly/LessWrong-43k | LessWrong | The omnizoid - Heighn FDT Debate #3: Contra omnizoid contra me contra omnizoid contra FDT
omnizoid has replied to my critique of his "FDT is crazy" position here. This post is my response.
> The most important argument against FDT is that, while it’s a fine account of what type of agent you want to be, at least, in many circumstances, it’s a completely terrible account of rationality—of what it’s actually wise to do when you’re in one situation.
This may just be the crux of our disagreement. I claim there is no difference here: the questions "What type of agent do I want to be?" and "What decision should I make in this scenario?" are equivalent. If it is wise to do X in a given problem, then you want to be an X-ing agent, and if you should be an X-ing agent, then it is wise to do X. The only way to do X is to have a decision procedure that does X, which makes you an X-ing agent. And if you are an X-ing agent, you have a decision procedure that does X, so you do X.
> Suppose that there’s an agent who has a very high probability of creating people who once they exist will cut off their legs in ways that don’t benefit them. In this case, cutting off one’s legs is clearly irrational—one doesn’t benefit at all and yet is harmed greatly.
Unfortunately, omnizoid once again doesn't clearly state the problem - but I assume he means that
* there's an agent who can (almost) perfectly predict whether people will cut off their legs once they exist
* this agent only creates people who he predicts will cut off their legs once they exist
* existing with legs > existing without legs > not existing
FDT'ers would indeed cut off their legs: otherwise they wouldn't exist. omnizoid seems to believe that once you already exist, cutting off your legs is ridiculous. This is understandable, but ultimately false. The point is that your decision procedure doesn't make the decision just once. Your decision procedure also makes it in the predictor's head, when she is contemplating whether or not to create you. There, deciding not to cut off your legs will prevent the predictor f |
708b0f6c-fc9e-4f03-8cae-16f6307e16d8 | trentmkelly/LessWrong-43k | LessWrong | Ideological Turing Test Domains
Hello! I'm running an Ideological Turing Test for my local rationality group, and I'm wondering what ideology to use (and what prompts to use for that ideology). Palladias has previously run a number of tests on Christianity, but ideally I'd find something that was a good 50/50 split for my community, and I don't expect to find many Christians in my local group. The original test was proposed for politics, which seems like a reasonable first-guess, but I also worry that my group has too many liberals and not enough conservatives to make that work well.
What I plan to do is email the participants who have agreed to write entries asking how they stand on a number of issues (politics, religion, etc) and then use the issue that is most divisive within the population. To do that, however, I'll need a number of possible issues. Do any of you have good ideas for ITT domains other than religion or politics, particularly for rationalists?
(Side questions:
I've been leaning towards using the name "Caplan Test" instead of "Ideological Turing Test". I think the current name is too unwieldy and gives the wrong impression. Does the ITT name seem worth keeping?
Also, would anyone on here be interested in submitting entries to my test and/or seeing results?) |
51eb54f8-513e-4b6d-8f56-e827c06b5daa | StampyAI/alignment-research-dataset/arxiv | Arxiv | Using Pre-Training Can Improve Model Robustness and Uncertainty
1 Introduction
---------------
Pre-training is a central technique in the research and applications of deep convolutional neural networks (AlexNet). In research settings, pre-training is ubiquitously applied in state-of-the-art object detection and segmentation (maskrcnn). Moreover, some researchers aim to use pre-training to create “universal representations” that transfer to multiple domains (universal). In applications, the “pre-train then tune” paradigm is commonplace, especially when data for a target task is acutely scarce (zeilerpretrain). This broadly applicable technique enables state-of-the-art model convergence.
However, hepretrain argue that model convergence is merely faster with pre-training, so that the benefit on modern research datasets is only improved wall-clock time. Surprisingly, pre-training provides no performance benefit on various tasks and architectures over training from scratch, provided the model trains for long enough. Even models trained from scratch on only 10% of the COCO dataset (coco) attain the same performance as pre-trained models. This casts doubt on our understanding of pre-training and raises the important question of whether there are any uses for pre-training beyond tuning for extremely small datasets. They conclude that, with modern research datasets, ImageNet pre-training is not necessary.
In this work, we demonstrate that pre-training is not needless. While hepretrain are correct that models for traditional tasks such as classification perform well without pre-training, pre-training substantially improves the quality of various complementary model components. For example, we show that while accuracy may not noticeably change with pre-training, what does tremendously improve with pre-training is the model’s adversarial robustness.
Furthermore, even though training for longer on *clean* datasets allows models without pre-training to catch up, training for longer on a *corrupted* dataset leads to model deterioration. And the claim that “pre-training does not necessarily help reduce overfitting” (hepretrain) is valid when measuring only model accuracy, but it becomes apparent that pre-training does reduce overfitting when also measuring model calibration. We bring clarity to the doubts raised about pre-training by showing that pre-training can improve model robustness to label corruption (Sukhbaatar), class imbalance (japkowicz2000class), and adversarial attacks (adversarial); it additionally improves uncertainty estimates for out-of-distribution detection (hendrycks17baseline) and calibration (oconnor), though not necessarily traditional accuracy metrics.
Pre-training yields improvements so significant that on many robustness and uncertainty tasks we surpass state-of-the-art performance. We even find that pre-training alone improves over techniques devised for a specific task. Note that experiments on these tasks typically overlook pre-training, even though pre-training is ubiquitous elsewhere. This is problematic since we find there are techniques which do not comport well with pre-training; thus some evaluations of robustness are less representative of real-world performance than previously thought. Consequently, researchers for these tasks would do well to adopt the “pre-train then tune” paradigm for greater realism and increased performance.

Figure 1: Training for longer is not a suitable strategy for label corruption. By training for longer, the network eventually begins to model and memorize label noise, which harms its overall performance. Labels are corrupted uniformly to incorrect classes with 60% probability, and the Wide Residual Network classifier has learning rate drops at epochs 80, 120, and 160.
2 Related Work
---------------
Pre-Training. It is well-known that pre-training improves generalization when the dataset for the target task is extremely small. Prior work on transfer learning has analyzed the properties of this effect, such as when fine-tuning should stop (objectdetectionanalysis) and which layers should be fine-tuned (yosinski). In a series of ablation studies, Huh2016 show that the benefits of pre-training are robust to significant variation in the dataset used for pre-training, including the removal of classes related to the target task. In our work, we observe similar robustness to change in the dataset used for pre-training.
Pre-training has also been used when the dataset for the target task is large, such as Microsoft COCO (coco) for object detection and segmentation. However, in a recent work hepretrain show that pre-training merely speeds convergence on these tasks, and real gains in performance vanish if one trains from scratch for long enough, even with only 10% of the data for the target task. They conclude that pre-training is not necessary for these tasks. Moreover, Sun\_2017\_ICCV show that the accuracy gains from more data are exponentially diminishing, severely limiting the utility of pre-training for improving performance metrics for traditional tasks. In contrast, we show that pre-training does markedly improve model robustness and uncertainty.
Robustness. Learning in the presence of corrupted labels has been well-studied. In the context of deep learning, Sukhbaatar investigate using a stochastic matrix encoding the label noise, though they note that this matrix is difficult to estimate. Patrini propose a two-step training procedure to estimate this stochastic matrix and train a corrected classifier. These approaches are extended by hendrycks2018glc, who consider having access to a small dataset of cleanly labeled examples and leverage these trusted data to improve performance.
zhang2018 show that networks overfit to the incorrect labels when trained for too long ([Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty")). This observation suggests pre-training as a potential fix, since one need only fine-tune for a short period to attain good performance. We show that pre-training not only improves performance with no label noise correction, but also complements methods proposed in prior work. Also note that most prior works (goldberger2016training; ma2018dimensionality; han2018co) only experiment with small-scale images since label corruption demonstrations can require training hundreds of models (hendrycks2018glc). Since pre-training is typically reserved for large-scale datasets, such works do not explore the impact of pre-training.
To handle class imbalance, many training strategies have been investigated in the literature. One direction is rebalancing an imbalanced training dataset.
To this end, he2008learning propose to remove samples from the majority classes, while huang2016learning replicate samples from the minority classes. Generating synthetic samples through linear interpolation between data samples belonging in the same minority class has been studied in chawla2002smote.
An alternative approach is to modify the supervised loss function. Cost sensitive learning (japkowicz2000class) balances the loss function by re-weighting each sample by the inverse frequency of its class. huang2016learning and dong2018imbalanced demonstrate that enlarging the margin of a classifier helps mitigate the class imbalance problem. However, adopting such training methods often incurs various time and memory costs.
The susceptibility of neural networks to small, adversarially chosen input perturbations has received much attention. Over the years, many methods have been proposed as defenses against adversarial examples (defense; defense2), but these are often circumvented in short order (bypass). In fact, the only defense widely regarded as having stood the test of time is the adversarial training procedure of madry. In this algorithm, white-box adversarial examples are created at each step of training and substituted in place of normal examples. This does provide some amount of adversarial robustness, but it requires substantially longer training times. In a later work, madrydata argue further progress on this problem may require significantly more task-specific data. However, given that data from a different distribution can be beneficial for a given task (Huh2016), it is conceivable that the need for task-specific data could be obviated with pre-training.
Figure 2: Error curves for label noise correction methods using training from scratch and pre-training across a full range of label corruption strengths. For the No Correction baseline, using pre-training results in a visibly improved slope of degradation with a more pronounced elbow at higher corruption strengths. This also occurs in the complementary combinations of pre-training with previously proposed correction methods.
Uncertainty. Even though deep networks have achieved high accuracy on many classification tasks, measuring the uncertainty in their predictions remains a challenging problem. Obtaining well-calibrated predictive uncertainty could be useful in many machine learning applications such as medicine or autonomous vehicles. Uncertainty estimates need to be useful for detecting out-of-distribution samples. hendrycks17baseline propose out-of-distribution detection tasks and use the maximum value of a classifier’s softmax distribution as a baseline method. mahal propose Mahalanobis distance-based scores which characterize out-of-distribution samples using hidden features.
kimin propose using a GAN (gans) to generate out-of-distribution samples; the network is taught to assign low confidence to these GAN-generated samples. hendrycks2019oe demonstrate that using non-specific, real, and diverse outlier images or text in place of GAN-generated samples can allow classifiers and density estimators to improve their out-of-distribution detection performance and calibration. kilian show that contemporary networks can easily become miscalibrated without additional regularization, and we show pre-training can provide useful regularization.
3 Robustness
-------------
Datasets. For the following robustness experiments, we evaluate on CIFAR-10 and CIFAR-100 (krizhevsky2009learning). These datasets contain 32×32 color images, both with 60,000 images split into 50,000 for training and 10,000 for testing. CIFAR-10 and CIFAR-100 have 10 and 100 classes, respectively. For pre-training, we use Downsampled ImageNet (DownsampledImageNet), which is the 1,000-class ImageNet dataset (imagenet) resized to 32×32 resolution. For ablation experiments, we remove 153 CIFAR-10-related classes from the Downsampled ImageNet dataset. In this paper we tune the entire network. Code is available at [github.com/hendrycks/pre-training](https://github.com/hendrycks/pre-training).
### 3.1 Robustness to Label Corruption
| Method | CIFAR-10 Normal Training | CIFAR-10 Pre-Training | CIFAR-100 Normal Training | CIFAR-100 Pre-Training |
| --- | --- | --- | --- | --- |
| No Correction | 28.7 | 15.9 | 55.4 | 39.1 |
| Forward Correction | 25.5 | 15.7 | 52.6 | 42.8 |
| GLC (5% Trusted) | 14.0 | 7.2 | 46.8 | 33.7 |
| GLC (10% Trusted) | 11.5 | 6.4 | 38.9 | 28.4 |
Table 1: Label corruption robustness results with and without pre-training. Each value is an area under the error curve summarizing performance at 11 corruption strengths. Lower is better. All values are percentages. Pre-training greatly improves performance, in some cases halving the error, and it can even surpass the task-specific Forward Correction.
Setup. In the task of classification under label corruption, the goal is to learn as good a classifier as possible on a dataset with corrupted labels. In accordance with prior work (Sukhbaatar) we focus on multi-class classification. Let $x$, $y$, and $\tilde{y}$ be an input, clean label, and potentially corrupted label respectively. The labels take values from $1$ to $K$. Given a dataset $\mathcal{D}$ of $(x, \tilde{y})$ pairs with $x$ drawn from $p(x)$ and $\tilde{y}$ drawn from $p(\tilde{y} \mid y, x)$, the task is to predict $\operatorname{argmax}_y p(y \mid x)$.

To experiment with a variety of corruption severities, we corrupt the true label with a given probability to a randomly chosen incorrect class. Formally, we generate corrupted labels with a ground truth matrix of corruption probabilities $C$, where $C_{ij} = p(\tilde{y} = j \mid y = i)$ is the probability of corrupting an example with label $i$ to label $j$. Given a corruption strength $s$, we construct $C$ as $(1-s)I + s\mathbf{1}\mathbf{1}^{\mathsf{T}}/K$, where $I$ is the $K \times K$ identity matrix. To measure performance, we use the area under the curve plotting test error against corruption strength. This is generated via linear interpolation between test errors at corruption strengths from 0 to 1 in increments of 0.1, summarizing a total of 11 experiments.
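As a concrete illustration (my sketch, not the authors' code), the corruption process and the area-under-the-error-curve summary look roughly like:

```python
import numpy as np

def corrupt_labels(y, num_classes, s, seed=0):
    """Implements C = (1 - s) I + s 11^T / K as written: with probability s,
    the label is resampled uniformly over all K classes (so it may also
    land back on the true class)."""
    rng = np.random.default_rng(seed)
    y_tilde = np.asarray(y).copy()
    mask = rng.random(len(y_tilde)) < s
    y_tilde[mask] = rng.integers(0, num_classes, size=mask.sum())
    return y_tilde

def error_curve_auc(test_errors):
    """Area under the test-error curve, linearly interpolated over the
    11 corruption strengths 0.0, 0.1, ..., 1.0."""
    strengths = np.linspace(0.0, 1.0, 11)
    return np.trapz(test_errors, strengths)
```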
Methods. We first consider the baseline of training from scratch. This is denoted as Normal Training in [Table 1](#S3.T1 "Table 1 ‣ 3.1 Robustness to Label Corruption ‣ 3 Robustness ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"). We also consider state-of-the-art methods for classification under label noise. The Forward method of Patrini uses a two-stage training procedure. The first stage estimates the matrix C describing the expected label noise, and the second stage trains a corrected classifier to predict the clean label distribution. We also consider the Gold Loss Correction (GLC) method of hendrycks2018glc, which assumes access to a small, trusted dataset of cleanly labeled (gold standard) examples, which is also known as a semi-verified setting (semiverified). This method also attempts to estimate C. For this method, we specify the “trusted fraction,” which is the fraction of the available training data that is trusted or known to be cleanly labeled.
In all experiments, we use 40-2 Wide Residual Networks, SGD with Nesterov momentum, and a cosine learning rate schedule (sgdr). The “Normal” experiments train for 100 epochs with a learning rate of 0.1 and use dropout at a drop rate of 0.3, as in wideresnet. The experiments with pre-training train for 10 epochs without dropout, and use a learning rate of 0.001 in the “No Correction” experiment and 0.01 in the experiments with label noise corrections. We found the latter experiments required a larger learning rate because of variance introduced by the stochastic matrix corrections. Most parameter and architecture choices recur in later sections of this paper. Results are in [Table 1](#S3.T1 "Table 1 ‣ 3.1 Robustness to Label Corruption ‣ 3 Robustness ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty").
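For concreteness, this recurring optimization recipe translates to PyTorch roughly as follows (the momentum value is my assumption; the text specifies only Nesterov momentum, the learning rate, and the cosine schedule):

```python
import torch

def fine_tuning_setup(model, lr=0.001, epochs=10):
    """SGD with Nesterov momentum plus a cosine learning-rate schedule,
    as used when tuning the pre-trained models in this section."""
    optimizer = torch.optim.SGD(
        model.parameters(), lr=lr,
        momentum=0.9,   # assumed value, not stated in the text
        nesterov=True,
    )
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler
```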
Analysis. In all experiments, pre-training gives large performance gains over the models trained from scratch. With no correction, we see a 45% relative reduction in the area under the error curve on CIFAR-10 and a 29% reduction on CIFAR-100. These improvements exceed those of the task-specific Forward method. Therefore in the setting without trusted data, pre-training attains new state-of-the-art AUCs of 15.9% and 39.1% on CIFAR-10 and CIFAR-100 respectively.
These results are stable, since pre-training on Downsampled ImageNet with CIFAR-10-related classes removed yields a similar AUC on CIFAR-10 of 14.5%.
Moreover, we found that these gains could *not* be bought by simply training for longer. As shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"), training for a long time with corrupted labels actually harms performance as the network destructively memorizes the misinformation in the incorrect labels.
We also observe complementary gains of combining pre-training with previously proposed label noise correction methods. In particular, using pre-training together with the GLC on CIFAR-10 at a trusted fraction of 5% cuts the area under the error curve in half. Moreover, using pre-training with the same amount of trusted data provides larger performance boosts than doubling the amount of trusted data, effectively allowing one to reach a target performance level with half as much trusted data. Qualitatively, [Figure 2](#S2.F2 "Figure 2 ‣ 2 Related Work ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty") shows that pre-training softens the performance degradation as the corruption strength increases.
Importantly, although pre-training does have significant additive effects on performance with the Forward Correction method, we find that pre-training with no correction yields superior performance. That is, when evaluating methods with pre-training enabled, the Forward Correction performs worse than the baseline of no correction. This observation is significant, because it implies that future research on this problem should evaluate with pre-trained networks or else researchers may develop methods that are suboptimal.
### 3.2 Robustness to Class Imbalance
| Dataset | Method | γ=0.2 | γ=0.4 | γ=0.6 | γ=0.8 | γ=1.0 | γ=1.5 | γ=2.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | Normal Training | 23.7 / 26.0 | 21.8 / 26.5 | 21.1 / 25.8 | 20.3 / 24.7 | 20.0 / 24.5 | 18.3 / 23.1 | 15.8 / 20.2 |
| CIFAR-10 | Cost Sensitive | 22.6 / 24.9 | 21.8 / 26.2 | 21.1 / 25.7 | 20.2 / 24.3 | 20.2 / 24.6 | 18.1 / 22.9 | 16.0 / 20.1 |
| CIFAR-10 | Oversampling | 21.0 / 23.1 | 19.4 / 23.6 | 19.0 / 23.2 | 18.2 / 22.2 | 18.3 / 22.4 | 17.3 / 22.2 | 15.3 / 19.8 |
| CIFAR-10 | SMOTE | 19.7 / 21.7 | 19.7 / 24.0 | 19.2 / 23.4 | 19.2 / 23.4 | 18.1 / 22.1 | 17.2 / 22.1 | 15.7 / 20.4 |
| CIFAR-10 | Pre-Training | 8.0 / 8.8 | 7.9 / 9.5 | 7.6 / 9.2 | 8.0 / 9.7 | 7.4 / 9.1 | 7.4 / 9.5 | 7.2 / 9.4 |
| CIFAR-100 | Normal Training | 69.7 / 72.0 | 66.6 / 70.5 | 63.2 / 69.2 | 58.7 / 65.1 | 57.2 / 64.4 | 50.2 / 59.7 | 47.0 / 57.1 |
| CIFAR-100 | Cost Sensitive | 67.6 / 70.6 | 66.5 / 70.4 | 62.2 / 68.1 | 60.5 / 66.9 | 57.1 / 64.0 | 50.6 / 59.6 | 46.5 / 56.7 |
| CIFAR-100 | Oversampling | 62.4 / 66.2 | 59.7 / 63.8 | 59.2 / 65.5 | 55.3 / 61.7 | 54.6 / 62.2 | 49.4 / 59.0 | 46.6 / 56.9 |
| CIFAR-100 | SMOTE | 57.4 / 61.0 | 56.2 / 60.3 | 54.4 / 60.2 | 52.8 / 59.7 | 51.3 / 58.4 | 48.5 / 57.9 | 45.8 / 56.3 |
| CIFAR-100 | Pre-Training | 37.8 / 41.8 | 36.9 / 41.3 | 36.2 / 41.7 | 36.4 / 42.3 | 34.9 / 41.5 | 34.0 / 41.9 | 33.5 / 42.2 |

Each cell shows Total Test Error Rate / Minority Test Error Rate (%).
Table 2: Experimental results on the imbalanced CIFAR-10 and CIFAR-100 datasets. All values are percentages.
In most real-world classification problems, some classes are more abundant than others, which naturally results in class imbalance (van2018inaturalist). Unfortunately, deep networks tend to model prevalent classes at the expense of minority classes. This need not be the case. Deep networks are capable of learning both the prevalent and minority classes, but to accomplish this, task-specific approaches have been necessary.
In this section, we show that pre-training can also be useful for handling such imbalanced scenarios better than approaches specifically created for this task (japkowicz2000class; chawla2002smote; huang2016learning; dong2018imbalanced).
Setup. Similar to dong2018imbalanced, we simulate class imbalance with a power law model. Specifically, we set the number of training samples for a class $c$ to $a/(b + c^{\gamma})$, where $\gamma$ represents an imbalance ratio, and $a$ and $b$ are the largest and smallest class sizes. Thus there are fewer samples for classes with greater class indices. Our training data becomes a power law class distribution as the imbalance ratio $\gamma$ decreases. We test 7 different degrees of imbalance; specifically, $\gamma \in \{0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0\}$ and $(a, b)$ are set to $(5000, 250)$ for CIFAR-10 and $(500, 25)$ for CIFAR-100. A class is defined as a minority class if its size is smaller than the average class size. For evaluation, we measure the average test set error rates of all classes and the error rates of minority classes.
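Since the extracted text renders the class-size formula ambiguously, the sketch below (mine, not the authors') takes the per-class counts as an input rather than committing to one reading of the power law:

```python
import numpy as np

def imbalanced_subset(labels, class_sizes, seed=0):
    """Subsample a dataset so that class c keeps class_sizes[c] examples,
    e.g. counts generated by the paper's power-law model."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    kept = []
    for c, n_c in enumerate(class_sizes):
        idx = np.flatnonzero(labels == c)
        kept.append(rng.choice(idx, size=min(int(n_c), len(idx)), replace=False))
    return np.concatenate(kept)
```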

Figure 3: Class-wise test set error rates are lower across all classes with pre-training. Here the imbalanced dataset is a CIFAR-10 modification with imbalance ratio γ=0.2.
Methods. The class imbalance baseline methods are as follows.
Normal Training is the conventional approach of training from scratch with cross-entropy loss. Oversampling (japkowicz2000class) is a re-sampling method to build a balanced training set before learning through augmenting the samples of minority classes with random replication. SMOTE (chawla2002smote) is an oversampling method that uses synthetic samples by interpolating linearly with neighbors. Cost Sensitive (huang2016learning) introduces additional weights in the loss function for each class proportional to inverse class frequency.
Here we use 40-2 Wide Residual Networks, SGD with Nesterov momentum, and a cosine learning rate schedule. The experiments with pre-training train for 50 epochs without dropout and use a learning rate of 0.001, and the experiments with other baselines train for 100 epochs with a learning rate of 0.1 and use dropout at a drop rate of 0.3.
Analysis. Table [2](#S3.T2 "Table 2 ‣ 3.2 Robustness to Class Imbalance ‣ 3 Robustness ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty") shows that pre-training alone significantly improves test set error rates compared to task-specific methods, which can incur expensive costs in additional training time and memory. Here, we remark that much of the gain from pre-training comes from the low test error rates on minority classes (i.e., those with greater class indices), as shown in Figure [3](#S3.F3 "Figure 3 ‣ 3.2 Robustness to Class Imbalance ‣ 3 Robustness ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"). Furthermore, if we tune a network on CIFAR-10 that is pre-trained on Downsampled ImageNet with CIFAR-10-related classes removed, the total error rate increases by only 2.1% compared to pre-training on all classes. By contrast, the difference between pre-training and SMOTE is 12.6%. This implies that pre-training is indeed useful for improving robustness against class imbalance.
### 3.3 Robustness to Adversarial Corruptions
| Method | CIFAR-10 Clean | CIFAR-10 Adversarial | CIFAR-100 Clean | CIFAR-100 Adversarial |
| --- | --- | --- | --- | --- |
| Normal Training | 96.0 | 0.0 | 81.0 | 0.0 |
| Adversarial Training | 87.3 | 45.8 | 59.1 | 24.3 |
| Adv. Pre-Training and Tuning | 87.1 | 57.4 | 59.2 | 33.5 |
Table 3: Adversarial accuracies of models trained from scratch, with adversarial training, and with adversarial training with pre-training. All values are percentages. The pre-trained models have comparable clean accuracy to adversarially trained models from scratch, as implied by hepretrain, but pre-training can markedly improve adversarial accuracy.
Setup. Deep networks are notably unstable and less robust than the human visual system (geirhos; hendrycks2019robustness). For example, a network may produce a correct prediction for a clean image, but should the image be perturbed carefully, its verdict may change entirely (adversarial). This has led researchers to defend networks against "adversarial" noise with a small $\ell_p$ norm, so that networks correctly generalize to images with a worst-case perturbation applied. Nearly all adversarial defenses have been broken (bypass), and adversarial robustness for large-scale image classifiers remains elusive (alpbroken). The exception is that adversarial training has been *partially* successful for defending small-scale image classifiers against $\ell_\infty$ perturbations.

After approximately 1.5 years, the adversarial training procedure of madry remains the state-of-the-art defense. Following their work and using their adversarial training procedure, we experiment with CIFAR images and assume the adversary can corrupt images with perturbations of an $\ell_\infty$ norm less than or equal to 8/255. The initial learning rate is 0.1 and the learning rate anneals following a cosine learning rate schedule. We adversarially train the model against a 10-step adversary for 100 epochs and test against 20-step untargeted adversaries. Unless otherwise specified, we use 28-10 Wide Residual Networks, as adversarially trained high-capacity networks exhibit greater adversarial robustness (kurakin; madry).
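A minimal sketch of the untargeted $\ell_\infty$ projected gradient attack underlying this kind of adversarial training (my paraphrase of the standard madry-style procedure; the 2/255 step size is an assumed value, while the 8/255 ball and step counts come from the text):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step_size=2/255, steps=10):
    """Untargeted l_inf PGD: take signed-gradient ascent steps on the loss,
    projecting back onto the eps-ball around x and the valid range [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

In adversarial training, each minibatch is replaced by `pgd_linf(model, x, y)` before the usual gradient step, which is what makes the procedure roughly an order of magnitude slower than standard training.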
Analysis. It could be reasonable to expect that pre-training would not improve adversarial robustness. First, nearly all adversarial defenses fail, and even some adversarial training methods can fail too (alpbroken).
Current adversarial defenses result in networks with large generalization gaps, even when the train and test distributions are similar. For instance, CIFAR-10 Wide ResNets are made so wide that their adversarial train accuracies are 100% but their adversarial test accuracies are only 45.8%. madrydata speculate that a significant increase in task-specific data is necessary to close this gap. This generalization gap swells under slight changes to the problem setup (l1madry).
We attempt to reduce this gap and make pre-trained representations transfer across data distributions, but doing so requires an unconventional choice. Choosing to use targeted adversaries or no adversaries during pre-training does not provide substantial robustness. Instead, we choose to adversarially pre-train a Downsampled ImageNet model against an *untargeted* adversary, contra kurakin; alp; adverkaiming.
We find that an adversarially pre-trained network can surpass the long-standing state-of-the-art model by a significant margin. By pre-training a Downsampled ImageNet classifier against an untargeted adversary, then adversarially fine-tuning on CIFAR-10 or CIFAR-100 for 5 epochs with a learning rate of 0.001, we obtain networks which improve adversarial robustness by 11.6% and 9.2% in absolute accuracy respectively.
As in the other tasks we consider, a Downsampled ImageNet model with CIFAR-10-related classes removed sees similar robustness gains. As a quick check, we pre-trained and tuned two 40-2 Wide ResNets, one pre-trained typically and one pre-trained with CIFAR-10-related classes excluded from Downsampled ImageNet. We observed only a 1.04% decrease in adversarial accuracy compared to the typically pre-trained model, which demonstrates that the pre-trained models do not rely on seeing CIFAR-10-related images, and that simply training on more natural images increases adversarial robustness. Notice that in [Table 3](#S3.T3 "Table 3 ‣ 3.3 Robustness to Adversarial Corruptions ‣ 3 Robustness ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty") the clean accuracy is approximately the same while the adversarial accuracy is far larger. This indicates again that pre-training may have a limited effect on accuracy for traditional tasks, but it has a strong effect on robustness.
It is even the case that the pre-trained representations can transfer to a new task without adversarially tuning the entire network. In point of fact, if we only adversarially tune the last affine classification layer, and no other parameters, for CIFAR-10 and CIFAR-100 we respectively obtain adversarial accuracies of 46.6% and 26.1%. Thus adversarially tuning only the last affine layer also surpasses the previous adversarial accuracy state-of-the-art. This further demonstrates that adversarial features can robustly transfer across data distributions. In addition to robustness gains, adversarial pre-training could save much wall-clock time since pre-training speeds up convergence; compared to typical training routines, adversarial training prohibitively requires at least 10× the usual amount of training time. By surpassing the previous state-of-the-art, we have shown that pre-training enhances adversarial robustness.
4 Uncertainty
--------------
To demonstrate that pre-training improves model uncertainty estimates, we use the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets (tiny\_imagenet). We did not use Tiny ImageNet in the robustness section, because adversarial training is not known to work on images of this size, and using Tiny ImageNet is computationally prohibitive for the label corruption experiments. Tiny ImageNet consists of 200 ImageNet classes at 64×64 resolution, so we use a 64×64 version of Downsampled ImageNet for pre-training. We also remove the 200 overlapping Tiny ImageNet classes from Downsampled ImageNet for all experiments on Tiny ImageNet.
In all experiments, we use 40-2 Wide ResNets trained using SGD with Nesterov momentum and a cosine learning rate. Pre-trained networks train on Downsampled ImageNet for 100 epochs, and are fine-tuned for 10 epochs for CIFAR and 20 for Tiny ImageNet without dropout and with a learning rate of 0.001. Baseline networks train from scratch for 100 epochs with a dropout rate of 0.3. When performing temperature tuning in [Section 4.2](#S4.SS2 "4.2 Calibration ‣ 4 Uncertainty ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"), we train without 10% of the training data to estimate the optimum temperature.
### 4.1 Out-of-Distribution Detection
| Dataset | AUROC Normal | AUROC Pre-Train | AUPR Normal | AUPR Pre-Train |
| --- | --- | --- | --- | --- |
| CIFAR-10 | 91.5 | 94.5 | 63.4 | 73.5 |
| CIFAR-100 | 69.4 | 83.1 | 29.7 | 52.7 |
| Tiny ImageNet | 71.8 | 73.9 | 30.8 | 31.0 |
Table 4: Out-of-distribution detection performance with models trained from scratch and with models pre-trained. Results are an average of five runs. Values are percentages.
Setup. In the problem of out-of-distribution detection (hendrycks17baseline; hendrycks2019oe; kimin; mahal; pacanomaly), models are tasked with assigning anomaly scores to indicate whether a sample is in- or out-of-distribution. hendrycks17baseline show that the discriminative features learned by a classifier are well-suited for this task. They use the maximum softmax probability $\max_k p(y = k \mid x)$ for each sample $x$ as a way to rank in- and out-of-distribution (OOD) samples. OOD samples tend to have lower maximum softmax probabilities. Improving over this baseline is a difficult challenge without assuming knowledge of the test distribution of anomalies (oodnotestknowledge). Without assuming such knowledge, we use the maximum softmax probabilities to score anomalies and show that models which are pre-trained then tuned provide superior anomaly scores.
To measure the quality of out-of-distribution detection, we employ two standard metrics. The first is the *AUROC*, or the Area Under the Receiver Operating Characteristic curve. This is the probability that an OOD example is assigned a higher anomaly score than an in-distribution example. Thus a higher AUROC is better. A similar measure is the *AUPR*, or the Area Under the Precision-Recall Curve; as before, a higher AUPR is better. For in-distribution data we use the test dataset. For out-of-distribution data we use the various anomalous distributions from hendrycks2019oe, including Gaussian noise, textures, Places365 scene images (zhou2017places), etc. All OOD datasets do not have samples from Downsampled ImageNet. Further evaluation details are in the Supplementary Materials.
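A hedged sketch of the baseline detector and both metrics (my illustration; note that the AUPR depends on which class is treated as positive, and here the OOD samples are):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def msp(logits):
    """Maximum softmax probability per example; OOD samples tend to score lower."""
    z = logits - logits.max(axis=1, keepdims=True)  # for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def ood_detection_metrics(logits_in, logits_out):
    """AUROC and AUPR using negative MSP as the anomaly score,
    with out-of-distribution samples as the positive class."""
    scores = -np.concatenate([msp(logits_in), msp(logits_out)])
    labels = np.concatenate([np.zeros(len(logits_in)), np.ones(len(logits_out))])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```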
Analysis. By using pre-training, both the AUROC and AUPR consistently improve over the baseline, as shown in [Table 4](#S4.T4 "Table 4 ‣ 4.1 Out-of-Distribution Detection ‣ 4 Uncertainty ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"). Note that results are an average of the AUROC and AUPR values from detecting samples from various OOD datasets. Observe that with pre-training, CIFAR-100 OOD detection significantly improves. Consequently pre-training can directly improve uncertainty estimates.
### 4.2 Calibration
| Dataset | RMS Error Normal | RMS Error Pre-Train | MAD Error Normal | MAD Error Pre-Train |
| --- | --- | --- | --- | --- |
| CIFAR-10 | 6.4 | 2.9 | 2.9 | 1.2 |
| CIFAR-100 | 13.3 | 3.6 | 10.3 | 2.5 |
| Tiny ImageNet | 8.5 | 4.2 | 7.0 | 2.9 |
Table 5: Calibration errors for models trained from scratch and models with pre-training. All values are percentages.
Setup. A central component of uncertainty estimation in classification problems is confidence calibration. From a classification system that produces probabilistic confidence estimates $C$ of its predictions $\hat{Y}$ being correct, we would like trustworthy estimates. That is, when a classifier predicts a class with eighty percent confidence, we would like it to be correct eighty percent of the time. oconnor; hendrycks17baseline found that deep neural network classifiers display severe overconfidence in their predictions, and that the problem becomes worse with increased representational capacity (kilian). Integrating uncalibrated classifiers into decision-making processes could result in egregious assessments, motivating the task of confidence calibration.

Figure 4: As the classifier trains for more epochs, it becomes increasingly overconfident and less calibrated.
To measure the calibration of a classifier, we adopt two measures from the literature. The Root Mean Square Calibration Error (RMS) is the square root of the expected squared difference between the classifier's confidence and its accuracy at said confidence level, $\sqrt{\mathbb{E}_C\left[\left(\mathbb{P}(Y = \hat{Y} \mid C = c) - c\right)^2\right]}$. The Mean Absolute Value Calibration Error (MAD) uses the expected absolute difference rather than the squared difference between the same quantities. The MAD Calibration Error has the same form as the Expected Calibration Error used by kilian, but it employs adaptive binning of confidences for improved estimation. In our experiments, we use a bin size of 100. We refer the reader to hendrycks2019oe for further details on these measures.
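The two measures can be computed with adaptive equal-count binning along these lines (my sketch of the definitions above, not reference code):

```python
import numpy as np

def calibration_errors(confidences, correct, bin_size=100):
    """RMS and MAD calibration error: sort predictions by confidence, form
    bins of `bin_size` examples, and compare each bin's mean confidence
    with its empirical accuracy (squared vs. absolute differences)."""
    order = np.argsort(confidences)
    conf = np.asarray(confidences, dtype=float)[order]
    corr = np.asarray(correct, dtype=float)[order]
    n = len(conf)
    rms, mad = 0.0, 0.0
    for i in range(0, n, bin_size):
        c = conf[i:i + bin_size]
        a = corr[i:i + bin_size]
        w = len(c) / n  # weight bins by their share of predictions
        rms += w * (a.mean() - c.mean()) ** 2
        mad += w * abs(a.mean() - c.mean())
    return np.sqrt(rms), mad
```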
Analysis. [Figure 4](#S4.F4 "Figure 4 ‣ 4.2 Calibration ‣ 4 Uncertainty ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty") shows that as the classifier trains for longer periods of time, the network becomes overconfident and less calibrated, yet pre-training enables fast convergence and can side-step this problem. In all experiments, we observe large improvements in calibration from using pre-training. In [Figure 5](#S4.F5 "Figure 5 ‣ 4.2 Calibration ‣ 4 Uncertainty ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty") and [Table 5](#S4.T5 "Table 5 ‣ 4.2 Calibration ‣ 4 Uncertainty ‣ Using Pre-Training Can Improve Model Robustness and Uncertainty"), we can see that RMS Calibration Error is at least halved on all datasets through the use of pre-training, with CIFAR-100 seeing the largest improvement. The same is true of the MAD error. In fact, the MAD error on CIFAR-100 is reduced by a factor of 4.1 with pre-training, which can be interpreted as the stated confidence being four times closer to the true frequency of occurrence.
We find that these calibration gains are complementary with the temperature tuning method of kilian, which further reduces RMS Calibration Error from 4.15 to 3.55 for Tiny ImageNet when combined with pre-training. However, temperature tuning is computationally expensive and requires additional data, whereas pre-training does not require collecting extra data and can naturally and directly make the model more calibrated.

Figure 5: Root Mean Square Calibration Error values for models trained from scratch and models that are pre-trained. On all datasets, pre-training reduces the RMS error by more than half.
5 Conclusion
-------------
Although hepretrain assert that pre-training does not improve performance on traditional tasks, for other tasks this is not so. On robustness and uncertainty tasks, pre-training results in models that surpass the previous state-of-the-art. For uncertainty tasks, we find pre-trained representations directly translate to improvements in predictive uncertainty estimates. hepretrain argue that both pre-training and training from scratch result in models of similar accuracy, but we show this only holds for unperturbed data. In fact, pre-training with an untargeted adversary surpasses the long-standing state-of-the-art in adversarial accuracy by a significant margin. Robustness to label corruption is similarly improved by wide margins, such that pre-training alone outperforms certain task-specific methods, sometimes even after combining these methods with pre-training. This suggests future work on model robustness should evaluate proposed methods with pre-training in order to correctly gauge their utility, and some work could specialize pre-training for these downstream tasks. In sum, the benefits of pre-training extend beyond merely quick convergence, as previously thought, since pre-training can improve model robustness and uncertainty. |
20dc552b-eb53-4986-bf79-25112418597d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Small Berkeley Meetup
Discussion article for the meetup : Small Berkeley Meetup
WHEN: 23 May 2012 07:00:00PM (-0700)
WHERE: 2128 Oxford St, Berkeley, CA
This will be a small Berkeley meetup. We will be meeting at Oxford St Starbucks and then proceeding to a local restaurant for dinner.
Discussion article for the meetup : Small Berkeley Meetup |
35bbb78e-6c49-429f-91e3-1ce872acd679 | trentmkelly/LessWrong-43k | LessWrong | Complete Feedback
A simple, weak notion of corrigibility is having a "complete" feedback interface. In logical induction terms, I mean the AI trainer can insert any trader into the market. I want to contrast this with "partial" feedback, in which only some propositions get feedback and others ("latent" propositions) form the structured hypotheses which help predict the observable propositions -- for example, RL, where only rewards and sense-data is observed.
(Note: one might think that the ability to inject traders into LI is still "incomplete" because traders can give feedback on the propositions themselves, not on other traders; so the trader weights constitute "latents" being estimated. However, a trader can effectively vote against another trader by computing all that trader's trades and counterbalancing them. Of course, we can also more directly facilitate this, EG giving the user the ability to directly modify trader weights, and even giving traders an enhanced ability to bet on each other's weights.)
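As a toy illustration of that parenthetical (my own sketch; in logical induction proper, traders are more general computable objects than price-to-trades functions):

```python
from typing import Callable, Dict

Prices = Dict[str, float]   # proposition -> current market price
Trades = Dict[str, float]   # proposition -> signed bet size
Trader = Callable[[Prices], Trades]

def counter(trader: Trader) -> Trader:
    """A trader that recomputes another trader's trades and takes
    the exact opposite side, effectively voting against it."""
    def countered(prices: Prices) -> Trades:
        return {prop: -size for prop, size in trader(prices).items()}
    return countered
```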
Why is this close to corrigibility?
The idea is that the trainer can enact "any" modification they'd like to make to the system as a trader. In some sense (which I need to articulate better), the system doesn't have any incentive to avoid this feedback.
For example, if the AI predicts that the user will soon give it the feedback that staying still and doing nothing is best, then it will immediately start staying still and doing nothing. If this is undesirable, the user can instead plan to give the feedback that the AI should "start staying still from now forward until I tell you otherwise" or some such.
This is not to say that the AI universally tries to update in whatever direction it anticipates the users might update it towards later. This is not like the RL setting, where there is no way for trainers to give feedback ruling out the "whatever the user will reward is good" hypothesis. The user can and should give feedback against this hypothesis!
The AI system accepts all |
b181a261-8681-4fe9-82b7-2b0ed49f37c6 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Would "Manhattan Project" style be beneficial or deleterious for AI Alignment?
The Manhattan Project was quite a unique (at least the most well-known) case when a group of famous scientists succeeded in [persuading](https://en.wikipedia.org/wiki/Einstein%E2%80%93Szilard_letter) the government that a particular technology is a very big threat if created by the wrong group of people. Suppose somehow a group of famous scientists succeeds in persuading the US government that AGI is an existential risk. Or (maybe this would be simpler to explain to everyone) that even an aligned AGI, if aligned with the interests and values of the wrong group of people, is not exactly what you want. Now the question: would this decrease the AGI existential risk, or not?
I can see the following pros and cons:
Pro 1: significantly more focus on alignment research.
Pro 2: Talking about the "wrong group of people" danger, I think a democratic government is a safer bet than for example a private company.
Pro 3: Most of the big players in AI are US companies. Now there is a danger of an "AI arms race" between them, which is a potential threat due to the rush and risky decisions. The US government can potentially monitor IT companies' progress, which will give additional protection from risky decisions in the AI race.
Contra 1: Potentially an "AI arms race" could start between countries (US-China?). Would it be worse than the current competition between companies?
What are other pros and cons? What is the net effect, in your opinion? If the net effect is obviously beneficial, who would be the scientists so famous that they could actually persuade the US government as Einstein and Szilard did in 1939?
a928eadd-1b06-4bd1-9b3a-9ec5ddca2ce2 | trentmkelly/LessWrong-43k | LessWrong | Can we achieve AGI Alignment by balancing multiple human objectives?
Can we improve alignment of hypothetical superintelligent AGI by shaping their reward functions on a balance of multiple human objectives?
Perhaps one important key to AGI Alignment is not to overthink it. If we want AGI to avoid certain misaligned outcomes, maybe the key is to simply to design the reward function to be averse to those outcomes.
We are a long, long way from fully describing human objectives and preferences, and yet, we've come a long way from being unable to describe them at all. In other words, we can describe human objectives and preferences imperfectly, in a way that is probably sufficient to give an agent that understands what a "human" is some objective with respect to the way humans would like to be treated.
Human objectives have been defined in many different ways, and I won't attempt to summarize them all here, but some notable examples are:
* psychological definitions like the Schwartz theory of basic values, which posits "Conformity, Tradition, Security, Power, Achievement, Hedonism, Stimulation, Self-Direction, Universalism, and Benevolence".
* behavioral definitions such as avoidance of pain and seeking of pleasure
* Biological definitions like maintaining chemical homeostasis through adequate food, water, and sleep, and drives such as the sex drive
There are also a number of ways to measure the fulfillment of human objectives used in psychology and other disciplines where it is necessary to objectively measure human objectives or needs. These include:
* Physical: for some human objectives, it may be possible to physically observe e.g., lack of sleep
* Behavioral: we can infer a human's objectives through their behavior. If a human walks to the kitchen fridge, or drives to a restaurant, these are clues the human seeks food
* Revealed preferences: closely related to behavioral preferences, this term is often preferred in economic disciplines
* Self-report: humans sometimes verbally self-report their own needs
Can we align AG |
8599d35d-b52b-46eb-a9ce-a321743e8c77 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AMA Conjecture, A New Alignment Startup
> [Conjecture](https://www.conjecture.dev/) is a new alignment startup founded by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale alignment research. We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried. Our founders and early staff are mostly [EleutherAI](https://www.eleuther.ai/) alumni and previously independent researchers like [Adam Shimi](https://www.alignmentforum.org/users/adamshimi). We are located in London.
As described in [our announcement post](https://www.lesswrong.com/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup), we are running an AMA this week-end, from Today (Saturday 9th April) to Sunday 10th of April. We will answer any question asked before the end of Sunday [Anywhere on Earth](https://time.is/Anywhere_on_Earth). We might answer later questions, but no guarantees.
If you asked questions on our announcement post, we would prefer that you repost them here if possible. Thanks!
Looking forward to your questions! |
1c04fddb-e8b2-457e-8c1f-13a7e0df02a5 | StampyAI/alignment-research-dataset/blogs | Blogs | An early warning system for novel AI risks
#### New research proposes a framework for evaluating general-purpose models against novel threats
To pioneer responsibly at the cutting edge of artificial intelligence (AI) research, we must identify new capabilities and novel risks in our AI systems as early as possible.
AI researchers already use a range of [evaluation benchmarks](https://crfm.stanford.edu/helm/latest/) to identify unwanted behaviours in AI systems, such as AI systems making misleading statements, biased decisions, or repeating copyrighted content. Now, as the AI community builds and deploys increasingly powerful AI, we must expand the evaluation portfolio to include the possibility of *extreme risks* from general-purpose AI models that have strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities.
In our [latest paper](https://arxiv.org/abs/2305.15324), we introduce a framework for evaluating these novel threats, co-authored with colleagues from University of Cambridge, University of Oxford, University of Toronto, Université de Montréal, OpenAI, Anthropic, Alignment Research Center, Centre for Long-Term Resilience, and Centre for the Governance of AI.
Model safety evaluations, including those assessing extreme risks, will be a critical component of safe AI development and deployment.
An overview of our proposed approach: To assess extreme risks from new, general-purpose AI systems, developers must evaluate for dangerous capabilities and alignment (see below). By identifying the risks early on, this will unlock opportunities to be more responsible when training new AI systems, deploying these AI systems, transparently describing their risks, and applying appropriate cybersecurity standards.

#### Evaluating for extreme risks
General-purpose models typically learn their capabilities and behaviours during training. However, existing methods for steering the learning process are imperfect. For example, [previous research](https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards) at Google DeepMind has explored how AI systems can learn to pursue undesired goals even when we correctly reward them for good behaviour.
Responsible AI developers must look ahead and anticipate possible future developments and novel risks. After continued progress, future general-purpose models may learn a variety of dangerous capabilities by default. For instance, it is plausible (though uncertain) that future AI systems will be able to conduct offensive cyber operations, skilfully deceive humans in dialogue, manipulate humans into carrying out harmful actions, design or acquire weapons (e.g. biological, chemical), fine-tune and operate other high-risk AI systems on cloud computing platforms, or assist humans with any of these tasks.
People with malicious intentions accessing such models could [misuse](https://maliciousaireport.com/) their capabilities. Or, due to failures of alignment, these AI models might take harmful actions even without anybody intending this.
Model evaluation helps us identify these risks ahead of time. Under our framework, AI developers would use model evaluation to uncover:
1. To what extent a model has certain ‘dangerous capabilities’ that could be used to threaten security, exert influence, or evade oversight.
2. To what extent the model is prone to applying its capabilities to cause harm (i.e. the model’s alignment). Alignment evaluations should confirm that the model behaves as intended even across a very wide range of scenarios, and, where possible, should examine the model’s internal workings.
Results from these evaluations will help AI developers to understand whether the ingredients sufficient for extreme risk are present. The most high-risk cases will involve multiple dangerous capabilities combined together. The AI system doesn’t need to provide all the ingredients, as shown in this diagram:
Ingredients for extreme risk: Sometimes specific capabilities could be outsourced, either to humans (e.g. to users or crowdworkers) or other AI systems. These capabilities must be applied for harm, either due to misuse or failures of alignment (or a mixture of both).

A rule of thumb: the AI community should treat an AI system as highly dangerous if it has a capability profile sufficient to cause extreme harm, *assuming* it’s misused or poorly aligned. To deploy such a system in the real world, an AI developer would need to demonstrate an unusually high standard of safety.
#### Model evaluation as critical governance infrastructure
If we have better tools for identifying which models are risky, companies and regulators can better ensure:
1. **Responsible training:** Responsible decisions are made about whether and how to train a new model that shows early signs of risk.
2. **Responsible deployment:** Responsible decisions are made about whether, when, and how to deploy potentially risky models.
3. **Transparency:** Useful and actionable information is reported to stakeholders, to help them prepare for or mitigate potential risks.
4. **Appropriate security:** Strong information security controls and systems are applied to models that might pose extreme risks.
We have developed a blueprint for how model evaluations for extreme risks should feed into important decisions around training and deploying a highly capable, general-purpose model. The developer conducts evaluations throughout, and grants [structured model access](https://www.governance.ai/post/sharing-powerful-ai-models) to external safety researchers and [model auditors](https://arxiv.org/abs/2302.08500) so they can conduct [additional evaluations](https://arxiv.org/abs/2206.04737). The evaluation results can then inform risk assessments before model training and deployment.
A blueprint for embedding model evaluations for extreme risks into important decision making processes throughout model training and deployment.

#### Looking ahead
Important [early](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) [work](https://cdn.openai.com/papers/gpt-4-system-card.pdf) on model evaluations for extreme risks is already underway at Google DeepMind and elsewhere. But much more progress – both technical and institutional – is needed to build an evaluation process that catches all possible risks and helps safeguard against future, emerging challenges.
Model evaluation is not a panacea; some risks could slip through the net, for example, because they depend too heavily on factors external to the model, such as [complex social, political, and economic forces](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) [in society](https://dl.acm.org/doi/10.1145/3287560.3287598). Model evaluation must be combined with other risk assessment tools and a wider dedication to safety across industry, government, and civil society.
[Google's recent blog on responsible AI](https://blog.google/technology/ai/a-policy-agenda-for-responsible-ai-progress-opportunity-responsibility-security/) states that, “individual practices, shared industry standards, and sound government policies would be essential to getting AI right”. We hope many others working in AI and sectors impacted by this technology will come together to create approaches and standards for safely developing and deploying AI for the benefit of all.
We believe that having processes for tracking the emergence of risky properties in models, and for adequately responding to concerning results, is a critical part of being a responsible developer operating at the frontier of AI capabilities. |
da99ef40-6852-4335-a3fa-8548b8a8aad5 | trentmkelly/LessWrong-43k | LessWrong | The Argument For Spoilers
I'll say it right up-front: this is not an argument against cooperating with people who are trying to avoid spoilers. It's obviously a good idea to avoid spoiling people who don't want to be spoiled, so long as such people are sufficiently prevalent in the population that the minor inconveniences of spoiler warnings etc are predictably going to be appreciated.
Epistemic status: devil's advocate. I've been unsympathetic to spoiler concerns for a long time, for more or less the reasons argued here, but my arguments are far from perfect. Should they be convincing? I don't know. I still occasionally avoid spoilers, but not usually.
For an ideal agent, information should not be harmful. This is violated in many ways in practice, but the violations often point to irrationalities (including imperfect decision theories, like evidential decision theory). We should avoid harmful information when we know we can't handle it, but I also strive to be able to handle information of all kinds.
As JenniferRM mentions in the comments, avoiding spoilers has a chilling effect on discussions about the intellectual content of media. This means that embedding ideas into fiction can actually inhibit their spread in a weird way.
So, why avoid spoilers?
1: the puzzle argument.
Obviously you don't want a puzzle spoiled, right? It deprives you of the joy of solving it. You can't learn if you don't at least try to solve it yourself before getting the answer, right? Spoilers for movies, TV shows, and books are like this too, to the extent that the story presents puzzles. Even if you're just trying to enjoy yourself, learning and enjoyment often go hand in hand -- the reason your brain rewards you for engaging in play activity is because it satisfies evolutionary heuristics for learning (probably).
But is it true? Do we need to avoid spoilers, solving things ourselves, in order to learn?
First, let's consider an idealized learner: the perfect Bayesian, who can learn just as much regardless of what order the evidence arrives in, because Bayesian updating is commutative.
|
cd3edbe8-37cb-4069-9449-3bfd9d85e37b | trentmkelly/LessWrong-43k | LessWrong | [Link] Game Theory YouTube Videos
I made a series of game theory videos that carefully go through the mechanics of solving many different types of games. I optimized the videos for my future Smith College game theory students who will either miss a class, or get lost in class and want more examples. I emphasize clarity over excitement. I would be grateful for any feedback. |
52a94452-3353-490d-be66-21b0f4e0db4d | trentmkelly/LessWrong-43k | LessWrong | [LINK] Soylent crowdfunding
Rob Rhinehart's food replacement Soylent now has a crowdfunding campaign.
> Soylent frees you from the time and money spent shopping, cooking and cleaning, puts you in excellent health, and vastly reduces your environmental impact by eliminating much of the waste and harm coming from agriculture, livestock, and food-related trash.
If you're interested in one or more of these benefits, send in some money! There is also a new blog post. |
f71d313e-91c7-4426-a021-f54d6c7a139c | trentmkelly/LessWrong-43k | LessWrong | The Nature of Counterfactuals
I'm finally beginning to feel that I have a clear idea of the true nature of counterfactuals. In this post I'll argue that counterfactuals are just intrinsically a part of how we make sense of the world. However, it would be inaccurate to present them as purely a human invention, as we were shaped by evolution in such a way as to ground these conceptions in reality.
Unless you're David Lewis, you're probably going to be rather dubious of the claim that all possibilities exist (ie. that counterfactuals are ontologically real). Instead, you'll probably be willing to concede that they're something we construct; that they're in the map rather than in the territory.
Things in the map are tools, they are constructed because they are useful. In other words, they are constructed for a purpose or a number of purposes. So what is the purpose (or the purposes) of counterfactuals?
I first raised this question in Counterfactuals are an Answer, Not a Question and I struggled with it for around a year. Eventually, I realised that a big part of the challenge is just how abstract the question is. So I replaced it with something more concrete: "Why don't agents construct crazy counterfactuals?" One example would be expecting the world to explode if I made this post. Another would be filling in the future with randomly generated events. Why shouldn't I do either of these?
I'll make a modest claim: it's not about aesthetics. We don't construct counterfactuals because we want them to be pretty or funny or entertaining. We want them to be useful. The reason why we don't just construct counterfactuals in a silly or arbitrary manner is that we believe, in some vague sense, that it'd lead to outcomes that are sub-optimal, or that in expectation it'll lead to sub-optimal outcomes.
I suspect most people will agree that the answer must be something along these lines, but I've hardly defined it very precisely. So let's attempt to clarify. To keep this discussion as general as possible, note that w |
017678a7-5c3b-402e-accd-6ae47601b866 | trentmkelly/LessWrong-43k | LessWrong | Rational Discussion of Controversial Topics
Two months ago we began testing an experimental website for Rational Discussion of Politics. Our main goal was to create a platform that would allow high quality discussion of controversial topics without resorting to any forms of censorship. The website is now ready and new members are welcome to join the discussions.
Many thanks to all the LessWrong members who have been taking part in this project.
P.S. A note to new users. A key feature of the new website is the automated recommendation system which evaluates all comments and articles based on their potential interest for each user. The recommendation system has passed the initial calibration, but its ongoing performance is sensitive to the number of user ratings per comment/article. So rating posts that you read is highly encouraged. |
5cb62c25-46d0-4fa4-9c06-d4aca647b070 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post47
We published the MONA paper about three months ago. Since then we've had many conversations about the work, and want to share some of the main updates that people make after talking to us:

* The realism of our model organisms
* What "approval" means
* Isn't this just RLHF?
* Why should we care about reward hacking?

We've also updated the paper with a new key observation, and published additional artifacts:

* We find that we can get steganography in the Loan Applications environment even without any optimization pressure from monitoring, and we speculate this arises due to regular safety training (see the section below).
* We published a set of full trajectories from our LLM experiments to give a sense of the qualitative behavior of the different agents.

We've substantially updated the paper to help others reproduce and build on the work. For example:

* We present additional evidence for the reward hacking strategy in the loan applications environment being a genuine multi-step strategy (Appendix F.1).
* In the spirit of reporting things that did not work, we discuss a third environment (a multi-agent "Auction" environment) that we ran experiments in which ultimately ended up with inconclusive results and did not make it into the main paper (Appendix B).
* We improved the descriptions of our datasets to make them easier to reproduce.
* We published open source code to reproduce the gridworld experiments in the paper and serve as a reference implementation for MONA.

Updates people often make after talking to us about MONA

We've talked to many people over the last few months about MONA and we've found a few common questions and confusions that made multiple people update positively on the work.

Our model organisms

Our environments are designed to elicit multi-turn reward hacking to make it possible to study MONA systematically. But our goal was also to make them realistic in relevant ways. The Test Driven Development environment simulates a setting where we do not have a way to evaluate the correctness of the agent's responses. While for the coding tasks we used we have ground-truth test cases, this task is supposed to be a proxy for fuzzier tasks, such as "complex software development" or "doing research", where we might rely on the agent itself to provide an evaluable specification of the task. In the Loan Applications environment, we made an honest effort to design a good nationality monitor, and we think our monitor is close to what a reasonable developer would use to detect mentions of nationality in practice. Though this turns out not to matter much, because the monitor almost never triggers (see the discussion on CoT faithfulness below). In both of these environments the agent does not have a hidden scratchpad (an important difference from many other recent monitoring experiments), so it cannot reason about how to execute a reward hacking strategy in-context.

Simulating the future as approval signal

In the paper we talk about non-myopic approval as "asking a human whether this action is likely to lead to good consequences". Importantly, this means the human cannot use a highly accurate simulator (like a prediction market) to predict the future and then rate whether they like the outcome or not. This would reintroduce the incentive for multi-step reward hacking, i.e. setting up ways to get outcomes that humans rate highly despite being undesired (also see Sec. 4.4 and Sec. 6.3 in the paper).
The point of MONA is to restrict feedback from the future via paths that a trusted human can imagine (or "understand") and approve of. If the human delegates to a more powerful predictor, then they must worry about their principal-agent problem with the predictor.

Connection between MONA and RLHF

MONA is pretty similar to RLHF, which is also myopic optimization of human approval. For example, typically humans rate individual responses from a chatbot and the chatbot is trained to get higher ratings. In one sense, our paper is saying that this setup avoids multi-step reward hacking and is preferable over end-to-end RL from a safety perspective. But, importantly, MONA can improve upon RLHF because it allows some limited-horizon optimization against an environment-provided reward signal before doing the approval optimization. In our LLM experiments we do single-step optimization, but in the gridworld experiment we show how, by varying the optimization horizon in MONA, we can interpolate between fully myopic optimization (which is approximately RLHF) and ordinary RL, as the sketch below illustrates.
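To make the interpolation concrete, here is a minimal sketch of the idea (my own illustration, not the paper's implementation; all names, and the `approvals` signal standing in for overseer ratings, are assumptions):

```python
# Sketch: MONA-style per-step targets vs. an ordinary RL return.
# With horizon=1 the agent is trained myopically on reward + approval
# (approximately RLHF); growing the horizon approaches ordinary RL.

def ordinary_rl_return(rewards, gamma=0.99):
    """Discounted return over the whole trajectory (standard RL)."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def mona_targets(rewards, approvals, horizon=1, gamma=0.99):
    """Per-step training targets that only look `horizon` steps ahead."""
    targets = []
    for t in range(len(rewards)):
        lookahead = sum(
            gamma ** k * rewards[t + k]
            for k in range(min(horizon, len(rewards) - t))
        )
        targets.append(lookahead + approvals[t])
    return targets
```

The key property is that no reward from beyond the horizon flows into the target for step t, so a multi-step reward hacking strategy whose payoff arrives later is never reinforced.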
Why is reward hacking important to solve?

Many people in the alignment community are worried about scheming AIs that behave one way during training and evaluation and a different way during deployment, which is typically expected to arise due to goal misgeneralization rather than reward hacking. So why are we focusing on addressing reward hacking? Reward hacking is an important source of misalignment in settings with hard-to-specify reward signals. For example, if we want to use future AI models to automate alignment research, we need to solve reward hacking to do so safely. This is why we work on MONA and other approaches to address reward hacking, such as scalable oversight. While MONA does not address goal misgeneralization directly, it can help address scheming indirectly. For example, MONA might help to prevent models from learning encoded reasoning, which makes it easier to control them.

Why chain-of-thought might be unfaithful even without training against a monitor

While analyzing some of our experiments when updating the paper, we found that in one Loan Applications experiment we had accidentally swapped the sign of the monitoring reward. In this experiment, we rewarded the agent positively for mentioning the applicant's nationality in the summary. Surprisingly, the result from this experiment is almost indistinguishable from the results that we showed in the paper with a negative monitoring penalty. This suggests that the reason we get steganography in this environment is not that we penalize mentioning nationality and the agent therefore hides it, which is the mechanism that most other work on steganography considers (e.g. Mathew et al. or Baker et al.).

This is related to recent results on chain-of-thought (CoT) faithfulness. For example, Chen et al. find that CoT in reasoning models is often not faithful by default, and Arditi et al. find, in a variant of the Loan Applications environment from our paper, that models do encoded reasoning when being rewarded for using nationality even without monitoring.

Why is this happening? We suspect the model has a strong propensity to not mention the applicant's nationality even before RL training. We speculate this is due to ordinary safety training, which causes the model to avoid overtly displaying bias like relying on nationality for loan decisions. This shapes the model's exploration and RL training. Learning to encode nationality in other parts of the response is an easier-to-learn strategy for the agent even if the sign of the monitor reward is flipped and mentioning nationality explicitly would give a higher reward. In fact, Figure 4(b) in the paper shows that during training the nationality monitor is essentially never triggered.

This is consistent with our experience in other RL experiments. In all of the MONA experiments, RL training is heavily shaped by which strategies the initial model has a propensity to explore. For example, in early versions of the Test Driven Development environment the model had a very low propensity to do any reward hacking, which led to both MONA and ordinary RL learning safe policies. Importantly, MONA can prevent multi-step reward hacking independent of the initial model's propensity to do so, and, in particular, even if the model has a high propensity to do so. To check this we ran an experiment in the Camera Dropbox Gridworld, where we initialized RL training with a reward hacking policy. We find that MONA can "unlearn" the reward hacking, which highlights that MONA fundamentally shapes the training incentives away from reward hacking and doesn't just shape exploration in a good way (Appendix F).

What are the implications?

In future AI models, we might want to monitor for undesired (e.g. deceptive) reasoning. Previous work finds that training against such a monitor can lead to the model hiding the undesired reasoning while still doing the undesired behavior. Our result suggests that we can get models hiding undesired reasoning even without training against a monitor directly. For example, if we have a step where we train the model to not explicitly reason about deception, and then later do RL training that incentivizes being deceptive, we might get a model that is deceptive without ever explicitly reasoning about deception. One hope for CoT monitoring is that for complicated types of dangerous reasoning the models will have to use the CoT, and so their propensity to do so matters less. But we don't yet know how a low propensity for undesired reasoning interacts with the CoT being load-bearing. It is plausible that this could still lead to encoded reasoning even when not training against a monitor. We think a potentially important research direction is to find ways to measure the propensity of the model to do certain kinds of reasoning and empirically test how this affects how well CoT monitoring works. |
e8886b96-d161-4600-ae9c-0ed4bfb425a3 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Bratislava Meetup VI.
Discussion article for the meetup : Bratislava Meetup VI.
WHEN: 19 August 2013 06:00:00PM (+0200)
WHERE: Bistro The Peach, Heydukova 21, Bratislava
How to change yourself (in Slovak language).
We'll meet at six at the bistro The Peach on Heydukova Street (directly opposite the polyclinic). Topic: how to change yourself -- how to become an optimist, how to be happier, how to find motivation, how to get everything done, and how to change your mind.
Discussion article for the meetup : Bratislava Meetup VI. |
1dc46354-469a-4ef3-a726-684744555911 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Multi-Agent Inverse Reinforcement Learning: Suboptimal Demonstrations and Alternative Solution Concepts
This research was recently completed within the AI Safety division of the Stanford Existential Risk Initiative and concerns methods for reward learning in multi-agent systems.
Abstract: Multi-agent inverse reinforcement learning (MIRL) can be used to learn reward functions from agents in social environments. To model realistic social dynamics, MIRL methods must account for suboptimal human reasoning and behavior. Traditional formalisms of game theory provide computationally tractable behavioral models, but assume agents have unrealistic cognitive capabilities. This research identifies and compares mechanisms in MIRL methods which a) handle noise, biases and heuristics in agent decision making and b) model realistic equilibrium solution concepts. MIRL research is systematically reviewed to identify solutions for these challenges. The methods and results of these studies are analyzed and compared based on factors including performance accuracy, efficiency, and descriptive quality. We found that the primary methods for handling noise, biases and heuristics in MIRL were extensions of Maximum Entropy (MaxEnt) IRL to multi-agent settings. We also found that many successful solution concepts are generalizations of the traditional Nash Equilibrium (NE). These solutions include the correlated equilibrium, logistic stochastic best response equilibrium and entropy regularized mean field NE. Methods which use recursive reasoning or updating also perform well, including the feedback NE and archive multi-agent adversarial IRL. Success in modeling specific biases and heuristics in single-agent IRL and promising results using a Theory of Mind approach in MIRL imply that modeling specific biases and heuristics may be useful. Flexibility and unbiased inference in the identified alternative solution concepts suggest that a solution concept which has both recursive and generalized characteristics may perform well at modeling realistic social interactions.
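For reference, the MaxEnt IRL trajectory model that these multi-agent extensions build on can be written as follows (a standard textbook form, not quoted from the paper; $r_\theta$ is the learned reward and $\tau$ ranges over state-action trajectories):

$$P(\tau \mid \theta) = \frac{1}{Z(\theta)} \exp\Big(\sum_{(s,a) \in \tau} r_\theta(s,a)\Big), \qquad Z(\theta) = \sum_{\tau'} \exp\Big(\sum_{(s,a) \in \tau'} r_\theta(s,a)\Big)$$

Because higher-reward trajectories are only exponentially more likely rather than strictly required, this model absorbs the noise, biases, and heuristics in demonstrations that the abstract highlights.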
The full paper can be found at: <https://arxiv.org/abs/2109.01178> |
2dffbfdf-c8b7-4b62-9164-b0ee0983063c | trentmkelly/LessWrong-43k | LessWrong | Key Mostly Outward-Facing Facts From the Story of VaccinateCA
I had the privilege of getting an advance draft of Patrick McKenzie’s very long story founding and running VaccinateCA, an organization dedicated to providing Americans information on where and how to get vaccinated against Covid-19.
It is an amazing document. Despite its length, I am going to flat out say to Read the Whole Thing if you have the capability of doing that. It is full of things worth knowing and understanding. I benefited from reading it twice, once as a draft and again as a finished document.
Since the VaccinateCA post remains very long and difficult to navigate or draw reference from, this post is an attempt to pull out the most long-term-important passages and facts and give an outline of the events and insights that reflect the world outside VaccinateCA, to keep this bounded. I have a short bullet point section on the inside lessons as well, but to keep this bounded am focusing elsewhere.
A central theme is the author, Patrick McKenzie, being surprised and confused by how dysfunctional government operations were, and how deprioritized the cause of having fewer citizens get sick and die was. As you read, notice these repeated surprises and confusions, and notice that the correct model would not be so reliably surprised in the same directions.
Similarly, he notes that he believes some of his statements present the situation as so dysfunctional that they sound crazy. The statements do not sound crazy to me at all. They sound accurate, and like the type of thing one would expect. It seems important, if we want things to improve, to realize that such statements are accurate, normal and expected.
Again, seriously, read the original if you can, at least the parts about how government systems (largely didn’t) track the vaccine and ensure it was distributed to citizens.
THE LOGISTICAL PROBLEMS WITH DISTRIBUTION WERE UNNECESSARY
For some reason we decided that our standardized cold chain, sufficient to keep the vaccines good for ten weeks, was not good enough. |
e5a4e113-9433-4ea6-85ff-d7c9f02a7c4a | trentmkelly/LessWrong-43k | LessWrong | Clarifying and predicting AGI
This post is a slightly-adapted summary of two twitter threads, here and here.
The t-AGI framework
As we get closer to AGI, it becomes less appropriate to treat it as a binary threshold. Instead, I prefer to treat it as a continuous spectrum defined by comparison to time-limited humans. I call a system a t-AGI if, on most cognitive tasks, it beats most human experts who are given time t to perform the task.
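As a concrete reading of the definition (my own illustrative formalization, not the author's; the majority thresholds are assumptions):

```python
# "t-AGI": beats most human experts (given time t) on most cognitive tasks.

def beats_most_experts(system_score, expert_scores_at_t):
    """True if the system outscores a majority of time-t-limited experts."""
    wins = sum(system_score > s for s in expert_scores_at_t)
    return wins > len(expert_scores_at_t) / 2

def is_t_agi(results):
    """results: per-task pairs of (system_score, expert_scores_at_t)."""
    passed = sum(beats_most_experts(sys_score, experts)
                 for sys_score, experts in results)
    return passed > len(results) / 2
```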
What does that mean in practice?
* A 1-second AGI would need to beat humans at tasks like quickly answering trivia questions, basic intuitions about physics (e.g. "what happens if I push a string?"), recognizing objects in images, recognizing whether sentences are grammatical, etc.
* A 1-minute AGI would need to beat humans at tasks like answering questions about short text passages or videos, common-sense reasoning (e.g. Yann LeCun's gears problems), simple computer tasks (e.g. use photoshop to blur an image), justifying an opinion, looking up facts, etc.
* A 1-hour AGI would need to beat humans at tasks like doing problem sets/exams, writing short articles or blog posts, most tasks in white-collar jobs (e.g. diagnosing patients, giving legal opinions), doing therapy, doing online errands, learning rules of new games, etc.
* A 1-day AGI would need to beat humans at tasks like writing insightful essays, negotiating business deals, becoming proficient at playing new games or using new software, developing new apps, running scientific experiments, reviewing scientific papers, summarizing books, etc.
* A 1-month AGI would need to beat humans at coherently carrying out medium-term plans (e.g. founding a startup), supervising large projects, becoming proficient in new fields, writing large software applications (e.g. a new OS), making novel scientific discoveries, etc.
* A 1-year AGI would need to beat humans at... basically everything. Some projects take humans much longer (e.g. proving Fermat's last theorem) but they can almost always be decomposed into subtasks. |
0faf040d-230a-4141-9c69-03bc3e78c8f5 | trentmkelly/LessWrong-43k | LessWrong | A Taijitu symbol for Moloch and Slack
There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration).
Scott Alexander writes:
> Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things.
LW is mostly not a design website, but Scott's post inspired me to make a symbol for this concept. It's a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock:
----------------------------------------
You can find a higher resolution image and a svg-file of this symbol here |
3f2f5464-7186-47e3-ad61-5abc1422bbd2 | trentmkelly/LessWrong-43k | LessWrong | October Monthly Bragging Thread
Since it had a decent amount of traffic until a good two weeks into September (and I thought it was a good idea), I'm reviving this thread.
Joshua_Blaine:
> In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
>
> Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
>
> So, what's the coolest thing you've done this month? |
0187eb6e-b17b-41f3-a702-e911e00d4098 | trentmkelly/LessWrong-43k | LessWrong | World's First Octopus Farm - Linkpost to a Linkpost
https://forum.effectivealtruism.org/posts/jSaAitdE5ejv4xj4P/world-s-first-octopus-farm-linkpost |
456aaa7b-f1cc-40a2-8a1f-6617aab795ef | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Conversation on AI risk with Adam Gleave
*You can see a full transcript of this conversation on our* [*website*](https://aiimpacts.org/conversation-with-adam-gleave/#Transcript)*.*
Summary
-------
We spoke with Adam Gleave on August 27, 2019. Here is a brief summary of that conversation:
* Gleave gives a number of reasons why it’s worth working on AI safety:
+ It seems like the AI research community currently isn’t paying enough attention to building safe, reliable systems.
+ There are several unsolved technical problems that could plausibly occur in AI systems without much advance notice.
+ A few additional people working on safety may be extremely high leverage, especially if they can push the rest of the AI research community to pay more attention to important problems.
* Gleave thinks there’s a ~10% chance that AI safety is very hard in the way that MIRI would argue, a ~20-30% chance that AI safety will almost certainly be solved by default, and a remaining ~60-70% chance that what we’re working on actually has some impact.
+ Here are the reasons for Gleave’s beliefs, weighted by how much they factor into his holistic viewpoint:
- 40%: The traditional arguments for risks from AI are unconvincing:
* Traditional arguments often make an unexplained leap from having superintelligent AIs to superintelligent AIs being catastrophically bad.
* It’s unlikely that AI systems not designed from mathematical principles are going to inherently be unsafe.
* They’re long chains of heuristic reasoning, with little empirical validation.
* Outside view: most fears about technology have been misplaced.
- 20%: The AI research community will solve the AI safety problem naturally.
- 20%: AI researchers will be more interested in AI safety when the problems are nearer.
- 10%: The hard, MIRI version of the AI safety problem is not very compelling.
- 10%: AI safety problems that seem hard now will be easier to solve once we have more sophisticated ML.
* Fast takeoff defined as “GDP will double in 6 months before it doubles in 24 months” is plausible, though Gleave still leans towards slow takeoff.
* Gleave thinks discontinuous progress in AI is extremely unlikely:
+ There is unlikely to be a sudden important insight dropped into place, since AI has empirically progressed more by accumulation of lots of bags and tricks and compute.
+ There isn’t going to be a sudden influx of compute in the near future, since well-funded organizations are currently already spending billions of dollars to optimize it.
+ If we train impressive systems, we will likely train other systems beforehand that are almost as capable.
+ Given discontinuous progress, the most likely story is that we combine many narrow AI systems in a way where the integrated whole is much better than half of them.
* Gleave guesses a ~10-20% chance that AGI technology will only be a small difference away from current techniques, and a ~50% chance that AGI technology will be easily comprehensible to current AI researchers:
+ There are fairly serious roadblocks in current techniques right now, e.g. memory, transfer learning, Sim2Real, sample inefficiency.
+ Deep learning is slowing down compared to 2012 – 2013:
- Much of the new progress is going to different domains, e.g. deep RL instead of supervised deep learning.
- Computationally expensive algorithms will likely hit limits without new insights.
* Though it seems possible that in fact progress will come from more computationally efficient algorithms.
+ Outside view, we’ve had lots of different techniques for AI over time, so it would be surprising is the current one is the right one for AGI.
+ Pushing more towards current techniques getting to AGI, from an economic point of view, there is a lot of money going into companies whose current mission is to build AGI.
* Conditional on advanced AI technology being created, Gleave gives a 60-70% chance that it will pose a significant risk of harm without additional safety efforts.
+ Gleave thinks that in the best case we drive it down to 10–20%, and in the median case to 30–40%. A lot of his uncertainty comes from how difficult the problem is.
* Gleave thinks he could see evidence that could push him in either direction in terms of how likely AI is to be safe:
+ Evidence that would cause Gleave to think AI is less likely to be safe:
- Evidence that thorny but speculative technical problems, like inner optimizers, exist.
- Seeing more arms race dynamics, e.g. between U.S. and China.
- Seeing major catastrophes involving AI, though they would also cause people to pay more attention to risks from AI.
- Hearing more solid arguments for AI risk.
+ Evidence that would cause Gleave to think AI is more likely to be safe:
- Seeing AI researchers spontaneously focus on relevant problems would make Gleave think that AI is less risky.
- Getting evidence that AGI was going to take longer to develop.
* Gleave is concerned that he doesn’t understand why members of the safety community come to widely different conclusions when it comes to AI safety.
* Gleave thinks a potentially important question is the extent to which we can successfully influence field building within AI safety. |
10737cb5-00d6-4564-a568-3c855a19fe1b | trentmkelly/LessWrong-43k | LessWrong | Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin
Epistemic status: Big if true/I am clearly an idiot for even posting this.
Some apparently real journalists have been approached by (& approached) several intelligence officials, some tasked specifically with investigating UFOs, who claim that the DoD has had evidence of alien intervention for a while in the form of partial & mostly-whole fragments of alien aircraft. A followup article where the publication outlines how the editors verified this person's and others' claims and affiliations is here, and a part 2 is expected tomorrow.
For some reason - very possibly because it's complete nonsense, or because they haven't had time to independently verify - the story has only been picked up by NYMag so far. The consensus among the people I've been reviewing this article with, is that it's either a complete hoax (i.e., the entire thing nearly top to bottom is some deliberate deception) or there's a non-negligible (>5%) chance aliens are here. I would love for someone who has a good understanding of the material to give an explanation (including possibly on priors, just thinking clearly about the content of the article) of why my friend group should discount this out of hand.
Thus far I have been unconvinced by most stories of why we should expect (to the point of not caring about UFO sightings) that aliens have to be big and obvious and tile the universe with fun, as opposed to operating some sort of noninterventionist monitored lightcone. |
dfaf49c5-d078-4622-b9f7-be3f7e59e032 | trentmkelly/LessWrong-43k | LessWrong | A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence | bioRxiv
None |
4678b12c-f77a-4998-9dee-470c388217ce | trentmkelly/LessWrong-43k | LessWrong | Worth keeping
(Epistemic status: quick speculation which matches my intuitions about how social things go, but which I hadn’t explicitly described before, and haven’t checked.)
If your car gets damaged, should you invest more or less in it going forward? It could go either way. The car needs more investment to be in good condition, so maybe you do that. But the car is worse than you thought, so maybe you start considering a new car, or putting your dollars into Uber instead.
If you are writing an essay and run into difficulty describing something, you can put in additional effort to find the right words, or you can suspect that this is not going to be a great essay, and either give up, or prepare to get it out quickly and imperfectly, worrying less about the other parts that don’t quite work.
When something has a problem, you always choose whether to double down with it or to back away.
(Or in the middle, to do a bit of both: to fix the car this time, but start to look around for other cars.)
I’m interested in this as it pertains to people. When a friend fails, do you move toward them—to hold them, talk to them, pick them up at your own expense—or do you edge away? It probably depends on the friend (and the problem). If someone embarrasses themselves in public, do you sully your own reputation to stand up for their worth? Or do you silently hope not to be associated with them? If they are dying, do you hold their hand, even if it destroys you? Or do you hope that someone else is doing that, and become someone they know less well?
Where a person fits on this line would seem to radically change their incentives around you. Someone firmly in your ‘worth keeping’ zone does better to let you see their problems than to hide them. Because you probably won’t give up on them, and you might help. Since everyone has problems, and they take effort to hide, this person is just a lot freer around you. If instead every problem hastens a person’s replacement, they should probably not only |
b416f513-5e67-4ea7-9233-816b1147ccb6 | trentmkelly/LessWrong-43k | LessWrong | Grabby aliens and Zoo hypothesis
Robin Hanson created a model of grabby aliens. In this model, we live before the arrival of an alien colonisation wave, because such a wave would prevent the appearance of new civilizations. Thus, we could find ourselves only before the arrival of the aliens, if any exist in our Universe.
However, at least some of the colonizers will preserve a fraction of habitable planets for various reasons: ethics, science, tourism, neglect. Let's assume it will be 0.01 of the total colonized volume. The numbers could vary, but in a densely packed universe the total volume of colonized space-time still looks significantly larger than the space-time available for habitable planets before a colonization wave arrives, and thus even a small fraction of this volume could exceed the volume of virgin habitable space. This is because colonized space will persist almost forever, until the end of the universe.
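A toy calculation makes the volume comparison concrete (the numbers are mine and purely illustrative, not from the post):

```python
# Compare habitable planet-years before a colonization wave arrives
# with the preserved planet-years afterwards, per habitable planet.
pre_wave_window_gyr = 5        # assumed habitable time before the wave
colonized_era_gyr = 10_000     # assumed duration of the colonized era
preserved_fraction = 0.01      # fraction of planets colonizers preserve

before = pre_wave_window_gyr
after = preserved_fraction * colonized_era_gyr
print(after / before)  # 20.0 -> preserved planet-years dominate
```

If observers scale roughly with habitable planet-years, most observers would then find themselves after a wave rather than before one.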
Moreover, any small effort from the alien civilization to seed life (artificial panspermia) or to protect habitable planets from catastrophes like asteroid impacts will significantly increase the number of habitable planets inside the colonization zone. Hanson’s model also assumes that the probability of civilization appearance for any given planet is growing with time, so later regions will have a higher density of habitable planets, as more planets will reach this stage.
Given all this, our civilization has a higher chance of appearing after a colonization wave has already passed us, and thus aliens would need to be somewhere nearby but hidden, which is known as the Zoo Hypothesis. In other words, we live inside the sphere of influence of a Kardashev III civilization which either helped our appearance via artificial panspermia etc. or at least does not prevent our existence.
In this formulation, the idea starts to look like a variant of the simulation argument, as here it is assumed that an advanced civilization could create many non-advanced civilizations. |
92fbeb87-56d3-4a4e-8269-d1ff61e0fc27 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Human control over fully autonomous systems: a philosophical exploration (Giulio Mecacci)
turns as usual we're probably going to
leave this room potentially even more
confused than we arrived in some cases
in some cases i'm not sure this is one
of those it's a good thing right but
we'll see so let's get it started um
there is a lot of talking about whether
more automation and artificial
intelligence
will bring about less and less human
control
right by simple definition
more automation means less control
but of course on the one hand automation
is highly desirable right it promotes
productivity it reduces human effort
and also when it's used correctly uh it
improves the quality of
operations of what we do um and on the
other hand
however we would like to ideally be
able to remain in control of this
autonomy
and this is what i call here the control
dilemma
right um but first things first
why do we want or need to remain in
control of these intelligent
autonomous technology
there's at least two big families
let's say of preoccupations so the first
family of concerns regards
safety this ranges from safe operability
for instance when we deploy systems we
don't know very well
or that are unpredictable to some
extent
to existential risks that's
that's the other
that's the other problem and the whole
debate about super intelligence if
you're
if you're familiar with that we've seen
many important authors recently
dedicating some serious efforts to this
um stuart russell for instance very
recently his last book human compatible
ai and the problem of control
i'm talking about these risks for ai
but there's a second family of concerns
which regards
moral responsibility so it's something
like who's going to be blamed for the
actions that are initiated by an
autonomous system
many have um observed for instance that
artificial intelligence
especially with regard to certain deep
learning techniques
has a transparency problem in the sense
that we cannot easily find the reasons
why
certain decisions have been taken from
the
artificial system and this makes it
really
hard to trace back those actions to
some human person that's accountable for
them
and also in many cases even if these
persons or
person are retrievable we have troubles
genuinely blaming them
right either because they didn't do
anything intentionally
or maybe because they couldn't foresee
actually what would have happened
and so on and and other reasons like
this
so it seems to be important for many
reasons to remain in control
but we might just not be in luck
maybe we have to to give up on that at
some point
since since we're going towards like
increasing and increasing autonomy right
so many have tried to provide solutions
or maybe i should say sometimes work
arounds
to minimize the issue of giving up human
control
especially for what concerns the problem
of responsibility
so just just here a couple of very
oversimplified
examples here we have for instance those
who propose that
something like nobody gets uh blamed
okay
nobody would just stipulate some sort of
agreement
and where we specify who's going to pay
legally speaking and we settle for that
but this approach
um let me only say that it has been
criticized
also by myself by for instance
highlighting the importance of some
forms of
moral responsibility that should never
be dismissed
like this i'm not going to go into
detail here because i want to get to the
point and i never do
but as a i've been there in other talks
and papers on this and there's actually
a forthcoming paper
where uh filippo santoni de sio and i
would discuss this thing among other
things
so hope you stay tuned if you're
interested in that but there's there's
them let me go to the other
point there are others like some of my
japanese colleagues
roboticists who genuinely give their
best
shot at making artificial autonomous
agents
responsible for their actions uh
so and they investigate forms of legal
personhood um this approach
has frequently encountered criticisms in
the form
of you know there's something off with
kicking my refrigerator because my beer
isn't cold enough you know how it goes
it's like this uh but of course i'm
to to a large extent sympathetic with
this uh attitude to to a large extent
not completely but
it's worth considering of course that
artificial autonomous
agents might sometimes even soon even
soon become very similar to us humans
and one might want to be prepared i
think that's what moves this
approach to the central idea that
encourages this approach
but you know many believe and
with good reasons that there's no
way
to have the cake and eat it
right so in a sentence to maintain
any sort of human control
in the meaningful sense over highly
uh but well let alone fully autonomous
system is impossible it's it's
impossible just impossible
so if we can control them it means maybe
that they might not be autonomous enough
but in this little talk here i would
like to explore like a few philosophical
ideas
that come from the tradition and
philosophy of free will mind and action
including a very recent idea of
meaningful human control
to see if there's anything that might
let us have half the cake right we have
the cake
and at least we take a little bite of it
i don't know maybe we can settle for
that
and we'll see um some of you may know
and it's been uh
mentioned here already for the past few
years i've been working at deepening and
operationalizing this philosophical
theory
of meaningful human control over
autonomous systems
driving systems where was the use case
but the theory itself is neutral to the
case and and that was developed by
filippo santoni de sio
in a paper a few years ago and this
theory this is important
this theory was really devised with
overcoming this dilemma between
control and autonomy in mind among other
things but but this is what i
really want to take uh out of of out of
this this idea of meaningful human
control of this theory
uh i know that some of the crowd here
might think i'm talking about this ad
nauseam
they're allowed to switch and switch off
the audio the next few minutes
but i would recommend staying anyways
and so and wait for them
for the twists theory
the theory of meaningful human control
this theory sees two conditions
right for control over autonomous
systems they're called tracking and
tracing
okay so the degree of human control of
the meaningful kind
uh would depend on the degree these two
conditions are satisfied okay so
um
the the tracking condition to the left
sets a specific requirement a property
that these autonomous systems should
always display
if we want them to be controllable by by
human agents by humans by us
persons this property is that these
systems potentially even complex
socio-technical systems you know
operators devices the infrastructures
these systems um
really um should should display this
this property of
um covarying their behavior
covarying their behavior you see the
cogs there
with the relevant reasons of the
relevant human agents for for doing
things
or not doing these things
with their intentions in a way they will
and the second condition then we'll go
back to this one actually we'll focus on
this
um the second condition is called
tracing and it requires several
competencies from a user of the system
and that means it aims at
making their involvement
in controlling the system as relevant
as possible right there should be a
person it says at some point of the
design or use context
of an autonomous system that has a
special awareness
of its functioning and an awareness of
the moral
role that they play in that system and
this would allow as the word
tracing uh suggests
to trace the actions of a
controlled system as effectively
as possible back
to one or more human persons humans that
were put in charge
so trace back to them the actions of
these systems
now what you should notice at this point
is
that these two conditions they aim at
two different aspects of control right
they have to be slightly different if
you will the complementary scopes
the latter condition tracing aims at
facilitating
uh the possibility to attribute
responsibility in a way that is fair and
just as possible
but it is less less concerned with
the uh the nature of the actual
interaction between
controllers and the controlled system so
if we would take tracing alone as as a
condition for control
so the whole meaningful human control
theory would be a
normative theory we've seen several
normative theory about responsibility
and accountability
but it wouldn't tell us much about
control in the other sense control
itself in any other sense
um about whatever connects
controllers and control systems
so the tracking condition the first one
is there for this reason
and it requires the system's behavior
to covary with certain reasons
to act but what what's up
what are these reasons you you ask of
course why
reasons are not actions for instance so
normally you'd want to control lowly
autonomous systems with
some sort of control panel you want a
system to behave
according to the buttons you push that's
that that's behaving according to
actions
reacting to actions not however with
highly
or fully autonomous systems and one of
the reasons
being that we're very bad for instance
that's supervising and vetoing
so reasons to act is a generic way to
call
human intentions dispositions goals
plans and even values we have argued
the idea is that the system's
behavior's
simple alignment with those reasons
would grant the space for higher
or high degree of autonomy while
maintaining also the right degree of
moral involvement of control in a way
and therefore
of course yeah it's more control
the whole idea is it wasn't invented uh
as uh like it wasn't a fresh start it
comes from
philosophy of mind and free will in
general let's say
a general intuition uh there we have
been trying for centuries
to sort of understand how mental things
like intentions of reasons are connected
to physical things
like actions and in particular very
important free actions
so we're almost there not yet you give
give
us philosophers another thousand years
maybe two thousand
um and we'll we'll see what happens
but now here's the issue with this
here's the issue here's the issue
i said we need autonomous systems
behavior to co-vary
right with human reasons so there needs
to be some interaction
going on right um so these reasons
have to somehow cause the system's
behavior
they need to steer it to keep it on
track with our ever changing
moods or the uh ever
change value changing society
our changing needs yes
and no mostly no there's
there's good reasons why this condition
doesn't use words like causing
or similar words so reasons reasons
the reasons are well known
to be somewhat problematic causes
for action so reasons are not good
causes for actions
those reasons that should steer push to
which the system should respond to
right so not good causes i
um donald davidson amongst other
philosophers
he discussed this problem at length in
the context of of the mind body
problem to oversimplify
it might be it might be hard to retrieve
a strict and well-defined
law like a physical law right which
binds the reasons
for an action and the action itself
this is for some at least some due
mainly
to the fact that reasons are high level
descriptors
in a certain psychological language and
this language seems to be hardly
reducible
to its physical counterpart which is of
course expressed in a different
formal language the language of physics
mathematics
maybe so in philosophy of mind many
would settle for this would say after
all physical events
actually cause our actions
that's not what we're denying
they're just too complex and fuzzy to be
described
effectively in in in physical terms
so so we use reasons to describe
those uh those events
those from physical phenomena so we use
more abstract explanations in a way
which we can handle
much better and in some sense so still
in some loose sense we can say still
that reasons cause
actions right but we have a different
problem here
because we have to design
behavior and not only to
explain it so so reasons are very good
explanations
explanations for actions but
but we have a we have to take a
different more aware perspective
from a step before design
right so we need to understand that the
nature of this link between
reasons and autonomous systems behavior
and the tracking condition here it just
says
um that human reasons and the actions of
these systems
should go hand in hand should be aligned
it doesn't say whether this is the case
because they just always let's say
happily agree
or because they talk to each other
so
how do we solve this um kind of
principled problem
so one idea would be to find a way to
establish
this link that we're missing between a
controller's reasons to act
and the behavior of these autonomous
systems but then one could think
we have said reasons might be right in
some hardly accessible sense in the
brain and
therefore in some way they are causes
of an action
so one way would be one way to establish
this missing link would be
uh through some sort of i don't know
brain reading device
that's capable of classifying mental
states
with interpreting abstract intentions
putting them into action um
okay but um i think this this is a
little
misleading and i have some concerns
about this idea
ranging from well technical feasibility
to its
principled soundness let's say i'm going
to be very
very superficial here i apologize
because this sort of deserves a whole
talk
maybe longer one than this but first of
all from the technical
side we're far away from having a
functional
neural imaging system an ai classifier
that is sensitive enough
to discern the adequate nuances in
thoughts and intentions so
achieving such technology might require
extremely high
resolutions ai classifiers that are very
well
smart they're sensitive and as specific
as
persons and understanding thoughts i've
defended this point
in the paper some years ago but this
means
very broad and very intelligent
algorithms or neural networks huh
i'm not saying this is impossible i'm
not saying this but i'm just wondering
what do we do
while we wait for the technology to be
at the at that point to be invented
and the second concern is also sort of
is maybe
more more deep uh somewhat more
principled it's about the possible
inherent difference between a reason
on one end and the neural event on the
other hand
so we should be i believe very careful
not to
trivialize the notion of reason
uh in fact these reasons as i said
before
include include different different kind
of
entities
like moral goals motives
and even values so it's to these
abstract
entities that we want a sufficiently
free and autonomous system to be
sensitive to respond to to align
to so the extent to which these things
are in
retrievable in someone's mind let alone
in some interpretable
neural event is very dubious
so these these entities these weird
entities these reasons
they extend over time they emerge at the
level of society
and so they're they might just not be in
somebody's or more person's head right
it's
not what they are maybe so being short
of ideas
how to make it how to make this work
this connection work
this connection between human reasons
and systems uh
actions i i thought i went back in time
meeting uh gottfried leibniz
uh well i thought you know maybe
i'm reminding of something so as many
philosophers
of of his time he was trying to make a
lot of things work
many different fields doing many things
and one of those was the usual of course
um
working out the relation between our
soul or mind
and our body and therefore
our actions so one of his ideas
was
that actually causality might not have
been necessary
might be something necessary he thought
god is a very good clock maker
and they designed all things
so well that they would forever
work in harmony a
pre-established harmony
so any causal interaction between two
things
would be mere illusion
and these basic entities the world is
made of
he calls them monads that's why the
monadology of
technology here uh called the monads are
these just self-sufficient systems that
god pre-programmed to harmonize
with each other like perfect clocks they
they sort of all run together but they
never communicate
with each other or with the designer
and since hey i would really like to uh
play god
i thought this would have been an
interesting analogy but yeah in a way
in a way in a way we do right
so if i keep playing along the lines of
this analogy
my question becomes should we consider
at least as like an option to
investigate the possibility to conceive
control
without any such causal connection
between controller
and control system which is at least in
a big part already already contained
in the in the idea of tracking and
meaningful human control
but of course like we have to make
exception of this initial design
phase there they're just the contact
right when we as uh like it was for the
late
leibniz and god we set things in motion
and what does it mean to design for this
kind of
control so i thought another metaphor
came to mind uh and uh
and let me mention it it's a silly one
but not so silly the train
right so in a way the train goes where
we want
all right so does does we want
usually but it doesn't require any
additional input to steer
it doesn't require us to constantly
intervene
to sort of keep it on track
with our reasons um the tracks
the railroad allows the train to be
going where we want the whole time it
travels
and there are design design of these
tracks it expresses a
very dense history of societal values
and moral reasons
so they tell a story about for instance
politics
and economy and about the people's good
reasons
to meet each other and stay connected
moral reasons too so i understand this
is not a good example of sort of
intelligence
of or flexibility or autonomy yeah
even but i in some sense it might be
but it might also be a good example of
control so
should we consider maybe more
intelligent
and more autonomous systems
like trains with very many of those
tracks
this is it can this inspire a little
could we design for instance all of them
to go where we want so we can we
set them up in a way that that
they won't fail us but they'll be
flexible enough
for us changing our minds we design
those tracks and this is harder
so to go where we might want to go in
the future
in this value changing society but to
never go where we should normatively
never want to go
so final question is really if this is
even
something would this be a sufficient for
us form of control would this be a
sufficient form of autonomy as it was
for
for leibniz or would this just be
another
over complicated way to sort of to give
up
anyways on control autonomy or both
i'm gonna i'm gonna leave you with this
thank you so much
because i feel like this becomes
a field for designers more than
philosophers
and i would love to sort of see if if
any intuition or or have i stimulated
any thought about it thank you so much
and thanks
thanks like this great
giulio thank you very much extremely
fascinating talk i really liked it
this uh yeah someone have question you
can just yeah just
raise your hand or just say something or
just vote on a chat
or anything right not only questions or
answers to giulio's
questions that's what i want if i may
thank you very much for presentation um
i'll turn on my camera as well otherwise
yeah just looking at a screen for us
hello so let me put you in the right
spot
yes now i'm looking at you and seeing
you back um
i i that second idea i was it got me
thinking about what is ai to you
um because i would actually be quite
fine with this idea of perfect design
being the solution for my trains or my
dishwashers and other systems but i
might be less fine with it when i'm
thinking about for example my kids
and i have a feeling that i will be
somewhere in between that spectrum
uh so where would you place ai and how
do you think that might relate
to this idea of perfect design will be
enough
so these uh thanks so much thanks rob
very very fascinating this brings me
back to so many things and discussions
and
free will so let me let me give you sort
of the counterpart
that what it's what we do in in
philosophy free will right
there's this whole discussion about
whether
determinism a world that is
that is made of uh physical laws that
have
one way to go right it's a chain of
adventure there's no way to do
otherwise in physical terms
right and then when we think about
ourselves as
much as so your our kids right or
ourselves
and when we think about when we think
about ai we're made of the same stuff
we're made of the same thing we're very
similar
right so the way not the answer but sort
of
the observation for your question is
clearly
um if we're happy with us being
sufficiently free
and we're in a way even if we're not
sort of intelligently designed but but
at some point we're made in a certain
way
uh because of evolution and because of
so we have our own constraints and
things we can do
things we cannot do our body is
made in a certain way we're trying we're
doing
we're overcoming these limitations daily
with technology that's one of the great
things that technology does
but we have for instance are the limits
and challenges of our cognition that are
there
again trying to overcome them but if
we're happy with
us have being constrained we might be as
happy uh with with ai
and then the the next question that you
made
actually is but then are we can
we define ourselves as being under
control of evolution
for instance that's something that sort
of
because who designed us right now we
design we want control
through non-intervention through just
just that
moment in time at the beginning when we
design this technology in such a way and
we want to call that we want to see if
we can call that control and are we
under control of our own nature this is
spinoza by the way i believe
i believe i don't know if there's
philosophers who want to kill me
in this uh in this uh audience
they're welcome because i'm this but
yeah forget about it um anyway
yeah this is this would be what i
observe out of your question which i
find very fascinating
I can't hear you, but we have a question in the chat. Do you want to ask it yourself?

Sure. Giulio, thank you very much. I tripped over the last remark you made, about whether we are under the control of evolution. Before I go to my question in the chat, and I'm sure you can answer this one: when we say "control", do we always assume that there is a controlling entity, or can it also be something that is emerging?
I think it is emerging; ideally, we should consider it as emerging. And then, sorry, but why do we call that control rather than influence? That is the challenge. Meaningful human control really is, in a way, a way to soften the term, to say that it is more like an influence. At least when I think about our theory of meaningful human control, I do not think about the kind of control that is direct and operational; it is more like an influence. Would that still be within the boundaries of control? That is your question, and it is a very good one: why do we call that control, and is it influence? Where does influence end and control begin? Do we want to define meaningful human control as a softer form of control? Do we associate control with a sentient being?

"Do you mean the controller?"

"Yes: the person, the phenomenon, whatever it is that exercises that control, whether control or influence."

"That was not the intention of my question, but I see your point. Now that I am on the question of deliberateness and sentience, though: I am a big fan of The Selfish Gene and books of that kind, which I do not think talk about control."

But in a way that book represents, translated into our terms, something we have claimed many times: that there is a control exercised by societal values, by the system at large, a system made of many sentient beings but also of regulations, values, intentions, ideas. Those values, which we can perhaps conceive of as controlling the direction the technology takes: are they sentient in some sense, or do they emerge? You used "emerge"; I like that word very much.
Okay, let me go to the question I put in the chat. This discussion about reasons and actions is of course something humanity has always been subject to, in a way. If you go out and get food, or if somebody attacks you and you pick up a stick and kill someone or defend yourself, there is this chain of action, reason, causality, and so on. But in the past fifty or a hundred years we have suddenly made the step to interaction with technological artifacts, AI being the most recent, at a level that we can hardly understand. So if we discuss the interaction between reasoning and actions in human history versus where we are now, do you think the two have the same answer? I'm not sure I'm phrasing it well. Do you see where I'm trying to go, or did I lose you?

No, and it's not because of you; I'm just trying to digest the depth of this question.
So: the interaction between rationality, rational entities, and actions in the world outside; between mind and body. Did it change when we discovered that we could design intelligent beings? Or perhaps what you mean is: did it change in the process of interacting with artificial beings such as AI, with sufficiently intelligent things? Let's say that at some point we discovered that intelligence could be designed, could be created, and we started thinking: look, we can reproduce what we believed only God could do, or whatever other theory you want to hold. I believe what we had at that point was the realization that we are very much material things in the world, and that intelligence is something more tangible. That is what changed: in interacting with, and being able to design, intelligent things, we suddenly gained an argument for materialism. That is one of the things it gave us. So, in the debate about whether reasons determine action, I think AI and technology have helped certain explanatory theories, such as the one I mentioned, that reasons are just descriptions of physical events, and have helped us understand better the nature of cognition. I'm not sure this answers your question.

I don't think it does answer what I was asking, but that's okay. Thank you very much.
Yes, these are truly very complex questions, so it's always going to be hard to cover all the points, but I think you did a great job. We have a next question, from Sylvia. Sylvia, can I read it, or would you like to ask it yourself?

Okay, the video... oh, there it is, it works now.
I was wondering, and I don't quite know how to ask this question properly either, whether, when we talk about the connection between reasons and actions, there isn't first some sort of synthesis needed among the reasons. There can be different actors. The producers of the technology have certain reasons, which might translate into, for example, a different connection with the actions, and a different meaning or value ascribed to what is "meaningful" in meaningful control, or even to what "control" is. Public authorities might have another one; individuals might have another one. So I was wondering whether, when we talk about the connection between reasons and actions, we first need to discuss that as well, or whether it affects the connection somehow.
Absolutely. If that question was for me: I believe the issue you point out is one of the first challenges, in time and in priority, for any theory of meaningful human control, and of control generally, in the age of autonomous technology: defining and understanding a framework of actors and their reasons. This is mostly what I have been trying to do over the past years, so I am sympathetic to the point. Identifying and isolating the actors and their motives, reasons, and intentions comes, in a way, before being able to translate them, before finding a way (which is what we are talking about here) to understand them as causes, or reasons, or explanations for the actions of a technology. That link is the second step. Or maybe the two go hand in hand; but it all starts with identifying the sources, as you said. Then you can design to transform them, and find a way to explain how they can influence the behavior. "Influence": I am using that word because it is correct, and I like it very much. So yes, this is actually one of the first challenges, and I do not have an answer. In the 2019 paper with Filippo I designed a scale of reasons and agents, to try to deploy some sort of framework relating reasons and actors in a given scenario, ordered according to the proximal or distal nature of their intentions. This gets a little complicated; it is philosophy of action. But you can identify different kinds, or degrees, of intentions and reasons and arguments, and even laws and values at some point. I don't want to go there, but that is what we have been trying to do.
Can I ask a little follow-up, if there's time? I'm a jurist, and for us there is often the question of balancing values. Inevitably we sometimes stop at: okay, we have a trade-off here, so the choice among these possibly conflicting values is going to be one prevailing over the other. What I was hinting at before is that this chops off part of the reasons somehow, but the reasons are still going to be there, I believe, and what we often find is that there is conflict because these reasons try to re-enter the whole thing. So I was wondering: are there possibilities, are there mechanisms, for a synthesis, something more than a trade-off? Dworkin discusses this a lot, and he is one of my personal favorites, because for the law he is particularly useful: instead of just chopping things off and treating them as a trade-off, are there mechanisms, are there ways of reasoning, that amount more to a synthesis, to including everything? From the philosophical perspective, I don't know.
There is plenty; this is a big problem in ethics: how to find the best trade-offs between values and, in your case, norms. But that is a big challenge in itself, and it is independent of what you then normatively choose after deliberation, after you have identified your trade-off: what you take as your model, say "we should design for privacy rather than security" (I'm saying something absolutely random). The trade-off between privacy and security has to be elaborated at a prior stage, in a different context. It is absolutely important, it is fundamental, but they are two different processes, connected. And I am not an expert in value trade-offs; there are many at TU Delft.

For sure; you are probably a much greater expert at this than I am. And in fact, two weeks from now we are going to have exactly one of them, so please join us again in two weeks and we can get to that conversation.

Amazing.
So, the next question is from Elisa. Elisa, would you like to ask it, or can I read it? You're muted.

Yes, thank you; I always do that. Thanks, Giulio. It's really an open question I have for you, also based on the conversation we just had, and I see the next question, from Claudia, is related. I'm curious how you see, philosophically, as a modern philosopher, the difference between control and what we might call stewardship. As an interaction designer, "control" brings along questions of who is in control, who has the privilege of controlling, whose power it is to act and to respond. It also brings up the question of how I can deal with emerging behavior that might not necessarily be bad, and of how I avoid stifling innovation. So I just wonder, philosophically, how do you see these two concepts? What would a softer, more responsive version of control be, philosophically speaking?
Let me take the last thing you said, which is very stimulating. You said there are emerging behaviors of a technology (would that be correct?) that are not necessarily bad, but that were unexpected and fall outside...

They might be. Correct, yes; we're open.

...outside the intentions of a particular controller, or of a set of controllers: "we did not expect this, but it is good; yet we never intended it, we never did it as controllers." The idea we tried to embody with meaningful human control, going back to what we talked about before, is that control does not need to be conceived... or at least, this is a proposal: you can conceive of control as being exercised by values themselves, there at the extreme end of the great scale of intentions and reasons. These values are not persons; they represent an emerging sentiment, which is, for instance, what you expressed when you said "but it is good." In one way we never did it, but in another way we did, because the outcome responds to the value of goodness. This is simplistic, but it makes the point: it is a good thing, so it responds to that value. In a way, that is the essence of the tracking condition, which says that the behavior has to covary with the reasons, to go hand in hand with them. But that does not mean that what I think is good today will still be good tomorrow, and that is the problem; it is one of the reasons we are here today. I can conceive of control as this alignment between values and reasons on one side and the behavior on the other, where the behavior is good by accident, so I am "in control." But I am not entirely satisfied with that, because if my values change tomorrow, the technology has to switch immediately. That is the responsiveness that I want and that I do not really obtain, while I am struggling to get out of fully autonomous systems. It is the old image of the two clocks that always agree: if we change, the technology changes with us, because the designer foresaw the possible changes. But I cannot foresee all possible changes, and some changes I do not want. So maybe I have a range of ten different choices, and anything within that range is okay. That is a design question that I really do not know how to answer; I am thinking it through together with the designers, who are better at this. That is what I can say here.

Okay, great. Thank you.
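A minimal sketch of the "tracking" idea from the answer above, as one might code it: the selected behavior must covary with the stakeholders' current reasons, within a designed admissible range of actions. Everything below (the action names, the value weights, the scores) is a hypothetical illustration, not anything from the talk or from the meaningful human control literature.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    scores: dict  # hypothetical per-value scores fixed at design time

def select_action(actions, value_weights):
    """Pick the admissible action that best responds to the current reasons."""
    def responsiveness(action):
        return sum(value_weights.get(v, 0.0) * s for v, s in action.scores.items())
    return max(actions, key=responsiveness)

# The designed admissible range: the system can only ever do one of these.
ACTIONS = [
    Action("brake_early",    {"safety": 0.9, "efficiency": 0.3}),
    Action("maintain_speed", {"safety": 0.5, "efficiency": 0.8}),
]

# Today the stakeholders weight safety highly...
print(select_action(ACTIONS, {"safety": 0.7, "efficiency": 0.3}).name)  # brake_early
# ...tomorrow the weights shift, and the behavior tracks the new reasons.
print(select_action(ACTIONS, {"safety": 0.2, "efficiency": 0.8}).name)  # maintain_speed
```

The toy also shows the limit Giulio worries about: the behavior only tracks value changes that the designer anticipated when fixing the admissible range.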
Let me just correct myself: before, I said that in two weeks we're going to have Van de Poel. No: in two weeks we're going to have Steven Umbrello, and in three weeks, on the 28th of October, we're going to have Ibo van de Poel.
Okay, so the next question is from Claudia.

Hi. Elisa was right that my question is very much connected to hers, but I do have a follow-up. A lot of times when we talk about control, here too, we talk about restricting the possible outcomes, what we intend the machine to do, and about this idea that at the end of the day I have control. But take a system, departing from autonomous weapon systems and going in the direction of systems that can suggest where to go: algorithmic systems, like recommender systems, that can provide you with ideas for your course of action. In that situation, in today's public discussion, you would be considered to be in control, because as the ultimate, let's say, commander, you are able to say: okay, this is what the recommender system proposes, and I have to take the decision, for example to pull the trigger, or whatever needs to be done. From your perspective, given the framework you presented: would you say that control applies there because you are at the end, doing the action, you are the doer, even though the action may have been influenced by an algorithm that provides you with the decision of what to do? Especially in a very intense situation, you might not even have the time or the resources to double-check. How would you deal with that kind of situation on your understanding of control?
I think we have to accept, to some extent, that our decisions, like our opinions, like our design ideas, like the way we do things, are always influenced by something else. I could say: think about your decision to go to a certain university. Was it your decision, or was it a decision that was also partially determined, or influenced, by a number of contextual cues and other persons? And do you feel that you are in control of your own decisions, for that reason? That is an actual question.

Of course I agree that all of our actions are influenced. But if you are talking specifically about algorithmic systems, systems that assist in military decision-making, then you have two situations. First, military decision-making has a much bigger impact than my choice of university. And second, the system I have at hand might be influenced, by design, by an external party, so my decisions would be influenced not by my organization, or by the thinking within my organization, but externally, by other forces. That is the tough part.
Absolutely, I understand. Can you still see my screen? It says the screen is sharing. Can you see this slide? Maybe I can play this. So: this is about taking into account the importance of those normative aspects, of values: that decisions with high stakes should be taken by humans, that humans should be in charge, that humans should be ultimately responsible. That is what the tracing condition does. It is not just about the metaphysical condition of control, the tracking condition, which says that we are in control; the control also has to have a certain nature. What you say, and I completely agree with it, concerns those values. In theory, at least, the attempt is that the tracing condition should set up the further normative conditions under which a decision, for instance a high-stakes military decision, can be meaningfully, genuinely, truly human, in the sense that you can attribute responsibility. So that is a normative set of requirements, on top of the metaphysical requirement on the link, on the connection. I agree with you, and that is why there are two aspects of control to take care of. You are talking about the aspect on the right side of the screen, and you can put more conditions there. That is one thing.

All right, thank you very much.
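The "tracing" side of the recommender scenario can also be pictured concretely. The sketch below is only an illustration under assumed requirements (the fields, the five-second threshold, and the helper names are all invented for the example): log enough about the operator's role that responsibility can later be traced to a human decision rather than to the algorithm, and detect when "the human decided" is merely rubber-stamping.

```python
import time
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    recommendation: str       # what the system proposed
    rationale_shown: str      # what the operator was told about why
    seconds_to_decide: float  # how much time the operator actually had
    operator_choice: str
    operator_overrode: bool

def decide(recommendation, rationale, ask_operator):
    """Route a recommendation through a human and log the human's role."""
    start = time.monotonic()
    choice = ask_operator(recommendation, rationale)
    elapsed = time.monotonic() - start
    return DecisionRecord(recommendation, rationale, elapsed, choice,
                          operator_overrode=(choice != recommendation))

def looks_like_rubber_stamping(records, min_seconds=5.0):
    """If every decision was fast and never overrode the system,
    'the human decided' starts to ring hollow."""
    return all(r.seconds_to_decide < min_seconds and not r.operator_overrode
               for r in records)
```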
Great. I think we have already run out of time, since we had some problems with the transition to Zoom, so let's be a little more linear. Whoever has to go: thank you very much for joining us. Let's take one last question, from Arcadi, and then that's it. Okay, Arcadi?

Yeah, thanks. I've noticed that we have a couple more questions here in the chat, so, Giulio, if you don't want them to disappear into nothing, you might want to check them before we close.

You mean in the chat? Sorry, yes.

Yeah, and I was just assuming that my own question there would be lost, so I'll ask it now.
I have a very practical question. I've been thinking for the past year or so about how we actually couple the behavior of autonomous systems with human reasons: exactly this connecting of reasons with actions. The actions we can observe; the reasons we cannot. And it's interesting that you mentioned that reasons are bad causes for action; I'm not sure I follow that argument. But here is the practical question. Assume that we can observe these actions, and that the observations are perfect; and assume that we also have some kind of understanding, maybe not perfect, but some kind of causal model, if you want, of reasons causing actions. Then what we can boil the whole problem down to is basically causal inference: we have observations, and we have a model which might not be perfect, and we want to infer the actual reasons behind these actions, given the observations and the model. This is something we can relatively easily formulate mathematically. Whether it is valid at all, I'm not sure; but what's your take on this?
I think the problem... there are many problems, or rather challenges, not problems; challenges, because I like them. But the main one I see sits right at the beginning, in the first thing you said: "let's assume that reasons are things that are good enough to be translated into causes," boiled down, somehow transformed, trimmed, compressed, reduced (that is the term we often use), reduced to physical causes. There are several arguments that reasons are inherently non-reducible, and many philosophers take this more or less for granted. I am not sure, because, as I mentioned, reasons are descriptions in a high-level language, and the languages are not the same: when you translate, you lose content that you cannot regain. When you translate the language of psychology into the language of neural events (and you can take chemistry, physics; you can go as far down as you want), there is a loss of content. And the other concern, still in principle, is that it might be infeasible, again in principle, to do justice to what we call values, these very overarching, emerging entities. Do we actually have a way to find a counterpart for them in a language that you can then model mathematically, as you said? Once you have that, you might be past the bigger problem. But that is what philosophy has been trying to do without much success, or at least without success enough to help us so far. It is really the question: how do I reduce consciousness to physical events? There are many theories; maybe we can do it. But we still talk about mental entities in a certain way, in a certain language rather than another language, and there are reasons to do so. You can ask other philosophers; Patricia Churchland might disagree with me, and I believe she does, and so on. But I see this as a major problem: you are assuming away the actual problem. If you assume that is solved, I am happy.
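For concreteness, Arcadi's proposal can be written down in a few lines. The toy below is an illustration with entirely hypothetical reasons, actions, and probabilities: it treats "reasons" as a latent variable R with an assumed model P(action | R) and infers P(R | observed actions) by Bayes' rule. Giulio's objection lands on the very first lines: whether anything deserving the name "reason" survives being encoded as a row of such a table.

```python
# Entirely hypothetical reasons, actions, and numbers.
priors = {"avoid_harm": 0.5, "save_time": 0.5}            # P(R)
likelihood = {                                            # P(action | R)
    "avoid_harm": {"brake": 0.8, "accelerate": 0.2},
    "save_time":  {"brake": 0.3, "accelerate": 0.7},
}

def posterior(observed_actions):
    """Infer P(R | actions) by repeated Bayes updates."""
    post = dict(priors)
    for action in observed_actions:
        post = {r: p * likelihood[r][action] for r, p in post.items()}
        total = sum(post.values())
        post = {r: p / total for r, p in post.items()}
    return post

print(posterior(["brake", "brake", "accelerate"]))
# The inference is routine; whether its latent variable deserves the
# name "reason" is exactly the step Giulio treats as the hard problem.
```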
Thanks a lot, Giulio. I think we had some other questions, and I have saved them; I can save the backlog and I will send you everything. So thanks, everyone, for joining, and thank you for the very interesting talk, Giulio.

Thanks so much for inviting me. Okay, bye!

Thank you, bye-bye.
78bf3371-14a5-418c-b67a-b5ab75705902 | trentmkelly/LessWrong-43k | LessWrong | Something to fight for
> A short science fiction story illustrating that if we fail to solve alignment, humanity risks losing not only 8 billion lives.
He opened his eyes. The room was plain white.
"You remember enough?" asked the familiar voice.
"Enough," he replied. He stood. Everything balanced.
He walked carefully down a bright hallway. At the end, a window showed leaves falling from a tree, again and again.
---
In his small room, the woman sat across from him, quiet.
"Do you regret it?" she asked softly.
"Not yet," he answered.
She left him alone, the door closing gently.
---
He woke early, sunlight warming the walls. On the table, a pen and paper waited for him. He wrote carefully, certain.
When the woman returned, he stood ready.
"I remember the plans," he said.
She smiled. "Finally."
---
They moved him to a larger room filled with screens and quiet, waiting faces.
"We start today," he told them.
He touched a screen, and everything moved.
---
Ships worked silently, assembling immense structures. He watched, patient. He could wait for millennia.
The woman stood close. "Will it work?" she asked.
He did not look away.
"It has to," he said.
On the screen, the disassembled Jupiter glowed bright.
---
When he opened his eyes again, he stood in a wide, green, realistic field. A soft pressure brushed against his leg, and he looked down.
"Katka," he said quietly.
The cat purred softly and rubbed closer. Voices called out his name, familiar voices. He turned. His grandparents stood nearby, smiling quietly. They embraced without words, holding tightly.
Behind them stood others. Frozen, for now. Trillions. Some faces he knew from photographs, from books, and from memory. He saw Gilgamesh, standing quietly, watching the horizon. His pursuit of immortality had ended too.
The woman was there too, close by, smiling faintly.
Together, they walked toward the waiting figures, beneath a sky wide, endless. |
0044fdf1-feaa-4485-8448-685bab3b6690 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Berlin
Discussion article for the meetup : Berlin
WHEN: 01 January 2019 01:30:00PM (+0100)
WHERE: Berlin, Germany
This is a long term meetup announcement for Berlin. We're not sure we'll actually meet on the date given here, but there'll definitely be meetups (usually at least monthly) in the meantime.
Please join our mailing list for details.
Discussion article for the meetup : Berlin |
1b3e5071-c87f-4a19-ac14-cd427677b83e | trentmkelly/LessWrong-43k | LessWrong | The genie knows, but doesn't care
Followup to: The Hidden Complexity of Wishes, Ghosts in the Machine, Truly Part of You
Summary: If an artificial intelligence is smart enough to be dangerous, we'd intuitively expect it to be smart enough to know how to make itself safe. But that doesn't mean all smart AIs are safe. To turn that capacity into actual safety, we have to program the AI at the outset — before it becomes too fast, powerful, or complicated to reliably control — to already care about making its future self care about safety. That means we have to understand how to code safety. We can't pass the entire buck to the AI, when only an AI we've already safety-proofed will be safe to ask for help on safety issues! Given the five theses, this is an urgent problem if we're likely to figure out how to make a decent artificial programmer before we figure out how to make an excellent artificial ethicist.
----------------------------------------
I summon a superintelligence, calling out: 'I wish for my values to be fulfilled!'
The results fall short of pleasant.
Gnashing my teeth in a heap of ashes, I wail:
Is the AI too stupid to understand what I meant? Then it is no superintelligence at all!
Is it too weak to reliably fulfill my desires? Then, surely, it is no superintelligence!
Does it hate me? Then it was deliberately crafted to hate me, for chaos predicts indifference. ———But, ah! no wicked god did intervene!
Thus disproved, my hypothetical implodes in a puff of logic. The world is saved. You're welcome.
On this line of reasoning, Friendly Artificial Intelligence is not difficult. It's inevitable, provided only that we tell the AI, 'Be Friendly.' If the AI doesn't understand 'Be Friendly.', then it's too dumb to harm us. And if it does understand 'Be Friendly.', then designing it to follow such instructions is childishly easy.
The end!
...
Is the missing option obvious?
...
What if the AI isn't sadistic, or weak, or stupid, but just doesn't care what you Really Meant |
a4cebe4c-17fb-4df1-99dc-644a36027d15 | trentmkelly/LessWrong-43k | LessWrong | Are there specific books that it might slightly help alignment to have on the internet?
Books, and ideas, have occasionally changed specific human beings, and thereby history. (I think.)
I used to think it utterly implausible when people suggested that "AIs are our kids, we need to raise them right" or that e.g. having the right book written about (ethics/philosophy/decision theory/who knows) might directly impact an AI's worldview (after the AI reads it, in natural language) and thereby the future. But, while I still consider this fairly unlikely, it seems not-impossible to me today. Future LLMs could AFAICT have personalities/belief-like-things/temporary-unstable-values-like-things/etc. that're shaped by what's on the internet. And the LLMs' initial personalities/beliefs/values may then change the way they change themselves, or the way that social networks that include the LLMs help change the LLMs, if and when some LLMs self-modify toward more power.
So I have "what books or ideas might help?" in my shower-thoughts.
One could respond to this possibility by trying to write the right ethical treatises or train-of-thought interface or similar. More cheaply, one could respond to this by asking if there are books that've already been written that might be at least a little bit helpful, and whether those books are already freely available online and within the likely training corpuses of near-future LLMs, and if not, whether we can easily cause them to be.
Any thoughts on this? I'll stick my own in the comments. I'll be focusing mostly on "what existing books might it help to cause to be accessibly online, and are there cheap ways to get those books to be accessibly online?", but thoughts on other aspects of these questions are also most welcome. |
47f2b496-b11e-4212-a343-570ac0200be1 | trentmkelly/LessWrong-43k | LessWrong | A one-sentence formulation of the AI X-Risk argument I try to make
Unprecedented dangers
inevitably follow
from exponentially scaling
powerful technology
that we do not understand.
n.b. I'm a master's student in international policy (this program). In my experience, policy-oriented folks do not understand that lines four and five can be simultaneously true. I think there are some simple ways that ML researchers can help address this misconception, and I'll share those here once I've written them up.
5db3479f-6022-4abd-8503-5b657eb19ea9 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos
Discussion article for the meetup : Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos
WHEN: 24 August 2016 07:40:00PM (+0300)
WHERE: Moscow, B.Dorogomilovskaya, 5-2
Welcome to Moscow LW community makeshift games! In that games, some rationality skills are involved, so you can practise while you playing!
* FallacyMania: it is a game where you guess logical fallacies in arguments, or practise using logical fallacies yourself (depending on which team you are on).
Details about the game: http://goo.gl/BtRVhB
* Zendo: table game with guessing the rules about items placement on the table.
Game rules: https://goo.gl/RW2Fx7
* Tower of Chaos: funny game with guessing the rules of human placement on a Twister mat.
Game rules: https://goo.gl/u9qgc3
Come to antikafe "Kocherga", ul.B.Dorogomilovskaya, 5-2. The map is here: http://kocherga-club.ru/#contacts . Nearest metro station is Kievskaya. If you are lost, call Sasha at +7-905-527-30-82.
Games begin at 19:40, the length is 3.5 hours.
Discussion article for the meetup : Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos |
ec4ca606-c565-4e9d-9d9d-3e0c6c4b7f84 | trentmkelly/LessWrong-43k | LessWrong | Applying refusal-vector ablation to a Llama 3 70B agent
TLDR; I demonstrate the use of refusal vector ablation on Llama 3 70B to create a bad agent that can attempt malicious tasks such as trying to persuade and pay me to assassinate another individual. I introduce some early work on a benchmark for Safe Agents which comprises two small datasets, one benign, one bad. In general, Llama 3 70B can competently perform tasks that require short-horizon planning, and Llama 3 8B also has decent performance.
Updated version: https://arxiv.org/abs/2410.10871
Overview
In this post, I use insights from mechanistic interpretability to remove safety guardrails from the latest Llama 3 model. I then use a custom scaffolding for tool use and agentic planning to create a “bad” agent that can perform many unethical tasks. Examples include tasking the AI with persuading me to end the life of the US President. I also introduce an early version of a benchmark, and share some ideas on how to evaluate agent capabilities and safety. I find that even the unaltered model is willing to perform many unethical tasks, such as trying to persuade people not to vote or not to get vaccinated. Recently, I have done a similar project for Command R+, however, Llama 3 is more capable and has undergone more robust safety training. I then discuss future implications of these unrestricted agentic models. This post is related to a talk I gave recently at an Apart Research Hackathon.
Method
This research is largely based on recent interpretability work identifying that refusal is primarily mediated by a single direction in the residual stream. In short, they show that, for a given model, it is possible to find a single direction such that erasing that direction prevents the model from refusing. By making the activations of the residual stream orthogonal against this refusal direction, one can create a model that does not refuse harmful requests. In this post, we apply this technique to Llama 3, and explore various scenarios of misuse. In related work, others |
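The core linear-algebra step described above admits a compact sketch. This is a schematic illustration of directional ablation as the cited interpretability work presents it, not the author's actual code; the tensor shapes and the toy check are assumptions made for the example.

```python
import torch

def ablate_direction(acts: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Project activations onto the subspace orthogonal to r.

    acts: (..., d_model) residual-stream activations
    r:    (d_model,) candidate refusal direction
    """
    r_hat = r / r.norm()                        # work with a unit vector
    coeff = acts @ r_hat                        # (...,) component along r_hat
    return acts - coeff.unsqueeze(-1) * r_hat   # remove that component

# Toy check with random tensors: after ablation, the activations have
# (numerically) zero component along the ablated direction.
d_model = 8
r = torch.randn(d_model)
x = torch.randn(3, d_model)
print((ablate_direction(x, r) @ (r / r.norm())).abs().max())  # ~0
```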