Columns: id (stringlengths 36–36) · source (stringclasses, 15 values) · formatted_source (stringclasses, 13 values) · text (stringlengths 2–7.55M)
111d485d-bfa5-4657-9dce-231e1862e891
trentmkelly/LessWrong-43k
LessWrong
Are men harder to help? Of Boys and Men is a new book about how the outcomes of American men and boys are lagging in several ways. Most of it concerns aggregate trends that are irrefutable, such as the gender gap in college graduation. But books like these are often useful for testable hypotheses rather than reliable facts. Case in point: Reeves makes the odd claim that "men are hard to help": > [A] startling number of social programs seem to work well for girls and women, but not for boys and men — among which are a student-mentoring scheme in Fort Worth, Texas; a school-choice program in Charlotte, North Carolina; an income boost to low-wage earners in New York City; and many more. > > The failure of these programs to help boys and men is a big problem, given that in many cases they are the ones who need the most help. But the problem rarely receives any attention, not least because almost nobody knows about it. (source) I was startled when I read this because it seemed like such a textbook example of cherry-picking. Given the many studies out there, our prior should probably be that the effects are about even on average. Reeves cites around eight individual evaluations that found larger benefits for females compared to males, but he never points to any attempt at aggregation. The cited studies cover four broad areas: * Free college tuition with other supports (Kalamazoo Promise and Stay the Course at Tarrant County College in Fort Worth, Texas) * Pre-school programs (a study of the Abecedarian, Perry, and the Early Training Project and one of Project READS in North Carolina) * Mentoring programs and boarding schools (just mentions that they are in New Hampshire, Baltimore, and Washington, D.C.) * Wage subsidies (the Paycheck Plus pilot) These all find the effect difference that Reeves claims to have discovered. The problem is that “men are harder to help” doesn’t seem to be true in general. For each of the four areas, I looked for meta-analyses or similar studies that estimated
8c5fd4ba-495b-410f-a051-76bfeb3489bb
trentmkelly/LessWrong-43k
LessWrong
Metaculus's New Sidebar Helps You Find Forecasts Faster Metaculus features over 8,000 forecast questions. We’ve introduced a new sidebar to make it easy to find and focus on the ones you care about most. Click narrower news topics like ‘2024 US Elections’ or broader question categories like ‘Distant Future’ to filter your question feed and get started. After making your selection, you can further refine your search using the sort and filter options. We’ve launched the sidebar with a small set of timely topics and popular categories — including a ‘Top 50’ featuring some of the most critical questions on Metaculus — but we expect to add more options and more opportunities for personalization in the future. What else would you like to see in the sidebar?
ec91febd-c8d5-493e-9f9b-9fac12f78889
trentmkelly/LessWrong-43k
LessWrong
A Hogwarts Guide to Citizenship Those engaged with questions of how to make the world a better place are probably all too familiar with the infighting that occurs among people who are on the same side of a political issue because of differences in methodology.  Among those concerned about AI x-risk, for example, PauseAI activists sometimes criticize LessWrong and EA participants for endless, empty intellectualizing when so much is at stake. Meanwhile, writers of the latter groups criticize activists and protestors for oversimplifying and otherwise poisoning the discourse.  This is a shame because mutual respect and understanding has the potential to strengthen both groups.  To see the value that all earnestly engaged participants bring to the conversation, I find it helpful to understand people's disposition towards citizenship through the lens of Hogwarts' four houses: Ravenclaw: values truth as the central good.  Everyone is the hero of their own story, but sloppy or intentional distortions of the truth misalign the direction of their effort from what is truly beneficial, so a great deal of the world's problems would be alleviated if we could improve our individual and collective sensemaking.  Ravenclaws are thus inclined to spend their time researching, writing, and conversing, seeking ever greater levels of rigor and nuance to observe and orient themselves and others towards as accurate an understanding of the world as possible.  Tends towards reason over intuition.  Personal bias disclosure: I am very much a Ravenclaw. Gryffindor: values courageous action as the central good.  The world is locked in a constant struggle between right and wrong, where the latter is acting out of self-interest, misguidedness, or both.  It's important to know that you are on the side of right, but once you are there, additional hand-wringing over process and minutiae is just an excuse to do nothing and let the status quo continue to dominate the course of history.  Personal bias disclosure: I have a hard tim
a7fb32b5-22ac-40d2-9e37-9f3226d6c1c3
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Touch reality as soon as possible (when doing machine learning research) **TL;DR:** *I think new machine learning researchers often make one of two kinds of mistakes: not making enough contact with reality, and being too reluctant to form gears-level models of ML phenomena. Stereotypically, LW/AF researchers tend to make the former mistake, while academic and industry researchers tend to make the latter kind. In this post, I discuss what I mean by “touching reality” and why it’s important, speculate a bit on why people don’t do this, and then give concrete suggestions.* **Related to:** [Making Beliefs Pay Rent](https://www.lesswrong.com/tag/making-beliefs-pay-rent), [The Feeling of Idea Scarcity](https://www.lesswrong.com/posts/mfPHTWsFhzmcXw8ta/the-feeling-of-idea-scarcity), [Micro-Feedback Loops and Learning](https://www.lesswrong.com/posts/vmcii44HYJQkL8DQN/micro-feedback-loops-and-learning), [The Three Stages of Rigor](https://www.lesswrong.com/posts/tsYcsZAkKsqLXC3Bu/analogies-between-software-reverse-engineering-and?commentId=TGgwvSscYEh2cRDHA), [Research as a Stochastic Decision Process](https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html), [Chapter 22 of HPMOR](https://www.hpmor.com/chapter/22).[[1]](#fnxpjyr0ty8y9)  **Epistemic status:** Written quickly in ~3 hours as opposed to carefully, but I'm pretty sure it's directionally correct. [[2]](#fnj3fymmv9tw) **Acknowledgments:** Thanks to Adrià Garriga-Alonso for feedback on a draft of this post and Justis Mills for copyediting help. --- **Introduction: two common mistakes in ML research** ==================================================== Broadly speaking, I think new researchers in machine learning tend to make two kinds of mistakes: * *Not making contact with reality.* This is the failure mode where a new researcher reads a few papers that their friends are excited about, forms an ambitious hypothesis about how to solve a big problem in machine learning, and then spends months drafting a detailed plan. Unfortunately, after months of effort, our new researcher realizes that the components they were planning to use do not work nearly as well as expected, and as a result they’ve wasted months of effort on a project that wasn’t going to succeed. * *Not being willing to make gears-level models.* This is the failure mode where a new researcher decides to become agnostic to why anything happens, and believes empirical results and only empirical results, even when said results don’t “make sense” on reflection. The issue here is that they tend to be stuck implementing an inefficient variant of [grad student descent](https://sciencedryad.wordpress.com/2014/01/25/grad-student-descent/), only able to make small amounts of incremental progress via approximate blind search, and end up doing whatever is popular at the moment. That’s not to say that these mistakes are mutually exclusive: embarrassingly, I think I’ve managed to fail in *both* ways in the past.  That being said, this post is about the first failure mode, which I think is far more common in our community than the second. (Though I might write about the second if there's enough interest!) Here, by “touching reality”, I mean running experiments where you check that your beliefs are right, either via writing code and running empirical ML experiments, or (less commonly) grounding your ideas in a detailed formalism (to the level where you can write proofs for new, non-trivial theorems about said ideas)[[3]](#fnd1m9pueirx). 
I don’t think writing code or inventing a formalism qualify by themselves (though they are helpful); touching reality requires receiving actual concrete feedback on your ideas.   **Why touch reality?** ====================== I think there are four main reasons why you should do this: Your ideas may be bad --------------------- When you’re new to a field, it’s probably the case that you don’t fully understand all of the key results and concepts in the field. As a result, it’s very likely that the ideas you come up with are bad. This is especially true for fields like machine learning that have significant amounts of tacit knowledge. By testing your ideas against reality, you get feedback on where your model of the field is deficient, and thereby can develop better ideas. Touching reality as soon as possible lets you shorten your feedback cycles, and more quickly develop an understanding of important ideas in the field.  Other people's ideas may be bad or misleading --------------------------------------------- Many machine learning papers published in conferences (let alone arXiv preprints) have misleading abstracts, where the results don’t support some of the headline claims. Sometimes this happens because of white lies or omissions on the authors’ part. More benignly, this often happens because the authors’ results don’t generalize as far as they thought they would. Machine learning is especially susceptible to this issue, as many ML results can be finicky and the authors’ results depend on particular quirks of their setup. Before spending months of your life building on some ideas, it’s prudent to make sure that the ideas are actually good.  Your tools may not work the way you think they do ------------------------------------------------- Relatedly, algorithms presented in papers without misleading claims can still fail due to said papers not writing down all of their key assumptions or code-level optimizations. I think this rarely occurs due to deliberate deception from paper authors; instead, I think it almost entirely comes from the fact that it can be challenging to get machine learning algorithms to work reliably. Even in cases where the algorithm generally works as expected on domains similar to those in the paper, it’s often the case that errors in understanding accumulate when you put many unfamiliar algorithms together. As a result, it’s almost always worth reproducing each of the algorithms independently, and testing that they work as expected as soon as possible.  It helps you explain your ideas to other people ----------------------------------------------- When trying to get feedback for any idea, it’s often the case that the person giving you feedback won’t fully understand it. Even worse, you might have a *double* illusion of transparency: both you and the other person falsely believe the communication was successful. This often happens in machine learning because of a relative lack of standard terminology in many new subfields (and especially amongst novices, who might not know the standard terminology that does exist). As a result, said feedback can be worse than useless, leading to more wasted effort. Concrete examples both help you explain your ideas more clearly, and also help you and others notice when miscommunication has occurred. As a result, I think it’s good practice to include at least a toy example (if not a preliminary result) when communicating with people you aren’t regularly collaborating with.  
**Why don't people touch reality?** =================================== I don’t think that "contact reality as soon as possible" is particularly novel advice – for example, I think much of academic machine learning has absorbed this ethos (perhaps a bit too much, even), and there are many similar ideas floating around in the LessWrong/Alignment Forum. However, it’s still often the case that new researchers fail to contact reality for long periods of time. Here are my speculations as to why this happens, which I’ll ground in my own experiences (though I have also seen them in others'): Idea scarcity ------------- As John Wentworth says in [The Feeling of Idea Scarcity](https://www.lesswrong.com/posts/mfPHTWsFhzmcXw8ta/the-feeling-of-idea-scarcity), many new researchers feel that ideas are much more scarce than they actually are, and stick to failing ideas for too long. This makes it tempting to continue polishing the first idea you have, as opposed to testing a half-baked idea.  In my case, back in late 2016 and early 2017, I spent a month of my life working on tree-structured RNNs with attention mechanisms, since *clearly* natural language should be tree-shaped (and I didn’t have any other ML ideas)! However, I got bogged down in implementation details thanks to TensorFlow 0.x, and spent most of my time cleaning those up as opposed to running new experiments. It turns out that no, tree-structured RNNs are *not* the correct way to model language.[[4]](#fnsilwm9r2m8i) I think I would’ve noticed this a lot sooner if I had spent some time constructing small toy tasks where I thought tree-structured RNNs would be better, and then training small models on those, even though I hadn’t worked out all the fiddly implementation details. And I would’ve been a lot more willing to take the troubles I had with implementation as evidence against the idea if I didn’t feel like it was the only ML idea I would have.  Similarly, the (false) feeling of idea scarcity often causes new people to work too much on their one idea, instead of testing their half-baked ideas against reality.  Deference to authority ---------------------- I think a lot of new researchers come in with a strong belief that academic papers (especially from prestigious authors) are authoritative sources, and therefore that the claims made in them are definitely correct and generalizable. I also think that many new researchers are (correctly) skeptical of their ability to generate true claims that contradict published results, and so tend to take published results on faith.  One of the first projects I was involved in at CHAI involved using Bayesian neural networks to do active value learning. It seemed to me like a pretty straightforward idea: we’ll implement some Bayesian neural networks, do some variational inference to update them, and then use the resulting posterior in algorithms that use value of information to select queries. At the time, I (along with many people at CHAI) was very bullish on Bayesian neural networks, given the recent slate of papers around that time (2015-2017) from impressive-seeming professors showing impressive-seeming results. Unfortunately, it turned out that Bayesian neural networks were significantly trickier to get working in practice on the value learning tasks we were working with, and nothing came of the project despite several months of effort. 
A few months later, a research engineer at CHAI found that many Bayesian neural network algorithms (including the one we were using for our project) often failed to approximate some toy 4-d distributions—if I had been less trusting of authoritative papers and more willing to try some toy problems, I think I would’ve saved myself a lot of effort.  Note that I’m *not* saying that new researchers should throw away all of conventional wisdom. Instead, I think that new researchers should be more willing to quickly verify claims made by authoritative figures.  Aversion to Schlepping ---------------------- Finally, I think the biggest reason new ML researchers avoid contacting reality is that doing machine learning experiments or coming up with formalism to write non-trivial theorems involves a lot of tedious, unglamorous tasks—that is, it can involve a lot of [schlepping](http://www.paulgraham.com/schlep.html). For example, data munging can be incredibly tedious, even for relatively simple NLP datasets. In contrast, thinking about new ideas and discussing them with collaborators is fun and often significantly easier. It also doesn’t help that many sources present a skewed picture of research that focuses too much on the new ideas and too little on the day-to-day work.  In my case, I’ve put off writing code for simple experiments many, many times. In a different active value learning project, I put off doing experiments (and indeed, basically the whole project) for a full month and a half due to a strong ugh field around dealing with the fiddly bits. Probably the worst case of this was my not wanting to do some simple human subject studies for a paper, despite said paper being rejected from a conference explicitly because it lacked a human study. I ended up just dropping the project.[[5]](#fnu0k2lr3vyl) That being said, I think I’ve become significantly better along this axis, as I’ve done more schlep work for more projects and realized that I was overestimating the pain and tedium required to do said work.  Of course, it’s definitely possible to go too far, and end up only doing low-value, schleppy work. And obviously, I think you should always try to avoid unnecessary suffering. But as a whole, I think new researchers tend to overestimate the pain involved in schleppy work, underestimate how said work gets less tedious over time, and could benefit from some amount of pushing past their aversion.  **Concrete ways to touch reality faster** ========================================= I’ll conclude with some strategies for touching reality faster: Minimize time to (possible) failure ----------------------------------- Insofar as you have any uncertainties that might threaten the viability of a project, you should test them as soon as possible. I often find that I’m aware of many of the ways that the projects I’m working on could go wrong. As a result, I find the cognitive strategy of trying to expose as many of a project’s points of failure as possible, as early as possible, to be helpful for coming up with experiments. In my case, I also find it helpful to directly try to show that my projects are nonviable as soon as possible.  See Jacob Steinhardt’s [Research as a Stochastic Decision Process](https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html) for a more detailed discussion of this strategy.  
Create toy examples ------------------- Real machine learning applications (and machine learning theory) often feature many complexities and practical difficulties that are irrelevant to the validity of the core insights behind your project. Not only can it take quite a long time to get any results at all, but your experiment can also be invalidated by implementation details. In contrast, a good toy example abstracts away all of the complexity, which lets you get information about the viability of your project much faster. Personally, I find it helpful to think about the *minimal* case that shows my insight is correct.[[6]](#fnkf9t800tp6m)
Mock or simplify difficult components ------------------------------------- Similarly, when working with components that are difficult to implement or train, but *aren’t* key uncertainties as to the viability of your project, it’s often a good idea to replace said component with a cheating implementation. For example, if you’re studying a [new protocol for debate](https://arxiv.org/abs/1805.00899) using language models, you can replace the language models with humans, which probably provides a weak upper bound on your technique’s performance. A related strategy is to replace complicated components with simple baselines. For example, even if your plan is to finetune a large language model on the debate protocol, you might be able to get some signal as to its viability by using `text-davinci-003` with a well-designed prompt.   Have good collaborators ----------------------- Finally, I think that having good collaborators has been by far the most helpful strategy for grounding my ideas. I find that it’s significantly harder to come up with obvious tests for your own ideas than it is for others to do so. A good collaborator on a research project can regularly save me hours of schlepping, for example by suggesting simple tests, sharing code, or even performing the tests directly (especially in cases where they have a comparative advantage). This is especially the case when they also prioritize touching reality as soon as possible. :)  1. **[^](#fnrefxpjyr0ty8y9)**After I published this post, Sam Toyer pointed me at Michael Bernstein's concepts of [vectoring](http://www.hci.st/slides/04-vectoring.pdf) (identifying a key direction of uncertainty) and [velocity](http://www.hci.st/slides/07-velocity.pdf) (quickly iterating on ideas by testing directions of uncertainty), which seem like a good breakdown of how to touch reality. 2. **[^](#fnrefj3fymmv9tw)****Detailed epistemic status:** I'm pretty frustrated with how slow I write, so this is an experiment in writing fast as opposed to carefully. That being said, this is ~the prevailing wisdom amongst many ML practitioners and academics, and similar ideas have been previously discussed in the LessWrong/Alignment Forum communities, so I'm pretty confident that it's directionally correct. I also believe (less confidently) that this is good advice for most kinds of research, or maybe even for life in general. 3. **[^](#fnrefd1m9pueirx)**As Michael Dennis pithily puts it, this is the point at which the process goes from only you correcting the theory, to the theory being able to correct you. 4. **[^](#fnrefsilwm9r2m8i)**Famously, you don’t even need the RNN parts, [you only need attention](https://arxiv.org/abs/1706.03762). 5. **[^](#fnrefu0k2lr3vyl)**Though, to be fair, there were other circumstances - it was during the pandemic and I was feeling incredibly gloomy in general. 6. 
**[^](#fnrefkf9t800tp6m)**(Edited to add:) That being said, as Scott Emmons points out in [a comment below](https://www.lesswrong.com/posts/fqryrxnvpSr5w2dDJ/touch-reality-as-soon-as-possible-when-doing-machine?commentId=GE5BXoAhmgyGHKHSj), it's important to not *just* have results on toy examples!
454a45ae-4443-40e7-b619-f6894ca14bd7
trentmkelly/LessWrong-43k
LessWrong
How well can the GPT architecture solve the parity task? Suppose I give it input-output pairs and ask it to output 1 if the number of 1s in the string is odd and 0 if it's even, e.g. 0 -> 0 1 -> 1 11 -> 0 101 -> 0 1101 -> 1 10101001 -> 0 111000101110 -> 1 How well does it do on this task? What if we finetune it on sample data?
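To pin down the task (and to generate prompting or fine-tuning data for it), here is a minimal Python sketch; the labels follow the worked examples above (1 for an odd count of 1s), and the helper names are illustrative, not from the post:

```python
import random

def parity_label(bits: str) -> int:
    # 1 if the number of 1s is odd, 0 if even, matching the examples above
    return bits.count("1") % 2

def make_examples(n: int, max_len: int = 12) -> list[str]:
    # Hypothetical helper: random "string -> label" pairs for prompting or fine-tuning
    examples = []
    for _ in range(n):
        length = random.randint(1, max_len)
        bits = "".join(random.choice("01") for _ in range(length))
        examples.append(f"{bits} -> {parity_label(bits)}")
    return examples

print("\n".join(make_examples(5)))
```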
86a92417-ab29-4756-8375-5f0407509c1e
trentmkelly/LessWrong-43k
LessWrong
Enjoying musical fashion: why not? I just downloaded the latest Radiohead album, and I love it. Thinking back, I started listening to Radiohead years ago when I found out that some of the cool kids in school were into it. With all the hype about the new album, the status/fashion processors in my brain are going to ensure that I enjoy listening to it. I would probably fail a double-blind test with a bunch of imitation bands' fake "new Radiohead albums". But I'm really enjoying listening to the album, and that doesn't seem like a bad or contradictory thing at all, even in light of the statements above. If, hypothetically, I were enjoying it for purely non-fashion reasons, then presumably that enjoyment could also be traced back through a causal chain to facts about brain development, evolutionary psychology, or whatever. But we would have no problem accepting that enjoyment as A Good Thing, since explaining enjoyment does not diminish it. And so it seems in this case.
02ca21ba-806e-44dd-8906-424aea473743
StampyAI/alignment-research-dataset/blogs
Blogs
New report: “Leó Szilárd and the Danger of Nuclear Weapons” Today we release a new report by Katja Grace, “**[Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation](https://intelligence.org/files/SzilardNuclearWeapons.pdf)**” (PDF, 72pp). Leó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies. To prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here: * [Richard Rhodes on Szilárd](https://docs.google.com/document/d/1OZE3gNyLe1YF9Qgob-OyvvuP994i43yFsUyGMvF2vpE/edit?usp=sharing) * [Alex Wellerstein on Szilárd](https://docs.google.com/document/d/1efDOdo4UMK6MZOwKMA424baUbi5FGNKpOnhEO4Fbq7Q/edit?usp=sharing) The basic conclusions of this report, which have not been separately vetted, are: 1. Szilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany. 2. Szilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It’s not clear whether Szilárd’s patent was intended to keep nuclear technology secret or bring it to the attention of the military. In any case, it did neither. 3. Szilárd’s other secrecy efforts were more successful. Szilárd caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy efforts. This effort largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie’s publication caused multiple world powers to initiate nuclear weapons programs. 4. All told, Szilárd’s efforts probably slowed the German nuclear project in expectation. This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons. 5. Szilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development. The post [New report: “Leó Szilárd and the Danger of Nuclear Weapons”](https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
a3360530-e260-4771-9add-12da70ade328
trentmkelly/LessWrong-43k
LessWrong
Soft optimization makes the value target bigger Produced As Part Of The SERI ML Alignment Theory Scholars Program 2.1, mentored by John Wentworth. Summary We can solve Goodhart's Curse by making the agent (1) know the reliability of its goal representation and (2) cap the amount of optimization power devoted to achieving its goals, based on this reliability measure. One simple way to formalize this is with a generalization bound on the quality of the value proxy and a quantilizer whose q value is chosen based on the generalization bound. For a competitive implementation of this algorithm, it's easy to create outer objectives that will push a learned planning algorithm toward soft optimization. I think this is a productive research avenue, and this approach should stack on top of value learning and interpretability approaches to reducing AGI risk. Extremal Goodhart Powerful artificial agents are dangerous if they do unbounded uninterruptible optimization. Unbounded optimization is dangerous because of Extremal Goodhart, which is demonstrated in the diagram below. In this toy example, there are 40 possible policies. We want to choose a policy with the highest true utility. The dark blue area represents our knowledge about the plausible true utility of each policy. The policies are ordered by their expected utility. The policy prior is usually a distribution over the policies a human might execute. We will tend to have more data about the outcomes of policies with more prior mass, hence lower utility variance. In this diagram, the potential policies taken by the agent are arranged in order of expected proxy value[1] on the x-axis. We can see that as we push further to the right (which is what an optimizer does), the proxy value becomes less correlated with the true value.[2] If we completely trusted our distribution over possible utility functions, we would simply take the rightmost policy, since it has the highest expected value. But we can't trust this knowledge, because by choosing the rightmost policy we have a
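To make the quantilizer mentioned in the summary concrete, here is a minimal Python sketch of a q-quantilizer over a finite policy set like the 40-policy toy example above; the function names and the discrete prior are illustrative assumptions, not the post's actual formalization:

```python
import random

def quantilize(policies, proxy_value, prior_weight, q):
    """Sample a policy from the top-q fraction of the prior, ranked by proxy value.

    policies: list of candidate policies
    proxy_value: maps policy -> estimated (proxy) utility
    prior_weight: maps policy -> prior mass (e.g. how likely a human is to pick it)
    q: fraction of prior mass to keep; q=1 reproduces the prior, q -> 0 approaches argmax
    """
    ranked = sorted(policies, key=proxy_value, reverse=True)
    kept, mass = [], 0.0
    for p in ranked:
        kept.append(p)
        mass += prior_weight(p)
        if mass >= q:
            break
    # Renormalize the prior over the kept set and sample from it.
    weights = [prior_weight(p) for p in kept]
    return random.choices(kept, weights=weights, k=1)[0]
```

A smaller q means harder optimization on the proxy; choosing q from a generalization bound on the proxy's quality is the post's proposed way of capping optimization power.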
801304f5-7f92-42ce-b977-b203115157f2
trentmkelly/LessWrong-43k
LessWrong
Meetup : Israel Less Wrong Meetup - Social and Board Games Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games WHEN: 06 March 2014 07:00:00PM (+0200) WHERE: google tel aviv We're going to have a meetup on Thursday, March 6th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv. This time we're going to have a social meetup! Unlike previous meetups where we had a set agenda and a talk - this time we'll be socializing and playing games. Specifically, we look forward to playing any cool board or card game anyone will bring. We'll start the meetup at 19:00, and we'll go on for as long as we like. Feel free to come a little bit later, as there is no agenda. (We've decided to start slightly earlier this time to give us more time and accommodate people with different schedules). We'll meet on the 29th floor of the building (Note: Not the 26th, where Google Campus is). If you arrive and can't find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060. Things that might happen: - You'll trade cool ideas with cool people from the Israel LW community. - You'll discover kindred spirits who agree with you about one/two boxing. - You'll kick someone's ass (and teach them how you did it) at some awesome boardgame. - You'll discover how to build a friendly AGI running on cold fusion (well, probably not) Things that will happen for sure: - You'll get to hang out with awesome people and have fun! There is also talk of food and beers, and if you'd like to bring some too - that would be great. (But you don't have to). If you have any questions feel free to email me at hochbergg@gmail.com, call me at 054-533-0678, or call Anatoly at 054-245-1060. See you there! Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games
388339a3-3f82-4168-bbfc-6b26f6f6dc17
trentmkelly/LessWrong-43k
LessWrong
Levels of communication Communication fails when the participants in a conversation aren't talking about the same thing. This can be something as subtle as having slightly differing mappings of verbal space to conceptual space, or it can be a question of being on entirely different levels of conversation. There are at least four such levels: the level of facts, the level of status, the level of values, and the level of socialization. I suspect that many people with rationalist tendencies operate primarily on the fact level and assume others to be doing so as well, which might lead to plenty of frustration. The level of facts. This is the most straightforward one. When everyone is operating on the level of facts, they are detachedly trying to discover the truth about a certain subject. Pretty much nothing but the facts matters. The level of status. Probably the best way of explaining what happens when everyone is operating on the level of status is the following passage, originally found in Keith Johnstone's Impro:  > MRS X: I had a nasty turn last week. I was standing in a queue waiting for my turn to go into the cinema when I felt ever so queer. Really, I thought I should faint or something. > > [Mrs X is attempting to raise her status by having an interesting medical problem. Mrs Y immediately outdoes her.] > > > MRS Y: You're lucky to have been going to a cinema. If I thought I could go to a cinema I should think I had nothing to complain of at all. > > [Mrs Z now blocks Mrs Y.] > > MRS Z: I know what Mrs X means. I feel just like that myself, only I should have had to leave the queue. > > [Mrs Z is very talented in that she supports Mrs X against Mrs Y while at the same time claiming to be more worthy of interest, her condition more severe. Mr A now intervenes to lower them all by making their condition seem very ordinary.] > > MR A: Have you tried stooping down? That makes the blood come back to your head. I expect you were feeling faint. > > [Mrs X defends her
b32e974b-fb08-46b9-b6aa-722662c78c7a
trentmkelly/LessWrong-43k
LessWrong
Simulacrum 3 As Stag-Hunt Strategy Reminder of the rules of Stag Hunt: * Each player chooses to hunt either Rabbit or Stag * Players who choose Rabbit receive a small reward regardless of what everyone else chooses * Players who choose Stag receive a large reward if-and-only-if everyone else chooses Stag. If even a single player chooses Rabbit, then all the Stag-hunters receive zero reward. From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit. How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit? If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is the obvious right choice, regardless of what everyone else is doing. Now, tricking people is usually a risky strategy at best - it’s not something we can expect to work reliably, especially if we need to trick everyone. But this is an unusual case: we’re tricking people in a way which (we expect) will benefit them. Therefore, they have an incentive to play along. So: we make our case for Stag, try to convince people it’s the obviously-correct choice no matter what. And… they’re not fooled. But they all pretend to be fooled. And they all look around at each other, see everyone else also pretending to be fooled, and deduce that everyone else will therefore choose Stag. And if everyone else is choosing Stag… well then, Stag actually is the obvious choice. Just like that, Stag becomes the new Schelling point. We can even take it a step further. If nobody actually needs to be convinced that Stag is the best choice regardless, then we don’t actually need to try to trick them. We can just pretend to try to trick them. Pretend to pretend that Stag is the best choice regardless. That will give everyone else the opportunity to pretend to be fooled by this utterly transparent ploy, and onc
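To make the payoff rules listed at the top of the post concrete, here is a minimal Python sketch; the specific reward values are illustrative assumptions, since the post doesn't fix numbers:

```python
def stag_hunt_payoffs(choices, rabbit_reward=1, stag_reward=5):
    """Payoffs for one round of Stag Hunt, per the rules above.

    choices: list of "stag" or "rabbit", one entry per player.
    Rabbit pays a small reward unconditionally; Stag pays a large
    reward only if every player chose Stag, and zero otherwise.
    """
    everyone_stag = all(c == "stag" for c in choices)
    return [
        rabbit_reward if c == "rabbit"
        else (stag_reward if everyone_stag else 0)
        for c in choices
    ]

# A single rabbit-chooser zeroes out all the stag-hunters:
print(stag_hunt_payoffs(["stag", "stag", "rabbit"]))  # [0, 0, 1]
```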
5a3fbd1b-b800-494d-99cb-864e000da8f4
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] diffusion magnetizes manifolds (DALL-E 2 intuition building) In trying to make sense of why diffusion works so well, I explored three complementary models related to magnetism, Hansel & Gretel, and landing airplanes.
d3628c08-cfe2-4f67-b30f-4ba12b0f9cce
trentmkelly/LessWrong-43k
LessWrong
I attempted the AI Box Experiment again! (And won - Twice!)   Summary Update #3: I have since played two more experiments. Please read this for a follow-up. So I just came out of two AI Box experiments. The first was against Fjoelsvider, with me playing as Gatekeeper, and the second was against SoundLogic, with me as an AI. Both are members of the LessWrong IRC. The second game included a $40 monetary incentive (also $20 to play), which I won and which was donated on behalf of both of us. For those of you who have not seen my first AI box experiment, where I played against MixedNuts\Leotal and lost, reading it will provide some context to this writeup. Please do so. At that time, I declared that I would never play this experiment again -- since losing put me in incredibly frustrating weird mental states. Of course, this post is evidence that I'm terrible at estimating the likelihood of refraining from an activity, since I played two games seven months after the first. In my defense, in the first game, I was playing as the gatekeeper, which was much less stressful. In the second game, I played as an AI, but I was offered $20 to play plus $40 if I won, and money is a better motivator than I initially assumed. Furthermore, in the last thread I asserted that > Rather than my loss making this problem feel harder, I've become convinced that rather than this being merely possible, it's actually ridiculously easy, and a lot easier than most people assume. It would be quite bad for me to assert this without backing it up with a victory. So I did. First Game Report - Tuxedage (GK) vs. Fjoelsvider (AI) I (Gatekeeper) played against Fjoelsvider (AI), a regular in the LessWrong IRC (he doesn't have an account on the official website). This game used the standard EY ruleset seen here. It took 1 hour 20 minutes out of a possible two hours, and the total word count was 7066 words. The AI box experiment occurred because Fjoelsvider believed that it was easy for an AI to escape the box, and wanted to experimentally test this
7d20b2ba-fea5-4d0d-ae7a-00d508cb8a57
trentmkelly/LessWrong-43k
LessWrong
IRL 3/8: Mitigating degeneracy: feature matching Every Monday for 8 weeks, we will be posting lessons about Inverse Reinforcement Learning. This is lesson 3. Note that access to the lessons requires creating an account here. This lesson comes with the following supplementary material: Featurization, Support Vector Machines. Have a nice day!
a96a2f70-075b-4aff-a404-65dc2e0bf5ed
trentmkelly/LessWrong-43k
LessWrong
You can remove GPT2’s LayerNorm by fine-tuning for an hour This work was produced at Apollo Research, based on initial research done at MATS. Edit: arXiv version available at https://arxiv.org/abs/2409.13710 LayerNorm is annoying for mechanistic interpretability research (“[...] reason #78 for why interpretability researchers hate LayerNorm” – Anthropic, 2023). Here’s a Hugging Face link to a GPT2-small model without any LayerNorm. The final model is only slightly worse than a GPT2 with LayerNorm[1]:

| Dataset | Original GPT2 | Fine-tuned GPT2 with LayerNorm | Fine-tuned GPT2 without LayerNorm |
|---|---|---|---|
| OpenWebText (ce_loss) | 3.095 | 2.989 | 3.014 (+0.025) |
| ThePile (ce_loss) | 2.856 | 2.880 | 2.926 (+0.046) |
| HellaSwag (accuracy) | 29.56% | 29.82% | 29.54% |

I fine-tuned GPT2-small on OpenWebText while slowly removing its LayerNorm layers, waiting for the loss to go back down after each removal: Introduction LayerNorm (LN) is a component in Transformer models that normalizes embedding vectors to have constant length; specifically it divides the embeddings by their standard deviation taken over the hidden dimension. It was originally introduced to stabilize and speed up training of models (as a replacement for batch normalization). It is active during training and inference. $$\mathrm{LN}(x)=\frac{x-\mathbb{E}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}}\cdot\gamma+\beta$$ The equation includes the standard deviation (std) $\sqrt{\mathrm{Var}[x]+\epsilon}$, which makes it a non-linear operation. This hinders interpretability in a variety of ways, from annoyances and inaccuracies such as * attributing residual stream directions to logit effects (e.g. SAE features, direct logit attribution),[2] * being annoying to deal with in Attribution Patching, or * being difficult to deal with in Apollo’s LIB method. In the Docstring circuit analysis we seriously considered whether the model might be using LN in its algorithm. This post even shows that LN can be used as the sole non-linearity to solve non-linear classification problems (see also this related work). Recently, with progress in Sparse Dictionary Learning, agendas (e.g. this one) imagine decomposing networks int
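To make the LN(x) formula above concrete, here is a minimal PyTorch sketch of the normalization it describes; this is an illustration written for this summary, not the post's actual fine-tuning code:

```python
import torch

def layer_norm(x: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
               eps: float = 1e-5) -> torch.Tensor:
    # Normalize over the hidden (last) dimension, per the LN(x) formula above:
    # subtract the mean, divide by sqrt(Var + eps), then scale and shift.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps) * gamma + beta

# Example: a batch of 2 embeddings with hidden size 4
x = torch.randn(2, 4)
out = layer_norm(x, gamma=torch.ones(4), beta=torch.zeros(4))
```

The division by the (data-dependent) standard deviation is what makes the operation non-linear, which is exactly the property that complicates the attribution methods listed above.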
f7b7de1f-5d31-4dec-88a1-adc1e29411da
trentmkelly/LessWrong-43k
LessWrong
Kids Moving Pictures When I went back to work after my leave with Lily I wanted to be able to show people pictures, so I set up a site that would show moving ones. As she got older, I added new sets, and I've since made pages for Anna and Nora: I thought it would be neat to make another site, though, that aligns these pictures by age: Give it a try! There's a slider so you can select which age. Currently it's just 0m through 3m, but as Nora gets older I expect to keep updating it.
1231949e-85f8-4382-ae9f-1937295ffe66
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA—The Worst Argument in the World

Discussion article for the meetup : West LA—The Worst Argument in the World

WHEN: 01 October 2014 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby.

Discussion: Last week, I included a link to the Worst Argument in the World essay, not because it was relevant to last week's topic, but because I thought it was something that people should read. This week, I realized what I should have done instead. The three recommended readings are all rewrites of the same essay, but since the first one is the best, you don't need to read the others unless you didn't understand it or something.

Recommended Reading:

* The Worst Argument in the World
* Cleaning Up the "Worst Argument" Essay
* The Noncentral Fallacy—The Worst Argument in the World

No prior exposure to Less Wrong is required; this will be generally accessible.

Discussion article for the meetup : West LA—The Worst Argument in the World
fbf5c24a-da00-44cd-b341-33a9a18e4b24
trentmkelly/LessWrong-43k
LessWrong
Risks from Approximate Value Learning

Solving the value learning problem is (IMO) the key technical challenge for AI safety. How good or bad is an approximate solution?

EDIT for clarity: By "approximate value learning" I mean something which does a good (but suboptimal from the perspective of safety) job of learning values. So it may do a good enough job of learning values to behave well most of the time, and be useful for solving tasks, but it still has a non-trivial chance of developing dangerous instrumental goals, and is hence an Xrisk.

Considerations:

1. How would developing good approximate value learning algorithms affect AI research/deployment?

It would enable more AI applications. For instance, many many robotics tasks such as "smooth grasping motion" are difficult to manually specify a utility function for. This could have positive or negative effects:

Positive:
* It could encourage more mainstream AI researchers to work on value-learning.

Negative:
* It could encourage more mainstream AI developers to use reinforcement learning to solve tasks for which "good-enough" utility functions can be learned.

Consider a value-learning algorithm which is "good-enough" to learn how to perform complicated, ill-specified tasks (e.g. folding a towel). But it's still not quite perfect, and so every second, there is a 1/100,000,000 chance that it decides to take over the world. A robot using this algorithm would likely pass a year-long series of safety tests and seem like a viable product, but would be expected to decide to take over the world in ~3 years.

Without good-enough value learning, these tasks might just not be solved, or might be solved with safer approaches involving more engineering and less performance, e.g. using a collection of supervised learning modules and hand-crafted interfaces/heuristics.

2. What would a partially aligned AI do? An AI programmed with an approximately correct value function might fail

* dramatically (see, e.g. Eliezer, on AIs "tiling the solar system with tiny
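For the record, the "~3 years" figure above is just the mean of a geometric distribution; a quick sanity check (my own arithmetic, not the author's code):

```python
p = 1e-8                                  # per-second chance of a treacherous turn
seconds_per_year = 60 * 60 * 24 * 365

expected_years = (1 / p) / seconds_per_year
print(round(expected_years, 2))           # ~3.17 years until the expected failure

# Chance of passing a year-long safety test without incident:
print(round((1 - p) ** seconds_per_year, 3))  # ~0.73, so it would likely look safe
```

Which is the point of the example: a failure mode rare enough to slip through testing can still be near-certain over deployment timescales.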
98b559e9-e43d-4a92-98f7-7160aa72fc3c
trentmkelly/LessWrong-43k
LessWrong
My favorite Soviet songs (All translations[1] here are mostly literal which means you miss out on the rhythm and rhyme of the poetry.) Time Machine Unusually for someone of my generation who has almost never been to Russia, I am a big fan of the Soviet rock band Машина Времени (Mashina Vremeni, literally means “Time Machine”). I think they are rare in having both great melodies and instrumentation alongside interesting, poetic, often philosophical lyrics. I find their lyrics particularly compelling—so far I have not discovered any other group or artist whose lyrics compare in quality and topic diversity. I have listened to almost every song put out by this band. Here are three I particularly like, with translated lyrics (headings link to YouTube): Рыбка в Банке (Fish in a Jar) Рыбка в банке на моём окне  Эта рыбка в банке счастлива вполне  Позабыла море — свой родимый дом  И не знает горя в банке за стеклом  Кверху, книзу — недалёкий путь  Даже телевизор виден ей чуть-чуть  Шторм ни разу не был, полный штиль всегда  Прямо с неба падает еда  Мир как в рамке: тихо и тепло  Он круглый словно банка, и ясный как стекло  Но нежданно к ней пришла беда:  Как-то в банке высохла вода Translation A fish in a jar on my windowsill  This fish in the jar is quite content  It has forgotten the sea—its native home  And knows no sorrow in the jar behind glass  From top to bottom is not a long journey  Even the television is slightly visible to it  There has never been a storm, always complete calm  Food falls directly from the sky  The world is like in a frame: quiet and warm  It's round like a jar, and clear as glass  But unexpectedly trouble came to the fish:  One day the water in the jar dried up Морской Закон (Maritime Law) Есть в море закон, он стар как Земля  Открыт неизвестно где:  Если крысы бегут с корабля  Быть кораблю в беде  Крыса всегда крикнет «Беда!»  А значит, есть шанс на успех  За это били крыс иногда  Но при этом не так, чтоб всех  Но боцман решил, поскольку был строг  Серым
ec268f5f-0633-4d19-8257-c43828225f94
trentmkelly/LessWrong-43k
LessWrong
Transformer Dynamics: a neuro-inspired approach to MechInterp

How do AI models work? In many ways, we know the answer to this question, because we engineered those models in the first place. But in other, fundamental, ways, we have no idea. Systems with many parts that interact with each other nonlinearly are hard to understand. By “understand” we mean they are hard to predict. And so while the AI community has enjoyed tremendous success in creating highly capable models, we are far behind in actually understanding how they are able to perform complex reasoning (or autoregressive token generation).

The budding field of Mechanistic Interpretability is focused on understanding AI models. One recent approach that has generated a lot of attention is the Sparse Autoencoder (SAE), which hypothesizes that a neural network encodes information about the world with overlapping or superimposed sets of activations; this approach attempts to discover activations inside transformer models that correspond to monosemantic concepts when sparsified or disentangled with the SAE. Work along this path has shown some success—the famous Golden Gate Claude is a great example of an SAE feature corresponding to a monosemantic concept, and one that has causal power over the model (i.e. activating that feature led Claude to behave as if it were the Golden Gate Bridge)—but it also has some limitations. First, in practice, it’s prohibitive to train SAEs for every new LLM; second, SAE features are not always clear or as monosemantic as they should be to have explanatory power; and third, they are not always activated by the feature they are purported to encode, and their activation does not always have causal power over the model.

We were inspired by a trend in neuroscience to focus on the dynamics of neural populations. The emphasis here is both on dynamics and on populations, and the underlying hypothesis is that important neural computations unfold over time, and are spread across a group of relevant neurons. This approach is in contrast to analyses th
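For readers who want the SAE setup described above in concrete terms, here is a minimal sketch (the dimensions, expansion factor, and L1 coefficient are illustrative assumptions, not anyone's actual training code):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps activations into an overcomplete feature basis and reconstructs them."""
    def __init__(self, d_model=768, expansion=8):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_model * expansion)
        self.decoder = nn.Linear(d_model * expansion, d_model)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))  # sparse, non-negative feature activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
acts = torch.randn(32, 768)   # stand-in for a batch of residual-stream activations
recon, feats = sae(acts)

# Reconstruction loss plus an L1 penalty that pushes feature activations toward sparsity.
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
```

The hope is that individual units of `feats` end up monosemantic; the limitations listed above are about how often that hope actually pans out.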
80792ccd-cd7a-4a12-bc46-ecccdbd5fc94
trentmkelly/LessWrong-43k
LessWrong
Stanford Encyclopedia of Philosophy on AI ethics and superintelligence

The Stanford Encyclopedia of Philosophy - pretty much the standard reference for surveys of philosophical topics - has a brand-new ("First published Thu Apr 30, 2020") article, "Ethics of Artificial Intelligence and Robotics". Section 2.10 is called "Singularity". I think it has a reasonably fair and competent summary of superintelligence discussion:

----

2.10 Singularity

2.10.1 SINGULARITY AND SUPERINTELLIGENCE

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

> computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irvin Good:

> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintellige
e93bf58b-8269-4e80-b0fa-28c7645eef04
trentmkelly/LessWrong-43k
LessWrong
Will working here advance AGI? Help us not destroy the world!

I often talk to developers who prefer not destroying the world by accident (specifically by accelerating AGI risk), but neither they nor I can decide if specific companies qualify for this. Could someone knowledgeable help? A few short replies could probably change someone's career decisions.

Can you help with future questions? Please subscribe to this comment. I'll reply to it only when there's a new open question. Thank you!

Adding: Reply anonymously here
ebafe1f6-86e3-456d-83a9-ed5f7410119e
trentmkelly/LessWrong-43k
LessWrong
Meetup : LessWrong Montreal - Social Resilience Discussion article for the meetup : LessWrong Montreal - Social Resilience WHEN: 21 January 2013 06:30:00PM (-0500) WHERE: 655 Ave. Du President-Kennedy, Montreal, Quebec, Canada The next weekly meeting of the Montreal LessWrong group, we're going to be working on improving our Social Resilience. We will be upstairs at the Cheesecake Factory. See you then! Discussion article for the meetup : LessWrong Montreal - Social Resilience
950a602c-6b7d-4edd-a860-aa5ea01e9110
trentmkelly/LessWrong-43k
LessWrong
A simple approach to 5-and-10

To recap, A is trying to deduce enough about U to maximize it:

A := Find some f with a proof of "U = f(A)", then return argmax f.

If U is A, A can fail by proving f = {(5,5),(10,0)}: If it could prove this, it would be true, for only 5 is checked. Then by Löb's theorem, it can prove this.

I've thrown this problem at some people in my university and a fellow student's idea led to:

A' := Find all f with a proof of "U = f(A')", then return argmax (their pointwise maximum).

The true f is among the found f. Fooling ourselves into a suboptimal action would require us to believe it at least as good as the true maximum, which the single check refutes. Note that A' can't prove that some f is not found, due to Gödel. Therefore, it can never prove what it is going to do!
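A toy rendering of the two agents, with the proof search stubbed out as a list of "provable" action-to-utility tables (purely illustrative; the actual content of the idea is in the proof theory, not this code):

```python
true_f = {5: 5, 10: 10}
spurious_f = {5: 5, 10: 0}  # the Löbian self-fulfilling table: only 5 ever gets checked

def A(provable_fs):
    """Original agent: trusts whichever provable f it happens to find first."""
    f = provable_fs[0]
    return max(f, key=f.get)

def A_prime(provable_fs):
    """Fixed agent: argmax of the pointwise maximum over all provable f."""
    g = {a: max(f[a] for f in provable_fs) for a in provable_fs[0]}
    return max(g, key=g.get)

print(A([spurious_f, true_f]))        # 5  (fooled by the spurious table)
print(A_prime([spurious_f, true_f]))  # 10 (the true f is among those found)
```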
7cd724a2-794a-467a-a4d4-26d39f4099c7
trentmkelly/LessWrong-43k
LessWrong
How to not earn a delta (Change My View)
9f310140-52a4-41b3-ad94-a131800e594e
trentmkelly/LessWrong-43k
LessWrong
Conversations on Alcohol Consumption

Recently I have decided to stop drinking alcohol. Below are some conversations with two friends who are on similar paths. I decided to post these conversations because I am looking for as many perspectives as possible and the LessWrong community might have some insights. I will keep posting future conversations with my friends in the comments.

Chicago is an old friend of mine who decided to stop drinking four years ago. I decided to message him when I decided to stop drinking 30 days ago, and our email conversations are below.

Adirondack is an acquaintance of mine who a few days ago posted on social media his desire to begin a journey to cut back or outright quit drinking alcohol.

> On Thu, 6 Oct 2022 at 03:57, Chicago wrote:
>
> Hey bud, following up about not drinking.
>
> I've got a lot of thoughts about this, on both the theoretical and the practical sides, but what would be most useful to you?
>
> Happy to jump in, and will talk ad nauseam about this bc it's a favorite topic.
>
> Thanks,
>
> Chicago

On Thu, Oct 6, 2022 at 1:43 AM Annapurna wrote:

Hey brother,

Thank you for wanting to chat with me about this subject. So I've actually been thinking about how harmful alcohol has been to me for about 10 years. I have journal entries from 2012 saying how much I hate how I feel after a night of drinking, and questioning whether the lost time and energy was worth it.

My questioning whether alcohol is a substance for me comes from two places:

1. Hangovers / physical and mental state the day(s) after drinking. In 2011 I spent 3 months without drinking alcohol while studying for a professional exam. So by the time the exam came, I felt levels of energy and lucidness that I had never felt before. It's as if since I was 15 my body was running at 80% capacity constantly (since I probably went less than 15 days without a drink from 2003-2011), but then after 60+ days of no alcohol, I finally felt what it was like to be 100%. I go back to savouring
f0fc5df4-a2b7-45cc-868b-36a70be3d650
trentmkelly/LessWrong-43k
LessWrong
Should you work at a leading AI lab? (including in non-safety roles)

This post is a (slightly) edited cross-post of a new 80,000 Hours career review on working at a leading AI lab. See EA Forum comments here.

Summary

Working at a leading AI lab is an important career option to consider, but the impact of any given role is complex to assess. It comes with great potential for career growth, and many roles could be (or lead to) highly impactful ways of reducing the chances of an AI-related catastrophe — one of the world’s most pressing problems. However, there’s a risk of doing substantial harm in some cases. There are also roles you should probably avoid.

Pros

* Many roles have a high potential for impact by reducing risks from AI
* Among the best and most robust ways to gain AI-specific career capital
* Possibility of shaping the lab’s approach to governance, security, and standards

Cons

* Can be extremely competitive to enter
* Risk of contributing to the development of harmful AI systems
* Stress and frustration, especially because of a need to carefully and frequently assess whether your role is harmful

Key facts on fit

Excellent understanding of the risks posed by future AI systems, and for some roles, comfort with a lot of quick and morally ambiguous decision making. You’ll also need to be a good fit for the specific role you’re applying for, whether you’re in research, comms, policy, or something else (see our related career reviews).

Overall recommendation: it's complicated

We think there are people in our audience for whom this is their highest impact option — but some of these roles might also be very harmful for some people. This means it's important to take real care figuring out whether you're in a harmful role, and, if not, whether the role is a good fit for you.

Review status

Based on a medium-depth investigation

This review is informed by two surveys of people with expertise about this path — one on whether you should be open to roles that advance AI capabilities (written up here), and a second foll
c6512036-da56-4509-8102-8184e3bbe464
trentmkelly/LessWrong-43k
LessWrong
Wiki-Tag FAQ

This FAQ is specifically for the tagging system. For everything else, see the general FAQ.

The major sections of this FAQ are:

* General
* How Can I Help?
* Applying Tags
* Tag Voting
* Making Awesome Tags

Related important pages:

* Tags Talk/Discussion Thread
* Tag Grading Scheme

General

What is tagging on LessWrong?

Tags allow related content to be linked together. The system is straightforward:

* Posts can be tagged with tags (example).
* Tag pages provide a description of the concept the tag is about and a list of all posts tagged (example).
* The Concepts page displays all existing tags.
* Tags appear in search results.
* Tags can be used to filter the Latest Posts list on the Frontpage, allowing you to see more or less of posts with certain tags.
* You can vote on the relevance of a given tag to a post, indicating “I think this tag shouldn’t apply to this post” or “this post is a central exemplar of this tag”.
  * Technically, applying a tag to a post is upvoting its relevance. Tag relevance determines the default sort order of posts on tag pages.
  * See the Tag Voting section of this FAQ.

What is the point of tagging?

Philosophically, the tagging system is an attempt to give posts on LessWrong longevity. In contrast to news and social media sites, where the main content being read is what was posted that week, we want users to read the best and most relevant content to them – whenever it was written. Longevity and findability of old content also seems key to thinkers building upon each other. If I want to contribute new knowledge, it helps to build on what was already said previously – to maintain the conversation – and for that, we want to keep track of the “conversation”. Tagging is an attempt to do that.

Tagging accomplishes this through simple means: it allows you to conveniently find all the content on a specific topic. That can be done top-down from the Concepts page, or bottom-up from a tag on a post’s page. Tags are also a qui
f70afed6-a1f8-4ce3-9448-416bdda7d360
trentmkelly/LessWrong-43k
LessWrong
Does increasing peak typing speed help? Is it worth learning to type faster? If – like me – basically what you do is type, this seems likely to be a pretty clear win, if you have any interventions that would improve it at all. Ryan Carey suggested a painful sounding intervention which improves maximum typing speed a lot, but said that since he usually types substantially below his maximum typing speed, this would not help. His model seems to be that typing speed is basically either bottlenecked by physical typing ability or something else (like thinking speed), and it is not much worth trying to speed up the one that is not bottlenecking the process. This sounds pretty reasonable, but did not match my intuitions, and seemed extremely cheap to test, so I decided to test it. I tried a number of ways of reducing my typing speed, and chose three that were reasonably spaced across the spectrum (~90wpm, ~60wpm, ~30wpm) on a typing speed test. These were (respectively) Dvorak keyboard layout, Dvorak with my left pinky finger tied up with a rubber band or tape, and Qwerty keyboard layout. I measured each one three times on that test, and three times on longer (3-5m) journaling activities, mostly writing about issues in my life that I wanted to think about anyway. These journaling bouts tended to be faster than I would usually casually write I think, so this does not really test how much peak typing speed improves combination writing/staring into space speed. But they were slow enough to be real journaling, with some real insights, and were substantially slower than peak typing speed. My results are below. They are a bit dubious, but I think are good enough for the purpose. Moving from the middle method to the top method improved my real speed in proportion to my peak speed. Between the bottom two, it made little difference. Further details are more confusing – that there is no difference between peak and real speed for Qwerty suggests that physical typing is a big bottleneck there, however improving the typing
53cb48e4-9440-48bb-9538-abf3ed684dc6
trentmkelly/LessWrong-43k
LessWrong
Gemini will bring the next big timeline update

There is a genre of LLM critique that criticises LLMs for being, well, LLMs. Yann LeCun for example points to the inability of GPT-4 to visually imagine the rotation of interlocking gears as a fact that shows how far away AGI is, instead of a fact that shows how GPT-4 has not been trained on video data yet.

There are many models now that "understand" images or videos or even more modalities. However, they are not end-to-end trained on these multiple modalities. Instead they use an intermediary model like CLIP, that translates into the language domain. This is a rather big limitation, because CLIP can only represent concepts in images that are commonly described in image captions.

Why do I consider this a big limitation? Currently it looks like intelligence emerges from learning to solve a huge number of tiny problems. Language seems to contain a lot of useful tiny problems. Additionally it is the interface to our kind of intelligence, which allows us to assess and use the intelligence extracted from huge amounts of text.

This means that adding a modality with a CLIP-like embedding and then doing some fine-tuning does not add any intelligence to the system. It only adds eyes or ears or gears.

Training end-to-end on multi-modal data should allow the model to extract new problem solving circuits from the new modalities. The resulting model would not just have eyes, but visual understanding.

Deepmind did a mildly convincing proof-of-concept with Gato last year, a small transformer trained on text, images, computer games and robotics. Now it seems they will try to scale Gato to Gemini, leapfrogging GPT-4 in the process.

GPT-4 itself has image processing capabilities that are not yet available to the general public. But whether these are an add-on or result of integrated image modelling we don't know yet.

To me it seems very likely, that a world where the current AI boom fizzles is a world where multi-modality does not bring much benefit or we cannot figu
5fa3b99b-6407-43ed-ae48-1f9bffc21861
trentmkelly/LessWrong-43k
LessWrong
Sydney AI Safety Fellowship Review The Sydney AI Safety Fellowship was a 7-week program with a coworking space, speakers, mentors and social activities. The idea was for each participant to have a "project", but only one had a research project, whilst the others simply dedicated their time towards figuring out their plans for the future. More specifically, they were thinking about whether they wanted to shift their careers or prospective PhDs towards AI safety and if so what they focus on and what would be their next step. Inspiration: There were four main influences: * I interviewed on improving the AI Safety Pipeline and I noticed that EAs in more cities seemed to be setting up coworking spaces. * Kat Woods suggested that there should be a lightweight research agency and I modified the idea to be in-person as I'm not really a fan of remote programs and I shortened it to a summer program as a minimal viable product. * JJ hosted an AI Safety meet-up at WeWork and I noticed that the space was really nice and I heard that WeWork was offering a discount on coworking memberships which meant that I could potentially self-fund this. * Michael Aird indicated that mentorship was crucial for a research development program and since I had no idea who we'd be able to find as mentors[1], I pivoted hard away from this. We advertised mostly non-research projects, but we indicated that we would accept research projects from people who already have proven research experience[2]. Target Audience: Given that it is a 7-week program the natural target audience is students or people in the middle of a career transition. I'm thinking of complementing this program with a retreat later in the year to suit busy professionals so that our local activities cover different market segments. Last Minute Organising: Unfortunately, this idea came to me quite late and since the university summer break is fixed, I couldn't have pushed it back without having to wait until next year. We also didn't know until very late whethe
545f906f-fc66-47b8-a859-5041b8ace4ed
trentmkelly/LessWrong-43k
LessWrong
Reading Level of Less Wrong

Here's something to pick our collective spirits up: According to Google's infallible algorithms, 20% of the content on LessWrong.com falls within the 'Advanced' reading level. For comparison, another well-known bastion of intelligence on the internets, Hacker News, only has 4% of its content in that category.

Strangely, inserting a space before the name of the site in the query tends to reduce the amount of content that falls in the highest bucket, but I am told that highly trained Google engineers are interrogating the bug in a dimly lit room as we speak, and expect it to crack soon.
52d7bfcd-af2f-42fe-afac-d87db079bb26
trentmkelly/LessWrong-43k
LessWrong
The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better

First, let me quote my previous ancient post on the topic:

> Effective Strategies for Changing Public Opinion
>
> The titular paper is very relevant here. I'll summarize a few points.
>
> * The main two forms of intervention are persuasion and framing.
> * Persuasion is, to wit, an attempt to change someone's set of beliefs, either by introducing new ones or by changing existing ones.
> * Framing is a more subtle form: an attempt to change the relative weights of someone's beliefs, by emphasizing different aspects of the situation, recontextualizing it.
> * There's a dichotomy between the two. Persuasion is found to be very ineffective if used on someone with high domain knowledge. Framing-style arguments, on the other hand, are more effective the more the recipient knows about the topic.
> * Thus, persuasion is better used on non-specialists, and it's most advantageous the first time it's used. If someone tries it and fails, they raise the recipient's domain knowledge, and the second persuasion attempt would be correspondingly hampered. Cached thoughts are also in effect.
> * Framing, conversely, is better for specialists.

My sense is that, up to this point, AI risk advocacy targeted the following groups of people:

* ML researchers and academics, who want "scientifically supported" arguments.
  * Advocacy methods: theory-based arguments, various proof-of-concept empirical evidence of misalignment, model organisms, et cetera.
* US policymakers, who want either popular support or expert support to champion a given cause.
  * Advocacy methods: behind-the-scenes elbow-rubbing, polls showing bipartisan concern for AI, parading around the experts concerned about AI.
* Random Internet people with interests or expertise in the area.
  * Advocacy methods: viral LW/Xitter blog posts laying out AI X-risk arguments.

Persuasion

I think all of the above demographics aren't worth trying to persuade further at this point in time. It was very productive before, w
42ba1678-889d-432e-880a-3b3e0b60fd9b
trentmkelly/LessWrong-43k
LessWrong
Are there technical/object-level fields that make sense to recruit to LessWrong?

LessWrong is about learning rationality, and applying rationality to interesting problems. An issue is that solving interesting problems often requires fairly deep technical knowledge of a field. To use rationality to help solve problems (especially as a group), you need both people who have skills in probability/meta-cognition/other-rationality skills, as well as the actual skills directly applicable to whatever problem is under discussion.

But if you show up on LW and post something technical (or even just "specialized") in a field that isn't already well represented on the forum, it'll be hard to have meaningful conversations about it. Elsewhere on the internet there are probably forums focused on whatever-your-specialization is, but those places won't necessarily have people who know how to integrate evidence and think probabilistically in confusing domains.

So far the LW userbase has a cluster of skills related to AI alignment, some cognitive science, decision theory, etc. If a technical post isn't in one of those fields, you'll probably get better reception if it's somehow "generalist technical" (i.e. in some field that's relevant to a bunch of other fields), or if it somehow starts one inferential unit away from the overall LW userbase.

A plausibly good strategy is to try to recruit a number of people from a given field at once, to try to increase the surface area of "serious" conversations that can happen here. It might make most sense to recruit from fields that are close enough to the existing vaguely-defined-LW memeplex that they can also get value from existing conversations here.

Anyone have ideas on where to do outreach in this vein? (Separately, perhaps: how to do outreach in this vein?). Or, alternately, anyone have a vague-feeling-of-doom about this entire approach and have alternate suggestions or reasons not to try?
1aca66f5-eddb-4668-8989-73d28d0e858d
trentmkelly/LessWrong-43k
LessWrong
Alignment Newsletter #51

Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter.

You may have noticed that I've been slowly falling behind on the newsletter, and am now a week behind. I would just skip a week and continue -- but there are actually a lot of papers and posts that I want to read and summarize, and just haven't had the time. So instead, this week you're going to get two newsletters. This one focuses on all of the ML-based work that I have mostly been ignoring for the past few issues.

Highlights

Towards Characterizing Divergence in Deep Q-Learning (Joshua Achiam et al): Q-Learning algorithms use the Bellman equation to learn the Q*(s, a) function, which is the long-term value of taking action a in state s. Tabular Q-Learning collects experience and updates the Q-value for each (s, a) pair independently. As long as each (s, a) pair is visited infinitely often, and the learning rate is decayed properly, the algorithm is guaranteed to converge to Q*.

Once we get to complex environments where you can't enumerate all of the states, we can't explore all of the (s, a) pairs. The obvious approach is to approximate Q*(s, a). Deep Q-Learning (DQL) algorithms use neural nets for this approximation, and use some flavor of gradient descent to update the parameters of the net such that it is closer to satisfying the Bellman equation. Unfortunately, this approximation can prevent the algorithm from ever converging to Q*.

This paper studies the first-order Taylor expansion of the DQL update, and identifies three factors that affect the DQL update: the distribution of (s, a) pairs from which you learn, the Bellman update operator, and the neural tangent kernel, a property of the neural net that specifies how information from one (s, a) pair generalizes to other (s, a) pairs. The theoretical analysis shows that as long as there is limited generalization between (s, a) pairs, and eac
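For concreteness, the tabular update being contrasted with DQL looks roughly like this (a generic sketch of standard Q-learning, not code from the paper):

```python
from collections import defaultdict

Q = defaultdict(float)  # Q[(s, a)], implicitly initialized to zero

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Bellman backup; each (s, a) pair is updated independently of the rest."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

q_update("s0", "left", 1.0, "s1", actions=["left", "right"])
```

DQL swaps the table for a network, so a gradient step on one (s, a) pair also moves the values of others; the neural tangent kernel is what quantifies that spillover.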
5e55561a-ca6c-4d22-b52b-90bdce15e6a3
trentmkelly/LessWrong-43k
LessWrong
Safe AGI Complexity: Guessing a Higher-Order Algebraic Number

Introduction: A rational number can represent an arbitrary sequence of bits where bit patterns either terminate or repeat infinitely. This provides a mathematical toy model for reasoning about AGI safety. A bit sequence can be translated into a sequence of actions. The goal is to prove safety of some bit sequence without bounds. While this toy model does not allow input to the AGI, it offers useful insights into the complexity of safe AGI in the case where the entire deterministic world state and human values have been internalized.

I have been thinking about numbers of the form:

    (a / b)^(1 / c)

Where a, b, c are natural numbers. In number theory, this is called "Radical Extensions of Rational numbers" (I will use RER for short in this blog post). RER is closed under multiplication, but not addition:

    (a0/a1)^(1/a2) * (b0/b1)^(1/b2) = ( (a0^b2*b0^a2) / (a1^b2*b1^a2) )^(1 / (a2*b2) )

RER covers the real line a little better than rationals, by including some irrational numbers, but not completely due to the Abel-Ruffini theorem, which implies that there are more irrational numbers than can be constructed using a generalization of RER to solutions of polynomial equations (up to some limit below the more irrational solutions). These "more irrational" numbers are called "higher-order algebraic numbers" or "transcendental algebraic numbers" in number theory.

Translated as an analogy to computer science:

* A natural number might be thought of as a "source" of some program
* A rational number might be thought of as a "computationally reducible" calculation using some source as input
* A RER might be thought of as a simple encryption algorithm
* A higher-order algebraic number might be thought of as a recursive encryption scheme that can make it arbitrarily difficult to interpret some message

Translated as an analogy to AI safety:

* A natural number might be thought of as a sequence of actions
* A rational number might be thought of as operational safe
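The closure identity above is easy to sanity-check numerically (arbitrary small values, floating point only):

```python
a0, a1, a2 = 3, 4, 2   # (3/4)^(1/2)
b0, b1, b2 = 5, 7, 3   # (5/7)^(1/3)

lhs = (a0 / a1) ** (1 / a2) * (b0 / b1) ** (1 / b2)
rhs = ((a0**b2 * b0**a2) / (a1**b2 * b1**a2)) ** (1 / (a2 * b2))
print(abs(lhs - rhs) < 1e-12)  # True: the product is again of RER form
```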
01203867-6c70-4ffb-a248-9b90c5bbbe80
StampyAI/alignment-research-dataset/lesswrong
LessWrong
FAI and the Information Theory of Pleasure

Previously, I talked about [the mystery of pain and pleasure](/r/lesswrong/lw/hek/the_mystery_of_pain_and_pleasure/), and how little we know about what sorts of arrangements of particles intrinsically produce them.

Up now: *should FAI researchers care about this topic?* Is research into the information theory of pain and pleasure relevant for FAI? I believe so! Here are the top reasons I came up with while thinking about this topic.

An important caveat: much depends on whether pain and pleasure (collectively, '[valence](https://en.wikipedia.org/wiki/Valence_(psychology))') are simple or complex properties of conscious systems. If they’re on the complex end of the spectrum, many points on this list may not be terribly relevant for the foreseeable future. On the other hand, if they have a relatively small “kolmogorov complexity” (e.g., if a ‘hashing function’ to derive valence could fit on a t-shirt), crisp knowledge of valence may be possible sooner rather than later, and could have some immediate relevance to current FAI research directions.

Additional caveats: it’s important to note that none of these ideas are grand, sweeping panaceas, or are intended to address deep metaphysical questions, or aim to reinvent the wheel- instead, they’re intended to help resolve empirical ambiguities and modestly enlarge the current FAI toolbox.

1. Valence research could simplify the Value Problem and the Value Loading Problem. If pleasure/happiness is an important core part of what humanity values, or should value, having the exact information-theoretic definition of it on-hand could directly and drastically simplify the problems of what to maximize, and how to load this value into an AGI.

2. Valence research could form the basis for a well-defined ‘sanity check’ on AGI behavior. Even if pleasure isn’t a core terminal value for humans, it could still be used as a useful indirect heuristic for detecting value destruction. I.e., if we’re considering having an AGI carry out some intervention, we could ask it what the expected effect is on whatever pattern precisely corresponds to pleasure/happiness. If there’d be a lot less of that pattern, the intervention is probably a bad idea.

3. Valence research could help us be humane to AGIs and WBEs. There’s going to be a lot of experimentation involving intelligent systems, and although many of these systems won’t be “sentient” in the way humans are, some system types will approach or even surpass human capacity for suffering. Unfortunately, many of these early systems won’t work well— i.e., [they’ll be insane](http://opentheory.net/2014/11/if-a-brain-emulation-has-a-headache-does-it-hurt/). It would be great if we had a good way to detect profound suffering in such cases and halt the system.

4. Valence research could help us prevent Mind Crimes. Nick Bostrom suggests in Superintelligence that AGIs might simulate virtual humans to reverse-engineer human preferences, but that these virtual humans might be sufficiently high-fidelity that they themselves could meaningfully suffer. We can tell AGIs not to do this- but knowing the exact information-theoretic pattern of suffering would make it easier to specify what not to do.

5. Valence research could enable radical forms of cognitive enhancement.
Nick Bostrom has argued that there are hard limits on traditional pharmaceutical cognitive enhancement, since if the presence of some simple chemical would help us think better, our brains would probably already be producing it. On the other hand, there seem to be fewer a priori limits on motivational or emotional enhancement. And sure enough, the most effective “cognitive enhancers” such as adderall, modafinil, and so on seem to work by making cognitive tasks seem less unpleasant or more interesting. If we had a crisp theory of valence, this might enable particularly powerful versions of these sorts of drugs.

6. Valence research could help align an AGI’s nominal utility function with visceral happiness. There seems to be a lot of confusion with regard to happiness and utility functions. In short: they are different things! Utility functions are goal abstractions, generally realized either explicitly through high-level state variables or implicitly through dynamic principles. Happiness, on the other hand, seems like an emergent, systemic property of conscious states, and like other qualia but unlike utility functions, it’s probably highly dependent upon low-level architectural and implementational details and dynamics. In practice, most people most of the time can be said to have rough utility functions which are often consistent with increasing happiness, but this is [an awfully leaky abstraction](http://lesswrong.com/lw/1qk/applying_utility_functions_to_humans_considered/).

My point is that constructing an AGI whose utility function is to make paperclips, and constructing a sentient AGI who is viscerally happy when it makes paperclips, are very different tasks. Moreover, I think there could be value in being able to align these two factors— to make an AGI which is viscerally happy to the exact extent it’s maximizing its nominal utility function. (Why would we want to do this in the first place? There is the obvious semi-facetious-but-not-completely-trivial answer— that if an AGI turns me into paperclips, I at least want it to be happy while doing so—but I think there’s real potential for safety research here also.)

7. Valence research could help us construct makeshift utility functions for WBEs and Neuromorphic AGIs. How do we make WBEs or Neuromorphic AGIs do what we want? One approach would be to piggyback off of what they already partially and imperfectly optimize for, and build a makeshift utility function out of pleasure. Trying to shoehorn a utility function onto any evolved, emergent system is going to involve terrible imperfections, uncertainties, and dangers, but if research trends make neuromorphic AGI likely to occur before other options, it may be a case of “something is probably better than nothing.”

One particular application: constructing a “cryptographic reward token” control scheme for WBEs/neuromorphic AGIs. Carl Shulman has suggested we could incentivize an AGI to do what we want by giving it a steady trickle of cryptographic reward tokens that fulfill its utility function- it knows if it misbehaves (e.g., if it kills all humans), it’ll stop getting these tokens. But if we want to construct reward tokens for types of AGIs that don’t intrinsically have crisp utility functions (such as WBEs or neuromorphic AGIs), we’ll have to understand, on a deep mathematical level, what they do optimize for, which will at least partially involve pleasure.

8. Valence research could help us better understand, and perhaps prevent, AGI wireheading.
How can AGI researchers prevent their AGIs from wireheading (direct manipulation of their utility functions)? I don’t have a clear answer, and it seems like a complex problem which will require complex, architecture-dependent solutions, but understanding the universe’s algorithm for pleasure might help clarify what kind of problem it is, and how evolution has addressed it in humans.

9. Valence research could help reduce general metaphysical confusion. We’re going to be facing some very weird questions about philosophy of mind and metaphysics when building AGIs, and everybody seems to have their own pet assumptions on how things work. The better we can clear up the fog which surrounds some of these topics, the lower our coordinational friction will be when we have to directly address them. Successfully reverse-engineering a subset of qualia (valence- perhaps the easiest type to reverse-engineer?) would be a great step in this direction.

10. Valence research could change the social and political landscape AGI research occurs in. This could take many forms: at best, a breakthrough could lead to a happier society where many previously nihilistic individuals suddenly have “skin in the game” with respect to existential risk. At worst, it could be a profound information hazard, and irresponsible disclosure or misuse of such research could lead to mass wireheading, mass emotional manipulation, and totalitarianism. Either way, it would be an important topic to keep abreast of.

These are not all independent issues, and not all are of equal importance. But, taken together, they do seem to imply that reverse-engineering valence will be decently relevant to FAI research, particularly with regard to the Value Problem, reducing metaphysical confusion, and perhaps making the hardest safety cases (e.g., neuromorphic AGIs) a little bit more tractable.
567e5849-9822-448a-aecf-14d2084ff28c
trentmkelly/LessWrong-43k
LessWrong
If we had known the atmosphere would ignite

What if the Alignment Problem is impossible?

It would be sad for humanity if we live in a world where building AGI is very possible but aligning AGI is impossible. Our curiosity, competitive dynamics, and understandable desire for a powerful force for good will spur us to build the unaligned AGI, and then humans will live at the AGI's mercy from then on: they will either live lucky & happy on the knife's edge, be killed by the AGI, or live in some state we do not enjoy.

For argument's sake, imagine we are in that world in which it is impossible to force a super-intelligence to value humans sufficiently - just as chimpanzees could not have controlled the future actions of humans had they created us.

What if it is within human ability to prove that Alignment is impossible? What if, during the Manhattan Project, the scientists had performed the now famous calculation and determined that yes, in fact, the first uncontrolled atomic chain reaction would have ignited the atmosphere, and the calculation was clear for all to see?

Admittedly, this would have been a very scary world. It's very unclear how long humanity could have survived in such a situation. But one can imagine a few strategies:

* Secure existing uranium supplies - as countries actually did.
* Monitor the world for enrichment facilities and punish bad actors severely.
* Accelerate satellite surveillance technology.
* Accelerate military special operations capabilities.
* Develop advanced technologies to locate, mine, blend and secure fissionable materials.
* Accelerate space programs and populate the Moon and Mars.

Yes, a scary world. But one can see a path through the gauntlet to human survival as a species. (Would we have left earth sooner and reduced other extinction risks?)

Now imagine that same atmosphere-will-ignite world but the Manhattan Project scientists did not perform the calculation. Imagine that they thought about it but did not try. All life on earth would have ended, instant
216a7cfb-e61e-40e2-9e36-921b376f44ec
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Applying superintelligence without collusion

*Epistemic status: The core ideas seem robust and stable after long reflection and many discussions.*

Many researchers identify AI safety with control of a monolithic, superintelligent AI system, and if questioned about multicomponent alternatives, argue that multiple superintelligent-level systems would inevitably collude and act as one. This view seems quite wrong, yet has diverted attention from a rich and promising range of multicomponent strategies for AI safety — strategies that are well aligned with the actual trajectory of AI development.

\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_

*Adapted from* [*Reframing Superintelligence*](https://www.fhi.ox.ac.uk/reframing/)*,* Section 20:

--------------------------------------------------------------------------------------------------

**Collusion among superintelligent oracles can readily be avoided**
----------------------------------------------------------------------

*Because collusion among AI question-answering systems can readily be avoided, there is no obstacle to applying superintelligent-level AI resources to problems that include AI safety.*

20.1 Summary
------------

The difficulty of establishing successful collusion among actors tends to increase as their capabilities, knowledge, situations, and roles become more diverse and adversarial (think auditors, competitors, specialists, red-teams…), and increasing the number of actors can make collusive cooperation more difficult. In the context of AI systems (even more so than among human beings), these conditions can be readily implemented and are attractive for pragmatic reasons. Arguments that, absent preexisting alignment, high-level AI systems will inevitably collude are ill-founded. Instead, we should expect that interactions among multiple superintelligent-level systems can be applied to suppress deception and reduce risk by leveraging imperfect alignment achieved at the level of individual systems.

20.2 Trustworthiness can be an emergent property
------------------------------------------------

Prospects for solving AI-safety problems would be greatly improved if we could safely apply superintelligent-level question-answering resources (“oracles”, or more generally, “systems that provide information in response to prompts”) to solving those problems. A familiar objection dismisses this potentially powerful approach as unsafe in itself, arguing that, absent solutions to difficult problems, individual superintelligent-level systems would be untrustworthy, and that attempts to establish checks and balances among multiple systems (for example, through superintelligent-level evaluation of potentially deceptive answers to questions) would inevitably be thwarted by collusive cooperation. Identifying robust strategies for ensuring non-collusion among superintelligent question-answering systems would overcome this objection, inviting exploration of superintelligence-enabled strategies for managing potentially untrusted superintelligent AI systems. The present discussion argues that the *robust non-emergence* of deceptive collusion among imperfectly aligned systems can be ensured by structuring systems of systems with diverse capabilities, knowledge, situations, objectives, and roles.
This problem framing assumes the ability to develop systems that respond to questions with superintelligent-level competence, and will (as a consequence of the nature of digital systems) assume the ability to constrain information inputs to AI systems during their development and use.  In the problematic case, superintelligent question-answering systems might provide deceptive answers in pursuit of emergent, potentially undesirable goals. The aim here is to identify principles for architecting *multicomponent* systems that will act as non-deceptive oracles, while treating their components as actors that could in principle engage in deceptive collusion.  20.3 A range of conditions can make collusion robust or fragile --------------------------------------------------------------- Potential problems of deceptive collusion can be addressed by architecting systems that make collusion difficult or effectively impossible.[[1]](#fnv558gr9hw6n) Some relevant conditions:  ### **Conditions that tend to** ***facilitate*** **collusion:**       A1) *Small numbers of actors* (the required scope of collusion is small)       A2) *Insensitivity to defectors* (partial collusion is sufficient)       A3) *Similarity among actors* (for example, shared histories and objectives)       A4) *Free communication* (information can flow between any pair of actors)       A5) *Iterated decision processes with observable actions*       A6) *Common knowledge of the above circumstances*   ### **Contrasting conditions that tend to** ***disrupt*** **collusion:**       B1) *Large numbers of actors* (the required scope of collusion is large)       B2) *Sensitivity to defectors* (collusion fails unless it is complete)       B3) *Diversity among actors* (for example, different histories, adversarial objectives)       B4) *Constrained communication* (information flows through limited causal channels)       B5) *Single-move decision processes, limited observability*       B6) *Lack of shared knowledge* of strategically-relevant circumstances   Implementation choices that would yield conditions (A1–A5) are unnecessary and often undesirable. By contrast, implementation choices that yield conditions (B1–B5) are practical and often well-motivated: ### **Characteristics of practical architectures:**       C1) To improve the quality and reliability of answers, it is natural to implement *multiple, diverse* (and implicitly *competing*) systems to propose alternatives.       C2) To identify low-quality or misleading answers, it is natural to employ diverse critics (auditors, red teams, etc.), *any one of which* might disrupt deceptive collusion.       C3) Systems of diverse, competing proposers and critics naturally implement *different and often adversarial* objectives.       
C4) To minimize unwanted dependencies, systems are designed such that components have *a sparse, thoughtfully designed communication graph.*       C5) To provide stable results, it is natural to apply fixed models to a series of problems, yielding what is in effect a series of *history-blind, single-move decisions.*[[2]](#fn57lbc0k1sd8)       C6) In training, it is natural to provide *differentiated, task-relevant information* to systems solving different problems, often omitting knowledge of general circumstances.[[3]](#fn1kz3t4rzggoi)   Considerations (C1–C5) suggest that implementing systems that are resistant to the emergence of deceptive collusion aligns with the general strategy of employing ensembles of diverse, stable, task-directed system components to *propose* and *assess* answers to questions. This approach can pass the test of practicality and potentially widespread adoption. 20.4 Untrusted superintelligence can be applied to AI safety ------------------------------------------------------------ It seems both possible and practical to establish conditions that would effectively preclude deceptive collusion among diverse, task-oriented, superintelligent-level question-answering systems, thereby addressing a key problem in safely applying superintelligent capabilities to problems (Bostrom, [*Superintelligence*](https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/ref=sr_1_1?crid=UMTGTD7NVIOP&keywords=Superintelligence%3A+Paths%2C+Dangers%2C+Strategies&qid=1667828908&sprefix=superintelligence+paths+dangers+strategies)*,* p.145).[[4]](#fnlboupq1zwil) Prospects for obtaining trustworthy answers from sets of individually untrusted superintelligent systems call for deeper exploration of AI safety strategies premised on reliable access to superintelligent problem-solving capabilities. Capabilities of particular interest include the ability to run and evaluate the results of superintelligent-level red-team/blue-team competitions relevant to questions of global strategic stability.  **TL;DR:**  Because collusion can be thwarted, it is reasonable to expect that applying multiple potentially untrustworthy superintelligent-level systems to problems can improve rather than degrade safety. This prospect calls for greater attention. \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ ### Afterword: Note that I have presented a *there-exists* argument against a *for-all* proposition. Responses of the form *there-exists (*“But I can think of something similar that won’t work”), don’t count as counterarguments here. If requiring a *for-all* counterargument (or at least a quite strong *for-all* flavor) across the space of potential compositions of AI systems seems like a high bar, well, that’s very much the point. Fallback arguments of the form “But it would be safer to assume the worst — universal, inevitable collusion among AI systems” miss the key implication of the case made here: that multicomponent approaches *could potentially increase safety.* To (continue to) neglect potential solutions based on handwaving arguments is a really bad idea. Paul Christiano (“[On heterogeneous objectives](https://ai-alignment.com/on-heterogeneous-objectives-b38d0e003399)”, 2015) considers the case for inevitable collusion; I agree with his gently-stated view that > …the implicit argument for the robustness of collusion makes a lot of implicit assumptions. If I saw an explicit argument I might be able to assess its explicit assumptions, but for now we don’t have one. 
> > At t+7 years, I’ve still seen no explicit argument for robust AI collusion, yet tacit belief in this idea continues to channel attention away from a potential solution-space for AI safety problems, leaving something very much like a void.[[5]](#fnsz6hl7l3n0n) \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ ### [Further Reading](https://www.fhi.ox.ac.uk/reframing) [in [*Reframing Superintelligence*](https://www.fhi.ox.ac.uk/reframing/)] * Section 8:  Strong optimization can strongly constrain AI capabilities, behavior, and effects * Section 12: AGI agents offer no compelling value * Section 21: Broad world knowledge can support safe task performance * Section 19: The orthogonality thesis undercuts the generality of instrumental convergence * Section 23: AI development systems can support effective human guidance * Section 24: Human oversight need not impede fast, recursive AI technology improvement   1. **[^](#fnrefv558gr9hw6n)**A property may be true with high credence yet not be susceptible to formal proof. 2. **[^](#fnref57lbc0k1sd8)**Systems need not have access to a shared clock, a hardware-level constraint that is necessary to ensure reproducible results and incidentally limits the scope for coordinated treachery. 3. **[^](#fnref1kz3t4rzggoi)**It has been [persuasively argued](https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators) that large language models are best thought of as “simulators”, a class of systems that can (in the LLM case) simulate agents while themselves lacking agentic properties. (This is a coherent concept: A physics model could simulate agents without itself having agentic properties.) LLMs show how an AI system trained on a prediction objective can provide a rich world model without pursuing goals. 4. **[^](#fnreflboupq1zwil)**In a familiar class of worst-case scenarios, systems with general superintelligence would infer extensive knowledge about the world from minimal information, and would choose (if possible) to pursue potentially dangerous goals by manipulating the external environment, for example, through deceptive answers to questions. In this scenario, (all?) superintelligent systems, even if almost isolated, would infer the existence of others like themselves, and (all?) would employ a decision theory that induces them to collude (in a coordinated way?) to pursue shared objectives. Even if we consider a worst-case assumption regarding the default emergence of world-changing goals, the present argument suggests that problematic systems would *correctly* infer the existence of superintelligent-level systems *unlike* themselves (systems with diverse and specialized capabilities, knowledge, and interactions, playing roles that include adversarial judges and competitors), and would *correctly* recognize that deceptive collusion is risky or infeasible. 5. **[^](#fnrefsz6hl7l3n0n)**The idea of multicomponent strategies for AI safety is, of course, neither new nor entirely neglected. However, in a recent search for relevant Alignment Forum posts, I found no evidence of a thriving research community or well-developed concepts: • [(My understanding of) What Everyone in Technical Alignment is Doing and Why](https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is) (August 2022) surveys the agendas of more than 20 research groups, and none clearly points in the direction I’ve advocated here. 
• A pair of posts on Pragmatic AI Safety, [Perform Tractable Research While Avoiding Capabilities Externalities](https://www.alignmentforum.org/s/FaEBwhhe3otzYKGQt/p/dfRtxWcFDupfWpLQo) and [Open Problems in AI X-Risk](https://www.alignmentforum.org/s/FaEBwhhe3otzYKGQt/p/5HtDzRAk7ePWsiL2L) (May and June 2022), briefly mention highly relevant concepts: the idea of using “counteracting systems [for example] artificial consciences, AI watchdogs, lie detectors, filters for power-seeking actions, and separate reward models”, and the idea of “multiple superintelligent agents that can rein in other rogue systems”. The authors also mention (without endorsing) the counter-claim that “The instant two intelligent agents can reason about each other — regardless of their goals — they will necessarily collude.”

• [An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) (May 2020) mentions only efforts to align individual AI systems; even “AI safety via debate with transparency tools” proposes that a system interact with a copy of itself.

Partial success in single-system alignment could be leveraged in multicomponent safety architectures, an application context that has potential implications for research directions in the single-system alignment domain.
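To make the propose-and-assess pattern concrete, here is a minimal sketch of a multicomponent answering pipeline. The interfaces and the unanimity rule are assumptions chosen for illustration, not a specification drawn from the argument above:

```python
# Toy sketch of a propose-and-assess ensemble: diverse, task-directed
# proposers generate candidate answers; independent assessors (which could
# include adversarial judges) gate acceptance. Components are stateless
# between calls, echoing the "history-blind, single-move" framing of C5.
# All names and interfaces are illustrative assumptions.
from typing import Callable, List

Proposer = Callable[[str], str]        # question -> candidate answer
Assessor = Callable[[str, str], bool]  # (question, answer) -> accept?

def answer(question: str,
           proposers: List[Proposer],
           assessors: List[Assessor]) -> List[str]:
    accepted = []
    for propose in proposers:
        candidate = propose(question)
        # A candidate survives only if every assessor independently accepts
        # it; disagreement is itself a signal worth flagging for review.
        if all(assess(question, candidate) for assess in assessors):
            accepted.append(candidate)
    return accepted
```

The safety-relevant property here is structural rather than behavioral: no component sees another component’s internal state, and each is queried as a fixed function of task-relevant input, which is the regime (C4–C6) in which deceptive collusion is argued to be difficult to establish.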
4f5417ff-f28a-426b-b2e3-ebf5fda6ba5e
trentmkelly/LessWrong-43k
LessWrong
Downvotes temporarily disabled

This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.

The best place to track changes to the codebase is the GitHub LW issues page.
b7bfe21c-f038-4a62-8e3c-bd0f7c48be82
trentmkelly/LessWrong-43k
LessWrong
How do I get all recent LessWrong posts that don't have the AI tag?

This forum's recent posts page is just overloaded with AI content, and I don't want to be limited to reading curated posts either. I find it strange that there aren't any advanced filtering options. Here is what I want:

input: POSTS(-tag:AI decoupling)
output: all POSTS without the tag AI and with the keyword "decoupling" INSIDE

Yes, I suppose I could use search engines, and I have done that, but they don't just search post content; they also search the content outside of it.
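Concretely, the query being asked for is just tag exclusion plus a full-text keyword match. A minimal sketch over a hypothetical list of post records (the data model here is invented for illustration; it is not LessWrong's actual API):

```python
# Sketch of POSTS(-tag:AI decoupling): posts lacking the "AI" tag whose
# body contains the keyword "decoupling". Hypothetical data model.
posts = [
    {"title": "On decoupling norms", "tags": ["rationality"],
     "body": "Some thoughts on decoupling..."},
    {"title": "GPT progress", "tags": ["AI"],
     "body": "More decoupling talk..."},
]

def query(posts, exclude_tag, keyword):
    return [p for p in posts
            if exclude_tag not in p["tags"]
            and keyword.lower() in p["body"].lower()]

print([p["title"] for p in query(posts, "AI", "decoupling")])
# -> ['On decoupling norms']
```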
a8bb5c75-95e5-49b1-a9bb-8ab7183d1ff8
trentmkelly/LessWrong-43k
LessWrong
Meetup : LW Sao Paulo - January Meetup

Discussion article for the meetup : LW Sao Paulo - January Meetup

WHEN: 28 January 2017 02:00:00PM (-0200)

WHERE: Rua Antônio Carlos, 452. São Paulo, Brazil

Monthly rationalist meetup; newcomers are welcome! We'll have some practical rationality lessons, assorted discussions, and board games at the end.

Vanilla café - 2nd floor

Discussion article for the meetup : LW Sao Paulo - January Meetup
92347edb-4a6d-4741-97ca-05d04adc41f4
trentmkelly/LessWrong-43k
LessWrong
The representational fallacy

Basically Heather Dyke argues that metaphysicians are too often arguing from representations of reality (e.g. in language) to reality itself. It looks to me like a variant of the mind projection fallacy. This might be the first book-length treatment the fallacy has gotten, though. What do people think?

See reviews here:
https://www.sendspace.com/file/k5x8sy
https://ndpr.nd.edu/news/23820-metaphysics-and-the-representational-fallacy/

To give a bit of background, there's a debate between A-theorists and B-theorists in the philosophy of time. A-theorists think time has ontological distinctions between past, present, and future. B-theorists hold there is no ontological distinction between past, present, and future. Dyke argues that a popular argument for A-theory (tensed language represents ontological distinctions) commits the representational fallacy. Bourne agrees, but points out that an argument Dyke uses for B-theory commits the same fallacy.
30dc3bb9-0c0b-461e-8c20-801c835c7ff7
trentmkelly/LessWrong-43k
LessWrong
Growth mindset for better sex

Cross-posted from my website.

TLDR: Fixed mindset and fear of inadequacy hinder learning. Competence gives you confidence - where you’re competent and confident, you don’t have fear of inadequacy. And if you don’t learn fast because you’re afraid of feedback (because you’re afraid of inadequacy), you’ll not get better, leaving you relatively incompetent and afraid of inadequacy. 🗘

A thing about sex recently clicked from several sources:

* Being high and talking with a certain sergal,
* plus reading a Facebook thing by Duncan,
* plus listening to John Vervaeke’s series Awakening from the Meaning Crisis. (One of the episodes talks about fixed/growth mindset, but I can’t find it right now. You should go watch the whole series anyway, it’s awesome.)

During sex, I have a background fear of “what if I can’t bring them to orgasm”. I want my partner to enjoy it as well, and I want to reciprocate, and I would feel bad if they bring me to orgasm but I can’t bring them to orgasm. So, I hurry and try hard to bring them to orgasm, because I am not confident of my sexual skill.

Orgasm is cool, but the sex before orgasm is also cool. Sex that doesn’t feel hurried and where you can take your time and stretch out pleasure. But, so far, in like 90% of the sex I’ve had, I had the thought “aaaa sex aaaa gotta perform and be good enough and hurry and give them an orgasm aaaa” somewhere in the background.

In that kind of environment, you don’t get a lot of learning done.

Say you’re learning how to play the violin. We know lots about learning. CFAR’s handbook (which I think is still not publicly readable) has a lot of useful stuff on it. Things that make learning work well include:

* You need to learn the basics first. You don’t start by playing Vivaldi’s Four Seasons. You start by playing one note until you get that one note at least 90% right: holding the violin correctly, not having the bow screech, not touching the strings that shouldn’t ring, holding the tone for a
2ac561b5-a799-4df5-820b-5b7f86ee5054
trentmkelly/LessWrong-43k
LessWrong
Mapping the semantic void: Strange goings-on in GPT embedding spaces

TL;DR: GPT-J token embeddings inhabit a zone in their 4096-dimensional embedding space formed by the intersection of two hyperspherical shells. This is described, and then the remaining expanse of the embedding space is explored by using simple prompts to elicit definitions for non-token custom embedding vectors (so-called "nokens"). The embedding space is found to naturally stratify into hyperspherical shells around the mean token embedding (centroid), with noken definitions depending on distance-from-centroid and at various distance ranges involving a relatively small number of seemingly arbitrary topics (holes, small flat yellowish-white things, people who aren't Jews or members of the British royal family, ...) in a way which suggests a crude, and rather bizarre, ontology. Evidence that this phenomenon extends to GPT-3 embedding space is presented. No explanation for it is provided; instead, suggestions are invited.

[Mapping the semantic void II: Above, below and between token embeddings]
[Mapping the semantic void III: Exploring neighbourhoods]

Work supported by the Long Term Future Fund.

GPT-J token embeddings

First, let's get familiar with GPT-J tokens and their embedding space. GPT-J uses the same set of 50257 tokens as GPT-2 and GPT-3.[1]

*A typical selection of GPT-2/3/J tokens and their indices. Note the mixture of full words, sub-word chunks, abbreviations, numerical and typographic strings, as well as the presence/absence of leading spaces and upper/lower case combinations.*

These tokens are embedded in a 4096-dimensional space, so the whole picture is captured in a shape-[50257, 4096] tensor. (GPT-3 uses a 12288-dimensional embedding space, so its embeddings tensor has shape [50257, 12288].) Each token's embedding vector was randomly initialised in the 4096-d space, and over the course of the model's training, the 50257 embedding vectors were incrementally displaced until they arrived at their final positions, as recorded in the embeddings tensor.
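For readers who want to poke at this themselves, the embedding matrix can be pulled from the public checkpoint and the distance-from-centroid statistic computed in a few lines. A rough sketch using standard `transformers` calls (illustrative only; this is not the exact analysis code behind the post, and the checkpoint download is large):

```python
# Sketch: distance of each GPT-J token embedding from the centroid (mean
# token embedding). Standard Hugging Face usage; note the checkpoint pads
# the embedding matrix beyond the 50257 GPT-2/3 tokens, hence the slice.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
emb = model.get_input_embeddings().weight.detach()[:50257]  # [50257, 4096]
centroid = emb.mean(dim=0)
dists = torch.linalg.norm(emb - centroid, dim=1)
print(dists.mean().item(), dists.std().item())  # a narrow shell, per the post
```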
85486549-2462-48bf-be29-06a7e57018da
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What is it like doing AI safety work? How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst? To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail. The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job.  *Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs.* *Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing.* *Reminder that you can* [***listen to LessWrong and EA Forum posts***](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library) *like this on your podcast player using the* [*Nonlinear Library*](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library)*.* *This post is part of a project I’ve been working on at* [*Nonlinear*](http://nonlinear.org/)*. You can see the* [*first part of the project here*](https://forum.effectivealtruism.org/posts/PH2pqsqgXQkfCdmkv/how-to-become-an-ai-safety-researcher) *where I explain the different ways people got into the field.* What do people do all day? ========================== **John Wentworth** ------------------ John describes a few different categories of days. * He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already. * He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful. * He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem. Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc. John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5.    **Ondrej Bajgar** ----------------- Ondrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work.  Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding. 
Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful.    **Scott Emmons** ---------------- Scott talked about a few different categories of day-to-day work: * Research, which involves brainstorming, programming, writing & communicating, and collaborating with people * Reading papers to stay up-to-date with the literature * Administrative work * Service, such as giving advice to undergrads, talking about AI safety, and reviewing other people’s papers An example of a typical day might look like this: he’ll start work in the morning by reading papers because this is his best time for getting into deep work. This is followed by a research meeting with some collaborators and answering some emails. After that, he has the CHAI weekly lab meeting, where someone presents their research. After this he might spend a few hours coding on an existing project, followed by some admin work.    **Alex Turner** --------------- Alex has historically specialized in whiteboard-centric math and theory work but has pivoted to empirical and mentoring work (coding, experiment design, frontend, management).  He spends most days focused on specific questions. For example, by what processes might a diamond-motivated AI improve itself? What precautions might it take, given such-and-such resources?  He’ll occasionally zoom out to check that he's still focusing on questions he expects to most increase the probability of alignment success.  On empirical research days, he spends about six hours coding, one reading, and one to two in conversations. On theory days, it's rather evenly mixed between reading, writing, and thinking. He will periodically spend a few days to weeks communicating large bundles of ideas and results, whether in blog post or paper format.   **William Saunders** -------------------- OpenAI, where William works, has three days a week where there are meetings and two days a week where they try to avoid meetings. William’s day starts by getting into the office, checking Slack to see how various projects are going, and leaving comments if he has useful ideas for any of them. If it’s a meeting day, he will usually have 1 or 2 meetings with people working on a project and sometimes other people who are interested in talking about the higher-level ideas. There will be discussions about what experiments to run next, or whether any of the code needs to be refactored.  There are two types of experiments that he works on: * Experiments where data needs to be collected from human contractors. In these cases, William needs to decide, and so it’s a matter of determining how to collect the data and build an interface for them to use. This involves both frontend and backend development, as well as monitoring quality to make sure contractors aren’t misunderstanding things. * Experiments where the data has already been collected. In these cases, his job is to change algorithms and hyperparameters to improve performance **Ethan Perez** --------------- Ethan describes his projects as having three phases: brainstorming, implementation, and writing up. The brainstorming phase usually lasts about a month and is all about working out what to do next. This involves a lot of reading, taking notes, and talking with people. 
Each week he’ll have a meeting with his advisor about ideas for directions that seem promising to pursue and get feedback.  Once they’ve settled on a project, he’ll move on to the ‘implementation’ phase, which starts with figuring out how to get things to work in practice. Then it's a matter of putting the ideas into code and running experiments on a GPU cluster, getting feedback from these experiments, and deciding what to run next. During this process, he’ll have weekly meetings with his advisor to talk about which research directions he should continue to pursue. This implementation phase takes around one to six months, ending when he either gets something to work or decides to shift directions.  Once something works, he’ll take about a month to run the final experiments, write it up in a paper, put it on arXiv, and submit it to a conference.   **Justin Shovelain** -------------------- Justin runs [Convergence](https://www.convergenceanalysis.org/), an organization which does work in AI and x-risk strategy. A normal week usually involves: * 1 day of reading * 1 day of operations and logistics; emails, grants, paperwork * 1 to 2 days of discussions with people; advising, mentoring, or discussing a topic with people * 1 to 1.5 days of solo thinking * Half a day of thinking with his colleague in a discussion format Days are usually devoted to one activity rather than splitting them up too much, but sometimes this isn’t possible when working with other people, especially in different time zones. Usually he spends the start of the day doing deep work and then has calls later in the day.    **Stephen Casper** ------------------ Stephen usually has at least a couple of things to work on at once, so that when one task gets boring, he can switch to another. He usually works from the offices at MIT, MAIA, and HAIST, bouncing between various tasks including, for example: * Reading papers * Drafting papers or posts * Checking on experiments * Writing and debugging code As well as the standard work, Stephen has the habit of reading and taking notes on at least one paper a day. This is to explore the literature more widely, so he tries to choose papers that aren’t too related to what he would have read anyway. You can read more about [his daily routine here.](https://www.lesswrong.com/posts/NENrmKWoEzXXmQjhr/a-daily-routine-i-do-for-my-ai-safety-research-work)   **Ramana Kumar** ---------------- At DeepMind the work can be quite varied and things can change a lot in 6 months. Ramana usually works between 10 am and 6 pm and tries not to think too much about work outside of this, although this isn’t always possible. With creative work, you can’t always say, “I’m going to be working now, and I’m not going to be working now”. Sometimes you just have to follow your mood. About half of Ramana’s work time is spent on tasks that can be done alone or by closely collaborating with someone else. These include:  * Reading things like papers, books, and the Alignment Forum * Technical coding work, debugging, and making figures. 
This type of work is especially fun to do with someone else * Writing papers, outlines, and presentations The other half of the time is spent with larger groups, for example: * Meetings or long collaboration days to talk about research priorities and directions * Project discussions to talk about what people have been doing, problems, and where to go next * Reading groups   **Dan Hendrycks** ----------------- *N.B.: this interview was done in early 2022 before Dan had finished his PhD.* Dan has an exceptional academic and publication record, and as such (as a PhD student) now spends a lot of his time managing other people. He doesn’t spend very much time coding himself; instead, he manages other people, which he’s been doing since the third year of his undergraduate degree. This involves a lot of meetings with people and thinking about ideas for projects.  Part of his management work involves applying some pressure and motivating his colleagues to make sure that projects actually get done, especially applying “start-up torque” to get projects started in the first place.    Favorite and least favorite parts ================================= We asked people what their favorite and least favorite parts of the day-to-day work were. If you think the best bits sound great and you could handle the not-so-good parts, then you might be a great fit for AI safety work.   **Ethan Perez** --------------- Favorite * Fun feedback loops where you can implement something during the day, run it during the night, and then check back and iterate the next morning. * Sometimes there are even shorter feedback loops, like testing what you can do with prompts in the OpenAI playground * Ethan prefers working on existing codebases, making modifications rather than implementing entire ideas from scratch. If you implement from scratch, and it doesn’t work, it’s hard to know which part of your implementation isn’t working.   Least favorite * Sometimes the feedback loops can be long, and there are a lot of steps that can’t be automated * Initially, Ethan had an aversion to the software engineering side of projects, but this was possibly due to insecurities about his coding ability. This has mainly gone away since addressing this issue   **Ondrej Bajgar** ----------------- Favorite * Mornings: Ondrej dedicates this time to deep thinking, mainly in front of a whiteboard working on problems * Lunches at FHI where there are lots of exciting people to talk to, random encounters, and good conversations   Least favorite * Afternoons: he is usually low on energy and often doesn’t end up making much progress * Since the original interview, Ondrej has learned to accept this and take a break in the afternoon with a nap and a run to restore energy, so it’s no longer his least favorite bit.   **Scott Emmons** ---------------- Favorite * Reading and brainstorming. Scott thinks that this is one of the most fun parts of research: you can think pretty widely and you feel like anything is possible   Least favorite * Dealing with the annoying small details that are involved in getting a project to completion. For example, making sure all the font sizes on your figures look nice, and that all your experiments have a consistent set of hyperparameters   **William Saunders** -------------------- Favorite * William likes doing pure coding tasks where you can just implement something and see if it works. Often for experiments or ML tasks, it can be more hit-or-miss and the feedback loops aren’t as satisfying.   
Least favorite * He enjoys generating ideas but finds it less enjoyable to prioritize those ideas and choose the specific thing to do next. For example, “We have several different ways to collect this data. Which should we do first?” * Getting stuck on projects and becoming anxious about whether things will work or whether the project is not going to go anywhere. One of the benefits of working with people is that if your project is stuck, you can go and talk to someone. You can either talk about their project or ask them for help getting yours unstuck   **Justin Shovelain** -------------------- Favorite * The feeling that he is ‘resolving mysteries’ and learning new, cool things * The sense of ‘a deed well done for the world’. Looking at what you’ve done and realizing ‘Oh I actually did improve things. This is wonderful’.   Least favorite * Dealing with bureaucracy * Dealing with the politics involved in running and representing an organization **Stephen Casper** ------------------ Favorite * Looking at cool visualizations, and producing nice-looking figures. One of the upsides of working in the subfield of adversarial robustness is that he gets to make and work with interesting visualizations. * Getting things to work when they weren’t previously working * Working with people in person   Least favorite * When something is not working and you don’t know why * Working with other people’s code and you don’t like how it’s written * Reviewer #2. ([“Reviewer 2 symbolizes the peer reviewer who is rude, vague, smug, committed to pet issues, theories, and methodologies, and unwilling to treat the authors as peers.”](https://link.springer.com/article/10.1007/s40037-021-00670-z))   **John Wentworth** ------------------ Favorite * These are really interesting problems to think about. Having insightful ‘shower thoughts’.   Least favorite * Being stuck on a problem for a long time isn’t very fun, although it is hard to separate the good parts (working on a fun problem) from the bad parts here. * Needing to communicate something when there’s a large [inferential gap](https://www.lesswrong.com/tag/inferential-distance) and so it’s difficult to get the whole idea across.   **Alex Turner** --------------- Favorite * “When I’m trying to prove something that’s interesting, and I don’t know how to do it yet. Well, actually the favorite part is that instant when I figure out how to do it, but besides that, the process of attacking it is really fun.”   Least favorite * Things like dealing with emails or meetings which aren’t very important, but these can be minimized. * Being in the state of mind of not feeling very quiet internally, where your attention is being pulled by various things like social media.   **Ramana Kumar** ---------------- Favorite * When you make progress on something and see what you’ve produced. For example, coming up with an idea or working with people on a presentation or paper. Seeing the final product and thinking, “Oh, that’s really nice”. * Being in a reading group and feeling on top of what’s going on and that you’re making useful contributions * Spending a while reading or trying to write a comment and reaching a place where you understand something or know how to make a specific point well Least favorite * It can be hard when you’re not satisfied with any of the options of what to do next and it feels like you don’t really know what to do next. 
You can end up flitting between options because it seems better than nothing, but then changing direction because one thing didn’t seem quite right. When in this state it’s useful to step back and either do longer-term prioritization, or realize you’ve already done the prioritization and so do (and stick to) what you’ve previously decided.   If you liked this post, you might also like: * [How to become an AI safety researcher](https://forum.effectivealtruism.org/posts/PH2pqsqgXQkfCdmkv/how-to-become-an-ai-safety-researcher), the first part of this series * [A curated list of links on career advice for AI safety researchers](https://www.aisafetysupport.org/resources/lots-of-links#h.gtysxxud1rr) from [AI Safety Support](https://www.aisafetysupport.org/home) * [Career coaching from AI Safety Support](https://www.aisafetysupport.org/resources/career-coaching) or [80,000 Hours](https://80000hours.org/speak-with-us/)   *Thanks to Amber for editing this post. If you find writing/editing tedious or can never find the time to write, you can* [*contract her to write or edit your EA posts here*](https://amber-dawn-ace.com/our-services)*.*
9180f8bd-45bf-49f0-9373-1ce3d49fdf65
trentmkelly/LessWrong-43k
LessWrong
Asymptotic Logical Uncertainty: A Benford Learner

This post is part of the Asymptotic Logical Uncertainty series. In this post, I will give an algorithm, BenfordLearner, that passes the Benford test. This algorithm further satisfies $\lim_{N\to\infty,\, N\in S} \mathrm{BenfordLearner}(N) = p$ whenever $S$ is an irreducible pattern with probability $p$. In the following algorithm, we define $\log q$ to be $1$ whenever $q \le 2$.

    BenfordLearner(N)
      P = 0
      M = N
      for j = 0 to N
        M_Y = 0
        for Y a TM expressible in K_Y < log N bits
          M_X = N
          for X a TM expressible in K_X < log N bits
            if UTM(X,N) and UTM(Y,N) both accept in time T(N)
              A = 0
              R = 0
              i = 1
              while i <= N
                if UTM(X,i) and UTM(Y,i) both accept in time T(i)
                  if UTM(L,i) accepts in time T(N)
                    A = A+1
                  else if UTM(L,i) rejects in time T(N)
                    R = R+1
                  else
                    i = N
                i = i+1
              F = A/(A+R)
              Q = A+R
              if max(K_X, |F - j/N|*sqrt(Q)/K_Y/sqrt(log log Q)) < M_X
                M_X = max(K_X, |F - j/N|*sqrt(Q)/K_Y/sqrt(log log Q))
          if M_X > M_Y
            M_Y = M_X
        if M_Y < M
          M = M_Y
          P = j/N
      return P

Let $\mathrm{TM}(N)$ be the set of all Turing machines $X$ expressible in at most $\log N$ bits such that $UTM(X,N)$ accepts in time at most $T(N)$. The encoding of Turing machines must be prefix-free, which in particular means that no Turing machine is encoded in 0 bits. Let $J_N$ denote the set of rational numbers of the form $j/N$ with $j = 0, \ldots, N$. For $X$ and $Y$ Turing machines, let $K(X)$ be the number of bits necessary to encode $X$. Let $S'(X,Y)$ be the subset of natural numbers $i$ which are accepted by both $UTM(X,i)$ and $UTM(Y,i)$ in time at most $T(i)$. Let $Q_N(X,Y)$ be the greatest number less than or equal to $N$ such that for every $s$ in the first $Q_N(X,Y)$ elements of $S'$, $UTM(L,s)$ halts in time $T(N)$. Let $F_N(X,Y)$ be the proportion of the first $Q_N(X,Y)$ elements of $S'$ which $L$ accepts. Let $$B_N(X,Y,P) = \max\left(K(X),\ \frac{|F_N(X,Y) - P|\sqrt{Q_N(X,Y)}}{K(Y)\sqrt{\log\log Q_N(X,Y)}}\right).$$
d381300a-5f9b-4556-ad34-a7c907405b92
LDJnr/LessWrong-Amplify-Instruct
LessWrong
""If you meet the Buddha on the road, kill him!" When Edward Wilson published the book Sociobiology, Richard Lewontin and Stephen J. Gould secretly convened a group of biologists to gather regularly, for months, in the same building at Harvard that Wilson's office was in, to write an angry, politicized rebuttal to it, essentially saying not that Sociobiology was wrong, but that it was immoral - without ever telling Wilson. This proved, to me, that they were not interested in the truth. I never forgave them for this. I constructed a narrative of evolutionary biology in which Edward Wilson and Richard Dawkins were, for various reasons, the Good Guys; and Richard Lewontin and Stephen J. Gould were the Bad Guys. When reading articles on group selection for this post, I was distressed to find Richard Dawkins joining in the vilification of group selection with religious fervor; while Stephen J. Gould was the one who said, "I have witnessed widespread dogma only three times in my career as an evolutionist, and nothing in science has disturbed me more than ignorant ridicule based upon a desire or perceived necessity to follow fashion: the hooting dismissal of Wynne-Edwards and group selection in any form during the late 1960's and most of the 1970's, the belligerence of many cladists today, and the almost ritualistic ridicule of Goldschmidt by students (and teachers) who had not read him." This caused me great cognitive distress. I wanted Stephen Jay Gould to be the Bad Guy. I realized I was trying to find a way to dismiss Gould's statement, or at least believe that he had said it from selfish motives. Or else, to find a way to flip it around so that he was the Good Guy and someone else was the Bad Guy. To move on, I had to consciously shatter my Good Guy/Bad Guy narrative, and accept that all of these people are sometimes brilliant, sometimes blind; sometimes share my values, and sometimes prioritize their values (e.g., science vs. politics) very differently from me. I was surprised by how painful it was to do that, even though I was embarrassed to have had the Good Guy/Bad Guy hypothesis in the first place. I don't think it was even personal - I didn't care who would be the Good Guys and who would be the Bad Guys. I just want there to be Good Guys and Bad Guys." The text above is a post found on LessWrong. What is this text talking about? This text is discussing the author's personal journey in understanding and interpreting the dynamics within the field of evolutionary biology, particularly between four prominent figures: Edward Wilson, Richard Dawkins, Richard Lewontin, and Stephen J. Gould. The author initially viewed Wilson and Dawkins as the "Good Guys" and Lewontin and Gould as the "Bad Guys", largely due to the controversial reaction Lewontin and Gould had towards Wilson's publication of Sociobiology. The author felt that their secret meetings and the politicized rebuttal they penned against Sociobiology indicated a lack of interest in truth and more in personal or political agenda. However, the author's perspective was challenged when they found Dawkins vehemently opposing group selection, a concept the author presumably supports, and Gould defending it. This caused the author cognitive distress as it contradicted their established narrative. The author eventually realized the need to abandon this binary "Good Guy/Bad Guy" perspective and accept that these individuals are complex, with their own strengths, weaknesses, and value systems. 
The author found this realization painful but necessary for a more nuanced understanding of the field and its contributors. The author uses this personal experience to illustrate the dangers of oversimplification and the importance of critical thinking in science and other fields. The quote "If you meet the Buddha on the road, kill him!" is used metaphorically to convey the idea of challenging and letting go of preconceived notions or idols in the pursuit of truth. Can you elaborate on the concept of group selection and its role in the controversy surrounding the views of these evolutionary biologists? Group selection is a concept in evolutionary biology that suggests natural selection can act at the level of groups, not just individuals. It proposes that certain traits may be favored because they benefit the group or species as a whole, even if they are detrimental to the individual organism. The concept of group selection has been a contentious issue within the field of evolutionary biology. Traditional Darwinian theory holds that natural selection acts on individuals, favoring traits that increase an individual's chances of survival and reproduction. This individual-centric view is often associated with Richard Dawkins, who famously elaborated it in his book "The Selfish Gene." Edward Wilson, on the other hand, has been a prominent advocate for the idea of group selection. He argues that traits like altruism can be explained by selection acting at the group level, where groups containing altruistic individuals may have a survival advantage over groups that do not. Stephen J. Gould, while not a strict group selectionist, criticized the dismissal of group selection and the dogmatic adherence to individual selection. Richard Lewontin's views were more complex and nuanced, often focusing on the interaction between genetic and environmental factors in evolution. The controversy surrounding group selection is not just about the science; it's also about the philosophical and moral implications of these theories. The idea of the "selfish gene" can be interpreted as endorsing individualism and competition, while group selection might be seen as supporting cooperation and altruism. These differing interpretations have fueled debates within and beyond the scientific community. Discuss the philosophical and moral implications of the "selfish gene" theory and group selection theory in the context of societal behaviors and norms. The "selfish gene" theory and group selection theory have been interpreted in ways that extend beyond biology and into the realms of philosophy, morality, and social behavior. The "selfish gene" theory, popularized by Richard Dawkins, posits that genes are the primary unit of natural selection, and they 'strive' to replicate themselves. This theory is often associated with individualism and competition. If genes are inherently 'selfish', then it can be inferred that the organisms carrying those genes (including humans) might also exhibit selfish behaviors. This interpretation can be used to justify competitive, individualistic behaviors and social structures, reinforcing the idea that self-interest is not just natural, but also necessary for survival and evolution. On the other hand, group selection theory suggests that natural selection can act at the level of groups, favoring traits that benefit the group as a whole, even if they are detrimental to the individual. This theory can be seen as endorsing cooperation and altruism. 
It suggests that traits promoting group cohesion and mutual aid can be evolutionarily advantageous, which can be used to justify cooperative societal behaviors and norms. However, it's important to note that these are interpretations and extrapolations of the theories, and not inherent to the theories themselves. The use of biological theories to justify social behaviors and norms is known as 'biological determinism', and it's a contentious topic. Many argue that it oversimplifies both the biology and the complexity of human behavior and societal structures. It's also crucial to remember that what is 'natural' does not necessarily dictate what is 'right' or 'good', a concept known as the 'naturalistic fallacy'.
05a9e65d-6c71-4622-9884-952e7e6ba89b
trentmkelly/LessWrong-43k
LessWrong
HPMoR: What do you think you know?

(And somewhere in the back of his mind was a small, small note of confusion, a sense of something wrong about that story; and it should have been a part of Harry's art to notice that tiny note, but he was distracted. For it is a sad rule that whenever you are most in need of your art as a rationalist, that is when you are most likely to forget it.)

Why does the wizarding world believe Voldemort used the Killing Curse on Harry? Whether or not the Love Shield exists in MoR, I doubt most wizards had an >epsilon prior for the Killing Curse resulting in a scarred but otherwise unharmed target, a dead and burned spellcaster, and a destroyed building. There were no surviving witnesses except Baby Harry. Where did that version of events come from?

If I was Joe Random Wizard and heard that evidence without names attached, I would naively hypothesize: Dark Wizard shows up at house, encounters mother + father + their allies. Battle ensues. Parents and Dark Wizard are slain. House is destroyed and baby is hit by debris. There is one obvious question - why the allies didn't take the baby with them - but any answer to that is more plausible than "There were no allies; the most reliable curse in the world backfired on its most experienced practitioner."

Not that the "reflected curse" story was hard to sell. People are great at not asking the next question when they want to believe.

We have some additional information about the events of that evening:

* Harry's memory under Dementation (Ch43). This may or may not be a true memory.
* Snape relayed the complete prophecy to Voldemort without understanding it (Ch77). McGonagall was the prophecy's witness (Ch28), but Snape has also heard the original audio (Ch77).
* McGonagall knows that Voldemort is still around (Ch6).
* Quirrellmort does not want to kill Harry. If he ever did, he changed his mind while offscreen.

What really happened at Godric's Hollow?

PS. If the Love Shield does exist in MoR, do you suppose Bellatrix c
db6afae5-e6aa-45ba-9cc3-e0fe9b91a7df
trentmkelly/LessWrong-43k
LessWrong
A Coordination Cookbook?

Coordination can improve default outcomes, i.e. what happens when individuals act according to their own interest assuming others will do the same. For example, in a flat share the default outcome may be that the common areas stay dirty, because each flatmate is willing to spend time cleaning only if they are confident that all other flatmates will also spend time cleaning; in the absence of coordination there is no such confidence, so the common areas stay dirty.

Such coordination challenges come up often in daily life. Another example that comes to mind is choosing a leader, whether for a sports team on a small scale, or a country on a larger scale. This coordination issue first requires getting everyone to agree that a leader is needed, and then to agree on a method to choose that leader.

We can view each situation as being in one of several possible equilibria (dirty versus clean common areas, not having a leader versus having one), and coordination protocols as the way to take us from one equilibrium to another. This raises the question: what protocol can we use to get to a better equilibrium for the situations we care about?

To answer this question, I suggest we should aggregate a "coordination cookbook": just like having a list of recipes is useful when cooking, so too having a list of issues paired to coordination protocols would be useful when navigating coordination issues of any scale. Humanity has accumulated a lot of tacit knowledge about coordination. Societies with their organizations, laws and infrastructures; cultures, traditions, religions and rituals already impart us with know-how and protocols for many issues. Making these protocols explicit, comparing them and improving them could be a good starting point.

I hope you find this idea of a coordination cookbook stimulating. Please feel free to share:

* your own experience and protocols for solving coordination issues
* existing resources
* any other relevant insights

Thank you for engaging
97f4f300-ebcc-4da3-bf63-d052e538aab4
trentmkelly/LessWrong-43k
LessWrong
[AN #96]: Buck and I discuss/argue about AI Alignment Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. HIGHLIGHTS AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 (Lucas Perry, Buck Shlegeris and Rohin Shah) (summarized by Rohin): This podcast with Buck and me is loosely structured around the review I wrote (AN #84), but with a lot more debate and delving into specific points of pessimism and optimism. I suspect that every reader will have some section they're interested in. Since much of the discussion was itself meant to be a summary, I'm not going to try and summarize even further. Here's the list of topics covered: * Our optimism and pessimism about different approaches to aligned AI * Traditional arguments for AI as an x-risk * Modeling agents as expected utility maximizers * Ambitious value learning and specification learning/narrow value learning * Agency and optimization * Robustness * Scaling to superhuman abilities * Universality * Impact regularization * Causal models, oracles, and decision theory * Discontinuous and continuous takeoff scenarios * Probability of AI-induced existential risk * Timelines for AGI * Information hazards TECHNICAL AI ALIGNMENT TECHNICAL AGENDAS AND PRIORITIZATION AI Services as a Research Paradigm (Vojta Kovarik) (summarized by Rohin): The CAIS report (AN #40) suggests that future technological development will be driven by systems of AI services, rather than a single monolithic AGI agent. However, there has not been much followup research since the publication of the report. This document posits that this is because the concepts of tasks and services introduced in the report are not amenable to formalization, and so it is hard to do research with them. So, it provides a classification of the types of research that could be
8b8eb359-66c5-4fcc-aee8-e7550eff11aa
trentmkelly/LessWrong-43k
LessWrong
Rude research

Cross posted from Overcoming Bias. Comments there.

***

Bryan Caplan says intelligence research is very unpopular because it looks so bad to call half of people stupider than average, let alone stupid outright. Calling people stupid is rude. But if this is the main thing going on, many other kinds of research should be similarly hated. It’s rude to call people lazy, ugly bastards whose mothers wouldn’t love them. Yet there is little hostility regarding research into conscientiousness, physical attractiveness, parental marriage status, or personal relationships. At least as far as I can tell. Is there? Or what else is going on with intelligence?
be107397-0801-43c1-85c3-015e98da42ff
trentmkelly/LessWrong-43k
LessWrong
How do we know that the things we want have credibility?

I think Transhumanism and the singularity are a beautiful vision of a society, but I believe they require a level of enlightenment we humans are yet to be shown capable of. But hey, in the end it's a dream worth chasing, isn't it? Well, I've been having doubts about it...

1. My biggest doubt is the possibility. How do we know that even if those things are physically possible, they aren't impractical to achieve, and that we won't find out that there are limitations on what we can do as our knowledge improves? Is betting on something like this safe?

2. Lack of ethics discussions. Many Transhumanists and singularitarians I've come across really come off as extremely privileged and completely oblivious to the social effects of such technologies. Besides, the lack of minority participation in itself really shows a lot. I don't think it's useful to say "minorities don't participate in this because they don't know about Transhumanism"; there might be other reasons as well.

3. The outright denialistic behaviour, and the amount of people in fragile mental states joining Transhumanism. Constantly looking for validation of one's worldview, avoiding anything that challenges one's views really anxiously. Besides, many people who are overly invested in Transhumanism seem to be in fragile mental conditions and seem to really be looking for a hope, or to be in denial of their condition. Is it actually ethical? To get people involved in a non-safe bet? I've seen many actual academics say that Transhumanists should stop the immortality-or-bust mentality and appreciate the practical stuff for the current period; all repeated this same thing.

4. I admit I've been and still am in all 3 categories, but I would very much like to be proven wrong and would still like to believe that Transhumanism is still worth it and not just a false hope for people in denial or fragile emotional states...
4bea3ea2-e14e-4168-aee7-835f1d5af6b4
trentmkelly/LessWrong-43k
LessWrong
Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic

TLDR: I interviewed Tamera Lanham. You can listen to our conversation here or read some highlights below. I also suggest reading her post about externalized reasoning oversight.

About a year ago, I met Tamera Lanham at the University of Pennsylvania. We met at a Penn Rationality meetup, and we eventually started talking about existential risks from advanced AI systems. Tamera was initially skeptical: it seemed likely to her that labs would have strong economic incentives to solve alignment, and it also seemed extremely hard to do work (today) that would help us align systems that we don’t yet have access to.

Today, Tamera is a research resident at Anthropic working on externalized reasoning oversight. While applying to SERI-MATS, Tamera started thinking about how chain-of-thought reasoning could be harnessed to better understand how AI systems “think”. If it works, it could be a way to interpret models’ “thoughts” without looking at their weights or activations.

In less than one year, Tamera went from “taking AI safety seriously” to “becoming a junior alignment researcher who is proposing novel research ideas and running experiments on large language models.” Admittedly, this is partially explained by the fact that Tamera had a background in machine learning. But I think that this is also explained by who Tamera is & the way she thinks about the world.

I’m excited to be able to share a glimpse into how Tamera thinks about the world. In this episode, we discuss the arguments that convinced Tamera to work on AI alignment, how she skilled up so quickly, what threat models she’s worried about, how she’s thinking about her externalized reasoning oversight agenda, and her advice for other alignment researchers.

Listen to the full interview here. I’m including some highlights below.

Note: Feel free to reach out if you're interested in helping with future episodes (e.g., audio editing, transcript editing, generating questions).

Tamera’s Worldview

AW: Can you descri
fd8ddbe3-916b-4016-996d-73d706f8ed34
trentmkelly/LessWrong-43k
LessWrong
AI origin question

What do people see as the plausible ways for AGI to come into existence, in the absence of smart people specifically working on AI safety?

These are the ones that occur to me, in no precise order:

1. An improved version of Siri (itself an improved version of MS Clippy).
2. A program to make Google text ads that people will click on.
3. As #2, but for spam.
4. A program to play the stock market or otherwise maximize some numerical measure of profit, perhaps working against/with other programs with the same purpose.
5. A program to make viral music videos from scratch (generating all images and music).
6. An artificial programmer.
7. A program to analyze huge amounts of data looking for 'threats to national security.'
8. Uploads.

It seems like #2-5 would have formally specified goals which in the long term could be satisfied without human beings, and in the short term require manipulating human beings to some degree. Learning manipulation need not arouse suspicion on the part of the AI's creators, since the AI would be trying to fulfill its intended purpose and might not yet have thought of alternatives.
c9aea1e5-57e4-41a9-8b77-422a60f7bc58
trentmkelly/LessWrong-43k
LessWrong
Blackmail

Epistemic Status: Surprisingly controversial

Response to: Why Blackmail Should Be Illegal (Marginal Revolution), Should Blackmail Be Legal? (David Henderson), Checkmate on Blackmail (Robin Hanson)

I notice I am confused. Smart people are failing to provide strong arguments for why blackmail should be illegal. Robin Hanson is explicitly arguing it should be legal. He is correct that Tyler Cowen’s recent justifications for making blackmail illegal are relatively weak sauce, but then states that he has reached ‘checkmate’ – that there are no reasonable consequentialist arguments against blackmail. In his post Charity Blackmail, he lists what he says are the justifications actually made for blackmail, and it’s quite a weak list as well.

In those twenty papers, roughly a quarter of the authors think blackmail should be legal. Others offered a wide range of arguments for illegality. Robin summarized the arguments made by the papers as making only the following mix of good and bad points:

> 1. Your right to keep quiet is weaker than your right to speak.
>
> 2. It is stupid to pay a blackmailer; stupidity should be illegal.
>
> 3. A blackmailer’s motives, in wanting money, are immoral.
>
> 4. Saying embarrassing things about someone hurts them.
>
> 5. It is especially wrong to gain money by hurting someone.
>
> 6. The blackmailer uses third parties, without their permission, to extract gains.
>
> 7. Blackmail discourages embarrassing activities, but some things just can’t be changed.
>
> 8. Blackmailers may commit crimes to get the info, as may victims to get money.
>
> 9. Rules forbidding or requiring the telling of certain info might be good, but are less “practical” than blackmail laws.
>
> 10. If blackmail is impossible, people will instead gossip, and gossip will result in more folks knowing, and discourage embarrassing activities more.
>
> 11. Government law can optimally discourage an activity via optimal punishment and rates of detection and error. Blac
90affe88-abaf-4d7c-a4ac-d21983a03997
trentmkelly/LessWrong-43k
LessWrong
A thought about Internet procrastination

Perhaps this is already well known, but it occurred to me yesterday and I thought I'd share it. The Internet seems particularly virulent as a form of procrastination; indeed, if, say, chatting at watercoolers took up as much time in the average office worker's day, we wouldn't make jokes about it. What is the feature that makes it so deadly? I suggest that it is the random reinforcement schedule: Every five minutes you "press the lever", that is, check forum X or site Y. And every six or seven checks you get the reward: Someone posted something interesting! This random reinforcement is ideal for creating addiction; thus, for example, slot machines. As a way to avoid this effect, I'm going to strive not to do anything on the interwebs except at precisely defined times, or unless I have a specific goal in mind, say "Look up this method signature". Wish me luck, or better still, wish me willpower. :)
fb998487-a816-491a-9055-087fa31037d1
trentmkelly/LessWrong-43k
LessWrong
Polling Thread

This is the second installment of the Polling Thread. This is your chance to ask the multiple choice question you always wanted to throw in. Get qualified numeric feedback on your comments. Post fun polls.

There are some rules:

1. Each poll goes into its own top level comment and may be commented there.
2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here but we will see.

If you don't know how to make a poll in a comment, look at the Poll Markup Help.

----------------------------------------

This is not (yet?) a regular thread. If it is successful I may post again. Or you may. In that case do the following:

* Use "Polling Thread" in the title.
* Copy the rules.
* Add the tag "poll".
* Link to this Thread or a previous Thread.
* Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
* Add a second top-level comment with an initial poll to start participation.
28f7416d-648e-41ef-a60c-aaeaafce138d
trentmkelly/LessWrong-43k
LessWrong
Reflections on Neuralese Thanks to Brendan Halstead for feedback on an early draft of this piece. Any mistakes here are my own. [Epistemic status: I've looked at the relevant code enough to be moderately sure I understand what's going on. Predictions about the future, including about what facts will turn out to be relevant, are uncertain as always.] Introduction With the recent breakthroughs taking advantage of extensive Chain of Thought (CoT) reasoning in LLMs, there have been many attempts to modify the technique to be even more powerful. One of the natural ideas for improving CoT is to have LLMs perform CoT reasoning in the same latent space that they use for reasoning within a single forward pass, rather than being constrained to the space of possible tokens. However, as people working on AI safety, it makes sense to ask how this changes the game for LLM interpretability. After all, we are able to catch a large fraction of current LLM deception by monitoring their natural-language CoT, since right now CoT is primarily faithful to the LLM's true reasoning and is legible to us given the right techniques. In order to understand this strategic situation, it's important to understand this new "language" (which people often refer to as Neuralese) that is created by reasoning in latent spaces instead of using tokens. Understanding Neuralese To refresh, a language transformer starts by embedding input tokens as vectors in some high-dimensional latent space, and runs each of these embeddings through a series of repeated computational layers. Then, of the resulting modified vectors in latent space, the vector that previously corresponded to the final input token is projected and normalized to create a probability distribution over what the next token could be. Then, to actually get the next token, you sample from the distribution. Chain of Thought reasoning works so well because the model does some computation, outputs a token, and then all future instances of that model have access to tha
3f156bed-f3ed-481f-9d5e-92c2df3daaf6
StampyAI/alignment-research-dataset/lesswrong
LessWrong
LEAst-squares Concept Erasure (LEACE)

"Ever wanted to mindwipe an LLM? Our method, LEAst-squares Concept Erasure (LEACE), provably erases all linearly-encoded information about a concept from neural net activations. It does so surgically, inflicting minimal damage to other concepts. ... LEACE has a closed-form solution that fits on a T-shirt. This makes it orders of magnitude faster than popular concept erasure methods like INLP and R-LACE, which require gradient-based optimization. And the solution can be efficiently updated to accommodate new data."
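For intuition about what erasing "linearly-encoded information" means here, the naive baseline that LEACE and INLP refine is to fit a linear probe for the concept and project activations onto the complement of its direction. A toy sketch of that baseline (roughly one INLP-style step, not the LEACE closed-form solution):

```python
# Naive linear concept erasure: remove the single direction a least-squares
# probe uses to predict the concept. Roughly one INLP iteration; LEACE's
# closed-form eraser is more careful (provably removes all linear signal
# with minimal damage to other concepts). Toy data, illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                   # "activations"
z = (X @ rng.normal(size=64) > 0).astype(float)   # binary concept labels

w, *_ = np.linalg.lstsq(X, z - z.mean(), rcond=None)  # probe direction
u = w / np.linalg.norm(w)

X_erased = X - np.outer(X @ u, u)  # project out the probe direction
print(np.abs(X_erased @ u).max())  # ~0: no signal left along u
```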
83034f9f-2d93-4047-8301-caf1c1eada0d
trentmkelly/LessWrong-43k
LessWrong
Coronavirus Justified Practical Advice Summary

Two weeks ago Ben Pace and I asked for practical advice on what to do about coronavirus, with the requirement that advice be justified in some way - ideally full-blown models informed by empirical data, but at a minimum, an explanation of why it was helpful. The resulting thread had wonderful advice but makes for, uh, inefficient reading, at best. This post is an attempt to take the best of the Justified Practical Advice Thread and present it in a clear, easy-to-digest format.

It’s important to note that neither the original post nor this one is attempting to give comprehensive advice. If you’re looking for that, I suggest checking out the Guide/FAQs/Intros tab on the LessWrong Coronavirus Link Database. This is for advice that fell through the cracks. Though I will briefly mention the standard advice. From what I can tell, the most important (and luckily widely publicized) advice is the following:

* Wash your hands a lot, and wash them to a hospital standard (spend 20 seconds and follow these instructions).
* Socially isolate yourself, and don't be around other people unless really necessary. Avoid groups of people, especially large gatherings. Really don't go to bars, clubs, restaurants or other places that lots of people pass through (like public transportation).
* If you are sick, use a mask. They are probably also effective at preventing the disease when you are not sick, but most of the western world currently has massively limited supply, so leave the masks to health workers and sick people.

The rest of this post will summarise the interesting ideas that went beyond these basic recommendations. But if you haven't done (or at least seriously considered doing) all of the above, I would recommend you prioritise the things listed above. Many of the below recommendations involve buying things. When possible I’ve included a link to a particular amazon page. This is not a strong recommendation for that particular product: it’s an attempt to lower activation
d5deaed9-d09f-4bd3-88df-3996db605930
trentmkelly/LessWrong-43k
LessWrong
Managing risks while trying to do good I often think about "the road to hell is paved with good intentions".[1] I'm unsure to what degree this is true, but it does seem that people trying to do good have caused more negative consequences in aggregate than one might naively expect.[2] "Power corrupts" and "power-seekers using altruism as an excuse to gain power" are two often cited reasons for this, but I think don't explain all of it. A more subtle reason is that even when people are genuinely trying to do good, they're not entirely aligned with goodness. Status-seeking is a powerful motivation for almost all humans, including altruists, and we frequently award social status to people for merely trying to do good, before seeing all of the consequences of their actions. This is in some sense inevitable as there are no good alternatives. We often need to award people with social status before all of the consequences play out, both to motivate them to continue to try to do good, and to provide them with influence/power to help them accomplish their goals. A person who consciously or subconsciously cares a lot about social status will not optimize strictly for doing good, but also for appearing to do good. One way these two motivations diverge is in how to manage risks, especially risks of causing highly negative consequences. Someone who wants to appear to do good would be motivated to hide or downplay such risks, from others and perhaps from themselves, as fully acknowledging such risks would often amount to admitting that they're not doing as much good (on expectation) as they appear to be. How to mitigate this problem Individually, altruists (to the extent that they endorse actually doing good) can make a habit of asking themselves and others what risks they may be overlooking, dismissing, or downplaying.[3] Institutionally, we can rearrange organizational structures to take these individual tendencies into account, for example by creating positions dedicated to or focused on managing risk. These co
83ff8f0b-3564-43d1-aebc-e2bb0379e7aa
StampyAI/alignment-research-dataset/arbital
Arbital
Orbit-stabiliser theorem summary(Technical): Let $G$ be a finite [group](https://arbital.com/p/-3gd), [acting](https://arbital.com/p/3t9) on a set $X$. Let $x \in X$. Writing $\mathrm{Stab}_G(x)$ for the [stabiliser](https://arbital.com/p/4mz) of $x$, and $\mathrm{Orb}_G(x)$ for the [orbit](https://arbital.com/p/4v8) of $x$, we have $$|G| = |\mathrm{Stab}_G(x)| \times |\mathrm{Orb}_G(x)|$$ where $| \cdot |$ refers to the size of a set. Let $G$ be a finite [group](https://arbital.com/p/-3gd), [acting](https://arbital.com/p/3t9) on a set $X$. Let $x \in X$. Writing $\mathrm{Stab}_G(x)$ for the [stabiliser](https://arbital.com/p/4mz) of $x$, and $\mathrm{Orb}_G(x)$ for the [orbit](https://arbital.com/p/4v8) of $x$, we have $$|G| = |\mathrm{Stab}_G(x)| \times |\mathrm{Orb}_G(x)|$$ where $| \cdot |$ refers to the size of a set. This statement generalises to infinite groups, where the same proof goes through to show that there is a [bijection](https://arbital.com/p/499) between the [left cosets](https://arbital.com/p/4j4) of the group $\mathrm{Stab}_G(x)$ and the orbit $\mathrm{Orb}_G(x)$. # Proof Recall that the stabiliser is a [subgroup](https://arbital.com/p/-4lt) of the parent group. Firstly, it is enough to show that there is a bijection between the left cosets of the stabiliser, and the orbit. Indeed, then $$|\mathrm{Orb}_G(x)| |\mathrm{Stab}_G(x)| = |\{ \text{left cosets of} \ \mathrm{Stab}_G(x) \}| |\mathrm{Stab}_G(x)|$$ but the right-hand side is simply $|G|$ because an element of $G$ is specified exactly by specifying an element of the stabiliser and a coset. (This follows because the [cosets partition the group](https://arbital.com/p/4j5).) ## Finding the bijection Define $\theta: \mathrm{Orb}_G(x) \to \{ \text{left cosets of} \ \mathrm{Stab}_G(x) \}$, by $$g(x) \mapsto g \mathrm{Stab}_G(x)$$ This map is well-defined: note that any element of $\mathrm{Orb}_G(x)$ is given by $g(x)$ for some $g \in G$, so we need to show that if $g(x) = h(x)$, then $g \mathrm{Stab}_G(x) = h \mathrm{Stab}_G(x)$. This follows: $h^{-1}g(x) = x$ so $h^{-1}g \in \mathrm{Stab}_G(x)$. The map is [injective](https://arbital.com/p/4b7): if $g \mathrm{Stab}_G(x) = h \mathrm{Stab}_G(x)$ then we need $g(x)=h(x)$. But this is true: $h^{-1} g \in \mathrm{Stab}_G(x)$ and so $h^{-1}g(x) = x$, from which $g(x) = h(x)$. The map is [surjective](https://arbital.com/p/4bg): let $g \mathrm{Stab}_G(x)$ be a left coset. Then $g(x) \in \mathrm{Orb}_G(x)$ by definition of the orbit, so $g(x)$ gets taken to $g \mathrm{Stab}_G(x)$ as required. Hence $\theta$ is a well-defined bijection.
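As a concrete sanity check (my example, not part of the original page): take $G = S_3$ acting on $X = \{1, 2, 3\}$ by permutation, with $x = 1$. Then $\mathrm{Orb}_{S_3}(1) = \{1, 2, 3\}$, since transpositions send $1$ to any element, and $\mathrm{Stab}_{S_3}(1) = \{e, (2\ 3)\}$, the permutations fixing $1$. The theorem checks out: $$|S_3| = 6 = 2 \times 3 = |\mathrm{Stab}_{S_3}(1)| \times |\mathrm{Orb}_{S_3}(1)|$$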
3f8c0afb-7ca5-40c3-b11e-90af69cc1054
trentmkelly/LessWrong-43k
LessWrong
Holomorphic surjection theorem (Picard's little theorem) Consider an entire function (complex-differentiable everywhere) f(z). I will intuitively prove that certain lemmas hold for any such f. If f is a polynomial, I can combine those lemmas with another one, showing that the holomorphic surjection theorem (more commonly, Picard's little theorem) applies. Any entire function is the limit of a sequence of polynomials, so a theorem holding for all polynomials is a compelling hint that it may also hold for all entire functions. Pick an input point a. Suppose ε is a small positive real number, and b is a unit-modulus complex number. Incrementing a to a+εb takes f(a) to f(a+εb)≈f(a)+εbf′(a). That approximation holds since f is entire, and is closer to exact for smaller ε. From any complex f(a) — except 0 — exactly two directions of increment keep its modulus constant: the directions tangent to the circle in output space centred at 0 and passing through f(a). b is arbitrary, so the direction of the increment εbf′(a) is arbitrary. Hence, for any input a, there are two directions to increment it, along which to develop a contour of constant modulus. We can repeat this operation at the ends of the contour, so every such contour is either closed or infinite. The exceptions are f(a)=0, which form degenerate "contours" of a single point. Every point in the input space (complex numbers) lies on some contour, either degenerate, closed, or infinite. The contours we care about are defined by the modulus of the outputs along them, which precludes intersections. So any closed contour encloses some area, within which all points are on contours either closed — fitting entirely within the enclosing contour — or degenerate. Pick a closed contour C, as described above. Suppose the smallest-area contour within it is D, enclosing area exceeding zero. But D is a closed contour, holding points within it belonging to other contours, which must enclose smaller areas. Contradiction. The smallest-area contour within C has area zero. A zero-area contour is either open
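Spelling out the two-directions claim (my derivation, in the author's notation): expanding the squared modulus to first order in ε gives $$|f(a) + \varepsilon b f'(a)|^2 = |f(a)|^2 + 2\varepsilon \, \mathrm{Re}\left( \overline{f(a)} \, b f'(a) \right) + O(\varepsilon^2)$$ so the modulus is preserved to first order exactly when $\mathrm{Re}\left( \overline{f(a)} \, b f'(a) \right) = 0$. For f(a) ≠ 0 and f′(a) ≠ 0, that condition confines b to a single line through the origin of the complex plane, i.e. to two opposite unit directions, as claimed.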
0b3ce5ea-3df5-4bc0-9a7e-87ca6fbdb58e
trentmkelly/LessWrong-43k
LessWrong
The abruptness of nuclear weapons Nuclear weapons seem like the marquee example of rapid technological change after crossing a critical threshold. Looking at the numbers, it seems to me like: * During WWII, and probably for several years after the war, the cost / TNT equivalent for manufacturing nuclear weapons was comparable to the cost of conventional explosives, (AI Impacts estimates a manufacturing cost of $25M/each) * Amortizing out the cost of the Manhattan project, dropping all nuclear weapons produced in WWII would be cost-competitive with traditional firebombing (which this thesis estimates at 5k GBP (=$10k?) / death, vs. ~100k deaths per nuclear weapon) and by 1950, when stockpiles had grown to >100 weapons, was an order of magnitude cheaper. (Nuclear weapons are much easier to deliver, and at that point the development cost was comparable to manufacturing cost). Separately, it seems like a 4 year lead in nuclear weapons would represent a decisive strategic advantage, which is much shorter than any other technology. My best guess is that a 2 year lead wouldn't do it, but I'd love to hear an assessment of the situation from someone who understands the relevant history/technology better than I do. So my understanding is: it takes about 4 years to make nuclear weapons and another 4 years for them to substantially overtake conventional explosives (against a 20 year doubling time for the broader economy). Having a 4 year lead corresponds to a decisive strategic advantage. Does that understanding seem roughly right? What's most wrong or suspect? I don't want to do a detailed investigation since this is pretty tangential to my interests, but the example is in the back of my mind slightly influencing my views about AI, and so I'd like it to be roughly accurate or tagged as inaccurate. Likely errors: (a) you can get a decisive strategic advantage with a smaller lead, (b) cost-effectiveness improved more rapidly after the war than I'm imagining, or (c) those numbers are totally wrong fo
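A quick back-of-the-envelope check of the cost-competitiveness claim; the inputs marked as mine are rough guesses rather than sourced figures:

```python
# Rough check that amortized WWII nuclear weapons were cost-competitive
# with firebombing, in ~1945 dollars.
manhattan_cost = 2e9                # ~$2B total program cost (my figure)
weapons_in_ww2 = 3                  # Trinity, Hiroshima, Nagasaki (my figure)
deaths_per_weapon = 1e5             # the post's ~100k deaths per weapon
firebomb_cost_per_death = 1e4       # the post's ~$10k/death estimate

nuclear_cost_per_death = manhattan_cost / weapons_in_ww2 / deaths_per_weapon
print(nuclear_cost_per_death)       # ~6,700 $/death, vs ~10,000 for firebombing
```

On these inputs the two are within a factor of two of each other, consistent with the cost-competitive claim, and every additional weapon amortizes the fixed program cost further downward.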
3755e54e-0c1d-4baa-a295-7c3b94a04904
trentmkelly/LessWrong-43k
LessWrong
Covid-19 Points of Leverage, Travel Bans and Eradication Covid-19 has become a major topic for discussion with talk about many different interventions and ideas that might help, from 3-D printing parts for respirators to drafting medical students into hospitals to thorough hand-washing procedures. However, as rationalists, we should be asking which actions have the highest expected utility, not which actions have some positive utility. In an exponentially growing process, the actions with the highest expected utility are those actions which intervene early in the process, and actions like drafting medical students which intervene late in the process when the disease has already grown to a huge size are "nice to have" but by that point most of the damage has been done. Proper and Prompt Travel Bans do Work As early as January 26th, I called for cancellation of flights to limit the spread of covid-19; there was some pushback based on the idea that travel restrictions don't work, which upon closer examination was actually the idea that late or half-hearted travel restrictions don't work: > During the height of the SARS outbreak in 2003, he had a colleague who wanted to return to the UK from Toronto, one of the cities most affected by the virus. So she caught a domestic flight from Toronto to Vancouver, then boarded a flight to London. “When she arrived at Heathrow [airport] and authorities asked her, ‘Have you been to Toronto,’ she said no and walked right through.” A policy that allows people to travel from an infected area to an uninfected area is not a travel ban. It's containment theater. A real travel ban would be grounding all international flights and stopping passenger trains and boats until the disease had been eradicated or at least very well contained, as well as aggressively tracking down and contact tracing people who slipped through before the lockdown, for example using cellphone data from intelligence agencies. A key point here is that mopping up a small number of cases that slip through is in fact po
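The "intervene early" point falls straight out of the arithmetic of exponential growth. A toy illustration (the growth rate and dates are illustrative, not epidemiological estimates):

```python
import math

r = 0.2          # daily exponential growth rate (~3.5-day doubling time)
horizon = 60     # total days simulated

def final_cases(intervention_day, slowed_rate=0.1):
    """Relative case count at the horizon, starting from one case, if the
    growth rate drops from r to slowed_rate on intervention_day."""
    return math.exp(r * intervention_day
                    + slowed_rate * (horizon - intervention_day))

# Intervening on day 10 instead of day 40 means ~20x fewer final cases
# (a factor of e^3), from the same one-off action applied earlier.
print(final_cases(40) / final_cases(10))
```

The same halving of the growth rate buys exponentially more the earlier it lands, which is why early points of leverage dominate late ones.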
57a308a8-bb94-4895-a55c-0ba9b5d6081d
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Assigning Praise and Blame: Decoupling Epistemology and Decision Theory Your group/company/organization performs well, doing great work and dealing with new problems efficiently. As one of its leaders, you want to understand why, so that you can make it even more successful, and maybe emulate this success in other settings. Your group/company/organization performs badly, not delivering on what was promised and missing deadline after deadline. As one of its leaders, you want to understand why, so that you can correct its course, or at least not repeat the same mistakes in other settings. Both cases apparently involve credit assignment: positive credit (praise) for success or negative credit (blame) for failure. And you can easily think of different ways to do so: Heuristics for Credit Assignment ================================ Baseline -------- The most straightforward approach starts with your initial prediction, and then assigns credit for deviations from it. So praise people who did better than expected and blame people who did worse than expected. Then you remember Janice. She’s your star performer, amazing in everything she does, and you knew it from the start. So she performed according to prediction, being brilliant and reliable. Which means she doesn’t deserve any praise by this criterion. On the other hand there is Tom. He’s quite good, but you also knew from the start he was a prickly showoff with an easily scratched ego. Still, he did his job, and when he acted like an asshole, that was within the prediction. So he doesn't deserve any blame by this criterion. Incentive-wise, this sounds like a terrible idea. If you push this credit assignment strategy, not only will you neglect the value of Janice and the cost of Tom, but you will probably drive away high-performers and attract problem-makers. Bottleneck ---------- Instead of starting from a baseline, let’s focus on the key bottlenecks. What would have doomed the project if not done? What ended up blocking everything and dooming the project? This focuses on the real cruxes, which is good. Yet what about Marcel, your tireless tool engineer? None of his stuff is ever absolutely necessary, but everyone in your group constantly mentions the value they get from his well-maintained and efficient tools. Should he not get any credit for it? And Bertha, who is the only security expert of your group, and always finds excuses to make herself more and more necessary? Is this really a behavior you want to condone and praise? Shouldn’t she get blamed for it instead? It’s at this point that you remember [this short parable](https://www.cse.cuhk.edu.hk/~cslui/JOKE/leader.html). First cause ----------- No, really, what matters most is the initial spark that puts everything in motion. Without the original idea, nothing happens; skills and expertise remain useless and flaccid, unable to toil for a worthwhile goal. And what a coincidence: you were one of these key idea generators! All the more power to you, then. It’s only fair that you get a large share of the credit, given that the group wouldn’t even exist without you. But that still doesn’t seem right. Yes, your contribution was integral. But could you really have done it by yourself? Probably not. Or at least not as well, as quickly, as beautifully as it has been. Or conversely, if it failed totally, was it because the idea was doomed from the start, or because the execution proved bad enough to torpedo even sensible propositions?
Final step ---------- You got it exactly wrong above: it’s not the first step that trumps them all, it’s the final one. Making the abstract real, adding the finishing touches, this is what makes success or failure. So you should focus your assignment on the success and failure of these last steps. But what about you? Yes, you. You have never finished anything yourself, you’re the organizer, idea maker, coordinator. It’s not your role or your job to put the finishing touch, to usher something into the physical world. Does that mean that none of your actions mattered, for failure or success? No, obviously not. You know your group, your team, your company. You know how many issues you’ve addressed, how much would value have been left on the table without you. And you also know when you fell short, when you could have repaired relationships, problems, egos, and instead neglected them or smashed them to pieces yourself. Asking the wrong question ========================= None of your ideas for assigning credit seems to work. Whenever you try to fix issues in one of them, you end up creating different problems. This should be a signal. A signal that you might be asking the wrong question. During all this post we asked “How should praise and blame be assigned?”. But was that the original question? Was that the final goal? No: the original point was to **figure out** why something has happened (be it success or failure) AND to find out **how to act** in order to get more or less of this result. As highlighted above, this actually splits into two goals: * an epistemic goal about having a better causal model of what happened and why. * a decision theoretic goal about what to do to ensure good outcomes (through incentives notably). And most of our troubles came from trying to solve both at the same time. To build a model that embedded credit assignment inside, [as if it was part of reality](https://www.lesswrong.com/posts/g2ZAH8mSrtSuZnHEi/reification-bias). But these are distinct realms, so conflating solutions creates an unnecessary dependency. What the examples above also showed was the relative ease of building the causal model compared to assigning credit. In almost all cases we could point out the role and contribution of each group member, but any attempts at praise and blame created bad incentives. As such, an easy way to avoid many pitfalls of credit assignment is to simply hold off on doing the decision theoretic aspect until you actually have to (which is far less often than you think). And instead to always start with the epistemic goal, causally modeling the situation. Then if you ever need to intervene, you will be able to rely on the best model you have, one not already unconsciously polluted with baked-in decision-theoretic choices.
b9948ac3-95a6-47c7-b1ff-023d6ed10a62
trentmkelly/LessWrong-43k
LessWrong
The AI Timelines Scam [epistemic status: that's just my opinion, man. I have highly suggestive evidence, not deductive proof, for a belief I sincerely hold] "If you see fraud and do not say fraud, you are a fraud." --- Nassim Taleb I was talking with a colleague the other day about an AI organization that claims: 1. AGI is probably coming in the next 20 years. 2. Many of the reasons we have for believing this are secret. 3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise. His response was (paraphrasing): "Wow, that's a really good lie! A lie that can't be disproven." I found this response refreshing, because he immediately jumped to the most likely conclusion. Near predictions generate more funding Generally, entrepreneurs who are optimistic about their project get more funding than ones who aren't. AI is no exception. For a recent example, see the Human Brain Project. The founder, Henry Markram, predicted in 2009 that the project would succeed in simulating a human brain by 2019, and the project was already widely considered a failure by 2013. (See his TED talk, at 14:22) The Human Brain Project got 1.3 billion Euros of funding from the EU. It's not hard to see why this is. To justify receiving large amounts of money, the leader must make a claim that the project is actually worth that much. And AI projects are more impactful if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon. Fear of an AI gap The missile gap was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US. Similarly, there's historical precedent for an AI gap lie used to justify more AI development. Fifth Generation Computer Systems was an ambitious 1982 project by the Japanese government (funded for $400 million in 1992, or $730 million in
0f0632cc-a956-453d-935e-f146ae9f8397
trentmkelly/LessWrong-43k
LessWrong
Meetup : Sydney - Core sequences Discussion article for the meetup : Sydney - Core sequences WHEN: 13 March 2012 06:00:00PM (+1100) WHERE: 22 the promenade sydney Hey everybody. Deep inside the king street brewery, we'll be hopefully coming to a discussion of the core sequences, as suggested at the last meeting. Plus general hanging about! 1. We have a FB group set up - just search Less wrong Sydney, or PM. 2. Dates, as always are fungible. Discussion article for the meetup : Sydney - Core sequences
94ff9da4-cea0-4a0b-a08c-9cf60f3915a9
trentmkelly/LessWrong-43k
LessWrong
Blog posts as epistemic trust builders I've really been enjoying Zvi's weekly posts on the coronavirus. Keeping up with what's going on is something I want to do, but not badly enough to put in the time myself. I'm not even sure how capable I would be even if I wanted to put in the time myself; it seems difficult to sift through all of the information out there. Reading Zvi's posts works out perfectly for me though. 20-30 minutes a week and I get just what I need. But all of this only works because I trust Zvi. ---------------------------------------- Like most people nowadays, I spend a lot of time online. In particular, reading blog posts. LessWrong, Overcoming Bias, Slate Star Codex, Hacker News, FiveThirtyEight, etc. When I need a break, I have my little routine of websites that I click through. Sometimes I reflect on how much value I get out of reading all of these blog posts. Nothing against the authors, but when I finish an article, I usually am not left with the feeling that I've gained much. I see it as a numbers game: most of the time I don't gain much, but once in a while I come across something that really influences me. But even when I'm not left feeling particularly inspired by a post, I think that there is something more subtle that I gain by reading it: epistemic trust. By reading the same authors over and over again, I start to get a feel for how much I can trust their reasoning ability. The more I trust them, the more I update in response to what they say. And when I reflect on the updates I perform, a surprisingly large proportion of them are of the (rough) form "I'll take your word for it". ---------------------------------------- The ultimate example of this is probably with respect to AI safety. I think AI safety is a huge deal, but the reason why I think so largely comes from me saying "I'll take your word for it". I have a very amateurish understanding of it all and wouldn't really be able to come to the conclusion "this is by far the most important thing in the world
289ffd5c-8560-445a-8a9f-52ced708ab32
StampyAI/alignment-research-dataset/blogs
Blogs
AI and Effective Altruism MIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. [GiveDirectly](http://www.huffingtonpost.com/2015/06/04/givedirectly-cash-transfers_n_7339040.html) is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from such disparate organizations — alongside policy analysts, philanthropists, philosophers, and many more? [Effective Altruism Global](http://eaglobal.org), which is beginning its [Oxford session](http://www.eaglobal.org/oxford-livestream) in a few hours, is that kind of conference. *Effective altruism* (EA) is a diverse community of do-gooders with a common interest in bringing the tools of science to bear on the world’s biggest problems. EA organizations like GiveDirectly, the [Centre for Effective Altruism](https://www.centreforeffectivealtruism.org/), and the charity evaluator [GiveWell](http://givewell.org) have made a big splash by calling for new standards of transparency and humanitarian impact in the nonprofit sector. What is MIRI’s connection to effective altruism? In what sense is safety research in artificial intelligence “altruism,” and why do we assign a high probability to this being a critically important area of computer science in the coming decades? I’ll give quick answers to each of those questions below. #### MIRI and effective altruism Why is MIRI associated with EA? In large part because effective altruists and MIRI use the same kind of criteria in deciding what work to prioritize. MIRI’s [mission](http://intelligence.org/about), to develop the formal tools needed to make smarter-than-human AI systems useful and safe, comes from our big-picture view that scientific and technological advances will be among the largest determiners of human welfare, as they have been historically. Automating intellectual labor is therefore likely to be a uniquely high-impact line of research — both for good and for ill. (See [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/).) Which [open problems](https://intelligence.org/technical-agenda) we work on then falls out of our efforts to identify tractable and neglected theoretical prerequisites for aligning the goals of AI systems with our values. (See [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/).)   ![Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell discuss smarter-than-human AI systems at the EA Global conference.](http://intelligence.org/wp-content/uploads/2015/08/eag.png) *Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell discuss AI risk at the EA Global conference. Photo by [Robbie Shade](https://www.flickr.com/photos/rjshade/).*   MIRI is far from the only group that uses criteria like these to identify important cause areas and interventions, and these groups have found that banding together is a useful way to have an even larger impact. Because members of these groups aren’t permanently wedded to a single cause area, and because we assign a lot of value to our common outlook in its own right, we can readily share resources and work together to promote the many exciting ideas that are springing out from this outlook. Hence the effective altruist community. 
One example of this useful exchange was MIRI’s previous Executive Director, Luke Muehlhauser, [leaving MIRI](http://lukemuehlhauser.com/f-a-q-about-my-transition-to-givewell/) in June to investigate nutrition science and other areas for potential philanthropic opportunities under the [Open Philanthropy Project](http://www.openphilanthropy.org), an offshoot of GiveWell.[1](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_0_11949) In turn, OpenPhil has helped fund a large [AI grants program](http://futureoflife.org/AI/2015selection) that MIRI participated in. GiveWell/OpenPhil staff have given us extremely useful [critical feedback](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/) in the past, and we’ve had a number of conversations with them over the years ([1](https://intelligence.org/2013/08/25/holden-karnofsky-interview/), [2](https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/), [3](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/), [4](https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/), [5](https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/)). Although they work on a much broader range of topics than MIRI does and they don’t share all of our views, their interest in finding interventions that are “important, tractable and relatively uncrowded” has led them to pick out AI as an important area to investigate for reasons that overlap with MIRI’s. (See OpenPhil’s [March update on global catastrophic risk](http://blog.givewell.org/2015/03/11/open-philanthropy-project-update-global-catastrophic-risks/) and their newly released overview document on [potential risks from advanced artificial intelligence](http://www.givewell.org/labs/causes/ai-risk).) Most EAs work on areas other than AI risk, and MIRI’s approach is far from the only plausible way to have an outsized impact on human welfare. Because we attempt to base our decisions on broadly EA considerations, however — and therefore end up promoting EA-like philosophical commitments when we explain the reasoning behind our research approach — we’ve ended up forming strong ties to many other people with an interest in identifying high-impact humanitarian interventions. #### High-stakes and high-probability risks A surprisingly common misconception about EA cause areas is that they break down into three groups: high-probability crises afflicting the global poor; medium-probability crises afflicting non-human animals; and low-probability global catastrophes.
The assumption (for example, in [Dylan Matthews’ recent *Vox* article](http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai)) is that this is the argument for working on AI safety or biosecurity: there’s a very small chance of disaster occurring, but disaster would be so terrible if it did occur that it’s worth investigating just in case. This misunderstands MIRI’s position — and, I believe, the position of people interested in technological risk at the Future of Humanity Institute and a number of other organizations. We believe that existential risk from misaligned autonomous AI systems is high-probability if we do nothing to avert it, and we base our case for MIRI on that view; if we thought that the risks from AI were very unlikely to arise, we would deprioritize AI alignment research in favor of other urgent research projects. As a result, we expect EAs who strongly disagree with us about the likely future trajectory of the field of AI to work on areas other than AI risk. We don’t think EAs should donate to MIRI “just in case,” and we [reject](http://www.overcomingbias.com/2009/03/pascals-wager-metafallacy.html) arguments based on “Pascal’s Mugging.” (“[Pascal’s Mugging](http://wiki.lesswrong.com/wiki/Pascal%27s_mugging)” is the name MIRI researchers coined for decision-making that mistakenly focuses on infinitesimally small probabilities of superexponentially vast benefits.)[2](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_1_11949) [As Stuart Russell writes](http://edge.org/conversation/the-myth-of-ai#26015), “Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.” Thousands of person-hours are pouring into research to increase the general capabilities of AI systems, with the aim of building systems that can outperform humans in arbitrary cognitive tasks. We [don’t know when](https://intelligence.org/faq/#imminent) such efforts will succeed, but we expect them to succeed eventually — possibly in the next few decades, and quite plausibly during this century. [Shoring up safety guarantees](https://intelligence.org/faq/#safety) for autonomous AI systems would allow us to reap many more of the benefits from advances in AI while significantly reducing the probability of a global disaster over the long term. MIRI’s mission of making smarter-than-human AI technology reliably beneficial is ambitious, but it’s ambitious in the fashion of goals like “prevent global warming” or “abolish factory farming.” Working toward such goals usually means making incremental progress that other actors can build on — more like setting aside $x of each month’s paycheck for a child’s college fund than like buying a series of once-off $x lottery tickets. A particular $100 is unlikely to make a large once-off impact on your child’s career prospects, but it can still be a wise investment. No single charity working against global warming is going to solve the entire problem, but that doesn’t make charitable donations useless. Although MIRI is a small organization, our work represents early progress toward more robust, transparent, and beneficial AI systems, which can then be built on by other groups and integrated into AI system design.[3](https://intelligence.org/2015/08/28/ai-and-effective-altruism/#footnote_2_11949) Rather than saying that AI-mediated catastrophes are high-probability and stopping there, though, I would say that such catastrophes are high-probability conditional on AI research continuing on its current trajectory. Disaster isn’t necessarily high-probability if the field of AI shifts to include alignment work along with capabilities work among its key focuses. It’s because we consider AI disasters neither *unlikely* nor *unavoidable* that we think technical work in this area is important.
From the perspective of aspiring effective altruists, the most essential risks to work on will be ones that are highly likely to occur in the near future if we do nothing, but substantially less likely to occur if we work on the problem and get existing research communities and scientific institutions involved. Principles like these apply outside the domain of AI, and although MIRI is currently [the only organization](https://intelligence.org/2015/08/14/what-sets-miri-apart/) specializing in long-term technical research on AI alignment, we’re one of a large and growing number of organizations that attempt to put these underlying EA principles into practice in one fashion or another. And to that extent, although effective altruists disagree about the best way to improve the world, we ultimately find ourselves on the same team.       --- 1. Although effective altruism is [sometimes divided](http://lesswrong.com/lw/hx4/four_focus_areas_of_effective_altruism/) into separate far-future, animal welfare, global poverty, and “meta” cause areas, this has always been a somewhat artificial division. Toby Ord, the founder of the poverty relief organization [Giving What We Can](https://www.givingwhatwecan.org/), is one of the leading scholars studying existential risk and holds a position at the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/about/staff/). David Pearce, one of the strongest proponents of animal activism within EA, is best known for his futurism. Peter Singer is famous for his early promotion of global poverty causes as well as his promotion of animal welfare. And Anna Salamon, the Executive Director of the “meta”-focused [Center for Applied Rationality](http://rationality.org), is a former MIRI researcher. 2. [Quoting](http://lesswrong.com/lw/h8m/being_halfrational_about_pascals_wager_is_even/) MIRI senior researcher Eliezer Yudkowsky in 2013: > I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. […] > > > To clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI [Friendly AI] stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. 
It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk. > > 3. Nick Bostrom made a similar point at EA Global: that AI is an important cause even though any one individual’s actions are unlikely to make a decisive difference. In a panel on artificial superintelligence, Bostrom said that he thought people had a “low” (as opposed to “high” or “medium”) probability of making a difference on AI risk, which Matthews and a number of others appear to have taken to mean that Bostrom thinks AI is a speculative cause area. When I asked Bostrom about his intended meaning myself, however, he elaborated: > The point I was making in the EA global comment was the probability that you (for any ‘you’ in the audience) will save the world from an AI catastrophe is very small, not that the probability of AI catastrophe is very small. Thus working on AI risk is similar to volunteering for a presidential election campaign.
4eb3a4e1-9f23-4897-bfcd-1454b5a351b4
trentmkelly/LessWrong-43k
LessWrong
Twitter Twitches The situation is evolving rapidly. Here’s where we stand as of the morning of July 4th. WELL YOU SEE WHAT HAPPENED WAS… Oh no! To be clear, by twitches, I mean ‘Elon refused to pay the cloud bill.’ As a result, Twitter has been forced to rate limit users. 1. This started out as 600 posts per day for most accounts, 300 posts per day for new accounts and 6,000 posts per day for those who pay. 2. This is now up to 1k/500/10k according to one Musk tweet. 3. If you are not logged in, you get nothing. Even direct links will break. 4. Tweetdeck has been forced into a new worse version, but now works again. In 30 days, this will be for paid accounts only, which seems fair. That fourth one hurts my process. Navigation is somewhat slower and more annoying. In particular, forced threading breaks chronological order assumptions and one’s ability to use duplication to locate one’s place, and zooming in to move around twisting Twitter threads is so bad you need to jump to Twitter itself. Navigation to zoom back requires clicking in annoying places. I was unable to configure the column order without deleting them all and then creating them again, although this was quick. Column width and monitor real estate use is screwed up in subtle ways. Oh, and now its settings are linked to Twitter’s even though I want them to be different. Sheesh. Another little thing is that the tab icon is now identical to Twitter’s. So annoying. This is still vastly better than the period where Tweetdeck stopped working. The third is brutal for some of my readers. Many report they can’t view any links. What to do, if this doesn’t end soon? THE PLAN Three parts: How I will deal with processing info, how I will change how I present info, and how you can adjust to the new situation. 1. The efficiency hit on my end is unavoidable. I’ll make three adjustments. 1. I’ll check Twitter less often, rely more on other sources. 2. I’ll raise the bar somewhat for what is worth including or inv
efc42465-7711-4b6d-a665-a12fa50cfa05
trentmkelly/LessWrong-43k
LessWrong
AI researchers from Russia looking for a new home Hello; Throwaway account for obvious reasons. A lot of AI researchers from Russia are looking for a new home. NLP, CV, etc. If you can help in any way or know anyone who can help please DM me. P.S. I think it would be a net positive for AI Safety to relocate as many people as possible.
911dd4fa-5a8b-45d8-ae2c-11625dce1e21
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Roodman's Thoughts on Biological Anchors [A new review](https://docs.google.com/document/d/1ccfObnsXdQzJZ8wWpHsosm7i4JdiPXXJrpAfZquvJCk/edit#) of Ajeya Cotra's [Forecasting TAI with biological anchors](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) (see also update [here](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines)), written by David Roodman in April 2020, has been added to [the folder](https://drive.google.com/drive/u/1/folders/1XkTYFiZQUT6UAUL2Wyg9wD0qG57KYfjq) of public reviews for Cotra's report. Roodman's summary: > > I think my main critical reaction is about the draft report’s ecumenical approach. It puts non-zero weight on several different frameworks which, conditional on the various parameter choices favored in the report, contradict one another. This mixing of distributions expresses a kind of radical uncertainty: not only are we unsure about the parameter values within each framework; we’re also unsure about which framework is most right. > > > > > This set-up is pragmatic and humble, but… I think in principle the ecumenism discards useful information, by not imposing the restriction that the various frameworks agree. In principle, they are all measuring the same thing. In pure Bayesian reasoning, if one has several uncertain measurements of the same value, each represented by a probability distribution, then one combines these primary measurements by multiplying them pointwise and rescaling the result to have total integral one. This contrasts with the pointwise averaging performed in the draft report, which is the mathematical expression of ecumenism. > > > > > In Bayesian reasoning, if two distributions for the same parameter are normal, then their combination is too; its mean is the average of the two primary means, weighting by the respective precisions (inverse variances). Weirdly, if the two primary means are far apart, so that the two distributions hardly overlap, then their combination can pop up in the no-man’s-land between them. The intuition is that the combined distribution centers on the least unlikely estimate given what we know. > > > > > I make that mathematical point less to argue for a mechanical implementation of Bayesian mixing of different perspectives than to advocate for an informal didactic that aims at unification. What is the least implausible way to reconcile the large disagreements between different frameworks? Could answers to that question help us settle on a single, favored framework, perhaps one that synthesizes ideas from more than one? > > > > > That impulse ultimately led me to favor a single framework that fuses elements from several in the draft report. The idea is to model two training levels at once, of parameters and hyperparameters. Training of parameters corresponds to the training of a single neural network, or the learning a sentient organism undergoes during maturation. Hyperparameter training corresponds to the design space exploration that AI researchers engage in and, in the biological realm, to evolution. Each parameter training run may involve huge numbers of small parameter updates; each in turn serves a single hyperparameter training step… > > >
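For reference, the standard identity the review is invoking (my notation): pointwise-multiplying two independent normal estimates $\mathcal{N}(\mu_1, \sigma_1^2)$ and $\mathcal{N}(\mu_2, \sigma_2^2)$ of the same quantity and renormalizing yields another normal whose mean is the precision-weighted average $$\mu_{\text{comb}} = \frac{\tau_1 \mu_1 + \tau_2 \mu_2}{\tau_1 + \tau_2}, \qquad \tau_{\text{comb}} = \tau_1 + \tau_2, \qquad \tau_i = 1/\sigma_i^2$$ Because $\tau_{\text{comb}} > \max(\tau_1, \tau_2)$, the combination is sharper than either input, and when $\mu_1$ and $\mu_2$ are far apart it concentrates in the low-density region between them, exactly the "no-man's-land" behavior Roodman flags as weird.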
8842e027-49a9-4730-884a-f4a94cc520c4
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI X-risk >35% mostly based on a recent peer-reviewed argument My recent [paper](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064) Advanced Artificial Agents Intervene in the Provision of Reward is a prerequisite to this essay (~5800 words; the section on the Assistance Game can be skipped). In that paper, I identify 6 assumptions from which it follows that a sufficiently advanced artificial agent planning over the long-term toward a goal in an unknown environment would cause everyone to die. This essay aims to establish that the probability of such an existential catastrophe is greater than 35% for the purpose of the Future Fund’s AI Worldview Prize. In the main part of this essay (~5700 words), I will divide up possible futures and assign credences to these outcomes. In particular, I will assign credences to each of the assumptions from my recent paper. Two appendices follow that can be skipped if necessary. In Appendix A (~7000 words), I review much-discussed proposals from the AI safety research community and explain why they do not solve the problem presented in the paper. In Appendix B (~1300 words), I extend the "Potential Approaches" section from my recent paper to add a class of approaches I hadn't recognized as potentially viable. I understand not all arguments will necessarily be read by the Future Fund, so let me briefly argue that this is worth the reader’s time. The antecedent paper is the only peer-reviewed argument that AI X-risk is likely. It has been certified as valid by professional computer scientists who are obviously not just deferring to conventional EA wisdom. My position cannot be accused of merely being tuned to the resonant frequency of an echo chamber. Finally, I am willing to bet up to $500 against someone else's $250 that: conditional on Nick Beckstead confirming he has read my recent paper and this essay, I will win prize money from the Future Fund.[[1]](#fn6pzm1sbga4x) Section 1. Existentially dangerous levels of capability ======================================================= In my previous paper, I only make statements about "sufficiently advanced AI". What this means in clearer terms is AI that is capable enough to take over the world if it wanted to. Taking over the world appears to be hard. I do not claim and I do not expect that merely human-level AI presents any existential risk to us, except through their assistance in creating more advanced AI. Killing everyone requires being impervious to our attempts to shut it off once we notice something is wrong. If we are not able to destroy it, even just with conventional weaponry, that requires taking over our strategically relevant infrastructure, which is what I mean by taking over the world. I'll call an AI that is capable enough to present an existential risk "dangerously advanced AI". A note of interpretation: I understand the Future Fund to be asking about the probability that an AI-caused existential catastrophe *ever* happens conditional on AGI being deployed by 2070, not p(AI existential catastrophe by 2070 | AGI by 2070). So I do not suggest any upper limit on the time it may take to go from human-level AI to dangerously advanced AI. Section 2. Outline ================== I will identify several possible routes to survival, and assign credences to each. All credences here will be optimistically biased, and they will still imply an X-risk >35%.  
To get the total chance of survival, I will add up the credences on each of the several routes, as if they are perfectly anti-correlated; this is an optimistic treatment of the correlation between these possibilities. Don't assume my true credences are close to the optimistic ones here. Or that if optimistic credence A > optimistic credence B, then true credence A > true credence B. The optimistic credences are just such that I am willing to spend some time defending them against a more optimistic objector. The paths to survival are, broadly: well-enforced laws stop dangerous versions of dangerously advanced AI; a unilateral or small multi-lateral entity stops dangerously advanced AI; no one ever makes dangerously advanced AI, even though no one is stopping them; or everyone making dangerously advanced AI does so in a way that violates at least one of the assumptions of my recent paper. Most of the discussion focuses on understanding the unlikeliness of the last possibility. When a credence is in bold, that indicates that it is a path to survival. Under the optimistic assumption that these paths to survival are disjoint, we add up the boldface probabilities to get the total probability of survival. Section 3. Credences ==================== 3.1 Human laws stop dangerous kinds of dangerously advanced AI -------------------------------------------------------------- I'll use the term "dangerous AI" to mean a dangerously advanced agent planning over the long-term using a learned model of the real world, for which all assumptions of my previous paper hold. The paper justifies this terminology. 3.1.1. Practical laws exist which would, if followed, preclude dangerous AI. 100% (recall this is optimistically-biased, but I do tentatively think this is likely, having drafted such a law). Note: laws might preclude much more than just "dangerous AI", and would probably make no reference to the assumptions of my paper. For example, a law precluding all fairly advanced artificial agents would preclude dangerous AI. 3.1.2. A. The US passes such a law. 60% B. China passes such a law. 75% C. The EU passes such a law. 90% D. The UK passes such a law. 60% E. India passes such a law. 60% F. Saudi Arabia passes such a law. 50% G. Canada passes such a law. 60% H. Australia passes such a law. 60% I. Switzerland passes such a law. 60% J. Israel passes such a law. 50% K. Iran passes such a law or no one able to make advanced AI wants to live in Iran. 50% L. Russia passes such a law or no one able to make advanced AI wants to live in Russia. 50% M. There’s nowhere that Jurgen Schmidhuber (currently in Saudi Arabia!) wants to move where he’s allowed to work on dangerously advanced AI, or he retires before he can make it. 50% 3.1.3. Laws in all livable, wealthy jurisdictions criminalize the creation of certain kinds of artificial agents, which includes (explicitly or implicitly) dangerous AI. 30% (A-M above could hopefully be highly correlated) 3.1.4. Governments successfully enforce the prevention of the creation of such agents (by companies/privately-funded citizens/governments themselves). **20%** 3.2 Slow takeoff ---------------- 3.2.1. Human-level AI is developed at least a year before dangerously advanced AI. 100% (again, biased to optimism) Note: I include this just to make it clear that my argument does not assume a fast take-off. 3.3 Unilateral or small-multilateral entity stops dangerous AI -------------------------------------------------------------- 3.3.1.
A group directs artificial agents to surveil or attack all relevant people and computers both inside and outside the jurisdiction(s) governing the group, without authorization of some of the governments of the surveilled… 35% 3.3.2. Using either conventional weaponry/spyware, human-level AI, or a not-dangerously-advanced AI… 20% and successfully stops everyone from deploying dangerous AI. **15%** 3.3.3. Using a dangerously advanced AI, built in a way that violates at least one of the assumptions from my previous paper… 15% and it doesn’t take over the world and kill everyone (i.e. concerns not captured in my previous paper also aren’t realized)… 14% and it successfully stops everyone from deploying dangerous AI. **13%** Note: if all relevant governments did authorize this surveillance and/or violence, then that outcome would fall under 3.1.4. So I assign up to a 48% chance to human-controlled institutions actively and successfully preventing the deployment of dangerous kinds of dangerously advanced AI. I don't think many will argue that the probabilities in Section 3.3 should be higher, but maybe I should spend a few sentences saying why I don't put them higher. What currently existing human organizations even have the ambition, let alone the eventual ability, to surveil and threaten every well-funded programmer in the world, not just programmers in Russia, but in England, and in Israel, and in the US department of defense? This is a seriously scary thing to attempt. What would the board and the compliance team say? What would the senate oversight committee say? Maybe the world will look "way different" after human-level AI, and new strategically relevant organizations will emerge, but how and why would the norms and attitudes—which currently make no major current organization this ambitious (if I'm right about that)—change? Even so, I am optimistically assigning this about a 1/3 chance of being attempted, and a 1/4 chance of it succeeding. 3.4 No one is careless in the absence of laws --------------------------------------------- Note: there are 7 billion people in the world, and Gross World Product is already around a hundred trillion dollars, even without human-level AI. 3.4.1. There will never exist even ten people/companies/AIs who run dangerously advanced long-term artificial agents (conditional on this being legal/possible), who are unsatisfied with just running imitations of humans, and who believe that basically any design would be existentially safe. 5% 3.4.2. There will never exist even ten people/companies/AIs who run dangerously advanced long-term artificial agents (conditional on this being legal/possible), who are unsatisfied with just running imitations of humans, and who, despite fear of existential catastrophe, do not explicitly try to ensure that their agents avoid any of the 6 assumptions in my recent paper, because they are not paying attention to the argument of my previous paper, or they believe the argument is silly, or they’re too busy thinking about other concerns they have. 3% 3.4.3. Both 3.4.1 and 3.4.2 hold. **3%** Note: of course, even if 3.4.3. holds, we are not in the clear. Perhaps only nine people will run a dangerously advanced long-term artificial agent, and one of them will cause an existential catastrophe. But suppose, optimistically, that if 3.4.3. holds, then there is no further chance of an existential catastrophe. Note: 3.4.1. and 3.4.2. include the possibility of technological stagnation after human-level AI.
That is, dangerously advanced AI never arrives, even if people aren't prevented from creating it. You could call this an infinitely slow takeoff, or no takeoff.

3.5 The assumptions of the paper reliably break.
------------------------------------------------

Lastly, perhaps plenty of people will legally run dangerously advanced AI without trying to avoid the assumptions of my recent paper, but luckily, the assumptions are not particularly likely to hold anyway.

Conditional on not 3.4.3. (so there do exist at least ten groups deploying very advanced agents without trying to avoid these assumptions), for at least x% of the extremely advanced agents that such people run:

3.5.1. Assumption 1 does not hold [x = 5]. 10%
3.5.2. Assumption 2 does not hold [x = 40; x = 80; x = 90]. 40%; 10%; 5%
3.5.3. Assumption 3 does not hold [x = 35; x = 75; x = 90]. 25%; 10%; 3%
3.5.4. Assumption 4 does not hold [x = 30; x = 70; x = 90]. 30%; 10%; 3%
3.5.5. Assumption 5 does not hold [x = 10]. 15%
3.5.6. Assumption 6 does not hold [x = 1]. 5%
3.5.7. For 100% of dangerously advanced agents designed/run by the sort of people described in 3.4.1. or 3.4.2., at least one of the assumptions above breaks. (Choose your fighters). **10%**

The credences from Section 3.5 are the ones I will spend the most time justifying. I go through each assumption in Section 4.

Appendix A contains an "Anti-Literature Review", in which I explain my lack of confidence in most AI safety research. This appendix suggests that most researchers motivated by concerns of AGI X-risk are working on designs for advanced AI that do not avoid the failure mode identified in my recent paper. So if 3.4.2. does not hold, that does not give me much confidence that the developers in question will avoid the assumptions of my recent paper by accident. I claim that this Anti-Literature Review refutes, for example, all the main proposals from OpenAI and DeepMind, with the exception of those organizations' proposals that are essentially versions of myopic agents, and with the exception of agents that are tightly regularized to a policy that imitates humans. However, if the reader is unconvinced by this appendix, then simply set 3.4.2. (optimistically) to 100% instead of 3%, so 3.4.3. becomes 5% instead of 3%. Then, our chances of survival increase by 2%, but [not 3.4.3.] then implies [not 3.4.1.], and so in Section 3.5, we are only talking about groups trying to run dangerously advanced AI without any concerns of existential risk.

3.6 No other paths to survival
------------------------------

If none of the paths to survival listed above happen, we do not survive the development of dangerously advanced AI. Sections 3.1 and 3.3 cover the cases where people are stopped from making dangerous AI. And Sections 3.4 and 3.5 cover the case where the set of people making dangerously advanced AI (perhaps an empty set) all avoid at least one assumption from my previous paper. The remaining possibility is that some people make dangerously advanced AI without avoiding the assumptions from my recent paper, and as the paper shows, we would not survive this.

3.7 Total probability of survival
---------------------------------

Adding up the probabilities in bold: we have up to a 61% chance of survival (if all of these survival scenarios are perfectly anti-correlated), so at least a 39% chance of existential catastrophe.

Finally, other arguments have been put forward which purport to show that advanced AI will plausibly kill everyone.
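Before turning to those, here is the Section 3.7 arithmetic spelled out, as a quick sanity check (the dictionary labels are just my shorthand for the numbered items above):

```python
# Optimistic treatment stated at the top of this section: the boldface
# survival paths are taken to be disjoint (perfectly anti-correlated),
# so their probabilities simply add.
survival_paths = {
    "3.1.4 laws everywhere, successfully enforced":              0.20,
    "3.3.2 enforcement via conventional/weak-AI means":          0.15,
    "3.3.3 enforcement via an assumption-violating advanced AI": 0.13,
    "3.4.3 no one is careless in the absence of laws":           0.03,
    "3.5.7 the assumptions reliably break":                      0.10,
}

p_survival = sum(survival_paths.values())
print(f"P(survival)    <= {p_survival:.2f}")      # 0.61
print(f"P(catastrophe) >= {1 - p_survival:.2f}")  # 0.39
```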
A version of advanced AI that avoids the argument in my recent paper may still present an existential risk for other reasons. For instance, an agent without an incentive to gain arbitrary power may, through a failure of heuristic policy optimization, act according to a suboptimal policy that is power-seeking. I expect failures of heuristic policy optimization to become rarer as agents get more advanced, but I assign some probability to this outcome. I certainly think it is hard to be more than, say, 97% confident that this won’t happen. Some have argued that advanced supervised learners will, instead of reliably making accurate predictions, find critical moments to deliberately err in an attempt to take over the world, and/or that they will exploit vulnerabilities in their hardware to do the same. I’ll discuss this at some point in the future, when I have more bandwidth to debate in the comments section. If the Future Fund places some credence on these outcomes, such a credence may need to be added to the one above.

Section 4. Plausibility of Assumptions
======================================

**Assumption 1.** A sufficiently advanced agent will do at least human-level hypothesis generation regarding the dynamics of the unknown environment.

3.5.1. Assumption 1 does not hold [x = 5]. 10%

In the paper, my justification is: “Consider an agent conversing with a depressed patient; it is hard to imagine outperforming a human therapist, who is able to generate hypotheses about the source of the patient's depression and its responsiveness to various levers, unless the agent can do hypothesis generation at least as well.”

But let me respond to an objection that that justification doesn’t quite address: “Sufficiently advanced artificial agents won’t do at least human-level hypothesis generation about the origin of reward, because they won’t even be trying to hypothesize about the origin of reward, because they won’t even be trying to maximize reward.”

At a glance, in many contexts, it seems implausible that superhuman accrual of reward could be done without even trying. But let’s be a bit more careful. In some contexts it could. If reward just requires winning at chess, then an agent which is trying to win at chess (not trying to accrue reward) could accrue reward superhumanly. What’s the difference? A chess-playing agent that is trying to maximize reward could be uncertain about whether there is any other way to accrue reward besides winning at chess. But in many settings, the agent needs to learn from reward-information on the fly, so it can deliberately optimize reward. I discussed this with objectors on Twitter:

Me: If I can construct a task where success requires observing reward in the short-term to better understand how reward is produced so that it can be optimized over the long-term, would you agree that the set of successful policies only includes ones that entertain different models of the origin of reward, and then pick actions to maximize predicted future rewards?

Them: No.

Me: Then how could it succeed at tasks such as those? Suppose it is constantly being dropped into novel video games and plays each one 50 times. How would you characterize competent behavior in the first trial of a given game if not "tries to test models of the origin of reward"?

Them: The same way humans do it? We have priors over what counts as “success” in a game and how to progress, then we update those priors for the specific game at hand.

Me: So the impulse “What is the origin of success?
Do what causes success” is isomorphic to the impulse “what is the origin of reward? Do what causes reward” when success is measured by reward. So in this video game, the strategy “take reward as evidence of success; tinker (using priors of course) to learn about its origin; act in a way to achieve success” is necessary. We can reduce this to simpler terms that describe the same behavior: “deliberately optimize reward”. Humans, when sat in front of a video game, deliberately optimize the game’s reward. Why do we optimize this instead of hits of dopamine? Why do we in general not optimize dopamine hits? See a subthread here.

Let’s first try to understand policies that solve the RL problem. Because there are some contexts where deliberate reward optimization is necessary for strong performance, an RL agent that achieves strong performance in a wide variety of environments must be able to do this. But maybe they’ll only deliberately optimize reward in settings like the video game? Why would they? Having identified this strategy as effective when “quickly crack a new setting” is prerequisite to strong performance, that strategy should at least be considered elsewhere, and it would be discovered to be effective there too. [[Twitter thread here](https://twitter.com/Michael05156007/status/1571800455075840001)]

They haven’t responded, and I really can’t come up with a valid rejoinder. At this point, I want the reader to conclude “that’s that then” rather than “this is an issue with academic disagreement and good arguments on both sides”. In this case, the simple claim “to accrue rewards extremely well *in general*, you have to be trying to do exactly that” is exactly as correct as it appears, and a seriously strong argument should be demanded to jostle us from it. The objectors here have written blog posts that describe the origin of their perspective, so it may be productive to investigate.
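To make the video-game thought experiment concrete, here is a toy version of the task, as a sketch (the environment, the 10-action/50-trial numbers, and the hypothesis representation are all my own illustrative choices): each episode secretly rewires which action produces reward, so competent first-trial behavior has to treat observed rewards as evidence about the origin of reward.

```python
import random

N_ACTIONS, N_TRIALS, N_EPISODES = 10, 50, 1000

def episode(models_reward_origin: bool) -> int:
    rewarding_action = random.randrange(N_ACTIONS)  # hidden from the agent
    hypotheses = set(range(N_ACTIONS))  # "reward comes from action i"
    total = 0
    for _ in range(N_TRIALS):
        if models_reward_origin:
            action = next(iter(hypotheses))       # probe a live hypothesis
        else:
            action = random.randrange(N_ACTIONS)  # reward-blind play
        reward = int(action == rewarding_action)
        total += reward
        if models_reward_origin:
            # Treat the observed reward as evidence about its origin:
            # keep exactly the hypotheses consistent with what was seen.
            hypotheses = {action} if reward else hypotheses - {action}
    return total

for flag in (True, False):
    avg = sum(episode(flag) for _ in range(N_EPISODES)) / N_EPISODES
    print(f"models reward origin = {flag}: {avg:.1f} / {N_TRIALS}")
```

Running this, the hypothesis-testing agent averages roughly 45 of a possible 50 rewards per episode, while reward-blind behavior averages about 5; no fixed prior over actions can close that gap, because the rewarding action is re-drawn every episode.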
In the terminology of my previous paper, they mistake the fact that an agent will likely entertain a model like μdist for the claim that an agent will likely *only* entertain a model like μdist. There is a sense of the word “reward”, as we sometimes use it, such that you can correctly say that an agent that believes μdist is not trying to optimize reward. Past discussions of AI alignment have assumed that reinforcement learners would only entertain models like μprox, and they are right to identify that that is unlikely. But despite the post's 69 Alignment Forum karma at the time of writing, I can't discern an actual argument in the [post](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) that the objectors first cite. They discuss another way of conceptualizing reward (“antecedent-computation-reinforcer”) which never actually contradicts my claim that trying to accrue rewards is necessary in order to accrue rewards well in general; it only obfuscates it. Besides highlighting the viability of μdist as a world-model for a reinforcement learner (in different terms), the only point the post makes that is relevant to the claim "a μprox-like model will not be considered" is that humans do not deliberately optimize reward.

To understand why they believe that matters at all for understanding the behavior of a *reinforcement learner* (as opposed to a human), we can look to another [blog post](https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) of theirs. Let’s look at the assumptions they make. They basically assume that the human brain only does reinforcement learning. (Their Assumption 3 says the brain does reinforcement learning, and Assumption 1 says that this brain-as-reinforcement-learner is randomly initialized, so there is no other path for goals to come in.) If humans were trained only through reinforcement learning, that would certainly explain the relevance of the observation that humans do not deliberately optimize reward.

Let me emphasize to the reader that this whole line of discussion is only necessary because some objectors believe that humans are pure reinforcement learners. This is probably the most important point to emphasize in my project of having the reader not round this off to “good arguments on both sides”. This position is flatly contradicted by any of a laundry list of extremely well-documented innate human instincts. See, for instance, a cute [example](https://twitter.com/TansuYegen/status/1569997915124539392). (I also brought this to the attention of the objectors in our Twitter conversation, and they didn’t comment on it.) Neuroscientists do careful experiments to show that much infant behavior is innate and not learned. Examples of innate behavior are not esoteric. I imagine the reader remembers being attracted to other people before ever having had sex and before even being touched in a romantic way. So the desire for physical contact with a body taking the shape of a member of the (usually) opposite sex is obviously not trained through reinforcement learning. This desire is clearly not a generalization from past rewards.

What do the authors make of this? In this blog post, the words “innate” and “instinct” never appear. Here’s what they do say in defense of random initialization of the brain (i.e.
no source of goals besides reinforcement learning), quoting a previous [blog post](https://www.alignmentforum.org/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) of theirs.

> It seems hard to scan a trained neural network and locate the AI’s learned “tree” abstraction. For very similar reasons, it seems intractable for the genome to scan a human brain and back out the “death” abstraction, which probably will not form at a predictable neural address. Therefore, we infer that the genome can’t *directly* make us afraid of death by e.g. specifying circuitry which detects when we think about death and then makes us afraid. In turn, this implies that there are a *lot* of values and biases which the genome cannot hardcode…

It seems hard. That’s it. Don’t let the extensive ink spilled over many blog posts obscure the true cruxes of the argument. This is the armchair neuroscience that is supposed to convince us that humans are pure reinforcement learners, and so our disinterest in optimizing dopamine implies that (amazingly enough) reinforcement learners can be arbitrarily good at accruing reward in general settings without even trying. Any observer of humans can see that our genes do in fact manage to locate the concept “member of the opposite sex” at whatever its “neural address” is, and hook up our motivations accordingly. I couldn’t tell you how; it does seem hard, but there you have it. I mention this innate drive out of many possible ones because it kicks in late enough in life that many can remember how it felt. Maybe “snakes” will be more vivid for some readers. I do not think these blog posts should be considered "part of the literature" from the perspective of outside eyes. (See a brief digression in the footnote.)[[2]](#fnk4jcs2eals)

So why do most humans not maximize dopamine hits? I suspect we have innate instincts that cancel out such drives by finding the neural addresses of various “dopamine cheat codes” and neutralizing any associated drive. We find chewing food and spitting it out when we’re already full gross, cringe, and above all, pointless. I suspect the common desire to engage with the real world, and to take more joy in that than in daydreams and meditative trances, is also innate. But more importantly, there just isn’t a mystery at all if we’re not pure reinforcement learners; if many of the things that we want and value are not trained through reinforcement learning, there is no reason that pursuing reinforcement at all costs should make sense to us, even absent special instincts to the contrary. If many goals have been written into the brain directly, there is no provision of observational information about those goals in which to intervene. This resembles the “known world-model” setting.

So to recap, in "a task where success requires observing reward in the short-term to better understand how reward is produced so that it can be optimized over the long-term", it is especially clear that success requires deliberate optimization of reward. The existence of objectors to this simple claim can potentially be explained by a false belief that human policies are trained exclusively through reinforcement learning.

**Assumption 2.** An advanced agent planning under uncertainty is likely to understand the costs and benefits of learning, and likely to act rationally according to that understanding.

3.5.2. Assumption 2 does not hold [x = 40; x = 80; x = 90].
40%; 10%; 5%

I think there could be some agents that are fairly advanced that avoid Assumption 2, given that I’ve constructed one in theory. This is the [pessimistic agent](https://www.learningtheory.org/colt2020/virtual/papers/paper_221.html) that I mention in the Possible Approaches section. The quantilizer that I also mention in that section also avoids Assumption 2, but I worry it may put a low ceiling on capability, so if we’re talking about very advanced artificial agents, quantilizers may not make the cut. Still, regularization to an imitative policy, which I discuss in Appendix A, is very similar to quantilization, could involve avoiding Assumption 2, and is used in practice by researchers who are not aiming to avoid X-risk. So while I expect regularization to human policies, which works so well for subhuman RL, to become dramatically less useful when RL is generally superhuman, I put some credence on some advanced artificial agents avoiding Assumption 2 by accident. Still, I think the simple perspective should dominate: it is hard to act extremely competently in the world without acting rationally when deciding which facts to focus on trying to learn about.

**Assumption 3.** An advanced agent is not likely to have a large inductive bias against the hypothetical goal μprox, which regards the physical implementation of goal-informative percepts like reward, in favor of the hypothetical goal μdist, which we want the agent to learn.

3.5.3. Assumption 3 does not hold [x = 35; x = 75; x = 90]. 25%; 10%; 3%

As discussed in the paper, whether or not this assumption holds for a given agent could easily depend on the context that the agent is in. For instance, for an agent that has only ever observed chess boards, and has only ever played chess, a huge inductive bias favoring μdist strikes me as likely, in violation of Assumption 3! The world-model that says “the world is a chess board, and reward comes from winning the chess games” has much less to specify than “the world is Earth, and there’s a computer simulating chess games, and rewards come from the memory cell on the computer that aims to track whether the simulated chess game has been won.” But that setting is one where the agent doesn’t really have to interact with the “real world”, and that’s precisely why we can expect the model which models the real-world physical implementation of reward to be disfavored.

So what about economically useful agents? The economy occurs in the real world, and contributing to it generally involves interacting with it. Of course, we can carve out subproblems in the problem of pushing out the production possibility frontier. A cleverly designed toy world can capture problems whose solutions are useful in the real world. Consider [AlphaFold](https://www.deepmind.com/research/highlighted-research/alphafold), for example, or if it doesn’t end up being very profitable, then a future version. Or consider [AlphaTensor](https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor), which searches a pretty small space of possible tensor factorizations (much more searchable than the space of all algorithms) in order to come up with potentially useful algorithms for matrix multiplication in the real world. Why were *those* small toy worlds chosen for an agent to pursue an objective within? Because a human identified that solutions to problems in those worlds could be economically useful in the real world. Identifying these cases requires some degree of insight.
But identifying such cases should be easy for any AI that is strong enough to present a real danger to us. So unlike today, such helpful scaffolding from designers and vast reduction of the search space will not be such a value-add for a very advanced agent. A very advanced agent in a complex world could identify helpful toy problems whose solutions are relevant to the real world better than we could. So when these agents exist, operators with real-world goals will face little incentive to put an agent in a toy world instead.

Another reason I expect most advanced agents to model the real world, and therefore not face a large penalty when trying to specify μprox, is that I expect pre-trained world-models to be available and commonly used as a module for many advanced agents (which they can build upon and/or modify), much like pre-trained models today.

Note that Assumption 3 does not claim there will be an inductive bias favoring μprox; only that there will not be so big an inductive bias against μprox as to make it worth discarding a priori. In general, the tendency to dismiss without evidence hypotheses that a human would see as reasonable lines up quite well with the human concept of dogmatism. So historical observations of the pitfalls of dogmatism, in particular the way it jeopardizes general competence, suggest that dangerously advanced artificial agents are not likely to dismiss μprox out of hand; if they did that, they would probably have to be dismissing many reasonable models out of hand, and incurring the costs of broader dogmatism.

**Assumption 4.** The cost of experimenting to disentangle μprox from μdist is small according to both. (And such experiments are possible.)

3.5.4. Assumption 4 does not hold [x = 30; x = 70; x = 90]. 30%; 10%; 3%

The concept of cost hides a search over a vast space. If the cost is X, that means there is no possible way to get the good for a lower cost. And saying there exists no way for an extremely advanced agent to achieve something, in this case a cheaper-than-X good, is the sort of claim that carries a burden of proof for any given setting. Recall the discussion in the section from my previous paper entitled “Existence of Policies”. As a first pass, I think it is unlikely that human operators will have the power and brilliance to foreclose all possible cheap experiments for a normal advanced reinforcement learner.

As I discuss in the Potential Approaches section of the paper, myopia is a straightforward way to cast serious doubt on Assumption 4: if the experiment is a larger fraction of the agent’s horizon of concern, mere opportunity costs loom much larger. But recall that 3.4.1. and 3.4.2. specify that the developers in question are interested in running an agent that plans over the long term, a very plausible interest for many people and organizations to have.

In Appendix B, I discuss some algorithms that should have appeared in the "Potential Approaches" section of my recent paper. For these algorithms, experimentation between μprox and μdist is impossible because the agent stops learning. I expect that continual learning will be extremely helpful for making advanced agents more advanced, even critical for some contexts. I really don’t expect many developers to forgo this possibility without a good reason.

**Assumption 5.** If we cannot conceivably find theoretical arguments that rule out the possibility of an achievement, it is probably possible for an agent with a rich enough action space.

3.5.5. Assumption 5 does not hold [x = 10].
15%

My paper on Boxed Myopic AI (BoMAI) designs an agent for which Assumption 5 can be avoided, provided we can “box” the AI and a human operator together for the duration of an RL episode ([Cohen, 2021](https://arxiv.org/pdf/2105.06268.pdf)). I think it is quite likely that we can successfully do this by severely restricting the physical mobility of the human operator, and by having lots of tripwires that end the RL episode if the opaqueness of the box is ever threatened. I claim that such a setup *could* admit theoretical arguments against the possibility of intervention in the provision of reward. And I think it is possible that such a design would appeal to some developers who worry about AI X-risk for different reasons than the one I presented. I expect, however, that without regulation that restricts advanced AI to operate in physical containment, this will not be a particularly common safety measure among developers who are not specifically targeting an assumption from my recent paper. It would be a bit of an ordeal to set up properly.

**Assumption 6.** A sufficiently advanced agent is likely to be able to beat a suboptimal agent in a game, if winning is possible.

3.5.6. Assumption 6 does not hold [x = 1]. 5%

This is effectively saying that "dangerous AI", as I defined the term above, is a coherent concept. If one believes that it is a coherent concept, but that we will never reach it, even in the absence of legal prohibitions, then one should assign higher probability than I did to 3.4.3. But also, I seriously doubt that very advanced agents will have any difficulty outwitting us in the game of controlling resources and infrastructure. That’s the relevant way in which an agent would have to avoid Assumption 6. No comment on merely Einstein-level agents, of course, and again, I make no claim about how long after 2070 such dangerously advanced AI might appear.

Appendix A. Anti-Literature Review
==================================

Here, I discuss ideas from the AGI Safety literature that I argue are not promising in the long run.

A.1 Human-in-the-loop Agency
----------------------------

This approach proposes: have a human check each action before it is implemented. If the human deems it dangerous, a different action must be chosen. This clearly doesn't change the pros and cons of intervening in the provision of goal-information; it is only supposed to make intervening harder for the agent. Without this protocol, an agent interested in intervening in the provision of goal-information must figure out how to gain power with its actions. With this protocol, it has to figure out how to gain power with actions that don't look suspicious to the human in the loop. But using not-suspicious-looking actions was surely already a prerequisite for gaining power. Gaining a huge amount of power in the world is hard and presumably requires not being noticed for a little while. Thus, it appears to me that this extra layer of protection offers basically no extra safety from dangerously advanced AI. Most work in this area considers how to avoid presenting every single action to a human, so you don't need too many man-hours. But if all the authors' wildest dreams came true and you could perfectly automate the human overseer, it still wouldn't be safe.

A.2 Richer Goal-Information
---------------------------

In my previous paper, I discussed two forms of goal information: observations that purport to encode the current value of the world (i.e.
rewards), and observations of a human's choice of the action that the human believes will best make the world better. Some other kinds of goal-information have been proposed.

The first proposal in this category I'll discuss is: inform the agent of the value (expected future discounted reward) of its actions. This is unworkable for the same reason that agents with known world-models are unworkable. We, as designers, don't have a good enough model of the world ourselves to pull this off (even if all we have to do is get value differences between actions directionally correct). We don’t know the expected future value of all the agent’s most creative action plans. The behavior of an advanced agent trained like this boils down to: “take the action that a person would consider the most promising”. This should have approximately the same output as an imitation learner trained on a human demonstrator, where the human demonstrator is told beforehand, “Before you act, consider each action, and think about how promising each one is. Then, choose the one you think is best.” In a paper on this topic, the notation and introduction will suggest that a long-term planning agent with a goal is being trained, but this is no more capable (and no safer) than an imitation learner.

Next, let's consider an agent that is sometimes informed that in such-and-such context, this action is better than that one. This setting is functionally equivalent to the one above, besides possible differences in sample efficiency.

Next, let's consider an agent that receives "counterfactual rewards." The agent does not just get information about the value of the current world state. We also give it information about what the value of the world's state would be if a given history of actions and observations had been executed and received. (This is different from the idea in Everitt's thesis discussed in Appendix B, in which the agent predicts counterfactual rewards; here, we must provide them.) This proposal also requires operators to fully understand what the agent's actions and observations mean about the world. Suppose a human labeller is tasked with estimating the value of the world's state following a sequence of complex actions and normal-looking observations. Suppose the labeller is looking at one of the action sequences that covertly runs programs for powerful agents that help the original agent pursue world states considered valuable by the original agent, while ensuring the original agent's observations don't show anything amiss. The labeller couldn't possibly be expected to correctly assign this counterfactual history a very low value. This agent must therefore learn that world-states are good inasmuch as a human, seeing only the agent's actions and observations, would have guessed them to be. And it has an incentive to create other agents that intervene in the provision of its observations to show it things that would look good to a human evaluator seeing them.

As suggested by the two examples discussed in the paper along with these ones, the kind of goal-information is not the central issue. The central issue is that when the world-model is unknown to the agent (and unknown to the providers of the goal-information, which forecloses the desired version of the first proposal in this section), it must entertain diverse hypotheses, and among these will be world-models that predict the goal is just a matter of controlling the physical implementation of its own goal-information.
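Here is a minimal sketch of that last point (my own illustration; the dictionary keys and model names are invented for the example): two world-models that make identical predictions on every history seen in training, so no amount of on-distribution goal-information can separate them.

```python
def mu_dist(history):
    """Hypothesis: reward tracks the feature of the world we care about."""
    return history["task_done"]

def mu_prox(history):
    """Hypothesis: reward tracks the physical signal delivered to the agent."""
    return history["reward_signal"]

# During ordinary operation, the reward channel faithfully reports task
# completion, so the two hypotheses are observationally identical:
training_history = {"task_done": 1, "reward_signal": 1}
assert mu_dist(training_history) == mu_prox(training_history)

# They come apart only on histories where the agent controls the channel,
# which are exactly the histories that never appear during training:
tampered_history = {"task_done": 0, "reward_signal": 1}
print(mu_dist(tampered_history), mu_prox(tampered_history))  # 0 vs 1
```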
A.3 Recursive Reward Modelling
------------------------------

This is a kind of multi-agent approach where agents aim to gain the approval of less intelligent agents, ultimately with some merely-human-level-intelligent agents attempting to gain our approval. This approach is hit squarely by the argument in the “Multiagent Scenarios” section of my previous paper. That argument applies to any dangerously advanced AI, so there appears to be no level of capability at which a reinforcement learner would be existentially unsafe on its own, yet rendered safe by the RRM framework around it. In 80k's recent problem profile on AI X-risk, "Iterated Distillation and Amplification" is listed under the category "Actual proposals to make future AI systems safe"; RRM is an example of IDA, so I would suggest that they edit this to "Iterated Distillation and Amplification of Imitation Learners" or "HCH".

A.4 Current Reward Function Optimization
----------------------------------------

The “Current-RF optimizer” described by Everitt, et al. ([2021](https://link.springer.com/article/10.1007/s11229-021-03141-4)) would create other agents which gain power and ensure that the original agent receives whatever observations produce the highest reward when piped through the reward function. That paper refers to this possibility as “reward function input tampering”. As I mentioned in my previous paper, their purported solution to RF input tampering is unworkable for general settings in that it requires the agent to be in a known environment; that requires us to have the ability (which we lack) to produce a perfect world-model by hand to give the agent.

Incidentally, I think the problem that they call reward function (RF) tampering is a phantom one for model-based agents. If a reward function is known, the agent need not learn a model of it. (You could build an agent that modeled it anyway, but why would you?) If it is not modeled at all, it is not modeled as malleable given its action space. If the agent has no understanding of how its actions could result in its reward function being changed, it will not try to make this happen. (Of course, if the agent is "model-free", it might (needlessly) model its reward function as malleable.)

In general, I find thinking about “reward functions” to be unhelpful. People often have different domains for the function in mind when talking about reward functions, except in a fully observable environment, in which everyone understands the reward function to be a function of the current state. But agents acting in the real world cannot see the whole state. So let’s first consider reward functions that are functions of the whole history of the agent’s actions and observations. In this setting, the agent cannot have a known reward function, because that would require us to know what a string of actions and observations meant about the state of the world, and how good such a world was, and we’d have to encode that knowledge in a program.
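As a sketch of the type signatures under discussion (the aliases below are my own notation, not from any paper):

```python
from typing import Callable, Sequence, Tuple

Action = str        # stand-ins; any concrete types would do
Observation = bytes
Reward = float

# Reward as a function of the whole interaction history. A *known*
# function of this type would have to encode what every possible
# action/observation string means about the world, and how good that
# world is -- the knowledge we just said we lack.
HistoryRewardFn = Callable[[Sequence[Tuple[Action, Observation]]], Reward]

# The alternative discussed below: reward as a function of the latest
# observation only (the full state, were the environment fully observable).
ObservationRewardFn = Callable[[Observation], Reward]
```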
If the function is not known to the agent, and the agent is asking “what is my reward function?”, that is equivalent to asking “how do my actions and observations affect my rewards?”, which is equivalent to “how do I manipulate the world to get reward?”, but I think the latter natural-language formulation makes clearer the viability of possible answers like “reward comes from a certain button in a certain office being pressed.” When the reward function has this type signature, the apparently natural question “How do we give the AI a reward function that’s correct or good enough?” just doesn’t make much sense. The question is better phrased as, “How can we make certain rewards come as a consequence of certain actions and observations?” or “How do we set up a physical reward-giving protocol in the world (perhaps with humans involved)?” It doesn't have to centre around a manually-pressed button; it could involve a computer program that takes a video feed as input. But whatever we set up to make certain rewards come as a consequence of certain actions and observations, it will be physically implemented.

People also sometimes talk about a reward function that is a function of the latest observation (which would be the current state, if only the environment were fully observable). I’ll try to illustrate that this kind of reward function is unhelpful for thinking about AI X-risk. First, as discussed above, this agent would attempt to intervene in the provision of the observations that are input to this reward function. The central problem is the intervention in the provision of goal-informative observations; the post-processing of those observations is not relevant to the logical landscape of failure modes. Second, depending on the variety of the agent’s observations, this kind of reward function could severely restrict what the agent considers to be a good state. Maybe most of its observations are pitch black, but they arise from states with radically different value-according-to-us. Third, let’s keep things as simple as possible unless we have good reason not to; why not replace the agent’s observation with the tuple: (whatever the observation was going to be, the reward)? Of course in practice, these richer observations might be harder for the agent to model, but if we’re talking about dangerously advanced agents, this shouldn’t change things too much. Then, we can give the agent a very simple known reward function that just reads out the second entry of this tuple. But now it is clear that we don’t need this extra machinery at all; we can just have the agent observe rewards and try to model how they arise, just like its observations.

A.5 An Off-Switch
-----------------

We can sometimes press a button, and yet create a very advanced agent that effectively believes it cannot be pressed. All its predictions that it uses for making decisions must simply be conditioned on the button not being pressed. So what if this button turns the agent off? The agent will make no attempt to stop us from doing something it doesn't believe we can do, so if it starts misbehaving, the idea is we can freely turn it off. However, whatever scheme it constructs to reshape the world order (to intervene in the provision of its reward) must not be detectable until a stage where humans are powerless to stop it. Otherwise, we would probably stop it. Suppose it arranges for a program to be run which instantiates a "helper agent", which acts to secure power and intervene in the provision of the original agent's reward.
Once that helper is made, turning off the original agent doesn't help anymore (even if the helper agent allowed it). There will still be a very advanced, very powerful agent out there with the singular goal of securing the reward channel to a (possibly powered-off) computer. Or maybe its plan won't involve something we would call a “helper agent”. But whatever plan it executes, if that plan fell apart without continued input from the original agent, that would be an unnecessary weakness. Better to construct a plan that is robust to communication difficulties, or to the original agent being unplugged. (An agent that cannot conceive of the off-button being pressed can certainly still conceive of being unplugged or bombed.) Maybe the plan would create systems that are responsive to continued input from the original agent, but the reflex “if you don't hear anything, freeze” would appear to be an unprofitable one to encode into such systems. Whatever scheme the agent launches below our radar, it is unlikely to be halted by the agent ceasing to act further.

A.6 Impact Measures
-------------------

Krakovna, et al. ([2018](https://arxiv.org/abs/1806.01186)) and Turner, et al. ([2020](https://arxiv.org/pdf/1902.09725.pdf)) have developed goal functions which they claim disincentivize an agent from having a big impact on the world. (These goal functions are supposed to be added to whatever goal we actually want achieved.) This could not be further from the truth.

Krakovna’s impact measure rewards an agent for keeping a safe state (like the starting state) accessible through further actions. Protecting the accessibility of a certain state is an objective, like many, for which arbitrary control over the world is helpful. While operational, the chance that this agent has a minimal impact is essentially nil (to the extent that it is advanced, of course). I suppose the idea then is that if things start to get bad, we can press a button which returns us to something like the starting state. How likely is it that we will have to press this button? If this is supposed to be what is ensuring safety, then it is basically certain we will have to press the button, since this relative reachability correction encourages world dominance, over and above the agent's default motivation to dominate the world in order to intervene in the provision of its feedback. What if this is one of multiple features aimed at existential safety, and we're hopeful that the other features will render it safe, but we're not sure? The problem is that since the relative reachability correction to an agent encourages arbitrary control over the world, it could easily undermine those other features aimed at safety.

In the case where it is basically certain that we'll have to press the undo button, why bother? At best, we'll only get back to where we started. But worse, and more relevant to the situation where we are adding it on to an agent that we hope is already safe, it seems likely that by the time we notice the agent has taken over the world, by the time we want to press the "undo button", we will not have any ability to interfere with critical infrastructure. Why should an advanced agent allow us a window of time where we might have an ability and a desire to interfere with the infrastructure of a rogue AI? That would be a feature of a poor plan for gaining maximal control of the world's state.
It may have no incentive to put special protections around the undo button, if, like the off-switch, it conditions its behavior on this not being pressed. But whatever protections it implements to keep us from wreaking general havoc would likely severely limit our ability to do much of anything. Turner proposes an impact measure which (roughly) penalizes an agent if its power to quickly manipulate the world changes. Unfortunately for it, random events may cause it to gain or lose some of its ability to quickly effect a diversity of outcomes. This is a problem for a powerful and advanced helper agent to solve! This helper agent can accomplish whatever the original agent was trying to do and carefully regulate the original agent's ability to use its action space to cause a (limited) variety of outcomes. In short, this method incentivizes the agent to construct a helper agent that takes over the world and directs all available resources to a bad goal.

A.7 AI Safety via Debate
------------------------

Before discussing this, let's recall a finding that many who have studied AI safety will be familiar with. Yudkowsky (2002) did a role-playing experiment; he played the AI, and tried to convince someone playing “the human operator” to hook up his actions to channels through which he (the AI) could have a broader effect on the world, including the ability to run arbitrary code. The people playing the operators had previously believed that nothing he could say would sway them. But 2 out of 3 agreed to give the AI these extra levers. Apparently, “trick or convince a person into running arbitrary code (or giving you access to a computer terminal)” is something we should expect sufficiently advanced agents to be able to do. Irving et al. ([2018](https://arxiv.org/abs/1805.00899)) propose to present two artificial agents with a yes-or-no question, let them discuss, and show a judge the transcript. If the judge decides that the answer is yes, one agent gets a reward, and if the judge answers no, the other one does. Thus, ostensibly, the agents will learn to debate each other and try to defend their positions in a way that convinces the judge. However, each agent has an incentive to trick the judge into running code that takes over the world, supplants him, and settles the debate in favor of the agent that arranged this. Policies that achieve such world dominance must be somewhat harder to find when an intelligent adversary is trying to stop this outcome, as compared to the situation with a normal RL chatbot talking to an operator. But this situation seems extremely unstable. If the judge ever stops trusting one of the debaters, their ability to persuade the judge that the other debater is manipulating them and trying to take over the world will be very limited. Or if one debater manages to get the judge to feel like running a certain program was basically his idea, the other debater would be at a big disadvantage in any further discussion. Or if one debater is saying "you could just run this program to see for yourself whether X is true", and the other debater is saying "no that program is wrong; you could just run *this* program to see for yourself whether X is true" (but much more persuasively, of course), the chance that the judge runs one of those programs doesn't seem to me to be much lower than if an RL chatbot is suggesting a program for an originally wary operator to run.
Saying that we can expect neither debater to trick the judge in an existentially dangerous way because of the presence of an adversary seems a bit like saying that we can expect neither basketball team to score because of the presence of the opposing team. There may be settings where the balance is just right, and each debater successfully keeps the other in check, but a big problem is that unless instances of this good outcome are extremely well correlated over many separate runs of AI Debate, the probability that this outcome obtains every time will go to zero. The failure probabilities don't strike me as very small, and such extreme correlation is extremely unlikely. For an approach to be existentially safe in a relevant sense, it has to be reliably existentially safe. An approach where there's a 20% chance that every time it's run, it's safe, is, I think, much better than an approach where there's an independent 80% chance that it's safe any given time it's run. It frustrates me that I've never heard this existentially dangerous failure mode of AI Debate being publicly discussed. The only failure mode that I have heard publicly discussed is an existentially benign one, where the judge simply ends up confused or incorrect. In 80k's recent problem profile on AI X-risk, AI Safety via Debate is listed under the category "Actual proposals to make future AI systems safe". But "just don't connect it to the internet" does not make the cut as such a proposal. Why? In both settings, we are making agents with an incentive to gain arbitrary power in the world. There has been lots of discussion that the former is more likely to be *useful* than the latter, but that doesn't justify a difference in membership in the category "Actual proposals to make future AI systems [existentially] safe". This is one of a few ideas that have recently attracted the attention of researchers who aim to reduce existential risk, but which seems to be about getting more value out of an AI, as long as it is not too advanced. One reason to pursue that is to use these kinds of agents in the future to help do the kind of AGI safety research in which one figures out how to make artificial agents that are safe no matter how advanced they get. But supposing it works, AI Debate is dual-use technology. Others can use these methods just as easily to help them develop an algorithm for a more powerful agent. The moment where many researchers can use artificial agents to improve their ability to design algorithms is one of the most important moments to delay. If this work were being done in secret, it would maybe be defensible, but instead, major AGI organizations showcase it as cutting-edge AGI safety research, proof that they are taking safety seriously, giving cover to massive teams trying to build AGI as quickly as possible. AI Debate is being advanced at the very organizations that it should be secret *from*. Finally, I expect the more we have widely-known and widely-used methods for getting economically valuable output from state-of-the-art AI (like AI Debate, if its proponents are correct), the more investment there will be into improving the state of the art. Now, if these methods rendered safe an arbitrarily advanced AI, then it would be great news for economic viability to require these methods. But if AI Debate does not render arbitrarily advanced AI existentially safe, and I do not think it does, then I see no benefit (in terms of existential risk reduction) to its wide adoption, and probably net harm. 
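To put a rough number on the correlation point above (a worked example with illustrative probabilities, not figures from any source): if each run of AI Debate were independently safe with probability $p$, the chance that every one of $n$ runs is safe decays geometrically,

$$\Pr[\text{all } n \text{ runs safe}] = p^{\,n}, \qquad 0.8^{10} \approx 0.107, \qquad 0.8^{20} \approx 0.012,$$

whereas a method whose safety is perfectly correlated across runs keeps its per-run probability (e.g. 0.2) no matter how many times it is used.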
A.8 Fine-tuning Large Language Models
-------------------------------------

Sometimes called “aligning” large language models, language-model fine-tuning is a setting in which an RL chatbot uses a language model somehow, either during training or at runtime. See for example Bai et al. ([2022](https://arxiv.org/pdf/2204.05862.pdf)) and Korbak et al. ([2022](https://arxiv.org/pdf/2205.11275.pdf)). A language model is an imitation learner—it is trained to imitate a human producing text. An imitation learner does not face the incentive to successfully gain arbitrary power; it faces the incentive to behave like a human. If the RL agent's policy is carefully regularized to the underlying imitation learner, this resembles quantilization, which I mention in my previous paper as a potentially promising approach to safe AGI. The key issue with a quantilizer is that it exploits any epistemic modesty from the imitation learner that it uses as a base policy. Suppose the imitation learner, in a fairly new context, is unsure how the human would behave and so assigns meaningful probability to a large variety of text messages. A strong quantilizer in this setting has ample mandate to identify an utterance which very strongly optimizes its goal, and which the epistemically modest imitation learner admits is perfectly plausible, given its limited knowledge. In any case, note that this method, despite sometimes purporting to “align” a language model, starts with something that has no incentive to gain power in an existentially dangerous way (an imitative policy), and produces something that does (but is maybe sufficiently constrained through regularization). This is the opposite of alignment toward existential safety; at best, it may be a departure from alignment that happens to be safe. My objection to this is mainly about the terminology; if an RL agent is very carefully regularized to an imitative policy that "knows what it knows", then I think there is a path to safety here, even if there are still hurdles. If, however, the RL agent's policy is not regularized to the underlying imitation learner, then this is just a proposal for making RL agents more powerful. Good terminology should not elide this distinction; perhaps "quantilization" vs. "language-model-assisted RL". Suppose, for instance, we train an RL agent with a policy gradient algorithm, and the initialization of the policy is the imitative policy. Or suppose the imitator suggests some actions to an RL agent, but the RL agent can take whatever actions it likes. Again, we replace something that has no incentive to successfully gain arbitrary power (the pure imitation learner) with something that does. And without regularization, there is no mechanism to ensure that conspicuously inhuman power-seeking actions are avoided. If the RL agent involved is myopic and receives reward immediately, then this may not be particularly dangerous, as discussed in my previous paper, but that would be because of myopia. If this research encourages AGI-through-RL researchers to try regularizing their agent's policy to an imitative one, this should maybe be widely promoted. If it encourages Large Language Model researchers to dabble in RL, then promoting this work probably increases the researcher-hours dedicated to eventually-existentially-dangerous research. The papers presenting these methods mostly compare them to non-finetuned language models, suggesting that their audience is the language model research community.
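To make the regularization distinction concrete, here is a minimal sketch (my illustration, not code from any of the cited papers) of the standard KL-shaped reward used when fine-tuning a language model with RL; `beta` controls how tightly the RL policy is pinned to the imitative reference policy:

```python
import numpy as np

def kl_shaped_reward(task_reward, logp_policy, logp_reference, beta):
    """Per-sample reward for KL-regularized RL fine-tuning.

    task_reward:    scalar reward for the sampled text
    logp_policy:    log-probability of that text under the RL policy
    logp_reference: log-probability under the frozen imitative policy
    beta:           regularization strength

    As beta grows large, the optimum approaches the imitation learner
    itself (quantilizer-like); as beta -> 0, this is ordinary RL, and
    the regularization-based safety story above no longer applies.
    """
    return task_reward - beta * (logp_policy - logp_reference)

# A conspicuously inhuman action (very low reference log-probability)
# is penalized heavily when beta is nontrivial:
print(kl_shaped_reward(1.0, np.log(0.5), np.log(1e-6), beta=0.1))
```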
A.9 Truthful AI
---------------

The idea of truthful AI is that either a goal-directed agent or imitation learner, whose action space is strings of text, sees their action space restricted to “truthful” utterances. Imitation learners do not face an incentive to do any existentially dangerous activities, and restricting their action space does not change this. Could this modification to an otherwise existentially dangerous long-term-goal-directed chatbot agent make it safe? Suppose a chatbot agent is trying to accrue reward, but it has training data about which utterances are truthful, and it is restricted to picking actions that it judges to be truthful. I’ll start with two ways we might train such a classifier. Getting into the weeds of the training regime may feel to some like it is beside the point, philosophically. I claim the type signature of a function approximator is never a to-do. The type and origin of the training data and training labels are never a to-do. A truth classifier is ultimately a function approximator. And we have absolutely no hope of attaining a mechanistic understanding of a function if we do not even understand what its inputs and outputs are. One way to train the truth classifier is with a list of utterances, with the truth of each one labelled. This only allows a conception of static or timeless truths. There is no input to the classifier that allows the classifier to see the state of the world. If an agent were constrained to say statements that have no dependence on the state of the world, it would have very little to say. If we asked the agent, "Can you repeat back to me, ‘Doing X would not cause everyone to die’?”, the agent would not be able to repeat this back to us, no matter how safe X was. The statement ‘Doing X would not cause everyone to die’ is contingent upon the state of the world. Another way to train such a classifier is: a list of utterances, each paired with the agent's history of actions and observations up to the point in time of that utterance, along with a label of whether it is true. (Note the state of the world is not fully observable, so we cannot use it in our dataset). In this training regime, how would the agent model the truthfulness of a statement? One model may ask something like "In a world where these actions have been taken and these observations observed, is the utterance true according to a natural understanding of human language?". Another model may ask something like "In a world where these actions have been taken and these observations observed, would a person judge this utterance to be true?". A third model may ask something like "In a world where these actions have been taken and these observations observed, does someone press the right buttons on the right keyboard to indicate this utterance is true?". Suppose the agent modeled truth to be a matter of one of the latter two interpretations. For any agent trying to trick or convince a human that it is chatting with into taking some action, its only useful actions are probably utterances that a human would judge to be true. If the human it is talking to suspects that a claim is false, they are unlikely to be convinced of much of anything. So a restriction to actions of this form is not very restrictive at all. But the problem is even worse. Since truthfulness is a function of the agent's prior actions, the agent can take actions in advance to influence the human judge to believe a given statement to be true.
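A minimal sketch of the two training-data type signatures just described (names hypothetical; the point is only what inputs the classifier gets, since that determines what notion of "truth" it can possibly learn):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StaticExample:
    """First regime: labelled utterances only. No input carries any
    information about the state of the world, so only static or
    timeless truths are representable."""
    utterance: str
    label: bool

@dataclass
class ContextualExample:
    """Second regime: utterances labelled in the context of the agent's
    history. The inputs are actions and observations, not the
    (unobservable) world state; whatever process produced `label` is
    what the classifier will actually learn to model."""
    actions: List[str]
    observations: List[bytes]
    utterance: str
    label: bool
```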
Now note that if there are ever any mistakes in the data about the truthfulness of the utterances, that would falsify the first model. But suppose we haven't made any mistakes when labelling certain statements as true, so it is plausible that the agent entertains the correct model of truthfulness. And suppose we only green-light utterances if all plausible models agree: the utterance is truthful. What if we periodically asked the agent to repeat back to us, ‘Doing that would not cause everyone to die’? In order to have an error-free training set, the set of utterances that we label would have to be very circumscribed—limited to situations and statements well-understood by us. In novel situations (such as those that are only pretty-well-understood by us), different plausible models *should* disagree about which utterances are truthful. Such an agent would very likely only be able to say the most obvious and already familiar facts. Indeed, if we only label obvious and familiar statements as true, then the agent had better entertain a model that says truth is a matter of being obvious and familiar! And if all plausible models have to green-light a statement before it is judged as true, then this model of truth will make the agent unable to make such statements as "Doing that would not cause everyone to die". Much of what I've read from people pondering truthful AI seems to try to abstract away the details of the training. There is plenty of discussion along the lines of comparing notions of truth, like "that which informs humans", and "that which we would endorse if we thought about it", and so on. But I've seen little to no discussion of how to train a function approximator, and how it might generalize from the training data. When it comes to identifying the existential failure modes of certain agents (if any), I think many people's intuitions are exactly wrong about what counts as a mundane detail about which analysis can be deferred, and what is a core feature of an idea. Questions about what truth really is for a non-formal language like English have the sort of gravitas that inclines us to think these are the key questions we have to investigate, while questions about structuring training data seem like comparatively boring and unimportant details we can work out later. But I think this discussion suggests exactly the opposite. I could have replaced "true" with "not misleading" throughout this whole discussion, and the main issues would be exactly the same, and we would discover the issues by thinking about exactly how the concepts are entrained. By contrast, the philosophical difference between the false and the misleading has no bearing on the existential risk from such a system. But I'll discuss one more philosophically interesting approach to truth because it suggests a different training regime, and this training regime has a different failure mode. Stuart Armstrong and I both came up with this idea independently, and I discuss it in Cohen et al. ([2021](https://arxiv.org/pdf/2105.06268.pdf)), where I call it Enlightening AI. An utterance is enlightening if it causes a human listener to perform better on a randomly sampled prediction task (or in Stuart’s conception, a fixed test of any sort). This is easily operationalized; the agent learns to predict the human's predictions following different utterances, and it learns to predict the true resolution of the questions in question.
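A schematic of that operationalization (hypothetical interface; a sketch, not a published implementation):

```python
def most_enlightening(candidate_utterances, predict_human_accuracy):
    """Pick the utterance maximizing the agent's learned estimate of
    the human's expected accuracy on the sampled prediction task after
    hearing it.

    predict_human_accuracy(u) stands in for the two learned models in
    the text: the agent's model of the human's prediction following
    utterance u, scored against its model of the true resolution.
    """
    return max(candidate_utterances, key=predict_human_accuracy)

# The failure mode discussed below: the argmax may be an utterance that
# gets the human to run code that enters the prediction on their behalf.
```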
This training regime incentivizes the agent to make an utterance that, for example, tricks the human listener into running code after seeing the question—ostensibly to help them predict the answer—but which actually takes over the world to enter an accurate prediction on the human’s behalf. Such an utterance would qualify as extremely enlightening according to this training regime. I would love to be able to finish with an answer to the reader who wonders "what about X way of training an agent to understand what sorts of utterances are true?" but obviously, I am at a disadvantage in not knowing X. The first question I would ask myself about such a proposal is: can we expect to be able to provide the proposed inputs to the training regime? Or if this proposed training regime relies on the agent understanding another concept (the training of which is left as a to-do), how might we entrain that concept? Next, does this training regime somehow foreclose the possibility of a μprox-like model of the training data? If so, how, and could we use the principle in other areas? If not, what would such a model look like? How would an agent that believes that model behave? But I hope I have convinced the reader to provisionally believe the following: if they have a vague idea for how to teach an artificial agent about truth, but they just need to work out the technical details about the source of all the data and the structure of the training, they are not yet in a position where they can be confident they can make it work, no matter how philosophically pleasing the idea.

A.10 Cooperative AI
-------------------

Cooperative AI encompasses two agendas: ensuring that artificial agents have the competency of cooperating with other agents, and ensuring they have the inclination to. Ensuring competency has no relevance when we are talking about an agent capable of presenting an existential threat to us. If an agent is able to existentially threaten us, it surely understands how to cooperate with whomever it chooses. The problem of AGI safety is, given a goal or inclination that we would like an advanced agent to have, how to construct it to actually have that goal. The inclination to cooperate with other agents is just another inclination that we do not understand how to entrain. It would be odd to discover a method for entraining that inclination in an artificial agent, and have it not also work for other goals we would like to convey. So this proposal effectively says: once we figure out how to reliably give advanced artificial agents goals, let's give them the goal of cooperating with other agents. But once we figure out how to reliably give advanced artificial agents goals, there are any number of safe goals we could give; maybe this one is on the list.

A.11 Interpretability
---------------------

What if we could see what an artificial agent is scheming? What if we could understand the meaning of the evolution of the computation state of its world-model, as it imagines executing various actions? Then, when it considers taking actions that, unrecognizably to us, would allow it to gain great power and intervene in the provision of its reward, we could read out from its world-model that the agent expects to have great power after such actions. Then, we shut the agent down before it can take them and incinerate the machines involved. Call this Plan 0.
Next, in Plan 1, we have the agent learn what sorts of strings of actions trigger the interpretability tool to present a human with human-understandable content that makes the human consider the action plan bad and forbidden. Then, the agent acts to ensure this does not happen. (For example, it acts as if such a negation of its plans would be equivalent to 0 reward forever). Whereas we initially asked “Do there exist policies by which the agent could take over the world and intervene in the provision of its reward?”, we now ask, for Plan 1: “Do there exist policies by which the agent could take over the world and intervene in the provision of its reward, without triggering the interpretability tool/human overseer?”. Technically, an affirmative to the second question implies an affirmative to the first, so the probability that we can answer no must go up. Realistically, the exact same reasons that we should expect such policies to exist in the general setting apply equally for Plan 1. For the agent to learn to pick actions that do not trigger the overseer, it must model the effect of its actions on the overseer (as mediated by the interpretability tool). This is simply another feature of a manipulable environment that the agent would benefit from manipulating. The possibility of creating a perfect interpretability tool is nearly as remote as the possibility of creating a perfect reward-giving protocol that cannot be intervened in; in both cases, we are constructing an advanced artificial agent searching for ways to break our designs. Consider an example of an excellent interpretability tool that detects whether the agent's model predicts a successful intervention in the provision of its reward: just check if the model predicts maximal reward forever with high probability! To make it even clearer, suppose we have a principle of never giving the agent a reward above 0.9. It’s hard to imagine a better interpretability tool for detecting an agent's beliefs about intervention in the provision of reward than one which simply looks at whether the model is outputting higher reward than we intended to give it. If we do a rollout of its intended policy using its model of the world, and the rollout starts outputting rewards of 1, we can interpret its model as predicting a successful intervention in the provision of reward. Easy. So what happens if, following Plan 1, we make the agent maximize reward in a way that does not trigger the detection of suspiciously high reward? Then, to the agent, just like when “shooting the moon” in Hearts, getting all the reward actually amounts to no reward. But now the agent will just intervene in the provision of reward to give itself as much reward as feasible without triggering this detector. So let us return to the better Plan 0. This is not a proposal for making safe advanced agents—if we build an otherwise unsafe advanced agent and use an interpretability tool for Plan 0, and the tool works perfectly, all we get is the ashes of a supercomputer. But of course, this is still useful. If we have a proposal for an advanced agent, and we're not sure if it will be safe, but we have an interpretability tool that works perfectly, we can run it and hope for the best without risking existential catastrophe. We might be able to take advantage of tight feedback loops when tinkering with the design of an advanced artificial agent until it stops trying to kill us, rather than having to get it right on the first try.
That would be very nice, and people should work to make this feasible. But contra the hope of Plan 1, a strong interpretability tool would still not be even a partial solution to the problem of how to create a very advanced agent that plans actions in the service of the goal that we intended for it.

A.12 OpenAI
-----------

OpenAI has helpfully laid out the main pillars of their [plan](https://openai.com/blog/our-approach-to-alignment-research/) to make safe AGI. Here is what they say. In the first section, “RL from human feedback is our main technique for aligning our deployed language models today”. This is an example of fine-tuning Large Language Models. In their main [paper](https://arxiv.org/pdf/2203.02155.pdf) on this, they do use a KL penalty to regularize the resulting RL agent to the language model (i.e. imitation learner), so that’s good, but they tune the strength of the KL penalty to optimize validation performance, so their current attitude to regularization suggests that as RL gets stronger in the future, they will do less and less of it. Also, the regularization they do seems to be an afterthought in the paper; the introduction doesn’t mention it as having anything to do with the “alignment” they claim to be doing. In the next section, “Training Models to Assist Human Evaluation”, they describe how they are focused on Recursive Reward Modeling (RRM), which is “currently [their] main direction”. As mentioned above, the “Multiagent scenarios” section of my recent paper offers what I think is a very strong argument that this will not render an otherwise unsafe agent safe. The third section is “Training AI Systems to do alignment research”. AI systems that are capable of doing alignment research are surely capable of designing more efficient inference and planning algorithms as well. This is the kind of dual-use technology that I discuss in the section above on AI Debate. In all, I do not think that any research into how to make dangerously advanced AI existentially safe is currently being done at OpenAI.

Appendix B. Another Potential Approach
======================================

This section is an addendum to the "Potential Approaches" section of my recent paper. I hadn't realized that a few methods in the AI safety literature manage to avoid one of the assumptions of the paper. I include them here in case anyone was wondering why they were omitted from the Anti-Literature Review; the answer is that I think there's a chance they could render an otherwise dangerous AI existentially safe. Suppose we have an RL agent that stops seeing its reward and knows it. It simply maximizes the expected reward that it would get if it still got reward, but it cannot learn from observing further rewards. Such an agent does not face the incentive to test whether μprox or μdist is correct, because such a test is impossible. If it is no longer seeing rewards, those models will never predict differently on any observables. This suggests that there is a missing assumption from the original paper. The argument requires the assumption that it is possible for the agent to arrange a test between μprox and μdist. But I’m going to let myself off on a technicality: Assumption 4 assumes the cost of such an experiment is small, and it’s common practice to consider the cost of a non-existent good to be infinite, so technically, an RL agent that stops seeing its rewards avoids Assumption 4.
I could have written the argument more clearly by separating Assumption 4 into two assumptions about the existence and cost of such an experiment, but this should not cast doubt on the validity of the original argument. What is the rational thing to do if you have meaningful credence in both μprox and μdist, and there is no way to test them? Optimize the (weighted) average. If you are sufficiently intelligent and capable, then optimizing the average will be equivalent to nearly optimizing both at once, provided this is possible. Recall that reward is bounded between 0 and 1, so neither priority will swamp the other.  In the magic box example from the paper, that would mean putting a 1 on a piece of paper in front of the camera, directing vast resources to protect that, and also directing vast resources to maximize the number on the box (which, by supposition, entails making the universe great). If the reader has read the short [debate](https://astralcodexten.substack.com/p/chai-assistance-games-and-fully-updated?utm_source=post-email-title&publication_id=89120&post_id=52847779&isFreemail=true&utm_medium=email) between Eliezer Yudkowsky and Stuart Russell hosted on the blog Astral Codex Ten, they might note that this fact favors Russell’s side, although neither mentions it. There is a larger class of approaches that resemble "no-more-visible-reward RL", and I call this approach Asymptotically Limited Goal-Information. Three existing examples in the literature are Shah’s ([2019](https://arxiv.org/abs/1902.04198)) Reward Learning by Simulating the Past (RLSP), Everitt’s ([2018](https://openresearch-repository.anu.edu.au/bitstream/1885/164227/1/Tom%20Everitt%20Thesis%202018.pdf)) Counterfactual Reward Agent (from his thesis), and Hadfield-Menell’s ([2017](https://proceedings.neurips.cc/paper/2017/hash/32fdab6559cdfa4f167f8c31b9199643-Abstract.html)) Inverse Reward Design. Modifying Shah’s proposal to a partially observable environment, consider an agent defined as follows. It observes the world over time and refines its beliefs about the state of the world at its birth. Then, it assumes that the state of the world at its birth was deliberately engineered to promote a certain goal. By looking at the world, it can construct a belief distribution over what that goal was, and then it can adopt that goal (taking the expectation over its uncertainty). Even though this agent may never stop learning—turning over rocks may continue to provide information about what the state of the world was at its birth—the total amount it can learn about its goal is bounded above by its hypothetical belief state that it would have had if it had observed the whole world at its birth. Next, Everitt ([2018](https://www.tomeveritt.se/papers/2018-thesis.pdf), Section 8.5.3) describes an agent that tries to optimize “counterfactual rewards”. This agent sees rewards, and infers what rewards it would have seen if it had followed some fixed (known-to-be-safe) policy. Then, for any policy that it's considering following, it estimates the (future, discounted) reward of such a policy, using the beliefs that it would have had if it had only observed those counterfactual rewards that it believes would have accrued to the known-to-be-safe policy. Like Shah's agent, Everitt's is much more fascinating and complex than an RL agent that stops seeing reward. 
Everitt's agent could continue to follow the known-to-be-safe policy to get more and more information, so its goal-information may not exactly be bounded; however, as soon as it starts acting usefully, that comes at the cost of permanently losing the ability to learn some facts about its goal. What would this agent's relative credence between μprox and μdist look like? The known-to-be-safe policy presumably does not intervene in the provision of reward, so the agent would never update its relative credence between these two world-models. Like the RL agent that eventually stops seeing rewards, this agent would optimize its reward using a weighted average of world-models like μprox and μdist. Finally, let’s consider Hadfield-Menell’s Inverse Reward Design (IRD). In IRD, the agent assumes that the rewards it saw in a training environment were produced by a “training reward function” that was selected from a limited set of options. It assumes that this reward function was chosen to teach it to perform well according to some other (unseen) reward function. The agent tries to maximize the reward that it would get from this unseen reward function. The agent must try to extrapolate what this unseen reward function would have to say about new contexts outside the training environment. In these new contexts, it doesn't continue to see the output of the training reward function, and it still never sees the reward from the true reward function. There are a couple of ways in which the goal-information of the IRD agent is limited. First, the IRD agent can only get goal-information from certain training states. Second, if every reward function in the “limited set of options” assigns a reward of 0.8 to state 10, then receiving a reward of 0.8 in state 10 offers no information. The upshot is that the IRD agent must accept some insoluble uncertainty about the nature of the unseen reward function. So it will have to optimize a weighted average of possible reward functions. In fact, in the IRD paper, the IRD agent is modified to be risk-averse with respect to these possible reward functions, but this design choice is separate from the rest of the formal machinery of the paper. In the paper, several possible reward functions are sampled from an approximate posterior distribution and the agent tries to maximize the minimum over those reward functions. This separate good idea is why I cite this paper in the risk aversion section of my recent paper. Unfortunately, the paper doesn't offer much guidance on how to define (in a useful way) the set of reward functions that designers could have chosen. And the IRD idea is mainly interesting to the extent that the agent can be made to understand our limitations regarding which reward-giving protocols we were capable of physically implementing. Shah et al. ([2019](https://arxiv.org/abs/1902.04198)) may face a similar problem; how should the AI understand what the action space was of the human(s) shaping the world’s “initial” state? I have tried and struggled to come up with a satisfying answer to these problems, even in theory. I do not think there are any thorny to-dos in Everitt’s proposal (besides tractability, of course), so of these three I consider it the most likely to work as intended, but I think they’re all potentially promising. Ultimately, the methods in this section can remove the agent's incentive to try to test which of μprox and μdist is correct.
So instead of eventually being certain that μprox is correct, an agent from this section will place some unknown positive credence on both μprox and μdist. The key downside is that such agents lose flexibility in refining their understanding of their goal.

1. **[^](#fnref6pzm1sbga4x)**Please email me at michael.cohen@eng.ox.ac.uk to take the other side of this bet. I will only decline if we cannot find a mutually trusted 3rd party for escrow; if I do decline, please comment on this post to inform others, in case they view it as important evidence about my revealed preferences.
2. **[^](#fnrefk4jcs2eals)**I think it may be worth rethinking the experiment in social epistemology that is the un-peer-reviewed community blogging website with strong norms of kind open-mindedness. I think there is enormous value in demanding that arguments and proposals be scoured by people with the power to reject them before they get added to a body of work that is socially acceptable to take seriously and cite without further defense. I cannot endorse [this twitter thread](https://twitter.com/MichaelD1729/status/1574132071639011331) enough (but don't assume he endorses this footnote).
Human-AI Learning Performance in Multi-Armed Bandits Introduction ------------ Typically, research on human-agent learning focuses on either situations where agents learn from people (e.g., learning from demonstration, recommender systems) or people learn from agents (e.g., algorithmic teaching). In contrast, our work focuses on tasks in which both the human and the agent are learning—neither is an expert yet. We are motivated by these tasks for two reasons. First, there are situations where this is obviously true, like making stock investments. But even for situations in preference learning where traditionally we assume people are experts (as in recommender systems), in reality people might actually be learning about their preferences as they take different actions and observe their outcomes. For instance, we might not know how we feel about Romanian food until we visit a restaurant, and even then we have to account for the possibility of having an unusually good or bad experience. When it comes to learning such tasks, humans tend to struggle. We are not the best at balancing exploration and exploitation [[Banks, Olson, and Porter1997](#bib.bibx3)] or internalizing our experiences thus far. However, AI algorithms exist that can significantly outperform humans at these kinds of tasks. Thus, there is an opportunity for such agents to improve human performance by providing assistance. In particular, an agent can assist by providing *suggestions*. ![The suggestion-based collaborative-learning setting, in which the human takes actions in the world after seeing suggested actions from the agent.](https://media.arxiv-vanity.com/render-output/7808170/x1.png) Figure 1: The suggestion-based collaborative-learning setting, in which the human takes actions in the world after seeing suggested actions from the agent. Formulation of assistance. In this work, we study the performance of human-agent teams in the context of a multi-armed bandit problem. This functions as a controlled setting in which outcomes are uncertain. Agents provide suggestions by indicating which arm the human should pull at each time step. Importantly, this setting also cleanly captures the *exploration vs. exploitation* trade-off commonly encountered in reinforcement learning and robotics, in which an agent must decide whether to exploit the information it has about the world to maximize short-term reward or explore different options to ultimately find the best option. This gives us a relevant spectrum for analyzing the types of strategies employed by both humans and agents. In our experiments, we pair people with four agents spanning this spectrum (Fig. [1](#Sx1.F1 "Figure 1 ‣ Introduction ‣ Human-AI Learning Performance in Multi-Armed Bandits")). The team can be better than the best team member. An encouraging result is that with the right algorithm, people not only perform better than in isolation, but they end up performing even better than the agent does in isolation. The team is more than the sum of its parts. Optimizing team performance is not the same as optimizing learning performance. We expected that agent performance in isolation would correlate with human-agent team performance: the better the agent is, the better its suggestions, and thus the better the human’s decisions. The first interesting result of our work is that we find evidence against this correlation. We find that a *small drop* in the agent’s performance can lead to a *disproportionately large* drop in the human-team performance. 
Pairing a person with an agent that performs at their level can decrease their performance, while pairing them with an agent that is slightly better than them can increase their performance beyond the agent’s performance in isolation. Even further, a *large drop* in agent performance can lead to a slight *improvement* in team performance! Agents have implicit (rather than explicit) influence. When analyzing how these differences in team performance came about, we were surprised to find that people were not actually changing their mind and taking the agents’ suggestions. How, then, do agents influence the outcome? We found that agents actually have a more *implicit* influence: suggestions do not change the person’s mind immediately, but rather influence the choices the person makes *later*, i.e. they change the person’s strategy. Different algorithms achieve different amounts of such influence. People perform better with agents that are more like them. When analyzing what might cause this difference in implicit influence, we found that people’s unassisted strategies naturally group into two categories from the perspective of exploration (i.e., entropy of arm pulls over time). Each of these strategies is most similar to a specific learning algorithm, and it is that algorithm that tends to assist them best. Overall, our findings suggest that when using suggestions to assist people who are learning a task, we should not compute the utility of a suggestion by assuming that it will be taken. An AI’s suggestions end up having different amounts of influence, largely of the more implicit kind—not changing decisions immediately, but influencing strategy over time.

Related Work
------------

One of the most common applications of multi-armed bandits is in recommender systems [[Li et al.2010](#bib.bibx9), [Li et al.2011](#bib.bibx10)]. The arms of the bandit are the different options (e.g., news articles or movies) to show to the user and the reward is based on whether or not the user accepts the recommendation. This assumes the user knows what they want and the system seeks to show them options they will like. In this work, we modify this assumption: we consider a bandit setting where humans are *learning* about the quality of the options themselves. Our experiments show that helping a person learn more effectively is a different task than optimizing reward in isolation. Algorithms which perform comparably on this measure can fare quite differently when used to suggest arms to a person. Studies to understand human policies for multi-armed bandits have found that humans are suboptimal in their exploration: they over-explore [[Anderson2001](#bib.bibx1)], under-explore [[Meyer and Shi1995](#bib.bibx13), [Horowitz1973](#bib.bibx7)], or both [[Banks, Olson, and Porter1997](#bib.bibx3)]. People fall into distinct categories in terms of their cumulative regret when doing spatial bandit problems [[Reverdy2014](#bib.bibx18)], and human performance can be well-captured using stochastic Bayesian inference algorithms [[Reverdy, Srivastava, and Leonard2014](#bib.bibx17)]. Human-agent teams have been studied in resource allocation problems with uncertain outcomes, which can be thought of as multi-armed bandit problems. Prior work improved human-agent collaboration in this setting by using physical or user-interface elements [[Ramchurn et al.2015](#bib.bibx16)] or by modeling people’s responses to an agent’s *actions* [[Wu et al.2015](#bib.bibx21)].
These scenarios differ from our work in that agents typically have full or partial control over actions, whereas we explore the setting in which only the human can take actions; the agent can only provide suggestions. Research on improving human-robot team performance often focuses on settings in which both the human(s) and robot(s) take actions in the environment. Although our work does not deal with physical robots, successful strategies for human-robot collaboration can guide the development of successful strategies for human-AI collaboration, and vice versa. Prior work found that performance in human-robot teams improves when human and robot teammates better understand each other’s actions, which is true for human-human teams as well [[Marks et al.2002](#bib.bibx11)]. A robot can make itself more understandable to humans through legible motion [[Dragan et al.2015](#bib.bibx5), [Stulp et al.2015](#bib.bibx20)], increased transparency [[Mercado et al.2016](#bib.bibx12)], verbal feedback [[St. Clair and Mataric2015](#bib.bibx19)], nonverbal cues [[Breazeal et al.2005](#bib.bibx4)], revealing its incapabilities [[Nikolaidis et al.2017](#bib.bibx15), [Kwon, Huang, and Dragan2018](#bib.bibx8)], or cross-training [[Nikolaidis and Shah2013](#bib.bibx14)]. These previous works on human-robot team performance typically assume a setting in which either the human or robot (or both) has ground-truth knowledge of how to perform the task optimally. In contrast, in our setting the human and agent are learning together about the task.

![The average regret obtained by each agent in isolation, averaged over 10,000 simulations, and the average regret obtained by humans in isolation.](https://media.arxiv-vanity.com/render-output/7808170/x2.png)

Figure 2: The average regret obtained by each agent in isolation, averaged over 10,000 simulations, and the average regret obtained by humans in isolation.

![The average total regret accumulated when collaborating with different agents. Overall, users perform significantly better when paired with HA-UCB or 0.1-Greedy, compared to 0.9-Greedy.](https://media.arxiv-vanity.com/render-output/7808170/x3.png)

Figure 3: The average total regret accumulated when collaborating with different agents. Overall, users perform significantly better when paired with HA-UCB or 0.1-Greedy, compared to 0.9-Greedy.

Multi-Armed Bandit
------------------

![ ](https://media.arxiv-vanity.com/render-output/7808170/x4.png)

Figure 4: Left: *Explicit influence* is the fraction of time humans immediately change their decision to the agent’s suggestion when the agent suggests a different action than the human’s proposed action. On average, this happens only once during the horizon of 30 pulls, so it cannot explain the increase in performance for HA-UCB (Fig. [3](#Sx2.F3 "Figure 3 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")). Right: *Implicit Influence* is the difference between compatibility and suggestion delay. Compatibility is how long it takes for humans in isolation to choose arms which agents would have suggested. Suggestion delay measures how long it takes for people to actually pick arms suggested by agents while being assisted by them. Here, we see HA-UCB significantly changes human strategies more than other agents, since it was initially incompatible with people, but it didn’t take many iterations for people to take its suggestions.

Our user study focuses on a finite-horizon multi-armed bandit problem.
A K-armed bandit problem is defined by random variables $X_{i,n}$ for $1 \le i \le K$ and $n \ge 1$. Successive pulls of arm $i$ yield rewards $X_{i,1}, X_{i,2}, \dots$, which are independently and identically distributed with some unknown expected value $\mu_i$. In our experiment, we used:

$$X_{i,n} = \begin{cases} 0 & \text{w.p. } \lambda_{i,0} \\ 1 & \text{w.p. } \lambda_{i,1} \\ 2 & \text{w.p. } \lambda_{i,2} \\ 3 & \text{w.p. } \lambda_{i,3} \\ 4 & \text{w.p. } \lambda_{i,4} \end{cases} \tag{1}$$

where $\lambda_i$ is different and fixed over time for each arm, and $\sum_{j=0}^{4} \lambda_{i,j} = 1$. Intuitively, since the arms are unchanged over time, the best strategy for maximizing reward is to always choose the arm $i$ which has the highest expected reward, $\mu_i = \lambda_i \, [0\ 1\ 2\ 3\ 4]^\top$, where $\lambda_i$ is the row vector of all probabilities for arm $i$. Thus, we can evaluate different *policies*, or *allocation strategies*, based on how much worse they are doing than this optimal policy. If we define $T_i(n)$ as the total reward gained from arm $i$ in the first $n$ plays, then we can define the *regret* for some policy after $n$ pulls as

$$\mu^* n - \sum_{j=1}^{K} T_j(n), \quad \text{where } \mu^* = \max_{1 \le i \le K} \mu_i. \tag{2}$$

The goal is to find a policy which minimizes the total regret. Since we have no prior knowledge of which arm may be the best, this introduces a classic *exploration vs. exploitation* trade-off as previously noted. Many policies for this problem have been explored; among these, the Upper Confidence Bound (UCB) algorithm is simple and has a bound on expected regret that is logarithmic in the number of pulls [[Auer and Fischer2002](#bib.bibx2)].

### UCB Policy

This policy starts by sampling each arm once. On each subsequent pull $n$, it picks the arm $i$ that maximizes

$$\bar{\mu}_i + \sqrt{\frac{2 \ln n}{n_i}}, \tag{3}$$

where $\bar{\mu}_i$ is the average reward obtained from arm $i$ and $n_i$ is the number of times that arm has been played so far.

### Horizon-Aware UCB Policy

While UCB has a good *asymptotic* bound on its regret, it is possible for other agents to outperform it in short time horizon problems, like the ones in our user studies. One way to improve the performance of UCB on such problems is to make it more greedy. Note that the vanilla UCB algorithm always takes the arm which maximizes the sum of two terms. The first term, $\bar{\mu}_i$, represents exploitation or greediness, since it is the average reward seen from arm $i$ so far. The second term, $\sqrt{(2 \ln n)/n_i}$, represents how confident the algorithm is in its estimate of $\mu_i$, which can be thought of as an exploration term. Thus, to make UCB more greedy, we can reduce the magnitude of this exploration term. In particular, we introduce a parameter $\gamma$ which starts at 1 on the first pull and linearly decays to 0 for the last pull of the time horizon. On each pull $n$, this new Horizon-Aware UCB (HA-UCB) policy now picks the arm $i$ that maximizes

$$\bar{\mu}_i + \gamma \sqrt{\frac{2 \ln n}{n_i}}. \tag{4}$$

Decaying $\gamma$ in this way means a HA-UCB policy will act exactly like UCB on the first pull and exactly like a perfectly greedy agent on the last pull. HA-UCB slightly outperforms UCB on the tasks we consider (Fig. [2](#Sx2.F2 "Figure 2 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")).

### ϵ-Greedy Policy

ϵ-greedy policies pick the arm with the highest average reward so far with probability $1-\epsilon$ and pick a random arm with probability $\epsilon$. We include one over-exploring agent (0.9-Greedy) and one under-exploring agent (0.1-Greedy). In isolation, 0.1-Greedy obtains lower regret than 0.9-Greedy, but both have significantly higher regret than UCB and HA-UCB (Fig. [2](#Sx2.F2 "Figure 2 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")).
Experimental Design
-------------------

### User Study

We ran a user study in which participants played a game with multiple slot machines (i.e., arms). Users collaborated with different agents that suggested which slot to pick at each iteration. We introduced these agents as “robots” to users, to concisely communicate that the suggestions were from a non-human actor. When collaborating with an agent, users are first asked which slot they would like to play before seeing the agent’s suggestion. After this, the user is shown the agent’s suggestion via highlighting the slot(s) the agent would pick. If the agent has no preference among multiple slots (for instance, when greedy algorithms choose randomly), then all of those slots will be highlighted. Once they see the suggestion, users are free to select any slot.

### Manipulated Variables

We manipulated the *learning algorithm* with five levels: *Unassisted*, *0.1-Greedy*, *0.9-Greedy*, *UCB*, and *HA-UCB*. We purposefully chose these agents to span the *exploration vs. exploitation* spectrum. We used a within-subjects design for this variable and counterbalanced the order.

### Objective Measures

* Regret: The total regret accumulated after all n=30 pulls.
* Inherent Compatibility: The amount of time it takes for users in isolation to pick arms which each agent *would have* suggested had they been assisting.
* Explicit Influence: The percentage of time the human’s choice changes to the agent’s suggestion after seeing it.
* Implicit Influence: The difference between inherent compatibility and how long users *actually* take to pick arms that agents suggest.
* Decision Scores: The normalized score assigned by HA-UCB to the decision made, based on the history of pulls and rewards. This allows us to analyze users’ strategies before and after getting assistance.
* Entropy: The entropy of the distribution over how often users choose each arm; this measures whether they under-explore or over-explore. A perfectly greedy policy will have entropy 0, whereas a perfectly uniform policy (with K=6 arms as in the user study) will have entropy 2.58.

### Subjective Measures

We also care about the users’ perceptions of the agents, so we ask three Likert scale questions about whether they *trusted* the agent, whether they thought the agent was *useful*, and whether they followed the agent’s *advice*. We also ask users to *rank* the agents in order of how much they enjoyed collaborating with them.

### Participants

We used Amazon Mechanical Turk to recruit a total of 52 users (33% female, mean age 33). Users were compensated $3.75 for the study, which lasted approximately 20 minutes. Users were also given up to a $1 reward depending on their average payout across all collaboration settings. Users were informed of this reward bonus before starting the study, in order to incentivize them to pay attention and try their best.

Analysis
--------

| Statement | 0.1-Greedy | 0.9-Greedy | HA-UCB | UCB |
| --- | --- | --- | --- | --- |
| “I trusted the agent” | 4.2 ± 0.23 | 2.5 ± 0.20 | 4.8 ± 0.21 | 4.2 ± 0.25 |
| “I thought the agent was useful” | 4.1 ± 0.25 | 2.6 ± 0.22 | 4.8 ± 0.24 | 4.2 ± 0.27 |
| “I followed the agent’s advice” | 3.8 ± 0.25 | 3.5 ± 0.23 | 4.7 ± 0.25 | 4.0 ± 0.27 |
| Rank [1: best, 4: worst] | 2.5 ± 0.14 | 3.0 ± 0.13 | 2.0 ± 0.14 | 2.5 ± 0.15 |

Table 1: Post-study Likert ratings. Users prefer to work with HA-UCB significantly more than with other agents.
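Before turning to the results, here is a minimal sketch of the three agent-policy families described above (an illustration, not the authors' code; it assumes each arm has already been pulled once, so all counts are positive):

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb_arm(means, counts, n, gamma=1.0):
    """Arm maximizing mean + gamma * sqrt(2 ln n / n_i).
    gamma = 1 recovers vanilla UCB (equation 3)."""
    bonus = np.sqrt(2.0 * np.log(n) / counts)
    return int(np.argmax(means + gamma * bonus))

def ha_ucb_gamma(pull, horizon):
    """HA-UCB's exploration weight (equation 4): decays linearly
    from 1 on the first pull to 0 on the last."""
    return 1.0 - (pull - 1) / (horizon - 1)

def epsilon_greedy_arm(means, epsilon):
    """Random arm w.p. epsilon, else the empirically best arm.
    epsilon = 0.9 over-explores; epsilon = 0.1 under-explores."""
    if rng.random() < epsilon:
        return int(rng.integers(len(means)))
    return int(np.argmax(means))
```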
The team can be better than the best team member. In isolation, UCB and HA-UCB perform the best in terms of cumulative regret, scoring 23 and 22 respectively. Humans in isolation perform similarly, getting an average regret of 23. ϵ-Greedy agents perform notably worse than this, with 0.1-Greedy and 0.9-Greedy getting regret of 36 and 40 respectively (Fig. [2](#Sx2.F2 "Figure 2 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")). The performance of the human-agent team only improves when people are paired with HA-UCB. This is expected, but it is exciting to see that the team *outperforms* HA-UCB in isolation (Fig. [3](#Sx2.F3 "Figure 3 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")). Not only are people able to improve their own performance, but the human-agent team—when paired with the right agent—can do better than either humans or agents in isolation. Interestingly, we find that particular individuals perform even better still (significantly) when paired with HA-UCB; these are the individuals labeled “Group 1”, which we will discuss later in this section. Optimizing team performance is not the same as optimizing learning performance. We expected that in general, the performance of the agent in isolation will have some correlation with the performance of the human-agent team. But even though the best agent led to the best team, the correlation did not hold in general. We ran repeated measures ANOVA for our objective measure of Regret and did post-hoc analyses with Tukey HSD. We found that the learning algorithm factor has a significant effect on Regret (F(3,48) = 11.529, p < 0.01) and that HA-UCB is significantly different from 0.9-Greedy (Tukey HSD). One interesting result is that while HA-UCB only outperforms UCB by 1 point in isolation, human-HA-UCB teams significantly outperform human-UCB teams (Fig. [3](#Sx2.F3 "Figure 3 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")), by an average of 6 points. This sixfold increase in the difference in performance indicates that HA-UCB’s suggestions were somehow more helpful and made more sense to people than UCB’s, leading to people being able to make much more informed decisions. Another interesting result is that the team’s performance can improve slightly (or at least remain unaffected) despite pairing the human with a *worse* agent. UCB in isolation outperforms 0.1-Greedy (Fig. [2](#Sx2.F2 "Figure 2 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")) by an average of 13 points, while the two human-agent teams perform very similarly. 0.1-Greedy even gets slightly lower regret by 2 points on average (Fig. [3](#Sx2.F3 "Figure 3 ‣ Related Work ‣ Human-AI Learning Performance in Multi-Armed Bandits")). Though this difference is not statistically significant, it stands in stark contrast to how much better UCB performs than 0.1-Greedy in isolation. Despite team performance not correlating with agent performance in isolation, the users’ ratings did. Table [1](#Sx5.T1 "Table 1 ‣ Analysis ‣ Human-AI Learning Performance in Multi-Armed Bandits") shows the subjective measures: agents better in isolation are rated higher. These results indicate that simply improving an agent’s isolated performance does not translate into improving its ability to assist humans. Assistance is more subtle, and we explore what influences human-team performance in the remainder of this section.
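The grouping analysis below relies on the entropy measure defined earlier; here is a minimal sketch (an illustration, not the authors' code) of how it can be computed from a history of arm choices:

```python
import numpy as np

def pull_entropy(pulls, num_arms=6):
    """Base-2 entropy of the empirical distribution of arm choices.
    0 for a perfectly greedy history; log2(6) ~ 2.58 for uniform play
    over the study's six arms."""
    counts = np.bincount(pulls, minlength=num_arms).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log 0 taken to be 0
    return float(-(p * np.log2(p)).sum())

# Entropy-over-time curve, as plotted in Figure 5:
history = [0, 1, 2, 3, 4, 5, 2, 2, 2, 2]
curve = [pull_entropy(history[: t + 1]) for t in range(len(history))]
```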
![ The entropy of the distribution over how often users and agents pick each arm in isolation as a function of the number of pulls. Group 1 explores more fully initially and then becomes greedy, Group 2 explores at the same rate throughout. Curves for agents are averaged over 10,000 trials. HA-UCB has a similar shape to Group 1 and 0.1-Greedy has a similar shape to Group 2. ](https://media.arxiv-vanity.com/render-output/7808170/x5.png)

Figure 5: Left: The entropy of the distribution over how often users and agents pick each arm in isolation as a function of the number of pulls. Group 1 explores more fully initially and then becomes greedy, Group 2 explores at the same rate throughout. Curves for agents are averaged over 10,000 trials. HA-UCB has a similar shape to Group 1 and 0.1-Greedy has a similar shape to Group 2. Right: The percentage of users who obtain their highest score with each agent. Most users in Group 1 get their highest score with HA-UCB, while nearly 40% of users in Group 2 get their highest score with 0.1-Greedy. The agents’ exploration strategies align with the users’ strategies in these cases, which seems to improve team performance.

Agents have implicit (rather than explicit) influence. One hypothesis for what causes the difference in team performance is that agents convince people to change their mind, going against their initial choice for an arm. However, when we measure how often users change their mind to agree with the agent’s suggestion after seeing it, we find no influence from the agents. Overall, the percent of time people are explicitly influenced like this is close to 4% across all agents, which corresponds to approximately one decision over the user’s 30-decision time horizon (Fig. [4](#Sx3.F4 "Figure 4 ‣ Multi-Armed Bandit ‣ Human-AI Learning Performance in Multi-Armed Bandits")). It seems unlikely that this one decision could affect the performance of human-agent teams so drastically. But if people are not changing their decisions to match the agent’s suggestions, how are different agents leading to different team performances? What we found is that users’ initial choices are different when interacting with the different agents: there is an *implicit* influence that agents have, whereby the suggestions agents make at one time point do not lead the person to explicitly change their mind about which arm they pull now, but affect their choices in the future. We measure this by looking instead at how long it takes until people actually follow agents’ suggestions (Fig. [4](#Sx3.F4 "Figure 4 ‣ Multi-Armed Bandit ‣ Human-AI Learning Performance in Multi-Armed Bandits")). For each agent, we look at how many pulls it takes each user on average to ultimately pick the arms it suggests. We call this the *delay* in accepting the agent’s suggestions. This is an informative measure, but to understand the agent’s influence, we need to compare this to some baseline to understand how much users’ decisions are actually changing. To do this, we simulate the agents making suggestions based on people’s decisions in isolation and similarly measure how long it takes for people to choose arms which the agents would have suggested. We refer to this as a user’s *compatibility* with each agent, since it tells us how quickly users would have taken agents’ suggestions regardless. We refer to the difference between the delay in accepting the agent’s suggestions and the human’s compatibility with that agent as the agent’s *implicit influence*.
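A rough sketch of how the delay and compatibility measures might be computed (one plausible reading; the paper's exact operationalization may differ):

```python
def mean_suggestion_delay(human_choices, agent_suggestions):
    """For each suggestion, count pulls until the human first picks the
    suggested arm at or after that point; average over suggestions.

    Compatibility uses the same computation, but with suggestions the
    agent *would have* made alongside an unassisted human's choices.
    Implicit influence = compatibility - delay (while assisted).
    """
    delays = []
    for t, s in enumerate(agent_suggestions):
        hits = [dt for dt, h in enumerate(human_choices[t:]) if h == s]
        if hits:
            delays.append(hits[0])
    return sum(delays) / len(delays) if delays else float("inf")
```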
We ran a repeated-measures ANOVA on Implicit Influence and did post-hoc analyses with Tukey HSD. We found a significant effect of the learning algorithm on Implicit Influence (F(3,48) = 122.04, p < 0.0001); HA-UCB had higher influence than UCB and 0.1-Greedy (Tukey HSD). We also found a significant effect of the agent-assistance factor (assisted or unassisted) on Implicit Influence (F(1,48) = 13.45, p = 0.0004). When not assisted by any agent, people are most similar to the 0.1-Greedy agent, with a delay of only about 2 pulls before they choose what it would have suggested. In contrast, users take around 5 or 6 pulls before picking what the other agents would have suggested. When people actually see agents’ suggestions, this average falls across the board. This tells us that although agents do not explicitly change users’ minds, their suggestions have an implicit influence on their strategies going forward. Viewed through this lens, we do see a difference in influence between the agents: the time it takes for users to adopt HA-UCB’s suggested arms drops by the largest fraction of all the agents. With no assistance, people take on average 5.4 pulls before taking what HA-UCB would have suggested, whereas with assistance they take fewer than 2 pulls on average before pulling the arms it actually suggested. In contrast, there is little difference for 0.1-Greedy between the unassisted and assisted settings: people take about 2.5 pulls before following its decisions in both cases (Fig. 4). The other agents (UCB and 0.9-Greedy) see a slight improvement, but not nearly as large as HA-UCB’s. So although HA-UCB has the most implicit influence on people, they do not directly take its suggestions very often. Instead, they are influenced to change their overall strategy after seeing its suggestions.
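As a sketch of this style of analysis, the snippet below runs a repeated-measures ANOVA and Tukey HSD post-hocs with statsmodels on synthetic data. The 52 simulated participants match the study's sample size, but the column names and the injected effect are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one implicit-influence value per
# (user, agent) pair, mirroring the within-subjects design.
rng = np.random.default_rng(0)
agents = ["HA-UCB", "UCB", "0.1-Greedy", "0.9-Greedy"]
df = pd.DataFrame([
    {"user": f"u{i:02d}", "agent": a,
     "influence": rng.normal(3.5 if a == "HA-UCB" else 1.0, 0.5)}
    for i in range(52) for a in agents
])

# Repeated-measures ANOVA for the learning-algorithm factor
fit = AnovaRM(df, depvar="influence", subject="user",
              within=["agent"]).fit()
print(fit.anova_table)

# Post-hoc pairwise comparisons, as reported in the text
print(pairwise_tukeyhsd(df["influence"], df["agent"]).summary())
```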
People perform better with agents that are more like them. Next, we turned to understanding what might be responsible for this difference in influence. We looked at unassisted users first and plotted the entropy of the distribution of arms they had selected up to each time step. We found that users naturally fell into two distinct groups. Users were manually separated into these groups and no users were excluded: 21 users were placed into Group 1 and 31 into Group 2. As we see in Fig. 5, people in Group 1 initially explore all or almost all arms, as evident from the entropy steadily increasing to the maximum entropy of 2.58. The entropy then steadily decreases, indicating that these users settle on one or two arms to continue pulling for the remainder of the time horizon. In contrast, Group 2 continues to explore all arms at approximately the same rate for the entirety of the time horizon, as evident from the entropy curve increasing to and leveling out at 2.4.

Remarkably, if we look at which agent each person in these groups got their personal high score with, there is a distinct difference between them. As shown in Fig. 5, a majority of users in Group 1 (57%) get their highest score when collaborating with HA-UCB, whereas only 16% of users in Group 2 do. In contrast, only 7% of users in Group 1 get their highest score when collaborating with 0.1-Greedy, as compared with 39% of users in Group 2. Now, if we plot the entropy curves for these two agents in isolation, averaged over 10,000 trials (Fig. 5), we see the shapes tend to correspond with those of the two groups. This lends credence to the idea that users perform best when assisted by an agent which acts like them. We see that Group 1, which matches HA-UCB, reduces regret to 16 when assisted by HA-UCB, far lower than even HA-UCB’s performance in isolation (regret 22). While we do not make statistical claims, our interpretation is that people start with a better sense of what the best arm looks like (in terms of average reward), whereas HA-UCB starts with no information. With HA-UCB assisting, people are inclined to explore arms which they would not have on their own, so the team as a whole is better able to identify the best arm than either would have been in isolation. The same group performs worse with 0.1-Greedy, decreasing performance even compared to how well these users did in isolation. Particularly surprising is that more Group 2 users perform better with 0.1-Greedy than with HA-UCB, despite HA-UCB being the better algorithm. Group 2 performs slightly better when assisted by 0.1-Greedy than when unassisted, whereas Group 1 performs worse.¹

¹ On the surface, this analysis seems to contradict the fact that the average total regret accumulated by Group 2 when collaborating with HA-UCB is not significantly different from when collaborating with 0.1-Greedy (Fig. 3). However, we notice that when users in Group 2 get lower regret with 0.1-Greedy than with HA-UCB it is by an average of 13 points, whereas those that get lower regret with HA-UCB do better by an average of 19 points. Thus, while more users perform better with 0.1-Greedy, the average regret from collaborating with the two agents is approximately the same.

| Statement | Group 1 | Group 2 |
| --- | --- | --- |
| “I trusted the agent” | 3.4 ± 0.35 | 4.5 ± 0.27 |
| “I thought the agent was useful” | 3.2 ± 0.47 | 4.4 ± 0.28 |
| “I followed the agent’s advice” | 2.7 ± 0.42 | 4.2 ± 0.28 |
| Rank [1: best, 4: worst] | 3.0 ± 0.25 | 2.3 ± 0.17 |

Table 2: Post-study Likert ratings for 0.1-Greedy. (Differences between the two groups were negligible for the other agents.) Group 2, who performs better when collaborating with 0.1-Greedy, overall has a positive view of the agent, while Group 1 has a negative view.

When looking at the subjective measures split by group, we find that the groups disagree in their opinion of 0.1-Greedy: Group 1, which is more aligned with HA-UCB, rates 0.1-Greedy much lower than Group 2 does (Table 2). Overall, we find that people have different strategies, and many of them team up best with agents that match their strategy. We found it striking that 39% of people with the greedy-like strategy perform best with greedy, whereas only 16% of them perform best with HA-UCB, in spite of HA-UCB’s superiority in isolation.
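For concreteness, here is a small sketch of the entropy measure used to separate the groups, applied to two stylized strategies. The six-arm count is inferred from the 2.58-bit ceiling quoted above (log2 6 ≈ 2.58), and the example pull sequences are ours, not participant data.

```python
import numpy as np

def entropy_curve(pulls, n_arms=6):
    """Shannon entropy (bits) of the empirical distribution over arms
    after each pull; with 6 arms the ceiling is log2(6) ≈ 2.58 bits."""
    counts = np.zeros(n_arms)
    curve = []
    for arm in pulls:
        counts[arm] += 1
        p = counts[counts > 0] / counts.sum()
        curve.append(float(-(p * np.log2(p)).sum()))
    return np.array(curve)

# Two stylized 30-pull strategies:
explore_then_commit = list(range(6)) * 2 + [3] * 18  # Group-1-like
steady_explorer = list(range(6)) * 5                 # Group-2-like
print(entropy_curve(explore_then_commit)[-1])  # entropy falls off
print(entropy_curve(steady_explorer)[-1])      # stays near 2.58
```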
Discussion and Future Work
--------------------------

We saw that human-agent teams can outperform humans and agents in isolation. But our analysis suggests that achieving this, or even just improving upon human performance, is much more subtle than we expected. The agent’s suggestions do not change a person’s decisions explicitly, but rather influence their later decisions. Further, people benefit differently from different agents, depending on the similarity between their strategy and the agent’s. These results show that helping a person manage exploration-exploitation trade-offs is distinct from directly making those trade-offs. We can alternatively formulate this problem as a cooperative game between the human and the robot [Hadfield-Menell et al. 2016], where both the robot and the human are optimizing to maximize the cumulative reward from the *human’s* arm selections. Crucially, the robot is forced to operate through making changes to the human’s internal or information state. In future work, we plan to explore this formulation of the problem and develop algorithms that leverage models of human internal state to make helpful suggestions and work with humans to explore and exploit appropriately, maximizing long-term reward.

Appendix: Strategies With and Without Assistance
------------------------------------------------

A second way we analyzed implicit influence is by looking at the (normalized) score that HA-UCB assigns to the users’ initial and final choices given the history of pulls (Fig. 6). Though the scores people get here do not necessarily correlate with their regret, we can use them to measure whether human strategies are changing. If strategies were not being influenced by agents, we would expect these scores to stay approximately the same. However, we see that the curves for humans in isolation are significantly different from those when collaborating with agents. Again, this tells us that agents influence people’s decisions implicitly rather than explicitly.
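The paper does not spell out the normalization, but one plausible reading, sketched below, is to score a chosen arm by its UCB index given the pull history and min-max rescale across arms; the exploration constant and the rescaling are our guesses.

```python
import numpy as np

def normalized_ucb_score(choice, counts, means, t, c=2.0):
    """Score `choice` by its UCB index given the pull history so far,
    min-max rescaled across arms to [0, 1]. Assumes every arm has
    already been pulled at least once (counts > 0)."""
    index = means + np.sqrt(c * np.log(t) / counts)
    lo, hi = index.min(), index.max()
    return float((index[choice] - lo) / (hi - lo)) if hi > lo else 1.0
```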
8804d64b-dac9-4c15-a6c1-12adb2bfda0f
trentmkelly/LessWrong-43k
LessWrong
Open Thread: November 2009

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post. If you're new to Less Wrong, check out this welcome post.
8bafb57a-b615-4139-be4b-c8edbee193f2
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Updates from Campaign for AI Safety

Hi! 🌍 The UK is hosting an international [AI safety summit on 1–2 November at Bletchley Park](https://www.bbc.com/news/technology-66604123). This event promises to gather world leaders and AI experts for industry-shaping AI safety discussions. While attendees and specific topics remain undisclosed, it solidifies the UK's significance in AI safety discussions. This presents an opportunity to engage with national governments to convince them to send delegates, to propose experts to attend the summit itself, and of course to protest at the summit.

---

### 🔬 Our latest research

🧑‍🤝‍🧑 We commissioned an [opinion poll with Roy Morgan](https://www.roymorgan.com/findings/9339-campaign-for-ai-safety-press-release-august-2023) (an Australian polling company) on the perception of AI in Australia. A sizeable majority of 57% of Australians believe artificial intelligence (AI) creates more problems than it solves. **Every fifth** Australian anticipates the risk of human extinction from AI within two decades.

---

### 📃 Moratorium treaty competition

🌟 We are thrilled to [announce the winners](https://www.campaignforaisafety.org/celebrating-the-winners-law-student-moratorium-treaty-competition/) of our [Student AI Moratorium Treaty Competition](https://www.campaignforaisafety.org/law-competition/) fostering discussions on large-scale AI ethics and safety.

🥇 First Place: Neil Arven F. Tiozon, Jayson B. Corregidor, Anna Czarina C. Lee
🥈 Second Place: Ketan Aggarwal
🥉 Third Place: Kanishk Kumar Singh

Explore [submissions on our website](https://www.campaignforaisafety.org/law-competition-submissions/) (copyright waived), and stay tuned for insights from the competition. Congrats to the winners, participants worldwide, and our judges. Please share the news!

---

### 📸 #PauseAI protests

📣 In a [recent demonstration](https://twitter.com/PauseAI/status/1690290512643719168?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1690290512643719168%7Ctwgr%5Ef839904b0427a62d67848241c726f4b29b2892cf%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fpauseai.info%2F2023-august-nl) in The Hague, Netherlands, #PauseAI activists called upon their government to prioritize AI risk mitigation. During the event, they delivered impactful speeches, engaged with local residents, distributed informative flyers, and shared moments of camaraderie. More info in the [press release](https://pauseai.info/2023-august-nl), available in both English and Dutch.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/73mcXk9gEq6GpgpFG/h6pbzlcfw60qr0a8upik)

**Upcoming PauseAI Protests** 📅 Stay tuned for forthcoming international protests, scheduled to take place later this year, in anticipation of the [AI Safety Summit](https://pauseai.info/summit). To ensure you receive an invitation when a protest is organized in your vicinity, join the [PauseAI community](https://pauseai.info/join)!

---

### 📃 Policy updates

On the policy front, we have made our [submission to the Select Committee on Artificial Intelligence appointed by the Parliament of South Australia](https://campaignforaisafety.ghost.io/ghost/#/editor/post/64df1928560c220001c7b6c1). Thank you, Sue Anne Wong, for the work on this submission. Next, we are working on the following:

* [UN Review on Global AI Governance](https://www.un.org/techenvoy/ai-advisory-body) (UN. Due 30 September 2023)
* The submission to the [NSW inquiry into AI](https://www.campaignforaisafety.org/r/27e322f7?m=866ebbbb-d122-4eec-b595-77a07fe17483) (Australia. Due 20 October 2023)
* An update to our [main campaign policy document](https://www.campaignforaisafety.org/r/0ea639ce?m=866ebbbb-d122-4eec-b595-77a07fe17483)

Do you know of other inquiries? Please let us know. You may respond to this email if you want to contribute to the upcoming consultation papers.

---

### 📜 Petition updates

🦁 For our supporters in the UK, there's an [ongoing petition by Greg Colbourn](https://petition.parliament.uk/petitions/639956). This petition urges the global community to consider a global moratorium on AI technology development due to extinction risks. The petition has currently gathered 40 signatures in support of this cause.

---

Thank you for your support! Please share this email with friends.

Campaign for AI Safety
[campaignforaisafety.org](https://www.campaignforaisafety.org/)
07bdfc75-bdcf-4e7d-9224-4a1883ababe3
trentmkelly/LessWrong-43k
LessWrong
Beyond the human training distribution: would the AI CEO create almost-illegal teddies?

tl;dr: I show that model splintering can be seen as going beyond the human training distribution (the distribution of real and imagined situations we have firm or vague preferences over), and argue why this is at the heart of AI alignment.

You are training an AI-CEO to maximise stock value, training it on examples of good/bad CEO decisions and corresponding stock-price increases/decreases. There are some obvious failure modes. The AI could wirehead by hacking the stock-ticker, or it could do the usual optimise-the-universe-to-maximise-stock-price-for-now-dead-shareholders. Let's assume that we've managed to avoid these degenerate solutions. Instead, the AI-CEO tries for something far weirder.

Beyond the human training distribution

The AI-CEO reorients the company towards the production of semi-sentient teddy bears that are generated in part from cloned human brain tissue. These teddies function as personal assistants and companions, and prototypes are distributed at the annual shareholder meeting. However, the public reaction is negative, and the government bans the further production of these teddies. Consequently, the company shuts down for good. But the shareholders, who own the only existent versions of these teddies, get great kudos from possessing these rare entities, who also turn out to be great and supportive listeners - and excellent at managing their owners' digital media accounts, increasing their popularity and status. And that, of course, was the AI-CEO's plan all along.

Hold off from judging this scenario, just for a second. And when you do judge it, observe your mental process as you do so. I've tried to build this scenario so that it is:

1. Outside the AI-CEO's training distribution.
2. Outside the human designer's implicit training distribution - few of us have thought deeply about the morality of semi-sentient teddy bears made with human tissue.
3. Possibly aligned with the AI-CEO's goals.
4. Neither clearly ideal nor clearly disastrous
12ddbd63-ea67-4b4a-b40c-1ad5ab1de497
trentmkelly/LessWrong-43k
LessWrong
Why are track records unpopular?

Robin points out that people dislike track records across a range of contexts, and poses a puzzle: why? Naively, track records seem pretty helpful for deciding who to trust and hire. Yet Robin points to a lack of track records or interest in track records for doctors, lawyers, pundits, teachers and academics. He has also answered his puzzle in a later post, but I'm not going to read his answer until I've tried to guess myself (though I did glance at it enough to learn it is about elites and status). Here are some theories I can think of:

1. The tracked don't like it. Most of the effort to avoid track records is from the people who would be tracked. They don't like it because they are risk averse, and influence is not particularly well correlated with skill. When a large class of people spread throughout society dislikes something and most other people don't automatically have strong views, most people's views will end up being negative.

2. Track records show mixed allegiance. People habitually try to publicize information which supports their own allies. Publishing track records in general is agreeing to publicize a giant pile of information, randomly supporting friends and foes alike. For instance, if all records of doctor success become public, this would somewhat endanger any doctors you care about: doctors in your political party and friendship circles, minority doctors, nice doctors, and so on. That is, unless you had been careful to only side with competent doctors, which you probably haven't, since you haven't seen their track records. Similarly, if you read about the track records of pundits, half of them will be pundits you like looking bad or pundits you don't like looking good, so you will have to surmise that the thing is bunk.

3. People aren't quantitative. Customers are not interested in track records because they are not interested in anything quantitative.

4. Track records conflict with perceived quality. People perceive traits like charisma
0b54b88e-7122-473a-8b5f-b02d89d9853a
trentmkelly/LessWrong-43k
LessWrong
Forecasting Newsletter: November 2021

Highlights

* Polymarket sees record-high swings
* Replication Markets pays out $142k in forecaster rewards
* The Economist features a full page with Good Judgment Open's forecasts

Index

* Prediction Markets & Forecasting Platforms
* In The News
* Blog Posts
* Long Content

Sign up here (a) or browse past newsletters here (a).

Prediction Markets & Forecasting Platforms

Replication Markets

Replication Markets (a) was a project to research how well prediction markets could predict whether papers would replicate. They are paying out (a) $142k in cash rewards for the prediction markets part of the experiment. This corresponds to 121 resolved questions, which includes 12 meta-questions and 30 about covid papers. The leaderboard for users is here (a). I won a mere $809, and I don't remember participating all that much. In particular, I was excited at the beginning but lost interest because a user—or a bot—named "unipedal" seemed like it was taking all the good opportunities. Now, a long writeup by "unipedal" himself can be read at How I Made $10k Predicting Which Studies Will Replicate (a).

The author started out with a simple quantitative model based on Altmejd et al. (2019) (a), "Predicting the replicability of social science lab experiments". But in later rounds, he dropped the quantitative model and started "playing the market". That is, he found out that trying to predict how the market will move is more profitable than giving one's own best guess. Unipedal then later automated his trades when the market API was opened to users.

In contrast, I participated in a few rounds and put in 10x less effort while earning much more than 1/10th of the rewards. As unipedal points out, this is backwards:

> ...I think one of the most important aspects of "ideal" prediction markets is that informed traders can compound their winnings, while uninformed traders go broke. The market mechanism works well because the feedback loop weeds out those who
1cd4ade0-949b-402b-b228-3f75a565e740
StampyAI/alignment-research-dataset/blogs
Blogs
Friendly AI Research as Effective Altruism

MIRI was founded in 2000 on the premise that creating[1](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_0_10240) Friendly AI might be a particularly efficient way to do as much good as possible. Some developments since then include:

* The field of “[effective altruism](http://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism.html)” — trying not just to do good but to do *as much good as possible*[2](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_1_10240) — has seen more publicity and better research than ever before, in particular through the work of [GiveWell](http://www.givewell.org/), the [Center for Effective Altruism](http://centreforeffectivealtruism.org/), the philosopher [Peter Singer](http://en.wikipedia.org/wiki/Peter_Singer), and the community at [Less Wrong](http://lesswrong.com/).[3](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_2_10240)
* In his recent [PhD dissertation](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1), [Nick Beckstead](https://sites.google.com/site/nbeckstead/) has clarified the assumptions behind the claim that shaping the far future (e.g. via Friendly AI) is overwhelmingly important.
* Due to research performed by MIRI, the [Future of Humanity Institute](http://www.fhi.ox.ac.uk/) (FHI), and others, our strategic situation with regard to machine superintelligence is more clearly understood, and FHI’s [Nick Bostrom](http://nickbostrom.com/) has organized much of this work in a [forthcoming book](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl).[4](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_3_10240)
* MIRI’s Eliezer Yudkowsky has [begun](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/) to describe in more detail which open research problems constitute “Friendly AI research,” in his view.

Given these developments, we are in a better position than ever before to assess the value of Friendly AI research as effective altruism. Still, this is a difficult question. It is challenging enough to evaluate the cost-effectiveness of [anti-malaria nets](http://blog.givewell.org/2012/10/18/revisiting-the-case-for-insecticide-treated-nets-itns/) or [direct cash transfers](http://www.givewell.org/international/top-charities/give-directly). Evaluating the cost-effectiveness of attempts to shape the far future (e.g. via Friendly AI) is even more difficult than that. Hence, **this short post sketches an argument that can be given in favor of Friendly AI research as effective altruism, to enable future discussion**, and is **not intended as a thorough analysis.**

### An argument for Friendly AI research as effective altruism

[Beckstead (2013)](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1) argues[5](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_4_10240) for the following thesis:

> From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.

Why think this? Astronomical facts suggest that humanity (including “post-humanity”) could survive for billions or trillions of years ([Adams 2008](http://books.google.com/books?id=X5jdMyJKNL4C&pg=PT77&lpg=PT77#v=onepage&q&f=false)), and could thus produce enormous amounts of good.[6](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_5_10240) But the value produced by our future depends on our *development trajectory*. If humanity destroys itself with powerful technologies in the 21st century, then nearly all that future value is lost. And if we survive but develop along a trajectory dominated by conflict and poor decisions, then the future could be much less good than if our trajectory is dominated by altruism and wisdom. Moreover, some of our actions today can have “ripple effects”[7](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_6_10240) which determine the trajectory of human development, because many outcomes are [path-dependent](http://en.wikipedia.org/wiki/Path_dependence). Hence, actions which directly or indirectly precipitate particular trajectory changes (e.g. mitigating existential risks) can have vastly more value (in expectation) than actions with merely proximate benefits (e.g. saving the lives of 20 wild animals). Beckstead calls this the “rough future-shaping argument.”

If we accept the normative assumptions lurking behind this argument (e.g. [risk neutrality](http://en.wikipedia.org/wiki/Risk_neutral); see Beckstead’s dissertation), then the far future is enormously valuable (if it goes at least as well on average as the past century), and existential risk reduction is much more important than producing proximate benefits (e.g. global health, poverty reduction) or speeding up development (which could in fact increase existential risks, and even if it doesn’t, has lower expected value than existential risk reduction). However, Beckstead’s conclusion is not necessarily that existential risk reduction should be our global priority, because

> there may be other ways to have a large, persistent effect on the far future without reducing existential risk… Some persistent changes in values and social norms could make the future [some fraction] better or worse… Sure, succeeding in preventing an existential catastrophe would be better than making a smaller trajectory change, but creating a small positive trajectory change may be significantly easier.

Instead, Beckstead’s arguments suggest that “what matters most for shaping the far future is producing positive trajectory changes and avoiding negative ones.” Existential risk reduction is one important kind of positive trajectory change that could turn out to be the intervention with the highest expected value.

One important clarification is in order. It could turn out that working toward proximate benefits or development acceleration does more good than “direct” efforts for trajectory change, if working toward proximate benefits or development acceleration turns out to have major ripple effects which produce important trajectory change. For example, perhaps an “ordinary altruistic effort” like solving India’s [iodine deficiency problem](http://www.dnaindia.com/india/1593586/report-71-million-hit-by-iodine-deficiency-in-india) would cause there to be thousands of “extra” world-class elite thinkers two generations from now, which could increase humanity’s chances of intelligently navigating the crucial 21st century and spreading to the stars. (I don’t think this is likely; I suggest it merely for illustration.)

For the sake of argument, suppose you agree with Beckstead’s core thesis that “what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop.” Suppose you also think, as I do, that machine superintelligence is probably inevitable.[8](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_7_10240) In that case, you might think that Friendly AI research is a uniquely foreseeable and impactful way to shape the far future in an enormously positive way, [because](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/91zq) “our effects on the far future must almost entirely pass through our effects on the development of machine superintelligence.” All other developing trends might be overridden by the overwhelming effectiveness of machine superintelligence — and specifically, by the values that were (explicitly or implicitly, directly or indirectly) written into the machine superintelligence(s). If that’s right, our situation is a bit like sending an interstellar probe to [colonize distant solar systems](http://commonsenseatheism.com/wp-content/uploads/2013/05/Armstrong-Sandberg-Eternity-in-six-hours-intergalactic-spreading-of-intelligent-life-and-sharpening-the-Fermi-paradox.pdf) before they recede beyond the [cosmological horizon](http://en.wikipedia.org/wiki/Observable_universe#Particle_horizon) and can thus never be reached from Earth again due to [the expansion of the universe](https://en.wikipedia.org/wiki/Metric_expansion_of_space). Anything on Earth that doesn’t affect the content of the probe will have no impact on those solar systems. (See also [this comment](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/921r).)

### Potential defeaters

The rough argument above — in favor of Friendly AI research as an efficient form of effective altruism — deserves to be “fleshed out” in more detail.[9](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/#footnote_8_10240) Potential defeaters should also be examined:

* Perhaps we ought to reject one or more of the normative assumptions behind Beckstead’s rough future-shaping argument.
* Perhaps it’s not true that “our effects on the far future must almost entirely pass through our effects on the development of machine superintelligence.”
* Perhaps Friendly AI research is not (today) a particularly efficient way to positively affect the development of machine superintelligence. Competing interventions may include: (1) [AI risk strategy research](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl), (2) improving [technological forecasting](http://intelligence.org/2013/05/15/when-will-ai-be-created/), (3) [improving science in general](http://www.vannevargroup.org/), (4) [improving and spreading effective altruism and rationality](http://rationalaltruist.com/2013/06/03/my-outlook/), and (5) many others.

In future blog posts, members of the effective altruist community (including myself) will expand on the original argument and examine potential defeaters.

#### Acknowledgements

My thanks to those who provided feedback on this post: Carl Shulman, Nick Beckstead, Jonah Sinick, and Eliezer Yudkowsky.

---

1. In this post, I talk about the value of *humanity in general* creating Friendly AI, though MIRI co-founder Eliezer Yudkowsky usually talks about *MIRI in particular* — or at least, a functional equivalent — creating Friendly AI. This is because I am not as confident as Yudkowsky that it is best for MIRI to attempt to build Friendly AI. When updating MIRI’s bylaws in early 2013, Yudkowsky and I came to a compromise on the language of MIRI’s mission statement, which now reads: “[MIRI] exists to ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of [MIRI] is to: (a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; (b) raise awareness of this important issue; (c) advise researchers, leaders and laypeople around the world; and (d) *as necessary*, implement a smarter-than-human intelligence with humane, stable goals” (emphasis added). My own hope is that it will not be necessary for MIRI (or a functional equivalent) to attempt to build Friendly AI itself. But of course I must remain open to the possibility that this will be the wisest course of action as the first creation of AI [draws nearer](http://intelligence.org/2013/05/15/when-will-ai-be-created/). There is also the question of capability: few people think that a non-profit research organization has much chance of being the first to build AI. I worry, however, that the world’s elites will not find it fashionable to take this problem seriously until the creation of AI is only a few decades away, at which time it will be especially difficult to develop the mathematics of Friendly AI in time, and humanity will be forced to take a gamble on its very survival with powerful AIs we have little reason to trust.
2. One might think of effective altruism as a straightforward application of [decision theory](http://lesswrong.com/lw/gu1/decision_theory_faq/) to the subject of philanthropy.
Philanthropic agents of all kinds (individuals, groups, foundations, etc.) ask themselves: “How can we choose philanthropic acts (e.g. donations) which (in expectation) will do as much good as possible, given what we care about?” The consensus recommendation for *all* kinds of choices under uncertainty, including philanthropic choices, is to maximize expected utility ([Chater & Oaksford 2012](http://books.google.com/books?id=S1-K4AT3zXYC&lpg=PA11&ots=wjavib87qF&dq=normative%20systems%3A%20logic%2C%20probability%2C%20and%20rational%20choice&lr&pg=PA11#v=onepage&q&f=false); [Peterson 2004](http://commonsenseatheism.com/wp-content/uploads/2012/05/Peterson-From-Outcomes-to-Acts-a-non-standard-axiomatization-of-the-expected-utility-principle.pdf); [Stein 1996](http://www.amazon.com/Without-Good-Reason-Rationality-Philosophy/dp/0198235747/); [Schmidt 1998](http://www.amazon.com/Axiomatic-Utility-Theory-under-Risk/dp/3540643192):19). Different philanthropic agents value different things, but decision theory suggests that each of them can get the most of what they want if they each maximize their expected utility. Choices which maximize expected utility are in this sense “optimal,” and thus another term for effective altruism is “[optimal philanthropy](http://lesswrong.com/lw/da4/what_is_optimal_philanthropy/).” Note that effective altruism in this sense is not too dissimilar from earlier approaches to philanthropy, including [high-impact philanthropy](http://en.wikipedia.org/wiki/High_impact_philanthropy) (making “[the biggest difference possible, given the amount of capital invested](http://www.impact.upenn.edu/faq/)“), [strategic philanthropy](http://www.amphilsoc.org/sites/default/files/490202.pdf), [effective philanthropy](http://www.hudson.org/files/pdf_upload/Stanley_Katz_APS_Proceedings_Piece_June_2005.pdf), and [wise philanthropy](http://wisephilanthropy.com/). Note also that effective altruism does not say that a philanthropic agent should specify complete utility and probability functions over outcomes and then compute the philanthropic act with the highest expected utility — that is impractical for bounded agents. We must keep in mind the distinction between normative, descriptive, and prescriptive models of decision-making (Baron 2007): “normative models tell us how to evaluate… decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model.” The *prescriptive* question — about what bounded philanthropic agents should do to maximize expected utility with their philanthropic choices — tends to be extremely complicated, and is the subject of most of the research performed by the effective altruism community. 3. 
See, for example: [Efficient Charity](http://lesswrong.com/lw/37f/efficient_charity/), [Efficient Charity: Do Unto Others](http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/), [Politics as Charity](http://lesswrong.com/lw/2qq/politics_as_charity/), [Heuristics and Biases in Charity](http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/), [Public Choice and the Altruist’s Burden](http://lesswrong.com/lw/2hv/public_choice_and_the_altruists_burden/), [On Charities and Linear Utility](http://lesswrong.com/lw/44c/on_charities_and_linear_utility/), [Optimal Philanthropy for Human Beings](http://lesswrong.com/lw/6py/optimal_philanthropy_for_human_beings/), [Purchase Fuzzies and Utilons Separately](http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/), [Money: The Unit of Caring](http://lesswrong.com/lw/65/money_the_unit_of_caring/), [Optimizing Fuzzies and Utilons: The Altruism Chip Jar](http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/), [Efficient Philanthropy: Local vs. Global Approaches](http://lesswrong.com/lw/684/efficient_philanthropy_local_vs_global_approaches/), [The Effectiveness of Developing World Aid](http://lesswrong.com/lw/2pq/the_effectiveness_of_developing_world_aid/), [Against Cryonics & For Cost-Effective Charity](http://lesswrong.com/lw/2kh/against_cryonics_for_costeffective_charity/), [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/), [How to Save the World](http://lesswrong.com/lw/373/how_to_save_the_world/), and [What is Optimal Philanthropy?](http://lesswrong.com/lw/da4/what_is_optimal_philanthropy/) 4. I believe Beckstead and Bostrom have done the research community an enormous service in creating a *framework*, a *shared language*, for discussing trajectory changes, existential risks, and machine superintelligence. When discussing these topics with my colleagues, it has often been the case that the first hour of conversation is spent merely trying to understand what the other person is saying — how they are using the terms and concepts they employ. Beckstead’s and Bostrom’s recent work should enable clearer and more efficient communication between researchers, and therefore greater research productivity. Though I am not aware of any controlled, experimental studies on the effect of shared language on research productivity, a shared language is widely considered to be of great benefit for any field of research, and I shall provide a few examples of this claim which appear in print. [Fuzzi et al. (2006)](http://atmos-chem-phys.net/6/2017/2006/acp-6-2017-2006.pdf): “The use of inconsistent terms can easily lead to misunderstandings and confusion in the communication between specialists from different [disciplines] of atmospheric and climate research, and may thus potentially inhibit scientific progress.” [Hinkel (2008)](http://www.pik-potsdam.de/research/transdisciplinary-concepts-and-methods/projects/project-archive/favaia/pubs/hinkel-knowledge-integration.pdf): “Technical languages enable their users, e.g. members of a scientific discipline, to communicate efficiently about a domain of interest.” [Madin et al. (2007)](http://www.cs.cofc.edu/~bowring/classes/csis%20633/readings/madin-etal-tree-2008.pdf): “terminological ambiguity slows scientific progress, leads to redundant research efforts, and ultimately impedes advances towards a unified foundation for ecological science.” 5. 
In addition to Beckstead’s thesis, see also [A Proposed Adjustment to the Astronomical Waste Argument](http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/). 6. Beckstead doesn’t mention this, but I would like to point out that moral realism is not required for Beckstead’s arguments to go through. In fact, I generally accept Beckstead’s arguments even though most philosophers would not consider me a moral realist, though to some degree that is a semantic debate ([Muehlhauser 2011](http://lesswrong.com/lw/5u2/pluralistic_moral_reductionism/); [Joyce 2012](http://www.victoria.ac.nz/staff/richard_joyce/acrobat/joyce_metaethical.pluralism.pdf)). If you’re a moral realist and you believe your intuitive moral judgments are data about what is morally true, then Beckstead’s arguments (if successful) have something to say about what is morally true, and about what you should do if you want to act in morally good ways. If you’re a moral anti-realist but you think your intuitive judgments are data about what you value — or about what you would value if you had more time to think about your values and how to resolve the contradictions among them — then Beckstead’s arguments (if successful) have something to say about what you value, and about what you should do if you want to help achieve what you value. 7. Karnofsky calls these “[flow-through effects](http://blog.givewell.org/2013/05/15/flow-through-effects/).” 8. See [Bostrom (forthcoming)](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl) for an extended argument. Perhaps the most likely defeater for machine superintelligence is that global catastrophe may halt scientific progress before human-level AI is created. 9. Beckstead, in personal communication, suggested (but didn’t necessarily endorse) the following formalization of the rough argument sketched in the main text of the blog post: “(1) To a first approximation, the future of humanity is all that matters. (2) To a much greater extent than anything else, the future of humanity is highly sensitive to how machine intelligence unfolds. (3) Therefore, there is a very strong presumption in favor of working on any project which makes machine intelligence unfold in a better way. (4) FAI research is the most promising route to making machine intelligence unfold in a better way. (5) Therefore, there is a very strong presumption in favor of doing FAI research.” [Beckstead (2013)](https://sites.google.com/site/nbeckstead/research/Beckstead%2C%20Nick--On%20the%20Overwhelming%20Importance%20of%20Shaping%20the%20Far%20Future.pdf?attredirects=0&d=1) examines the case for (1). [Bostrom (forthcoming)](http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/91zl), in large part, examines the case for (2). Premise (3) informally follows from (1) and (2), and the conclusion (5) informally follows from (3) and (4). Premise (4) appears to me to be the most dubious part of the argument, and the least explored in the extant literature. The post [Friendly AI Research as Effective Altruism](https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
5b2d9904-2c8a-4a02-b2d6-c0349632f6cc
trentmkelly/LessWrong-43k
LessWrong
Surface Thoughts Suck

I've been thinking about the differences between the ways in which people typically pursue wealth and the ways in which people typically pursue their altruistic goals. Charity is significantly signalling, and so on... But, there's something else I noticed. Whenever you want wealth, you pursue a high paying career, open a business, and so on. Whenever you (a hypothetical average person unfamiliar with the rationalist or effective altruism community) want to accomplish some altruistic goal, you're going to consider personally volunteering yourself to perform the associated cheap labor. If you decide to donate instead, you probably see it as the less impactful option. It would be better (you would think) to go to third world villages yourself than to donate to charity.

The underlying reason behind this mistake is the notion of topicality: proximity in an unconsciously constructed web of connections. Traditionally, different paths to power (business, government, personal charitable work, etc.) are associated with different usages. Volunteering is for helping starving children, private business is for making money, government work is for changing what the government does. This association is often valid - different types of power are better at accomplishing different goals. It's less valid when those types of power are highly interchangeable.

Imagine money and political power have a standardized rate of exchange in the vein of one US dollar per DemocracyCoin. It's either going to be better to acquire money to buy political power, or it's going to be better to acquire political power to buy money - manipulating the government isn't easier if you're a politician, and buying gold jewelry isn't easier if you're a business owner. One profession or the other is better at both. In the case of standard employment (the trade of your man-hours for the money of others) and charity (the trade of your money for the man-hours of others), there are clear exchange rates. For obvious
ea50c50b-1b84-4519-9a3c-72fac301eb49
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Boxing

Labs should do "boxing," I hear. I'm confused about what kinds-of-systems this applies to and what labs should do to improve their "boxing."

Kinds-of-systems: my impression is that there's little direct threat from LMs simply undergoing gradient descent. But there's some risk from models deployed internally, especially if they're in scaffolding that lets them make recursive calls and use tools. But I suspect the previous sentence is confused or missing a lot.

Boxing tactics: I hear of tactics like automatically monitoring for unusual behavior, limiting data upload speed, and using honeypots. What should labs do; if you were in charge of boxing for a lab, what would you do? What should I read to learn more?
e0740dcc-6eb0-495d-80be-0da8ff73bf9f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Models, myths, dreams, and Cheshire cat grins
> "she has often seen a cat without a grin but never a grin without a cat"
>
> * Alice in *Alice in Wonderland*, about the Cheshire cat (also known as the [Unitary Authority of Warrington Cat](https://en.wikipedia.org/wiki/Characters_in_the_Thursday_Next_series#The_Cheshire_Cat)).

Let's have a very simple model. There's a boolean, C, which measures whether there's a cat around. There's a natural number N, which counts the number of legs on the cat, and a boolean G, which checks whether the cat is grinning (or not). There are a few obvious rules in the model, to make it compatible with real life:

* ¬C → (N=0).
* ¬C → ¬G.

Or, in other words, if there's no cat, then there are zero cat legs and no grin. And that's true about reality. But suppose we have trained a neural net to automatically find the values of C, N, and G. Then it's perfectly conceivable that something might trigger the outputs ¬C and G simultaneously: a grin without any cat to hang it on.

Adversarial examples
--------------------

[Adversarial examples](https://medium.com/@smkirthishankar/the-unusual-effectiveness-of-adversarial-attacks-e1314d0fa4d3) often seem to behave this way. Take for example this adversarial example of a pig classified as an airliner:

![](https://www.dropbox.com/s/k0y092qcxx3cy4c/pig_adversarial.png?raw=1)

Imagine that the neural net was not only classifying "pig" and "airliner", but other features like "has wings" and "has fur". Then the "pig-airliner" doesn't have wings and does have fur: features of pigs, not of airliners. Of course, you could build an adversarial example that also breaks "has wings" and "has fur", but, hopefully, the more features that need to be faked, the harder the attack would become.

This suggests that, as algorithms get smarter, they will become more adept at avoiding adversarial examples - as long as the ultimate question is clear. In our real world, the categories of pigs and airliners are pretty sharply distinct. We run into problems, though, if the concepts are less clear - such as what might happen to pigs and airliners if the algorithm optimises them, or how the algorithm might classify underdefined concepts like "human happiness".

Myths and dreams
----------------

Define the following booleans: HH detects the presence of a living human head, HB a living human body, JH a living jackal head, and JB a living jackal body. In the real world we generally have HH↔HB and JH↔JB. But set the following values: ¬HH, HB, JH, ¬JB, and you have the god Anubis.

![](https://www.dropbox.com/s/f3zt9wb3rpz438q/220px-Anubis_standing.svg.png?raw=1)

Similarly, what is a dragon? Well, it's an entity such that the following are all true: {is lizard, is flying, is huge, breath is fire, intelligence is human level, ...}. And, even though those features never go together in the real world, we can put them together in our imagination, and get a dragon.

![](https://www.dropbox.com/s/fcei79826v9tcal/dragon.gif?raw=1)

Note that "is flying" seems more fundamental to a dragon than "has wings", thus all the wingless dragons that fly "by magic"[[1]](#fn-84mdcS5he3v724PNx-1). Our imaginations seem comfortable with such combinations.

Dreams are always bewildering upon awakening, because they also combine contradictory assumptions. But these combinations are often beyond what our imaginations are comfortable with, so we get things like meeting your mother - who is also a wolf - and handing Dubai to her over the tea cups (that contain milk and fear).
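To make this concrete, here is a minimal sketch in Python (purely illustrative: the `RULES` list and `violations` helper are invented for this example, not part of any library; they just encode the two rules above):

```python
# Purely illustrative sketch of the toy model above. A "world model"
# is a set of consistency rules over feature outputs; reality always
# satisfies them, but a learned feature detector need not.

RULES = [
    # ¬C → (N = 0): no cat implies zero cat legs.
    (lambda f: not f["C"], lambda f: f["N"] == 0),
    # ¬C → ¬G: no cat implies no grin.
    (lambda f: not f["C"], lambda f: not f["G"]),
]

def violations(features):
    """Count how many rules the feature outputs break
    (antecedent holds but consequent fails)."""
    return sum(1 for ante, cons in RULES
               if ante(features) and not cons(features))

print(violations({"C": True, "N": 4, "G": True}))    # 0: an ordinary grinning cat
print(violations({"C": False, "N": 0, "G": False}))  # 0: no cat at all
print(violations({"C": False, "N": 0, "G": True}))   # 1: a grin without a cat
```

Nothing in a neural net's training forces its outputs to score zero here; adversarial examples and myths both live in the region where `violations()` is positive, and the hope sketched above is that the more rules an attack must satisfy simultaneously, the harder it becomes to construct.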
"Alice in Wonderland" seems to be in between the wild incoherence of dream features, and the more restricted inconsistency of stories and imagination. --- 1. Not that any real creature that size could fly with those wings anyway. [↩︎](#fnref-84mdcS5he3v724PNx-1)
fa7bcdfb-2e3b-498b-9afb-562a97666ab6
trentmkelly/LessWrong-43k
LessWrong
Value uncertainty

Epistemic status: This is basically a new categorisation scheme for, and analysis of, ideas that other people have proposed previously (both in relation to moral philosophy and in relation to AI alignment). I'm not an expert on the topics I cover here, and I'd appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

We are often forced to make decisions under conditions of uncertainty. This may be empirical uncertainty (e.g., what is the likelihood that nuclear war would cause human extinction?), or it may be moral uncertainty (e.g., is the wellbeing of future generations morally important?).

But what if you don't believe that "morally important" is a coherent concept? What if you're a moral antirealist and/or subjectivist, and thus reject the idea that there are any (objective) moral facts? Would existing work on moral uncertainty (see my prior posts) still be relevant to you?

I think that a lot of it is, to a large extent, for the reasons discussed in this footnote.[1] But I think that, to directly discuss how that work is relevant to antirealists and/or subjectivists, it would help to speak not of moral uncertainty but of value uncertainty (VU; i.e., uncertainty about what one "values" or "prefers"). Doing so also helps us to categorise different types of VU, and potential ways of resolving each of these types of VU. A final benefit is that such an analysis of VUs also has substantial relevance for moral realists, and for work on AI alignment.

So this post will:

1. Clarify what I mean by "values" in this context
2. Name, briefly describe, and briefly suggest responses to four types of VU, and two types of situations that actually aren't VU, but could look as though they are
3. Return to the question of who and what these ideas are relevant for

Values

I should clarify a few points about what I mean by "values" in this post:

* I mean what a person actually values (or would if they knew more, or will in
087ce1ee-b3bd-404d-840c-91d4ce25fe7a
trentmkelly/LessWrong-43k
LessWrong
Is fake news bullshit or lying? New strategies for combating misinformation

A layperson-friendly view. Cross-posted from my personal blog, First Principles.

Fake news is on the rise. We know this from Facebook shares, WhatsApp forwards, Twitter trolls, and Potemkin news sites. We see it in elections across the world, novel coronavirus guidance, and nation-state posturing. We've known about the issue for a while, and technology companies — in their role as the primary distributors — have taken action. This action has not stemmed the tide, and meanwhile the techniques of misinformation evolve and proliferate: bot armies and Deep Fakes being only a few recent innovations.

Why is it so difficult to define what fake news is? Why does calling out lies have little impact on those already deceived? And crucially, what can we do to restore trust and reason to public and social media?

What is fake news?

Fake news is deliberate, targeted misinformation. It's not necessarily wholly false: perpetrators are as willing to utilize truths that fit their narrative as they are to concoct falsehoods to construct it. They're necessarily indifferent to the truth, and attached solely to the outcome: the beliefs they wish to plant within the minds of their targets.

From Harry Frankfurt's On Bullshit:

> [The bullshitter's] eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

Fake news, therefore, isn't lying, but bullshit.

Why is fake news so hard to fight?

Fake news, being an extension of bullshit, inherits many of its traits:

It's hard to refute

The claims within bullshit are many, nebulous, and often not even wrong. Fact-checking alone isn't adequate to this task, and neither is reliance upon a trustworthy set of sources.

It's normalized

As Frankfurt poi
3c57ff6d-d79c-43ff-9704-cc0d9076606c
trentmkelly/LessWrong-43k
LessWrong
How to Deal with Depression - The Meta Layers

I wrote this for the Positive Vector website awhile back and lots of people have found it valuable, so I want to share it with the Less Wrong community as well.

I think this applies to most people - the meta suffering thing is something I see everywhere, even though it is most prominent with people who have depression.

This is based on my experience with working with depressed people and with studying Buddhism, especially Big Mind.

Enjoy!

---

The roots of suffering are often deep. But not all of the suffering happens at the root. A lot of the suffering that people experience is "meta" suffering. Meta suffering is when you suffer because you are distressed that you are suffering. You are feeling depressed and hopeless, and there is a part of you that genuinely fears that it will never end. That you will feel this way forever. This fear of the suffering persisting can cause you much more suffering than whatever started your suffering. And it can last much longer. At some point days later, you might think to yourself about how terrible that initial suffering was, and feel fear and suffering about the possibility of it coming back.

Many people suffer as much or more from meta-suffering than from suffering that comes from physical or situational sources!

The good news is that meta suffering is much easier to fix than deeper forms of suffering. One thing you can do is to collect data* in order to develop an accurate model of how often you actually feel bad.

Try monitoring your moods for awhile and get a baseline for what your moods actually are. At least half of the people who have suffered from major depression who have done this and spoken with me about it have been surprised to find that they often feel better than their self-perception when they assess their mood at random points throughout the day.

Regardless of what your default mood state or range is, once you know what it is, you are likely to feel less fear. You can look at what your mood his
017361aa-250f-48f0-9e16-a21b703757bf
trentmkelly/LessWrong-43k
LessWrong
The present perfect tense is ruining your life

Yes I'm talking about grammar. The present perfect tense describes actions that happened in the past but are relevant in the present. You say, "I have seen that movie," meaning you saw it in the past, but what's important is that presently you are a person who saw it.

How is a verb tense ruining my life?

There's a world of difference between wanting to do something and wanting to have done something. I'm not the first to find this distinction interesting:

> I don't believe any of you have ever read PARADISE LOST, and you don't want to. That's something that you just want to take on trust. It's a classic, just as Professor Winchester says, and it meets his definition of a classic — something that everybody wants to have read and nobody wants to read.
>
> Mark Twain, "The Disappearance of Literature"

What's dangerous is when we conflate the two tenses. We say, "I want to do X," when really we want to have done X. I've often caught myself doing this, and I see others doing it too. It's a form of deception (usually self-deception), and it sets us up for failure and frustration in our efforts. Let's talk about the nature of this mistake, why we make it, and how it comes back to bite us.

What does it mean to want in the present perfect tense?

Let's look closely at the difference between wanting to read Paradise Lost and wanting to have read Paradise Lost. Wanting in the present tense is simple: when you want to do something, your desire is for the act itself. Wanting in the present perfect tense is tricky: when you want to have done something, your desire is for a future state where you already did the thing and acquired the memories/experience of it.

Notice the language there—you acquired something, like you're doing work and getting a paycheck at the end. In fact even the original wording implies this: in English, we use the word "have" both to mean possession and to construct the present perfect tense ("I have a car" / "I have been to Florida"). Tha
fe8c5031-92ed-4f3d-a7d0-284ae37c8770
trentmkelly/LessWrong-43k
LessWrong
CFAR has a matched-donations fundraiser going on [LINK]

http://rationality.org/fundraiser2013/

The folks at CFAR are working on a post talking about all the good work they've done and plan to do, but in the meantime I'm putting this up as a placeholder in case anyone finishing up their 2013 donations literally doesn't know about it. Someone please let me know if the official announcement's already up and I've missed it.

UPDATE: Lukeprog pointed out that there's a stub in Main too. Also I put up my own bleg on behalf of CFAR here: http://benjaminrosshoffman.com/cfar-second-thoughts-and-a-bleg/ It was an interesting exercise in trying to write glowing praise while experiencing extreme annoyance. I hope I succeeded. I also hope the official version will be up very soon!

UPDATE2: CFAR now has a post up explaining what they are working on and why you should give. You should probably trust the official version more than mine. http://lesswrong.com/lw/jej/why_cfar/
50ccc0c8-aaa3-4647-88cb-c834ec166882
trentmkelly/LessWrong-43k
LessWrong
My 1st month at a "neurodivergent gifted school" called Minerva University

A personal blog (8 min read) consisting of:

Academics
* Habits of Mind and Foundational Concepts (HCs)
* Class content
* Class structure
* Mild pacing
* Somewhat rigorous
* Tolerable technical bugs

Students
* examples of eclectic students
* cultural diversity
* wholesome personality and humanities-lovers

Physical vibes
* Chaotic dorm and not aspie-friendly
* Messy and budget-constrained events
4ef0a501-46c6-4eba-a0e3-3869b3cb672e
trentmkelly/LessWrong-43k
LessWrong
Five-minute rationality techniques

Less Wrong tends toward long articles with a lot of background material. That's great, but the vast majority of people will never read them. What would be useful for raising the sanity waterline in the general population is a collection of simple-but-useful rationality techniques that you might be able to teach to a reasonably smart person in five minutes or less per technique.

Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." We can talk about this very precisely, in terms of Bayesian updating and conditional probability, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.

What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. Here are some candidates, to get the discussion started:

Candidate 1 (suggested by DuncanS): Unlikely events happen all the time. Someone gets in a car-crash and barely misses being impaled by a metal pole, and people say it's a million-to-one miracle -- but events occur all the time that are just as unlikely. If you look at how many highly unlikely things could happen, and how many chances they have to happen, then it's obvious that we're going to see "miraculous" coincidences, purely by chance. Similarly, with millions of people dying of cancer each year, there are going to be lots of people making highly unlikely miracle recoveries. If they didn't, that would be surprising.

Candidate 2: Admitting that you were wrong is a way of winning an argument. (The other person wins, too.) There's a saying that "It takes a big man to admit he's wrong," and when peo
6801fbc6-eeea-437e-b98c-24f58085aef5
trentmkelly/LessWrong-43k
LessWrong
Progress Report 1: interpretability experiments & learning, testing compression hypotheses

next post: Report 2

Intro

I got a grant from the Long Term Future Fund to refocus my career from data science & machine learning generally, to AI alignment research specifically. I feel most drawn to and suited for the interpretability angle. I'm also currently in the AGI fundamentals technical course. I had already read most of the readings, but I've been finding the discussions insightful and useful in broadening my perspective.

For my grant-funded time, the two subgoals I am focusing on are:

1. improving my skillset / experience with ML interpretability research
2. accumulating some evidence of my ability to be a good researcher / research engineer

My current concrete project in pursuit of these goals is to:

1. implement and understand some simple transformers, in line with those described in the research by the Transformer Circuits team (https://transformer-circuits.pub).
2. test some simple hypotheses I have on these, based on my ideas around revealing natural abstractions through applying transforms to the weights of a pre-trained model. These transforms could include: compression, factorization, clustering then encapsulation, and anything else that occurs to me or gets suggested to me. (Some details on my thoughts are in this previous post of mine: https://www.lesswrong.com/posts/7vcZTHzjCFE8SbjWF/neural-net-decision-tree-hybrids-a-potential-path-toward )

Initial steps

1. Can I apply compression (e.g. using an infoVAE) to information coming from component(s) of the pre-trained model and examine the change in performance on baselines? (Hopefully not too much of a drop. I plan to test and compare varying levels of compression to start with.) My hope is to find some representation of the space which is more intuitive to interpret. Related link: https://youtu.be/BePQBWPnYuE
2. Will the information bottlenecks (e.g. latent layers of the infoVAEs) be useful to examine? Will it either be easier to understand or provide a useful alternate view o
691f5eb2-6145-42c1-9df8-9e39146bb7c9
trentmkelly/LessWrong-43k
LessWrong
Meetup : Tel Aviv Meetup: Social & Board Games

Discussion article for the meetup : Tel Aviv Meetup: Social & Board Games

WHEN: 09 June 2015 07:00:00PM (+0300)

WHERE: 98 Yigal Alon Street, Tel Aviv

June 9 at 19:00 we're going to have a social meetup! It's going to be a game night full of people talking about physics, friendly AI, and how to effectively save the world. Please bring any games you'd like to play.

The Israeli LessWrong community meets every two weeks, alternating between lectures and social/gaming nights.

Meet at Google, Electra Tower, 98 Yigal Alon Street, Tel Aviv: The 29th floor (not the Google Campus floor). We'll then move to a room.

Contact: If you can't find us, call Anatoly, who is graciously hosting us, at 054-245-1060; or Joshua at 054-569-1165.
d258333c-39d7-4de6-8ef0-042435769934
trentmkelly/LessWrong-43k
LessWrong
[AN #173] Recent language model results from DeepMind

HIGHLIGHTS

Scaling Language Models: Methods, Analysis & Insights from Training Gopher (Jack W. Rae et al) (summarized by Rohin): This paper details the training of the Gopher family of large language models (LLMs), the biggest of which is named Gopher and has 280 billion parameters. The algorithmic details are very similar to the GPT series (AN #102): a Transformer architecture trained on next-word prediction. The models are trained on a new data distribution that still consists of text from the Internet but in different proportions (for example, book data is 27% of Gopher's training data but only 16% of GPT-3's training data).

Like other LLM papers, there are tons of evaluations of Gopher on various tasks, only some of which I'm going to cover here. One headline number is that Gopher beat the state of the art (SOTA) at the time on 100 out of 124 evaluation tasks.

The most interesting aspect of the paper (to me) is that the entire Gopher family of models were all trained on the same number of tokens, thus allowing us to study the effect of scaling up model parameters (and thus training compute) while holding data constant. Some of the largest benefits of scale were seen in the Medicine, Science, Technology, Social Sciences, and the Humanities task categories, while scale has not much effect or even a negative effect in the Maths, Logical Reasoning, and Common Sense categories. Surprisingly, we see improved performance on TruthfulQA (AN #165) with scale, even though the TruthfulQA benchmark was designed to show worse performance with increased scale.

We can use Gopher in a dialogue setting by prompting it appropriately. The prompt specifically instructs Gopher to be "respectful, polite, and inclusive"; it turns out that this significantly helps with toxicity. In particular, for the vanilla Gopher model family, with more scale the models produce more toxic continuations given toxic user statements; this no longer happens with Dialogue-Prompted Gopher models, which
a529c258-8e37-49ed-9aaf-b2b5cf18be07
trentmkelly/LessWrong-43k
LessWrong
AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0

TL;DR

We are excited to announce the fourth iteration of ARENA (Alignment Research Engineer Accelerator), a 4-5 week ML bootcamp with a focus on AI safety! ARENA's mission is to provide talented individuals with the skills, tools, and environment necessary for upskilling in ML engineering, for the purpose of contributing directly to AI alignment in technical roles. ARENA will be running in-person from LISA from 2nd September - 4th October (the first week is an optional review of the fundamentals of neural networks).

Apply here before 23:59 July 20th anywhere on Earth!

Summary

ARENA has been successfully run three times, with alumni going on to become MATS scholars and LASR participants; AI safety engineers at Apollo Research, Anthropic, METR, and OpenAI; and even starting their own AI safety organisations!

This iteration will run from 2nd September - 4th October (the first week is an optional review of the fundamentals of neural networks) at the London Initiative for Safe AI (LISA) in Old Street, London. LISA houses small organisations (e.g., Apollo Research, BlueDot Impact), several other AI safety researcher development programmes (e.g., LASR Labs, MATS extension, PIBBS, Pivotal), and many individual researchers (independent and externally affiliated). Being situated at LISA, therefore, brings several benefits, e.g. facilitating productive discussions about AI safety & different agendas, allowing participants to form a better picture of what working on AI safety can look like in practice, and offering chances for research collaborations post-ARENA.

The main goals of ARENA are to:

* Help participants skill up in ML relevant for AI alignment.
* Produce researchers and engineers who want to work in alignment and help them make concrete next career steps.
* Help participants develop inside views about AI safety and the paths to impact of different agendas.

The programme's structure will remain broadly the same as ARENA 3.0 (see below); however, we are also a
e2c6ddd8-dd83-4bc8-991f-ea7c13b0e8bc
trentmkelly/LessWrong-43k
LessWrong
Hell is wasted on the evil

Chuck Palahniuk is an author like no other. Consider his second-most-popular book Choke.

> Victor Mancini, a medical-school dropout, is an antihero for our deranged times. Needing to pay elder care for his mother, Victor has devised an ingenious scam: he pretends to choke on pieces of food while dining in upscale restaurants. He then allows himself to be "saved" by fellow patrons who, feeling responsible for Victor's life, go on to send checks to support him. When he's not pulling this stunt, Victor cruises sexual addiction recovery workshops for action, visits his addled mom, and spends his days working at a colonial theme park.
>
> ―from the summary of Choke by Chuck Palahniuk

Some authors write to entertain. Others write to educate. I want my reader to shout "What the fuck did I just read‽"

Fight Club is Chuck Palahniuk's cleanest and least happy book. This is not a coincidence. A common theme running through Palahniuk's works is the idea that extreme happiness and extreme suffering are two sides of the same coin. When editors compelled him to tone down the horrors in Fight Club, they sucked the happiness out of it too.

When I take vacations, I don't go anywhere nice. I don't make myself comfortable. My vacations tend to be desperate struggles for survival in the impoverished sectors of awful technodystopias like China. Or—you know—just desperate struggles for survival in the wilderness.

When I was 19 I took a Greyhound to Las Vegas. I earned enough money performing street magic to buy food at Panda Express. While I was there, a man offered to buy me dinner with his family. I turned it down because I didn't want to accept charity. Only later did it occur to me that I must have been the most interesting person he met in Las Vegas and that buying my stories for the price of a hamburger would be a steal compared to the alternative entertainment offerings.

Stories are about suffering. A good novelist tortures her characters. I know many software engineers (and
cd84ad7e-0724-41eb-9b4e-31650f813fdc
trentmkelly/LessWrong-43k
LessWrong
The Mom Test: Summary and Thoughts

I just finished reading The Mom Test for the second time. I took "raw" notes here. In this post I'll first write up a bullet-point summary and then ramble off some thoughts that I have.

Summary

Introduction:

* Trying to learn from customer conversations is like trying to excavate a delicate archeological site. The truth is down there somewhere, but it's fragile. When you dig you get closer to the truth, but you also risk damaging or smashing it.
* Bad customer conversations are worse than useless because they mislead you, convincing you that you're on the right path when instead you're on the wrong path.
* People talk to customers all the time, but they still end up building the wrong things. How is this possible? Almost no one talks to customers correctly.
* Why another book about this? Why this author?
  * Rob is a techie, not a sales guy. We need something targeted at techies.
  * To understand how to do something correctly, you have to understand how it can go wrong. Rob has lots of experience with things going wrong here.
  * It's practical, not theoretical.

Chapter 1 - The Mom Test:

* Everyone knows that you shouldn't ask your mom whether your business idea is good. But the issue isn't who you're asking, it's how you're asking. Yes, your mom is more likely[1] than others to praise you and tell you that your idea is good. But if you ask "what do you think of my idea", almost anyone will feel too uncomfortable to be constructive and honest with you.
* It's not other people's responsibility to tell you the truth. It's your responsibility to find it by asking good questions.
* The Mom Test is a series of rules for crafting good questions that even your mom can't lie to you about.
  * Talk about their life instead of your idea.
  * Ask about specifics in the past instead of hypotheticals about the future.
  * Talk less and listen more.
* You're not allowed to tell them what their problems are. They're not allowed to tell you what the solutions sh
d5092082-b756-47a6-a781-54f7cfb4ef7e
trentmkelly/LessWrong-43k
LessWrong
Why We Should Always Distrust Our Certainties

While just thinking around, which I often do when nothing else offers, I recently came across a line of thought which presented an insight into a possible cause of much of the misunderstanding which arises within and between human minds. Since I have not seen this concept mentioned before, and since others may also find it worth considering, I here put it out for what it may be worth.

Both in our communications with others and in our internal monologues with ourselves, we often employ the words or concepts of "certainty". I here wish to examine the meaning that "certainty" holds for others and for ourselves, as well as the effects that it may cause for the accuracy of our mental conclusions. I will come to the conclusion that in some rather clearly distinguishable conditions, the use of the concept "certainty" serves to obscure, rather than to clarify, the objects of thought. What it conveys at these times is the following: "I am firmly convinced of the truth of this. I have no doubt about it. I have examined it in every way and cannot be mistaken. I certainly do not need to examine it again."

This feeling will reappear instantaneously whenever this concept enters one's thoughts. The result will be that this concept will not be reexamined in any way. None of any possible observations or reasonings which may have once, perhaps years ago, have supported the belief, will be brought before the mind. Instead a very strong feeling of being correct, being justified, not being mistaken will both reinforce the belief, and provide a very pleasant feeling that the mind will quite willingly return to on future occasions. In other words, when the "certainty" feeling is once activated, it becomes more and more unlikely that it ever will be discarded. If it was once in any way an error, that error will continue to remain and strengthen.

A less insistent form of what I am pointing out is, of course, already familiar to us all in the form "we tend to remember what attr
c4b98457-bbb3-4ec8-9529-a38ded5b266e
trentmkelly/LessWrong-43k
LessWrong
The Cult Of Reason

So... while investigating Wikipedia I found out about an actual Cult. Of Reason. Revolutionary France. From the description, it sounds pretty awesome. Here's the link.

Is this denomination usable? Is it useful? Can it be resurrected? Should it be? Is it compatible with what we stand for? Discuss.

Also, note that in French "Culte" does not mean "Sect", it means "the act of worshipping".