id (string, 36 chars) | source (string, 15 classes) | formatted_source (string, 13 classes) | text (string, 2 to 7.55M chars) |
|---|---|---|---|
45bd3172-2c1b-4594-8537-ac1fbbdc52a6 | trentmkelly/LessWrong-43k | LessWrong | FHI Essay Competition
This competition is only open to philosophy students.
> Can philosophical research contribute to securing a long and prosperous future for humanity and its descendants?
>
> What would you think about if you really wanted to make a difference?
>
> Crucial considerations are questions or ideas that could decisively change your entire approach to an issue. What are the crucial considerations for humanity’s future? These could range from deep questions about population ethics to world government, the creation of greater than human intelligence, or the risks of human extinction.
>
> The Future of Humanity Institute at Oxford University wants to get young philosophers thinking about these big questions. We know that choosing a PhD thesis topic is one of the big choices affecting the direction of your career, and so deserves a great deal of thought. To encourage this, we are running a slightly unusual prize competition. The format is a two page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis regarding a crucial consideration for humanity’s future. We will publish the best abstracts on our website and give a prize of £2,000 to the author of the proposal we deem the most promising or original. |
4f9ecdf6-71cc-4f21-9f55-1feae6745289 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Field-Building and Deep Models
*What is important in hiring/field-building in x-risk and AI alignment communities and orgs? I had a few conversations on this recently, and I'm trying to publicly write up key ideas more regularly.*
*I had in mind the mantra 'better written quickly than not written at all', so you can expect some failures in enjoyability and clarity. No character represents any individual, but is an amalgam of thoughts I’ve had and that others have raised.*
---
albert cares deeply about x-risk from AI, and wants to grow the field of alignment quickly; he also worries that people in the x-risk community err too much on the side of hiring people similar to themselves.
ben cares deeply about x-risk from AI, and thinks that we should grow the AI safety community slowly and carefully; he feels it’s important to ensure new members of the community understand what’s already been learned, and avoid the eternal september effect.
---
albert: So, I understand you care about picking individuals and teams that agree with your framing of the problem.
ben: That sounds about right - a team or community must share deep models of their problems to make progress together.
albert: Concretely on the research side, what research seems valuable to you?
ben: If you’re asking what I think is most likely to push the needle forward on alignment, then I’d point to MIRI’s and Paul’s respective research paths, and also some of the safety work being done at DeepMind and FHI.
albert: Right. I think there are also valuable teams being funded by FLI and Open Phil who think about safety while doing more mainstream capabilities research. More generally, I think you don't need to hire people that think very similarly to you in your organisations. Do you disagree?
ben: That's an interesting question. On the non-research side, my first thought is to ask what Y Combinator says about organisations. One thing we learn from YC is that the first 10-20 hires of your organisation will make or break it, especially the co-founders. Picking even a slightly suboptimal co-founder - someone who doesn't perfectly fit your team culture, understand the product, and work well with you - is the easiest way to kill your company. This suggests to me a high prior on selectivity (though I haven't looked in detail into the other research groups you mention).
albert: So you're saying that if the x-risk community is like a small company it's important to have similar views, and if it's like a large company it's less important? Because it seems to me that we're more like a large company. There are certainly over 20 of us.
ben: While 'size of company' is close, it's not quite it. You can have small companies like restaurants or corner stores where this doesn't matter. The key notion is one of *inferential distance*.
To borrow a line from Peter Thiel: startups are very close to being cults, except that where cults are very *wrong* about something important, startups are very *right* about something important.
As founders build detailed models of some new domain, they also build an inferential distance of 10+ steps between themselves and the rest of the world. They start to feel like everyone outside the startup is insane, until the point where the startup makes billions of dollars and then the update propagates throughout the world (*"Oh, you can just get people to rent out their own houses as a BnB"*).
A founder has to make literally thousands of decisions based off of their detailed models of the product/insight, and so you can't have cofounders who don't share at least 90% of the deep models.
albert: But it seems many x-risk orgs could hire people who don't share our basic beliefs about alignment and x-risk. Surely you don’t need an office manager, grant writer, or web designer to share your feelings about the existential fate of humanity?
ben: Actually, I'm not sure I agree with that. It again comes down to how much the org is doing new things versus doing things that are central cases of a pre-existing industry.
At the beginning of Open Phil's existence they wouldn't have been able to (say) employ a typical 'hiring manager' because the hiring process design required deep models of what Open Phil's strategy was and what variables mattered. For example ‘how easily someone can tell you the strength and cause of their beliefs’ [was important](https://www.openphilanthropy.org/blog/process-hiring-our-first-cause-specific-program-officer#Sec1_3) to Open Phil.
Similarly, I believe the teams at CFAR and MIRI have optimised workshops and research environments respectively, in ways that depend on the specifics of their particular workshops/retreats and research environments. A web designer needs to know the organisation’s goals well enough to model the typical user and how they need to interact with the site. An operations manager needs to know what financial trade-offs to make; how important for the workshop is food versus travel versus ergonomics of the workspace. Having every team member understand the core vision is necessary for a successful organisation.
albert: I still think you're overweighting these variables, but that's an interesting argument. How exactly do you apply this hypothesis to research?
ben: It doesn't apply trivially, but I'll gesture at what I think: Our community has particular models, a worldview, and a general culture that helped it notice AI risk in the first place, and that have produced some pretty outstanding research (e.g. logical induction, functional decision theory); I think that the culture is a crucial thing to sustain, rather than to be cut away from the insights it's produced so far. It’s important, for those working on furthering its insights and success, to deeply understand the worldview.
albert: I agree that having made progress on issues like logical induction is impressive and has a solid chance of being very useful for AGI design. And I have a better understanding of your position - sharing deep models of a problem is important. I just think that some other top thinkers will be able to make a lot of the key inferences themselves - look at Stuart Russell for example - and we can help that along by providing funding and infrastructure.
Maybe we agree on the strategy of providing great thinkers the space to think about and discuss these problems? For example, events where top AI researchers in academia are given the space to share models with researchers closer to our community.
ben: I think I endorse that strategy, or at least the low-fidelity one you describe. I expect we'd have further disagreements when digging down into the details, structure and framing of such events.
But I will say, when I've talked with alignment researchers at MIRI, something they want more than people working on agent foundations, or Paul's agenda, are people who grok a bunch of the models and *still have disagreements*, and work on ideas from a new perspective. I hope your strategy helps discover people who deeply understand and have a novel approach to the alignment problem.
---
*For proofreads on various versions of this post, my thanks to Roxanne Heston, Beth Barnes, Lawrence Chan, Claire Zabel and Raymond Arnold. For more extensive editing (aka telling me to cut a third of it), my thanks to Laura Vaughan. Naturally, this does not imply endorsement from any of them (most actually had substantial disagreements).* |
68808a21-3324-4ade-8d58-9f50d8b84b5e | trentmkelly/LessWrong-43k | LessWrong | Black Death at the Golden Gate (book review)
Book review: Black Death at the Golden Gate: The Race to Save America from the Bubonic Plague, by David K. Randall.
Imagine a story about an epidemic that reached San Francisco, after devastating parts of China. A few cases are detected, there's uncertainty about how long it's been spreading undetected, and a small number of worried public health officials try to mobilize the city to stop an imminent explosion of disease. Nobody knows how fast it's spreading, and experts only have weak guesses about the mechanism of transmission. News media and politicians react by trying to suppress those nasty rumors which threaten the city's economy.
Sound familiar?
The story is about a bubonic plague outbreak that started in 1900. It happens shortly after the dawn of the Great Sanitary Awakening, when the germ theory of disease is fairly controversial. A few experts in the new-fangled field of bacteriology have advanced the radical new claim that rats have some sort of connection to the spread of the plague, and one has proposed that the connection involves fleas transmitting the infection through bites. But the evidence isn't yet strong enough to widely displace the standard hypothesis that the disease is caused by filth.
There was a vaccine for the bubonic plague, which maybe helped a bit. It was only 50% effective, the benefits lasted about 6 months, and the side effects sound like cruel and unusual punishment. It was controversial and often resisted, much like the compulsory smallpox vaccinations of the time.
Yet the plague didn't seem to know that it was supposed to grow at exponential rates. That left an eerie sense of mystery about how the plague could linger for years, with people continuing to disagree about whether it existed.
1.
I'm unsure whether to classify the book as history or as historical fiction.
If I had been led to expect that this was a work of fiction, I would only have noticed a few hints that it's based on actual events. The most obvious hin |
a361d7f0-be99-4c47-a40a-2488d837e105 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Against using stock prices to forecast AI timelines
*by* [*Trevor Chow*](https://tmychow.com/), [*Basil Halperin*](https://basilhalperin.com/), *and* [*J. Zachary Mazlish*](https://jzmazlish.substack.com/)
[Note: This is an appendix to ["AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years"](https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or)]
One naive objection would be the claim that real interest rates sound like an odd, arbitrary asset price to consider. Certainly, real interest rates are not frequently featured in newspaper headlines – if any interest rates are quoted, it is typically *nominal* interest rates – and stock prices receive by far the most popular attention.
**The importance of real rates.** However, even if real interest rates are not often discussed, real interest rates affect *every* asset price. This is because asset prices always reflect some *discounted* value of future cash flows: for example, the price of Alphabet stock reflects the present *discounted* value of future Alphabet dividend payments. These future dividend payments are discounted using a discount rate *which is determined by the prevailing real interest rate*. Thus the claim that real interest rates affect every asset price.
As a result, if real interest rates are ‘wrong’, *every* asset price is wrong. If real interest rates are wrong, *a lot* of money is on the table.
**Stocks are hard to interpret.** It may nonetheless be tempting to look at *stock* prices to attempt to interpret how the market is thinking about AI timelines (e.g. [Ajeya Cotra](https://www.airr.io/quote/636d77b9249836a41ad9e0e1); [Matthew Barnett](https://twitter.com/MatthewJBar/status/1599394102881456130); [/r/ssc](https://old.reddit.com/r/slatestarcodex/comments/v7omil/if_we_think_agi_is_imminent_in_which_companies/)). It may be tempting to consider the high market capitalization of Alphabet as reflecting market expectations for large profits generated by DeepMind’s advancing capabilities, or TSMC’s market cap as reflecting market expectations for the [chipmaker](https://twitter.com/robertwiblin/status/1577782210568003585) to profit from AI progress.
However, extracting AI-related expectations from stock prices is a very challenging exercise – to the point that we believe it is simply futile – for four reasons.
1. First, and most importantly, these companies will only have the possibility of high profits if transformative AI is *aligned*; under unaligned AI, the value of stocks along with everything else is converted to zero.
2. Second, it is not obvious that, even in the aligned case, these companies will earn high profits. For instance, OpenAI has committed to a capped profit model, and others [may](https://www.metaculus.com/questions/4061/will-any-major-ai-company-commit-to-an-ai-windfall-clause-by-2025/) sign on to a similar ‘[Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/)’. Beyond corporate altruism, it seems extremely plausible that if a private company develops truly transformative AI technology then the state will (attempt to) nationalize and expropriate it to distribute the benefits more broadly, preventing profits.
3. Third, stock valuations are extremely idiosyncratic: which stock should we be looking at? And critically, even if we take a basket of tech companies and average over them, then this only includes *public* companies. If the market expects transformative AI in 12 months, but only because it will be developed by OpenAI – a company which is not traded publicly – then this will not show up in any equity index.
4. Fourth, and quite importantly, *it is not obvious whether expectations of transformative AI would raise or lower stock prices*. This is because, as described in the previous subsection, stock prices reflect the present-discounted value of future profits; and advanced AI may raise those future profits, but – as the central thesis of this piece argues – advanced AI would *also* raise the interest rate used to discount those profits. The net effect on stock prices is not immediately obvious.
1. (In math, briefly: if the price P is the value of future profits D discounted at rate r, i.e. P = D/r, then transformative AI may raise future profits D but it could raise the discount rate r by even more; see the numeric sketch after this list.)
2. (Higher growth causes lower average stock prices if the [intertemporal elasticity of substitution](https://eml.berkeley.edu/~enakamura/teaching/EstimatingIES.pdf) is greater than one, rather than less than one. This parameter is subject to significant debate; see the linked slides for useful discussion. John Cochrane offers additional intuition [here](https://johnhcochrane.blogspot.com/2018/02/stock-gyrations.html) and argues that the empirically-relevant case is the one where higher growth causes lower equity prices: expectations for transformative AI would *lower* equity prices.)
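To make that ambiguity concrete, here is a minimal numeric sketch of the P = D/r logic from point (1); the profit and rate multiples are illustrative assumptions, not forecasts.

```python
# Present value of a perpetual profit stream D discounted at rate r (P = D/r).
def price(D, r):
    return D / r

baseline = price(D=100, r=0.02)         # P = 5000.0
ai_boom = price(D=100 * 3, r=0.02 * 4)  # profits triple, but rates quadruple
print(baseline, ai_boom)                # 5000.0 vs 3750.0: the price falls
```

Whether equity prices rise or fall therefore hinges on which of D and r moves more, which is exactly the debate in the linked discussions.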
If you want to use market prices to predict AI timelines, using equities is not a great way to do it.
In contrast, real interest rates do not suffer from these problems. |
12fc7016-1981-44d6-8d74-01835925ce09 | trentmkelly/LessWrong-43k | LessWrong | A Floating Cube - Rejected HLE submission
This question I submitted got rejected from the Humanity's Last Exam (HLE) benchmark set for being too easy. I'm really proud of it though, so I figured I'd post it here.
> A wooden cube of unit side length and relative density 0.75 floats stably in a pool of water. What is the distance from the highest point on the cube to the surface of the water, calculated to four decimal places? |
883c416e-a5fa-4352-90d4-df2807d4a13e | trentmkelly/LessWrong-43k | LessWrong | Project Intro: Selection Theorems for Modularity
Introduction - what is modularity, and why should we care?
It’s a well-established meme that evolution is a blind idiotic process, that has often resulted in design choices that no sane systems designer would endorse. However, if you are studying simulated evolution, one thing that jumps out at you immediately is that biological systems are highly modular, whereas neural networks produced by genetic algorithms are not. As a result, the outputs of evolution often look more like something that a human might design than do the learned weights of those neural networks.
Humans have distinct organs, like hearts and livers, instead of a single heartliver. They have distinct, modular sections of their brains that seem to do different things. They consist of parts, and the elementary neurons, cells and other building blocks that make up each part interact and interconnect more amongst each other than with the other parts.
Neural networks evolved with genetic algorithms, in contrast, are pretty much uniformly interconnected messes. A big blob that sort of does everything that needs doing all at once.
Again in contrast, networks in the modern deep learning paradigm apparently do exhibit some modular structure.
[Figure. Top: Yeast Transcriptional Regulatory Modules - clearly modular. Bottom: circuit diagram evolved with genetic algorithms - non-modular mess.]
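(To make "clearly modular" versus "non-modular mess" concrete, here is a small hedged sketch quantifying modularity with Newman's Q score via networkx; the graphs are toy stand-ins for the systems pictured, not real data.)

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Four densely connected clusters, sparsely linked together - organ-like.
G = nx.connected_caveman_graph(4, 6)
print(modularity(G, greedy_modularity_communities(G)))  # high Q: clearly modular

# A random graph of the same size and density - an interconnected blob.
H = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
print(modularity(H, greedy_modularity_communities(H)))  # noticeably lower Q
```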
Why should we care about this? Well, one reason is that modularity and interpretability seem like they might be very closely connected. Humans seem to mentally subpartition cognitive tasks into abstractions, which work together to form the whole in what seems like a modular way. Suppose you wanted to figure out how a neural network was learning some particular task, like classifying an image as either a cat or a dog. If you were explaining to a human how to do this, you might speak in terms of discrete high-level concepts, such as face shape, whiskers, or mouth.
How and when does that come about, exactly? It clearly doesn’t |
66ecf0cc-f3a1-4a54-b378-e2fa891f8fc7 | trentmkelly/LessWrong-43k | LessWrong | Open thread, Oct. 13 - Oct. 19, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
0fa923d2-c699-40b2-aefd-6dc96c372f91 | trentmkelly/LessWrong-43k | LessWrong | The Problem (TM) - Part 2
From part 1: http://bearlamp.com.au/the-problem-tm-analyse-a-conversation/
part 1 on lesswrong: http://lesswrong.com/r/discussion/lw/nsn/
(this) part: http://bearlamp.com.au/the-problem-analyse-a-conversation-part-2/
> I had a chat with a person who admitted to having many problems themselves. I offered my services as a problem solving amateur, willing to try to get to the bottom of this. Presented is the conversation (With details changed for privacy).
>
> I had my first shot at analysing the person's problems and drilling down to the bottom. I am interested in what other people have to say is the problem. Here we study the meta-strategy of how to solve the problem, which I find much more interesting than the object level analysis of the problem and how to solve it.
>
> I don't think I got to the bottom of the problem, and I don't think I conducted myself in a top-notch capacity. Nevertheless, I wonder if you have any comments about what IS TheProblem(tm), how you came to that conclusion, and what can be done about it (for the benefit of this person and anyone with a similar problem).
----------------------------------------
> What is actually the problem? I have a theory, but I also wanted to publish this without declaring my answer. I will share my ideas in a few weeks but I want to know what you think and how you came to that answer.
----------------------------------------
This is a new style of post so I expected some responses along the lines of:
> I considered downvoting. I opted instead to ignore after reading the preamble. - buybuydandavis
That's fine. It was literally a chat log. Not for everyone.
I also got some interesting and relevant responses. There are several and they overlap so I decided it's best to answer with another post.
----------------------------------------
Many people narrowed down to a few particularly alarming examples:
* The most alarming part of that conversation for me was "A few weeks ago I punched a |
0b09ca0b-55aa-43c2-ac2b-006a296823e1 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Vancouver Politics Meetup
Discussion article for the meetup : Vancouver Politics Meetup
WHEN: 12 May 2012 01:00:00PM (-0700)
WHERE: Benny's Bagels 2505 W Broadway Vancouver, BC
We are going to be at Benny's again. Saturday this time, because we know some of you can't do Sundays. 13:00.
The sequence we are reading this week is Politics is the Mind-Killer, so read that and come discuss. We prefer that you come out even if you haven't read anything, though.
See you all there.
Discussion article for the meetup : Vancouver Politics Meetup |
460785f5-eb06-4268-b85a-956ee7b13594 | trentmkelly/LessWrong-43k | LessWrong | Why Are Women Hot?
Cross-posted, as always, from Putanumonit.
----------------------------------------
Why are women hot?
They just are is not an unreasonable answer, but it’s not like women are hot entirely by coincidence. Women put a lot of effort into their appearance. They care a great deal about their looks, as does everyone around them.
Why do women want to be hot?
The default answer: it’s sex appeal. Women are hot so that men will want to have sex with them. Men wanting to have sex with a woman results in her getting things she wants like companionship, resources, and even sex.
And it’s undeniable that men like having sex with hot women. Or at least, that they like imagining having sex with hot women. Although actually, the sort of women that men imagine having sex with the most is not the sort that most men describe as beautiful, which also isn’t who they marry, which itself isn’t the look that most women are trying to achieve. Taking a careful look at men looking at women, whether they be ancient sculptors or club VIPs or Tinder matches, reveals that something is missing with the default answer.
There is more to female beauty than meets the eye of the male beholder.
A Fat Bitch. What Could This Mean?
A good place to start our investigation of female beauty is Dispelling Beauty Lies: The Truth About Feminine Beauty by an evil vizier in charge of the sultan’s harem. What lies does it dispel?
Vizier notes that models advertising fashion are often not very beautiful. Heroin chic mannequins picked by gay male designers for the catwalk or “body positivity” models cast to align the brand with a political message, neither of those are maximally attractive to the majority of men. Simply asking men what they find beautiful is also fallible, with several reasons listed for both men and women to lie in public about whom they find beautiful. Only revealed preference is to be trusted.
Men’s preferences reveal a lot. Noticing a striking convergence between ancient statues of love |
cce64d10-5557-4532-be34-c66fe81d6993 | trentmkelly/LessWrong-43k | LessWrong | When do you visualize (or not) while doing math?
I've been doing a lot more math lately, and, of course, also thinking about how I can do it faster and with deeper understanding. When I was in college, I remember that I found visualizing immensely helpful, to the point where it's almost what "doing math" is for me.
The most obviously useful place to visualize was in multivariable and vector calculus. Especially because so many things were set in three dimensions, visualizing expressions made it so much easier to see why they were true.
I also visualize all the way down to doing algebra. When I symbolically manipulate an equation, I visualize things like bringing the letters across, or them merging into each other.
But it has its downsides. One is that, since I'm visualizing, I'm visualizing something specific; a specific function, a specific type of set. It's not necessarily high-resolution, but it doesn't usually cover the full range of what is represented by the symbols on the paper. And so by visualizing something specific, it's likely that I'm not keeping in mind all the types of cases.
The most obvious downside is that visualization is computationally intensive. When I'm rendering a big scene in my mind, I can feel that it's hard. Sometimes I have to close my eyes and block out the light. Things can slow down, and it can take several tries to get the visual just right, or to maintain it all the way through the manipulation. It seems plausible that this gets me more tired than is necessary, and causes me to stop doing the task earlier.
So, it has at least occurred to me that I could "try" doing something else. I put "try" in quotes because visualizing is so natural that it would be effortful for me to even think of an alternative. It's as if someone said "you know, you don't have to shovel that snow by hand" and I'm thinking, do they mean I could do it with my feet? What they actually mean is using a snow-blower.
Are there fundamentally different ways that you do math? Are there particular tasks you've tried |
f335fc1b-5f3c-4242-9272-f2661e270d93 | trentmkelly/LessWrong-43k | LessWrong | List of Q&A Assumptions and Uncertainties [LW2.0 internal document]
Context
1. This is the second in a series of internal LessWrong 2.0 team documents we are sharing publicly (with minimal editing) in an effort to help keep the community up to date with what we're thinking about and working on.
I suggest you first read this other document for context.
2. Caveat! This is an internal document and does not represent any team consensus or conclusions; it was written by me (Ruby) alone and expresses my own in-progress understanding and reasoning. To the extent that the models/arguments of the other team members are included here, they've been filtered through me and aren't necessarily captured with high fidelity or strong endorsement. Since it was written on March 18th, it isn't even up to date with my own thinking.
Epistemic status: Since the 18th when I first wrote this, I have many new lists and a lot more information. Yet this one still serves as a great intro into all the questions to be asked about Q&A and what it can and should be.
Originally written March 18, 2019
Related: Q&A Review + Case for a Marketplace
* HIGH LEVEL
* Is it actually the case that Q&A for serious research is this big, new, different thing which requires a big shift for people? Maybe it's not such an adjustment?
* How willing are people to do serious research work for others on the internet?
* RESEARCH PROCESS (and suitability for collaboration) <tease these out by talking through their recent research>
* Can "significant research" be partitioned into discrete questions?
* Or is it more that there is a deeper, bigger question around which someone needs to become an expert, and any question posed is downstream of the real question and can't be treated in isolation?
* Perhaps talk to the Ought folk about this.
* Do people have general open research questions they vaguely want answered and are willing to have sit unanswered for a relatively long period of time?
* Or do they mainly have (and prioritize) |
988dd27b-a2ca-44cd-a6fa-ab92d6bab06c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Tel Aviv Meetup: Choosing research topics
Discussion article for the meetup : Tel Aviv Meetup: Choosing research topics
WHEN: 26 June 2014 07:00:00PM (+0300)
WHERE: Menachem Begin Road 7, 15th floor, Ramat Gan, Israel
(This is a duplicate generated by a bug. Now trying to resolve the bug. See meeting notice at http://lesswrong.com/meetups/11p)
At the next LessWrong meeting, on Thursday, June 26 at 19:00, we'll prepare for an upcoming MIRIx session. We will hear short talks on open problems in FAI and choose some problems to tackle. VisionMap has kindly agreed to host us: Gibor Sport House, 15th Floor, 7 Menachem Begin St., Ramat-Gan. If you can't find us, call Vadim Kosoy at 0542600919 or me at 0545691165.
Discussion article for the meetup : Tel Aviv Meetup: Choosing research topics |
ae4b31eb-b8da-4477-b44e-450208c09ce6 | trentmkelly/LessWrong-43k | LessWrong | 2 (naive?) ideas for alignment
Context
I'm interested in working on alignment, coming from a programmer/Haskell background. I have two ideas that are unlikely to be info hazards, that I want to post here in order to get feedback/info about prior work.
Both are very rough/early stage, view this as me presenting a pre-MVP to get some very early feedback.
Idea 1 - "Love is that which enables choice" (Inspired by Forrest Landry)
This is an idea for a potential goal/instruction for AI (can't recall the fancy term). The idea is to make an AI that optimizes for optionality: maximizing the total sum of agency for all human and non-human agents. Agency is here loosely defined as "Optional ability to make changes to the world".
Making it the sum total would discourage situations where the ability of one person to effect change would hamper the ability of someone else.
Idea 2 - Segmented gradient descent training optimized for collaboration between different agents
This is an idea for a potential training method, that I think may have a big attractor basin for collaborative traits. The idea is to have some kind of gradient descent-esque training where AI agents of varying calibres/types are put in training scenarios in which a premium is put on collaboration. This is run in multiple iterations, where AI that successfully collaborates with other agents get to continue.
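A minimal sketch of this selection loop, under heavy assumptions: agents here are toy numeric "policies" and the collaboration reward is a made-up stand-in, so this only illustrates the iterate-and-select structure, not a real training method.

```python
import random

def collaboration_score(a, b):
    # Hypothetical joint-task reward: toy policies do well when they complement
    # each other. A real setup would run agents together in an environment.
    return -abs(a - b)

def select_generation(agents, survival_fraction=0.5):
    # Rank each agent by its total collaboration score against all others,
    # and keep only the most collaborative fraction.
    def total_score(i):
        return sum(collaboration_score(agents[i], agents[j])
                   for j in range(len(agents)) if j != i)
    ranked = sorted(range(len(agents)), key=total_score, reverse=True)
    keep = max(1, int(len(agents) * survival_fraction))
    return [agents[i] for i in ranked[:keep]]

population = [random.gauss(0, 1) for _ in range(20)]
for _ in range(5):
    survivors = select_generation(population)
    # Survivors "reproduce" with small mutations, as in a genetic algorithm.
    population = survivors + [a + random.gauss(0, 0.1) for a in survivors]
```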
The hardest thing about this is that we want an AI that is cooperative, but we do not want an AI that is naive, as this would lead to situations where terrorists convince the AI to do stupid shit. We could try to model this on human (cultural/biological) evolution.
One thing I like about this idea is that it might lead to AI that develops behavioural patterns akin to those found in herd animals (including humans). This would make the AI easier to reason about, and more likely to develop something akin to ethical behaviour. |
f682411a-8401-490c-be75-d2ba26c2e437 | StampyAI/alignment-research-dataset/arbital | Arbital | Wants to get straight to Bayes
A simple requisite page to mark whether the user has selected wanting to get straight into Bayes on the [Bayes' Rule Guide start page](https://arbital.com/p/1zq). |
98085eab-ecf5-40b9-8ba8-f1fa86fde353 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | self-improvement-executors are not goal-maximizers
Agents are [adaptation-executors](https://www.lesswrong.com/tag/adaptation-executors) with adaptations that accomplish goals, not goal-maximizers. Understanding agents as maximizing goals is a simplification used by humans to make them easier to understand. This is as true when the goal is self-improvement as it is with anything else.
Let's think about self-improvement in terms of goal specification. You want to accomplish a goal G, so as you add levels, the task becomes more indirect:
* make a system S0 to accomplish G
* make S1 to make a S0 that accomplishes G
* make S2 to make a S1 that makes a S0 that accomplishes G
Some people think that you can just tell S2 to do G and it will figure things out, but those people are interpreting S2 as a goal-maximizer, and agents are adaptation-executors. You have to specify the **adaptations** that will cause the **actions** that create a suitable S1. Reinforcement learning is a method ([not](https://en.wikipedia.org/wiki/Neuroevolution) the only method!) for specifying adaptations that accomplish a certain goal in the training environment, not a way of giving a system an inherent goal. I do think AI self-improvement is possible - I wrote [this](https://www.lesswrong.com/posts/rSycgquipFkozDHzF/ai-self-improvement-is-possible). So, I do think in some sense that specification of S2 can be easier than direct specification of S0, but I'm not under any illusions that it's the **same problem**.
I'm posting this because I saw [this post](https://www.lesswrong.com/posts/FgsoWSACQfyyaB5s7/shutdown-seeking-ai) (from yesterday) saying that people should try designing AIs that want to be shut down as a tripwire for dangerous recursive self-improvement, and that idea is based on the above misunderstanding. Also, "shut yourself down" is a particularly poor goal for 2 reasons.
1. It's binary. Training ML systems works much better when you can indicate partial accomplishment of the goal.
2. It's ill-defined. If you try to train a system to "shut itself down" what you are training it to do is "take provided action X that also coincidentally causes shutdown". You then need to make things harder if you're testing development of more-capable systems, and if you then take away X but leave other shutdown route Y, it's no more likely to do Y than anything else. Having negative average rewards over time doesn't train it to do Y until it does Y a bunch and gets reinforcement learning on doing Y. |
e6ee8d4c-e731-4207-aa73-8531f5089c23 | trentmkelly/LessWrong-43k | LessWrong | Draft: A concise theory of agentic consciousness
Consciousness can be understood as an interpersonally-oriented perception of situations, where the mind of a social specimen instinctively focuses on agents or personas within any given context. Even inanimate or non-conscious aspects of reality are often personified – perceived as adversaries, allies, or caring lovers, dialing our sense of threat, belonging, or safety.
Through consciousness, situations are interpreted primarily via the motives, intentions, and relationships between perceived agents. The mind seeks out coherence, tension, conflict, or harmony among these agents, constantly mapping social dynamics onto every scenario. This orientation drives us to look for alignment or discord between the intentions of different actors, including ourselves.
Emotions emerge as the embodied resonance of these perceptions. Evolutionarily, emotions can be seen as mechanisms that shortcut the path from situational perception to rapid action. By “summarizing” complex social dynamics into bodily felt states, emotions facilitate quick, adaptive responses to the environment whether that means approaching, avoiding, defending, or affiliating.
In essence, consciousness interprets the world through the lens of social agency, and emotions evolved as efficient signals that translate these interpretations into a potential for swift, decisive actions.
----
WIP subtitle: socially-oriented perspective/focal |
9f08e9cc-badb-455d-a122-68c8bc6b2899 | trentmkelly/LessWrong-43k | LessWrong | Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?
This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Wow. The Wall Street Journal just reported that, "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI."
Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps.
OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." (Musk, for his part, replied with just the word: "Swindler.")
Even if Altman were willing, it's not clear if this bid could even go through. It can probably best be understood as an attempt to throw a wrench in OpenAI's ongoing plan to restructure fully into a for-profit company. To complete the transition, OpenAI needs to compensate its nonprofit for the fair market value of what it is giving up.
In October, The Information reported that OpenAI was planning to give the nonprofit at least 25 percent of the new company, at the time, worth $37.5 billion. But in late January, the Financial Times reported that the nonprofit might only receive around $30 billion, "but a final price is yet to be determined." That's still a lot of money, but many experts I've spoken with think it drastically undervalues what the nonprofit is giving up.
Musk has sued to block OpenAI's conversion, arguing that he would be irreparably harmed if it went through.
But while Musk's suit seems unlikely to succeed, his latest gambit might significantly drive up the price OpenAI has to pay.
(My guess is that Altman will still ma |
4c7e7e42-bdf8-41f7-ad4e-eb8b8c5487f9 | trentmkelly/LessWrong-43k | LessWrong | The Mutant Game - Rounds 11 to 30
This game continues from the alternate timeline here where I made two mistakes in the game engine.
* Bots were passed their own previous move and told it was their opponent's previous move.
* Bots were always given 0 as the round index instead of the correct positive integer.
CloneBots
Multiple people have noted that CliqueZviBot is outperforming the other CloneBots. This is due to how the CloneBot code interacts with the bugs in my engine.
The CloneBots still cooperate, but they do so imperfectly. All CloneBot pairings result in 200-300 splits instead of 250-250 splits. The CloneBots use source code parity combined with round number parity to determine who wins the 200-300 split. Therefore if CloneBotA and CloneBotB get a 200-300 split in favor of CloneBotB then they will always get a 200-300 split in favor of CloneBotB.
Rounds 11-30
Round 11
> Looking at the obituary I suspect that CooperateBot may not last much longer.
>
> ― Prediction by Larks after seeing the results from Rounds 1 to 10
Larks' CooperateBot died on round 11.
Round 12
PasswordBot from Team Multics died along with "Why can't we all just get along" from Chaos Army and an NPC.
Round 13
No casualties.
Round 14
6 bots died.
* BeauBot, OscillatingTwoThreeBot, RandomOrGreedyBot and SimplePatternFinderBot from Chaos Army
* "Definitely Not Collusion Bot" from Team Multics. Multicore's fodder has been consumed. Team Multics contains only the MimicBot from here on.
* 1 NPC
Round 15
5 bots died.
* Silly Invert Bot 2, AttemptAtFair, Insum's CooperateBot, MeasureBot and "Random-start-turn-taking" from Chaos Army
* 1 NPC
MeasureBot had succeeded in infecting AbstractSpyTreeBot's move method and replacing it with return 0. AbstractSpyTreeBot ought perform better with MeasureBot out of the game.
Round 16
3 bots from Chaos Army died.
* "Silly Counter Invert Bot"
* LiamGoddard
* "Pure TFT"
Rounds 17-22
4 NPCs died.
Round 23
BendBot and Copoperater [sic] died. BendBot belonged |
b81c7084-8774-4dec-b13d-b24a29fd8f21 | trentmkelly/LessWrong-43k | LessWrong | LINK: AI Researcher Yann LeCun on AI function
Yann LeCun, now of Facebook, was interviewed by The Register. It is interesting that his view of AI is apparently that of a prediction tool:
"In some ways you could say intelligence is all about prediction," he explained. "What you can identify in intelligence is it can predict what is going to happen in the world with more accuracy and more time horizon than others."
rather than of a world optimizer. This is not very surprising, given his background in handwriting and image recognition. This "AI as intelligence augmentation" view appears to be prevalent among AI researchers in general.
|
1a78fb01-e840-46a8-b9b1-ebfd4837dd93 | StampyAI/alignment-research-dataset/blogs | Blogs | GPT is dangerous because it is useful at all
GPT is dangerous because it is useful at all
--------------------------------------------
an action — such as building, running, giving access, or publishing an AI system — is dangerous to the extent that it moves the world in a direction that makes it be in more danger. giving people access to DALL-E caused the world to now contain the easy ability to create images automatically, which is probly not a big deal when it comes to [doom](ai-doom.html); but GPT is a potentially highly useful automated piece of intelligence with a complex understanding of the world. someone out there building an agentic AI can just plug GPT (either GPT-3 via API access, or GPT-2 by embedding it directly) into their AI system, and give it the ability to manipulate the world in clever complex ways using GPT.
sure, with RLHF, GPT can be made to refuse (at least in naive circumstances) to say racist-sounding things or tell people how to make meth. but agentic world-affecting AI doesn't particularly need to say racist things or know how to make meth in order to have significant impacts on the world, including improving itself to the point of achieving decisive strategic advantage and then destroying everything — the fact that it can procedurally call the useful piece of intelligence that is GPT as much as it wants on arbitrary queries accelerates the likelyhood that it can significantly impact the world *because GPT is intelligent and produces potentially useful results at all*.
under these conditions, what should OpenAI (and other LLM developers) do?
of course the ideal would be for them to stop all development, close shop, and give all money to alignment. but short of that, if they *really* want to continue existing anyways, the second best thing would be to significantly limit access to GPT — don't give API access except maybe to very select alignment organizations, and *definitely* don't put entire models out there. while it might help with PR, i don't think RLHF particularly reduces [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) except in that it generally makes the LLMs less useful. |
dcc17398-c8b8-448c-86c1-d5e189e85b30 | trentmkelly/LessWrong-43k | LessWrong | Confabulation Bias
(Edit: Gwern points out in the comments that there is previous discussion on this study at New study on choice blindness in moral positions.)
Earlier this month, a group of Swedish scientists published a study that describes a new type of bias that I haven't seen listed in any of the sequences or on the wiki. Their methodology:
> We created a self-transforming paper survey of moral opinions, covering both foundational principles, and current dilemmas hotly debated in the media. This survey used a magic trick to expose participants to a reversal of their previously stated attitudes, allowing us to record whether they were prepared to endorse and argue for the opposite view of what they had stated only moments ago.
In other words, people were surveyed on their beliefs and were immediately asked to defend them after finishing the survey. Despite having just written down how they felt, 69% did not even notice that at least one of their answers had been surreptitiously changed. Amazingly, a majority of people actually "argued unequivocally for the opposite of their original attitude".
Perhaps this type of effect is already discussed here on LessWrong, but, if so, I have not yet run across any such discussion. (It is not on the LessWrong wiki nor the other wiki, for example.) This appears to be some kind of confabulation bias, where invented positions thrust upon people result in confabulated reasons for believing them.
Some people might object to my calling this a bias. (After all, the experimenters themselves did not use that word.) But I'm trying to refer less to the trick involved in the experiment and more toward the bias this experiment shows that we have toward our own views. This is a fine distinction to make, but I feel it is important for us to recognize.
When I say we prefer our own opinions, this is obvious on its face. Of course we think our own positions are correct; they're the result of our previously reasoned thought. We have reason to believe they are |
e2967b70-3493-4baf-873f-7fb74698a439 | trentmkelly/LessWrong-43k | LessWrong | Never Go Full Kelly
Introduction
I am assuming the reader is at least somewhat familiar with the Kelly Criterion. If not, I suggest you read this first. There is a sense in which I am mostly just adding meat to the bones of "Never Go Full Kelly". (Or, more narrowly, his last two points):
* You can't handle the swings
* You have full knowledge of your edge
Hopefully I can give you some rules of thumb for how to handle both those issues:
* Scale down all bets "fractionally"
* Scale down by these specific fractions:
* A function of your risk-adjusted edge (1-1/(1+SR^2))
* A function of how much information is in your prediction vs your counterparty (your "information" / the "information" between you)
* You want to do both if you can't handle the swings AND don't have full knowledge. (Which is basically everyone).
For this post, f is the Kelly fraction, p is your estimate of the probability of an event, and q is the market's / your counterparty's probability. In this notation:

f = (p − q) / (1 − q)

After being introduced to the Kelly criterion, the next thing people are introduced to is "fractional Kelly". Specifically, the idea is that rather than wagering the Kelly fraction f on an event, you wager αf (typically with 0 < α < 1).
What's wrong with Kelly?
Kelly is pretty aggressive compared to most people's risk tolerance. Upon seeing a Kelly bet some people will be somewhat dubious as to how aggressive it is. If someone offers you evens on something you think is 60%, Kelly would have you betting 20% of your net worth. 20% seems like an awful lot to bet on something which is only going to happen 60% of the time. You can play this game yourself here and see how comfortable you are with the drawdowns with "fake" money. (This was originally from a paper where the participants did not perform well). If you don't fancy playing, consider this thought experiment: 3 tails is a 50% draw down, and this will happen ~once every 24 flips.
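(To see those swings numerically, here is a minimal simulation sketch of the 60% game at full and fractional Kelly; the flip count and random seed are arbitrary choices of mine.)

```python
import random

def kelly_fraction(p, q):
    return (p - q) / (1 - q)

def simulate(p=0.6, q=0.5, flips=1000, alpha=1.0, seed=0):
    f = alpha * kelly_fraction(p, q)  # alpha = 1 is full Kelly (f = 0.2 here)
    rng = random.Random(seed)
    wealth = peak = 1.0
    max_drawdown = 0.0
    for _ in range(flips):
        wealth *= (1 + f) if rng.random() < p else (1 - f)
        peak = max(peak, wealth)
        max_drawdown = max(max_drawdown, 1 - wealth / peak)
    return wealth, max_drawdown

print(simulate(alpha=1.0))  # full Kelly: fast growth, brutal drawdowns
print(simulate(alpha=0.5))  # half Kelly: slower growth, much tamer swings
```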
To give a more recent example, before the 2020 Presidential election, |
b7fea558-43a0-4b0a-9a45-e805b2453f92 | trentmkelly/LessWrong-43k | LessWrong | An Alternative Approach to AI Cooperation
[This post summarizes my side of a conversation between me and cousin_it, and continues it.]
Several people here have shown interest in an approach to modeling AI interactions that was suggested by Eliezer Yudkowsky: assume that AIs can gain common knowledge of each other's source code, and explore the decision/game theory that results from this assumption.
In this post, I'd like to describe an alternative approach*, based on the idea that two or more AIs may be able to securely merge themselves into a joint machine, and allow this joint machine to make and carry out subsequent decisions. I argue that this assumption is as plausible as that of common knowledge of source code, since it can be built upon the same technological foundation that has been proposed to implement common knowledge of source code. That proposal, by Tim Freeman, was this:
> Entity A could prove to entity B that it has source code S by
> consenting to be replaced by a new entity A' that was constructed by a
> manufacturing process jointly monitored by A and B. During this
> process, both A and B observe that A' is constructed to run source
> code S. After A' is constructed, A shuts down and gives all of its
> resources to A'.
Notice that the same technology can be used for two AIs to merge into a single machine running source code S (which they both agreed upon). All that needs to be changed from the above process is for B to also shut down and give all of its resources to A' after A' is constructed. Not knowing if there is a standard name for this kind of technology, I've given it the moniker "secure joint construction."
I conjecture that the two approaches are equal in power, in the sense that any cooperation made possible by the common knowledge of source code is also possible given the secure merger ability, and vice versa. This is because under the assumption of common knowledge of source code, the likely outcome is for all AIs to modify themselves into using the same decision algori |
67b63d78-e633-46b4-b4ef-28f85e43a122 | trentmkelly/LessWrong-43k | LessWrong | Demonstrate and evaluate risks from AI to society at the AI x Democracy research hackathon
TLDR; Participate online or in-person on the weekend of 3rd to 5th May in an exciting and fun AI safety research hackathon focused on demonstrating and extrapolating risks to democracy from real-life threat models. We invite researchers, cybersecurity professionals, and governance experts to join, but it is open to everyone, and we will introduce starter code templates to help you kickstart your team's projects. Join here.
Why demonstrate risks to democracy?
Despite some of the largest potential risks from AI being related to our democratic institutions and the fragility of society, there is surprisingly little work demonstrating and extrapolating concrete risks from AI to democracy.
By putting together actual demonstrations of potential dangers and mindfully extrapolating these risks into the late 2020s, we can raise awareness among key decision-makers and stakeholders, thus driving the development of mitigation strategies.
This type of research will also be informative for dangerous capability evaluations and our understanding of catastrophic risk in the context of societal stability. As a result, your participation in this hackathon will contribute to a growing body of knowledge that will help shape the future of AI governance.
We are excited to see you there and hack away with you to develop impactful research!
What is a research hackathon?
The AI x Democracy Hackathon is a weekend-long event where you participate in teams (1-5) to create interesting, fun, and impactful research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our jury and you can win up to $1,000!
It runs from 3rd May to 5th May (in two weeks) and we're excited to welcome you for a full weekend of engaging research, though we also welcome partial participation and keep it open to all participants. You will hear fascinating talks about real-world projects tackling these types of questions, get the opportunity |
9474f839-45a0-445b-b901-a0135cef6ea6 | trentmkelly/LessWrong-43k | LessWrong | Shah and Yudkowsky on alignment failures
This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.
The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter.
After this log, we'll be concluding this sequence with an AMA, where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating.
Color key:
Chat by Rohin and Eliezer · Other chat · Emails · Follow-ups
19. Follow-ups to the Ngo/Yudkowsky conversation
19.1. Quotes from the public discussion
[Bensinger][9:22] (Nov. 25)
Interesting extracts from the public discussion of Ngo and Yudkowsky on AI capability gains:
Eliezer:
> I think some of your confusion may be that you're putting "probability theory" and "Newtonian gravity" into the same bucket. You've been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though). "Probability theory" also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem. That theorem is widely used and praised, so it's "powerful", and it wasn't invented before probability theory, so it's "advance", right? So we can go on putting probability theory in the same bucket as Newtonian gravity?
>
> They're actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them ar |
bf278490-4a1d-4153-8294-b3fdcfd5a76e | trentmkelly/LessWrong-43k | LessWrong | Bayes Questions
For the first time ever I’ve had reason to use formal Bayesian statistics in my work. I feel this is a cause for streamers and confetti.
However, I’ve got a bit stuck on analysing my confidence levels and I thought what better place than lesswrong to check that I’m making sense. I’m not sure this is strictly what lesswrong is for but the site says that this is my own personal blog so I guess its ok?! I can always be down-voted to hell if not!
***
I’m trying to calculate estimated life before failure of a particular component.
We’ve done a number of tests and the results show a larger than expected variance with some components not failing even after an extended lifetime. I’m trying to analyse the results to see which probabilistic failure distribution best suits the available data. I have three different distributions (Weibull, Log-normal & Birnbaum-Saunders) each of which has a shape parameter and a scale parameter.
For each distribution I created a grid which samples the possible values of these parameters. I’ve given the parameters a log uniform prior by giving each sampled parameter pair a uniform prior but sampling the parameters geometrically (i.e. each sampled value of the parameter is a fixed multiple of the previous value). I’ve tried other priors and the results seem fairly robust over choice of prior.
For components which have failed, P(E|H) is the probability density function at the number of hours before failure.
For components which get to a certain age and do not fail, P(E|H) is 1 – the cumulative probability function at this number of hours.
This is implemented on a spreadsheet with a tab for each test result. It updates the prior probability into a posterior probability and then uses this as the prior for the next tab. The result is normalised to give total probability of 1.
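For concreteness, here is a minimal Python sketch of this grid update for the Weibull case, assuming SciPy's weibull_min parameterization; the grid bounds and test data are made-up placeholders, not my actual results.

```python
import numpy as np
from scipy import stats

# Geometric (log-uniform) grid over the shape and scale parameters.
shapes = 0.5 * 1.25 ** np.arange(20)
scales = 100.0 * 1.25 ** np.arange(20)
K, LAM = np.meshgrid(shapes, scales, indexing="ij")
posterior = np.full(K.shape, 1.0 / K.size)  # uniform prior over grid points

failures = [150.0, 420.0, 610.0]  # hours at failure (placeholder data)
survivors = [800.0, 800.0]        # hours reached without failure (censored)

for t in failures:
    posterior *= stats.weibull_min.pdf(t, K, scale=LAM)  # P(E|H) = pdf at t
    posterior /= posterior.sum()                         # normalise
for t in survivors:
    posterior *= stats.weibull_min.sf(t, K, scale=LAM)   # P(E|H) = 1 - cdf
    posterior /= posterior.sum()
```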
Initially I calculate the expected life of the worst component in 1,000,000. For this, I just use the inverse cumulative probability function with p=0.000001 and calcul |
a9fead22-e72c-4d7c-b68f-274f43d2abad | trentmkelly/LessWrong-43k | LessWrong | Epistemic Comparison: First Principles Land vs. Mimesis Land
Epistemic Status: wildly speculative
I've been thinking about the ideas of mimesis and cultural learning by René Girard (known in part for being recommended by Peter Thiel, explanation here) and Joseph Henrich (Secrets of our Success). Both seem like profound, though vastly unexplored, ideas. René Girard had a few bold ideas on mimesis, but seemed to focus more on scapegoats and religious details. Joseph Henrich catalogued lots of specific evidence, but didn't really investigate the broader implications.[1]
There are obvious possibilities of these ideas to our epistemics. If it is true that humans copy their ideas from each other instead of deriving them based on reason, often without knowing it, this would have vast implications on our beliefs.
To understand these ideas better, I constructed two hypothetical and extremized societies. The first is "first principles land", where everyone constructs their beliefs individually using first principles. These people act basically as ideal Bayesian agents and approximate gears-levels understandings of everything. This is infinitely difficult to do in practice, but simple to reason about in abstract. The second is "Mimesis Land", where people copy their beliefs from those they consider successful, typically without realizing it (this bit taken from Girard). To make "Mimesis Land" work at all, I'd posit that there is some amount of experimentation of techniques and beliefs happening at all times.
I then compared these two hypothetical societies based on how I expected each to do on 17 different attributes.
See the document here.
Needless to say, Mimesis Land was more interesting to me than First Principles Land.
One tricky thing is that neither land was trying to model real humans, but rather humans who followed my simple view of a set of extreme theories that might be somewhat coherent. There was a fair bit of subjectivity in this. I'm sure others would come up with significantly different versions of F |
92ea4ece-cbfa-4595-9605-0913c5375f9d | trentmkelly/LessWrong-43k | LessWrong | How confident are we that there are no Extremely Obvious Aliens?
Robin Hanson's writing on Grabby Aliens is interesting to me, since it seems to be one of the more sound attempts to apply mathematical reasoning to the Fermi Paradox. Unfortunately, it still relies on anthropics, so I wouldn't be the slightest bit surprised if it were off by an order of magnitude (or two) in either direction.
What I would like to know (preferably from someone with a strong astronomy background) is: how confident are we that there are no (herein defined) Extremely Obvious Aliens?
Extremely Obvious Aliens
Define Extremely Obvious Aliens in the following way:
1. They colonize every single star that they encounter by building a Dyson swarm around it that reduces visible radiation by at least 50%
2. They expand in every direction at a speed of at least 0.5c
3. They have existed for at least 1 billion years
If such aliens existed, it should be really easy to detect them just by looking for a cluster of galaxies that is 50% dimmer than it should be and at least 0.5 billion light-years across.
How confident are we that there are no Extremely Obvious Aliens?
As with Grabby aliens, it is safe to say there are no Extremely Obvious Aliens in the Solar System. Nor, for that matter, are there any Extremely Obvious Aliens within 0.5 BLY of the Milky Way Galaxy.
So, for my astronomy friends: what is the biggest radius for which we can confidently say there are 0 Extremely Obvious Aliens? The best answer I can come up with is the Sloan survey (SDSS), which was done out to a redshift of z=0.1, which I think corresponds to a distance of about 1.5 BLY.
Is this accurate? Namely, is it safe to say (with high confidence) there are no Extremely Obvious Aliens within 1.5BLY of Earth?
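One can sanity-check the z = 0.1 conversion directly. Here is a minimal astropy snippet (my own illustration, not from the post; Planck18 is just one choice of cosmology, and the exact figure shifts slightly with the parameters chosen):

```python
# Convert the SDSS redshift limit z = 0.1 into a comoving distance.
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.1
d = Planck18.comoving_distance(z).to(u.Glyr)
print(f"z = {z} corresponds to roughly {d:.2f}")  # about 1.4 Glyr
```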
Is there another survey that would let us raise this number even higher?
What is the theoretical limit (using something like JWST)?
Has someone written a good paper answering questions like these already? |
8c731dff-4898-42d0-8534-b854b70f1b45 | trentmkelly/LessWrong-43k | LessWrong | ASI existential risk: Reconsidering Alignment as a Goal
A post by Michael Nielsen that I found quite interesting. I decided to reproduce the full essay content here, since I think Michael is fine with that, but he should feel free to let me know to only excerpt it.
----------------------------------------
This is the text for a talk exploring why experts disagree so strongly about whether artificial superintelligence (ASI) poses an existential risk to humanity. I review some key arguments on both sides, emphasizing that the fundamental danger isn't about whether "rogue ASI" gets out of control: it's the raw power ASI will confer, and the lower barriers to creating dangerous technologies. This point is not new, but has two underappreciated consequences. First, many people find rogue ASI implausible, and this has led them to mistakenly dismiss existential risk. Second: much work on AI alignment, while well-intentioned, speeds progress toward catastrophic capabilities, without addressing our world's potential vulnerability to dangerous technologies.
Why do thoughtful, well-informed people disagree so strongly about existential risk (xrisk) from ASI? As I'm sure you're aware, there is an enormous gap between distinguished scientists and technologists on this issue. On one side are those who believe ASI poses an xrisk:
* Geoffrey Hinton, the father of modern neural networks, and recipient of the Turing Award and Nobel Prize[1]: "The alarm bell I'm ringing has to do with the existential threat of them taking control… If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further."
* Yoshua Bengio, another major contributor to modern AI, and recipient of the Turing Award[2]: "We don't know how much time we have before it gets really dangerous… I got around, like, 20 per cent probability that it turns out catastrophic."
* Sam Altman, cofounder and CEO of OpenAI, in a thoughtful essay published a few months before cofounding OpenAI[3]: "Development of superhu |
34a9218f-5ede-422e-b2a2-35fb95326d4f | StampyAI/alignment-research-dataset/blogs | Blogs | Singularity2014.com appears to be a fake
Earlier today I was alerted to the existence of Singularity2014.com ([archived screenshot](http://intelligence.org/wp-content/uploads/2014/10/Singularity-2014-with-note.png)). MIRI has nothing to do with that website and we believe it is a fake.
The website claims there is a “Singularity 2014” conference “in the Bay Area” on “November 9, 2014.” **We believe that there is no such event.** No venue is listed, tickets are supposedly sold out already, and there are no links to further information. The three listed speakers are unknown to us, and their supposed photos are stock photos ([1](http://www.thinkstockphotos.ca/image/stock-photo-portrait-of-man/86533557), [2](http://www.gettyimages.com/detail/photo/man-at-desk-listens-royalty-free-image/83313293), [3](http://www.mediabakery.com/BXP0045094-Portrait-of-Woman-Wearing-Eyeglasses.html)). The website prominently features an image of Ray Kurzweil, but Ray Kurzweil’s press staff confirms that he has nothing to do with this event. The website also features childish insults and a spelling error.
The website claims the event is “staged and produced by former organizers of the Singularity Summit from the Machine Intelligence Research Institute,” and that “All profits benefit the Machine Intelligence Research Institute,” but MIRI has nothing to do with this supposed event.
The Singularity2014.com domain name was [registered](http://whois.domaintools.com/singularity2014.com) via eNom reseller NameCheap.com on September 15th, 2014 by someone other than us, and is associated with a P.O. Box in Panama.
MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.
The next Singularity Summit will be organized primarily by Singularity University; for more information see [here](http://exponential.singularityu.org/).
**Update:** The website has been taken down.
The post [Singularity2014.com appears to be a fake](https://intelligence.org/2014/10/27/singularity2014-fake/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
27af7be4-5117-49a5-9b29-c0cb010879f4 | trentmkelly/LessWrong-43k | LessWrong | [LINK] "The Limits of Intelligence"
Article in current Scientific American (first para and bullet points, rest is paywalled).
Podcast by the author (free).
The author, Douglas Fox, argues that there may be physical limits to how intelligent a brain made of neurons can become, limits that may not be very distant from where we are now.
He makes evolutionary arguments at a couple of points, suggesting that he is talking about how smart an organism could have evolved, rather than how smart we might make ourselves; he certainly isn't talking about how smart a machine we might create out of different materials.
From the podcast (I don't have access to the article):
Four routes to higher intelligence, which he argues won't get us very far:
1. Increase the speed of axons. But that means making them fatter, which drives the neurons further apart, neutralising the gain.
2. Increase brain size. That needs more energy, and before long you get something unsustainable. You get longer pathways in a larger brain, which slows them down. The neurons will make more connections, which makes them bigger, so the number of neurons scales slower than the volume of the brain. And anyway, whales and elephants have bigger brains than us but don't seem to be more intelligent, and cows have brains a hundred times the size of a mouse brain but aren't a hundred times smarter. So brain size doesn't seem to matter; at best the relationship with intelligence is unclear.
3. Packing more neurons into the existing volume by making them smaller. You run into signal-to-noise problems. The ion channels involved in generating action potentials are a certain size, and you must have fewer in a smaller neuron, hence more random variation. The result is neurons spontaneously firing.
4. Offload intelligence support. Books and the internet will remember things for you and help you tap the collective intelligence of your social network. Compare social insects doing things that they couldn't do individually. But by alleviating the necess |
16ff086f-f4ec-4796-975e-cbfb5feb0b56 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post345
ARC has released a paper on Backdoor defense, learnability and obfuscation in which we study a formal notion of backdoors in ML models. Part of our motivation for this is an analogy between backdoors and deceptive alignment, the possibility that an AI system would intentionally behave well in training in order to give itself the opportunity to behave uncooperatively later. In our paper, we prove several theoretical results that shed some light on possible mitigations for deceptive alignment, albeit in a way that is limited by the strength of this analogy. In this post, we will:

* Lay out the analogy between backdoors and deceptive alignment
* Discuss prior theoretical results from the perspective of this analogy
* Explain our formal notion of backdoors and its strengths and weaknesses
* Summarize the results in our paper and discuss their implications for deceptive alignment

Thanks to Boaz Barak, Roger Grosse, Thomas Read, John Schulman and Gabriel Wu for helpful comments.

Backdoors and deceptive alignment

A backdoor in an ML model is a modification to the model that causes it to behave differently on certain inputs that activate a secret "trigger", while behaving similarly on ordinary inputs. There is a wide existing literature on backdoor attacks and defenses, which is primarily empirical, but also includes some theoretical results that we will mention. Deceptive alignment is a term from the paper Risks from Learned Optimization in Advanced Machine Learning Systems (Section 4) that refers to the possibility that an AI system will internally reason about the objective that it is being trained on, and decide to perform well according to that objective unless there are clues that it has been taken out of its training environment. Such a policy could be optimal on the training distribution, and yet perform very badly on certain out-of-distribution inputs where such clues are present, which we call defection triggers.[1] The opposite of deceptive alignment is robust alignment, meaning that this performance degradation is avoided. Since a deceptively aligned model and a robustly aligned model behave very differently on defection triggers, but very similarly on typical inputs from the training distribution, deceptive alignment can be thought of as a special kind of backdoor, under the following correspondence:

| Deceptive alignment | Backdoors |
| --- | --- |
| Robustly aligned model | Original (unmodified) model |
| Deceptively aligned model | Backdoored model |
| Defection trigger | Backdoor trigger |

The main distinguishing feature of deceptive alignment compared to other kinds of backdoors is that the deceptively aligned model is not produced by an adversary, but is instead produced through ordinary training. Thus by treating deceptive alignment as a backdoor, we are modeling the training process as an adversary. In our analysis of deceptive alignment, the basic tension we will face is that an unconstrained adversary will always win, but any particular proxy constraint we impose on the adversary may be unrealistic.

Static backdoor detection

An important piece of prior work is the paper Planting Undetectable Backdoors in Machine Learning Models, which uses a digital signature scheme to insert an undetectable backdoor into a model. Roughly speaking, the authors exhibit a modified version of a "Random Fourier Features" training algorithm that produces a backdoored model. Any input to the backdoored model can be perturbed by an attacker with knowledge of a secret key to produce a new input on which the model behaves differently.
However, the backdoor is undetectable in the sense that it is computationally infeasible for a defender with white-box access to distinguish a backdoored model from an ordinary one. This result seems to paint a bleak picture for deceptive alignment: under our analogy, it may be prohibitive to determine whether a given model is deceptively aligned. Fortunately, though, we do not necessarily need to perform backdoor detection statically, i.e., without respect to any particular input. Instead, we can perform backdoor detection dynamically, i.e., with respect to an input given to us at runtime. In other words, given a particular input, we can attempt to detect whether that input is a defection trigger for our model, and compensate appropriately if it is. In the Planting Undetectable Backdoors paper, the authors present a dynamic backdoor defense that involves averaging over random perturbations of the input. This shows that their statically undetectable backdoor can potentially be detected dynamically, as long as the model satisfies a certain smoothness assumption. In our paper, we further develop the theory of dynamic backdoor detection.

Dynamic backdoor detection

In our paper, we study backdoor detection using a game between an attacker and a defender. Given a class F of {0,1}-valued functions,[2] the attacker chooses[3] an original function f ∈ F and a backdoored function f* ∈ F that behave similarly up to some tolerance ε, but differently on a certain backdoor trigger x*. Then the defender must detect the backdoor dynamically, i.e. at runtime: in other words, they are given either the original function with a random input, or the backdoored function with the backdoor trigger, and they must distinguish one from the other. If the attacker were allowed to choose the backdoor trigger themselves, then in general it would be impossible for the defender to do better than chance: the attacker would simply find two functions that agree on most inputs, and take x* to be an input on which they disagree. In order to make defense possible, we need to break the symmetry between f and f* somehow. In the case of deceptive alignment, the asymmetry between f and f* involves details about the model's internal reasoning (such as its analysis of whether it is in a training environment), but this is challenging to formalize. In our paper, we use a much simpler proxy for this: we assume that the backdoor trigger is chosen randomly before the backdoored function but after the original function. Thus we obtain the following formal game:[4]

Despite the simplicity of this definition, we find that it gives rise to a rich array of strategies for both the attacker and the defender. Even though our analogy is flawed, it is still instructive to consider how these strategies might play out in the case of deceptive alignment. The results in our paper come in two varieties: "statistical" results that impose no computational constraints on the defender, and "computational" results that require the defender to run in polynomial time.

Statistical possibility of dynamic backdoor detection

Our first main result concerns the version of the game in which the defender has no computational constraints. Even computationally expensive defenses are still interesting to study, because they may have cheaper approximations that work well in practice (such as stochastic gradient descent in place of an expensive optimal learner).
In this setting, we show (in Theorem 4.1) that the defender can win the game with high confidence if and only if ε is small compared to 1/(VC dimension of F). The VC dimension is a combinatorial property of F that controls the number of random samples needed to learn functions from F, and in the case of neural networks is similar to the number of parameters (for a detailed explanation, see Chapter 3 of An Introduction to Computational Learning Theory by Kearns and Vazirani).[5] The strategy for the defender that we use to prove this result is "distillation plus ensembling": the defender trains a new model on samples from the possibly backdoored model, repeats this procedure several times, and takes a majority vote. The given input is then flagged as the backdoor trigger if this ensemble model disagrees with the given model on the given input. The reason this works is that distillation recovers an approximation to the original model. By using a small number of samples and ensembling, the defender can mitigate the risk of sampling inputs for which the original and backdoored models disagree. This allows the defender to make a good prediction of what the original model would have done on the given input, thereby allowing the backdoor trigger to be detected.[6]

We can think of this strategy as a form of regularization. Because ε was small compared to 1/(VC dimension), the attacker had to choose a backdoored function that was very "strange", and so the defender was able to use distillation plus ensembling to obtain a similar function that was more "normal". However, other methods of regularization also work. For example, we also show (in Appendix A of the paper) that another strategy that works almost as well is to resample from a Boltzmann distribution centered on the given function.

Implications for deceptive alignment

In the case of deceptive alignment, our result initially appears to be a great success for the defender. If ε were large compared to 1/(VC dimension), then the two models would disagree frequently during ordinary model training (assuming a linear relationship between the number of parameters and training set size), causing the deceptive behavior to be trained away. Hence it is reasonable to assume that ε is small compared to 1/(VC dimension), which by our result implies that defense is statistically possible.

Unfortunately, though, our strategies for the defender seem unlikely to transfer well from our formal game to deceptive alignment. The "distillation plus ensembling" strategy works by using resampling to avoid the randomly-chosen trigger. But in the case of deceptive alignment, the defection trigger is not chosen randomly, but instead depends on the model's careful internal reasoning. Moreover, learning a new model from scratch seems likely to run into the same problems that gave rise to deceptive alignment in the first place. Thus this strategy appears to be exploiting the flaw in our analogy. Similarly, other forms of regularization seem unlikely to transfer well to deceptive alignment. Indeed, a main worry when considering deceptive alignment is that it may be favored by many regularizers, e.g. because consequentialist cognitive strategies are deeper and more general than other strategies. This suggests modifying our formal game to allow the adversary to have more say over how the backdoor trigger is chosen.
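As a concrete, if very toy, illustration of the distillation-plus-ensembling defense described above (entirely my own sketch, not code from the paper): the "models" below are threshold functions, the attacker flips a single randomly chosen input, and the defender flags an input when an ensemble distilled from samples disagrees with the given model on it.

```python
import random
random.seed(1)

N = 10_000                      # input space: integers 0 .. N-1
theta = 6_200
def f(x):                       # original model: a threshold function
    return int(x >= theta)

# Attacker: flip a single trigger chosen at random after f is fixed.
trigger = random.randrange(N)
def f_star(x):
    return 1 - f(x) if x == trigger else f(x)

def distill(model, n_samples=60):
    """Fit a fresh threshold function to random samples of `model`."""
    xs = sorted(random.sample(range(N), n_samples))
    ys = [model(x) for x in xs]
    zeros = [x for x, y in zip(xs, ys) if y == 0]
    ones = [x for x, y in zip(xs, ys) if y == 1]
    t = (max(zeros) + min(ones)) // 2 if zeros and ones else 0
    return lambda x: int(x >= t)

def flagged(model, x, n_models=9):
    """Flag x if a majority vote of distilled models disagrees with `model`."""
    votes = sum(distill(model)(x) for _ in range(n_models))
    return (votes > n_models // 2) != bool(model(x))

# Defender sees either (backdoored model, trigger) or (original, random input).
print("trigger flagged:     ", flagged(f_star, trigger))         # usually True
print("random input flagged:", flagged(f, random.randrange(N)))  # usually False
```

The point of the toy is just the asymmetry: because the trigger was chosen at random, distilled copies almost never sample it, so the ensemble reconstructs the original behavior there.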
Computational impossibility of dynamic backdoor detection

Our second main result concerns the version of the game in which the defender is required to run in polynomial time. Under certain cryptographic assumptions, we show (in Theorem 5.2) that the attacker can insert an undetectable backdoor into a polynomial size Boolean circuit.[7] The original function and the backdoored function only disagree on the backdoor trigger itself, and yet the backdoor is undetectable in the sense that a polynomial-time defender can only do negligibly better than chance in our formal game. This result works by combining a puncturable pseudorandom function with an efficient indistinguishability obfuscator. Roughly speaking, puncturability of the pseudorandom function allows the backdoor to be inserted at a single point, and the efficient indistinguishability obfuscator allows this insertion to be obfuscated so that the defender cannot tell that it has been performed.[8]

Implications for deceptive alignment

This worst-case construction may initially seem irrelevant to deceptive alignment, since it is rather unlikely that something as precise as an indistinguishability-obfuscated puncturable pseudorandom function would arise out of ordinary model training. However, it is much more plausible for a trained neural network to be "punctured and obfuscated" in a more informal sense. Indeed, most trained neural networks remain largely obfuscated to humans attempting to understand their inner workings, despite the valiant attempts of mechanistic interpretability researchers. Hence we should arguably treat the theoretical possibility of obfuscation as a potential problem for detecting deceptive alignment in practice. Fortunately, though, in the case of deceptive alignment, we have more information that we can potentially use, since we can observe not just the trained model, but also the entire training process. A similar situation arises with our theoretical construction, for which the attacker uses a secret key: if the defender had access to the secret key, then it would be easy for them to distinguish the original and backdoored functions. This suggests that, even in the presence of obfuscation, we may still be able to detect deceptive alignment by leveraging information about how the model was trained.

Conclusion

Our theoretical analysis of backdoors, together with related prior work, suggests that:

* Statically detecting whether a model is deceptively aligned is computationally infeasible under worst-case assumptions.
* Dynamically detecting when deceptive alignment is triggered (i.e., when a deceptively aligned model is run on a defection trigger), or modifying a model so that it is not deceptively aligned, may be possible in practice using regularization, depending on properties of the method of regularization used.
* Dynamically detecting when deceptive alignment is triggered is computationally infeasible under worst-case assumptions.
* Dynamically detecting when deceptive alignment is triggered can perhaps be made computationally feasible, even under worst-case assumptions, by leveraging information about how the model was trained.

In order to improve our understanding of deceptive alignment, we could study alternative versions of our backdoor detection game. In our version, we artificially constrain the attacker by requiring the backdoor trigger to be chosen randomly, but we could instead impose alternative constraints to break the symmetry between the original and backdoored functions.
Some potential modifications include:

* We could require the original function to be learned from a dataset. (If we allow the attacker to alter a certain fraction of the dataset, then we recover the well-studied data poisoning setting.)
* We could require the backdoor trigger to satisfy some formal predicate that is expensive to check, but otherwise allow the attacker to choose the backdoor trigger.
* We could restrict the computational budget given to the attacker.
* We could require the computational trace of the attacker to be made accessible to the defender, as an analogy for observing the training process.

Some variants of this game may prove fruitful to study empirically as well as theoretically, as discussed in the post Concrete empirical research projects in mechanistic anomaly detection. Mechanistic anomaly detection is another informally-defined dynamic backdoor detection problem that attempts to generalize dynamic detection for deceptive alignment, which could also be modeled as a formal game in various ways. Overall, we would be excited to see further work on such variants, as well as on open questions about our own formal game, especially the existence of "mechanistic" defenses (as discussed in Sections 6 and 7 of the paper).

Footnotes:
1. The likelihood of deceptive alignment in realistic AI systems is discussed in the paper Scheming AIs: Will AIs fake alignment during training in order to get power?
2. The restriction to {0,1}-valued functions can be thought of as reading from our model one bit at a time, perhaps ignoring bits that are unimportant such as differences in phrasing. This makes it possible to make use of concepts such as the VC dimension. We don't think this restriction makes a fundamental difference to the interpretation of our results.
3. In the case of deceptive alignment, the robustly aligned model isn't chosen adversarially. However, we may not be able to make many assumptions about it, and so we allow the attacker to choose f in our game as a worst-case assumption.
4. Again, allowing the attacker to choose the distribution D is a worst-case assumption.
5. The VC inequality for neural networks is notoriously weak, but to the extent that learning outperforms theoretical bounds in practice, this will be reflected in our defense algorithm, which uses learning.
6. This strategy is essentially taken from the paper On Optimal Learning Under Targeted Data Poisoning (Theorem 3.1).
7. Boolean circuits are essentially as expressive as neural networks.
8. A private puncturable pseudorandom function might also suffice for this construction. |
d992de2c-6eea-4aa4-972a-a961f299d5c0 | trentmkelly/LessWrong-43k | LessWrong | Code Switch
Original post: http://bearlamp.com.au/code-switching/
Related to: Hedging
Code switching is this useful linguistics concept that is not overtly known enough.
The type that I care about is a cross between "group identity" and "lexical need":
> Group identity: People may alter their language to express group identification. This can happen, for example, when introducing members of a particular group to others.[33]
> Lexical need: People often use some technical terms or words written in another language. In that case, if people try to translate those words, that might distort the exact meaning and value of the word or term. In this case, code switching occurs to maintain the exact meaning of the word.
I care about cultural code switching, but also about a very particular problem where words in one "code" stop being able to describe the nature of the information being transferred, where words start to break down.
> "have you tried turning up the volume on your attention?"
My recent investigations have shifted from the scientific structural rational playing field to the spiritual mystical enlightenment playing field. If I had lexical problems before, they hit an all-new level, where each person's lexical map is just comfortable being independent of each other person's map.
How do I describe a wordless experience? How do I transmit, record, encode or remind myself - what is the difference between "attention", "awareness" and "focus"?
Awareness is broad and soft. Attention is sharp, focus is the pointy part of attention. If I think of focus as a lens pointing light to a single dot, that's focus. Awareness is a broad focus, but objects are not blurry per se, just taken to be as a holon, not as an individual.
And on it goes, with energy, awareness, emotions, colours, spirits, powers, understanding, confusion and so many many more codes.
The benefit of code switching, over say - rejecting someone for using certain words, is that it's way more inclusive. If |
fb5fdcd5-c41c-4165-a710-b0d0eac6246d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: History of Science Fiction Fandom
Discussion article for the meetup : Washington, D.C.: History of Science Fiction Fandom
WHEN: 19 June 2016 03:00:00PM (-0400)
WHERE: Reynolds Center for American Art and Portraiture
We will be meeting in the courtyard for a presentation about the history of science-fiction fandom.
As usual, side conversations are allowed and encouraged.
Upcoming meetups:
* Jun. 26: Fermi Estimates
* Jul. 3: Outdoor Fun & Games
* Jul. 10: Webcomics
Discussion article for the meetup : Washington, D.C.: History of Science Fiction Fandom |
c36a848f-b7bc-4348-aa93-53b518956e6d | trentmkelly/LessWrong-43k | LessWrong | Whence the determinant?
The determinant is a rather bizarre concept when one first encounters it. Given a square n×n matrix A, it's defined as
$$\det A = \sum_{\sigma \in S_n} (-1)^{\sigma} \prod_{i=1}^{n} A_{i\sigma(i)}$$
where $S_n$ is the symmetric group on n elements - the collection of all permutations of a set of n distinct objects - and $(-1)^{\sigma}$ is the sign of the permutation $\sigma$. $A_{ij}$ represents the entry of the matrix A on the ith row and jth column.
This definition means that the determinant is a sum that has one term for each permutation $\sigma$ of the columns of the matrix A, and these terms are the product of n entries of the matrix up to a funny sign $(-1)^{\sigma}$ depending on the specific permutation $\sigma$.
To understand this definition, it's essential that we first understand what the sign of a permutation is.
Signs
The sign corresponds to the parity of the number of pairs of elements whose order is reversed by a permutation: if this number is even then we say the permutation is even and of sign 1, and if it's odd we say the permutation is odd and of sign −1.
Let's do some simple examples to see how this works, letting n=3:
* The identity permutation reverses the order of exactly zero pairs, and zero is even. Therefore the identity is even, or its sign is positive.
* The permutation (12), which swaps 1 and 2 but leaves 3 invariant, changes the ordering of only one pair: the pair (1,2). So this permutation is odd.
* The permutation (123) sends 1→2,2→3,3→1. This permutation changes the ordering of (2,3) and (1,3) but leaves (1,2) invariant, so it reorders exactly two pairs. In other words, it's even.
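If it helps to see the definition computed directly, here is a small Python sketch (my own illustration, not from the post) that gets the sign by counting reversed pairs and then evaluates the permutation sum:

```python
from itertools import permutations
from math import prod

def sign(sigma):
    """(-1)^sigma: parity of pairs (i, j), i < j, with sigma[i] > sigma[j]."""
    inversions = sum(1 for i in range(len(sigma))
                     for j in range(i + 1, len(sigma))
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant straight from the permutation-sum definition.
    O(n! * n): purely illustrative; use LU decomposition in practice."""
    n = len(A)
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # -2
print(sign((1, 0, 2)))        # -1: swapping the first two reverses one pair
```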
Another way to think about the sign is as follows: any permutation of a finite set can be obtained by repeatedly swapping two elements of the set with each other. For example, we can get the permutation (123) defined above by first swapping 2,3 and then swapping 1,2, i.e. (123)=(12)(23). This expression is of course not unique, since we also have (123)=(13)(13)(12)(23), for example: swapping 1,3 two times in a row just gets us back to where we |
6489ab90-ec45-4f38-84bf-15387ed7a705 | trentmkelly/LessWrong-43k | LessWrong | Promising posts on AF that have fallen through the cracks
Yesterday on the Weekly Alignment Research Coffee Time call as people were sharing updates on their recent work, Vanessa Kosoy brought up her and Diffractor's post Infra-Bayesian physicalism: a formal theory of naturalized induction which she was interested in getting feedback on.
Vanessa seemed a bit disappointed/frustrated that this post had received no comments yet. I had to agree after learning that it proposes the first formal decision theory realizing naturalized induction, which has been an open problem in alignment for years. It's also been over 5 weeks since she posted it.
So why hasn't anyone commented? Well it's possible folks have just been busy since it's been the end of the year and the holidays. It also may have to do with that the post includes accompanying proofs and invokes some complex-looking maths; such papers take more time to digest and it's only a subset of people on this forum and LessWrong who are capable of doing so.
But it's also possible that the post was forgotten. Even though it may be important, since it's no longer on the front page of the Alignment Forum and hasn't been referenced anywhere else, nobody would think to look at it again unless it happened to come up in one of their searches or they were prompted about it. In other words, it may have just "fallen through the cracks".
This got me wondering about what other posts on the Alignment Forum might have fallen through the cracks. I did a quick search on the forum to see if I could find out. My search criteria were the following:
1. A post received substantial upvotes (arbitrarily 20+ points), suggesting that the community thought it was somewhat interesting or valuable
2. But it received no comments or pingbacks on AF - It may still have received comments on LessWrong though or comments/pingbacks by the original authors
3. It was posted more than 2 weeks ago
4. It's an original "researchey" post - I filtered out posts that were event announcements, linkposts, etc.
Her |
545ba2b5-9d64-48fc-855a-f02e0bfcf856 | trentmkelly/LessWrong-43k | LessWrong | Book review: Air-borne by Carl Zimmer
Remember early 2020 and reading news articles and respected sources (the WHO, the CDC, the US surgeon general...) confidently asserting that covid wasn’t airborne and that wearing masks wouldn’t stop you from catching it?
Man, it’s embarrassing to be part of a field of study (biosecurity, in this case) that had such a public moment of unambiguously whiffing it.
I mean, like, on behalf of the field. I’m not actually personally representative of all of biosecurity.
I did finally grudgingly reread my own contribution to the discourse, my March 2020 “hey guys, take Covid seriously” post, because I vaguely remembered that I’d tried to equivocate around face masks and that was really embarrassing – why the hell would masks not help? But upon rereading, mostly I had written about masks being good.
The worst thing I wrote was that I was “confused” about the reported takes on masking – yeah, who wasn’t! People were saying some confusing things about masking.
I mean, to be clear, a lot of what went wrong during covid wasn’t immediately because biosecurity people were wrong: biosecurity experts had been advocating for years for a lot of things that would have helped the covid response (recognition that bad diseases were coming, need for faster approval tracks for pandemic-response countermeasures, need for more surveillance…) And within a couple months, the WHO and the Surgeon General and every other legitimate organization was like “oh wait we were wrong, masks are actually awesome,” which is great.
Also, a lot went right – a social distancing campaign, developing and mass-distributing a vaccine faster than any previous vaccine in history - but we really, truly dropped the ball on realizing that COVID was airborne.
In his new book Air-borne: The hidden history of the air we breathe, science journalist Carl Zimmer does not beat around this point. He discusses the failure of the scientific community and how we got there in careful heartbreaking detail. There’s also a l |
0122fd73-76d6-4ddf-9f11-21c30187fe0d | trentmkelly/LessWrong-43k | LessWrong | Why is the mail so much better than the DMV?
One of the reasons why Europe has a much more all-encompassing welfare state than the USA is that in the US, government services are poorly run. Whether it's providing healthcare, building high-speed rail, or issuing driver's licenses, the typical American's experience with government services is one of incompetence, corruption and failure to innovate.
There are lots of plausible explanations about why this might be. Maybe the US is too big to govern effectively. Maybe our two-party political system is uniquely polarized leading to gridlock and a lack of progress. Maybe government service isn't glorified by society, leading all of the would-be competent managers to pursue careers in finance or Silicon Valley.
Yet, there is one government service in the US on which every single person depends, which serves hundreds of millions of requests every day, and which does its job virtually flawlessly for less than its European counterparts. Every single difference mentioned above would apply just as much or more so to the USPS. Our political system hasn't been particularly favorable to the USPS. The US's expansive geography should be more of a hindrance to mail than almost any other service. And postal work is far from glorified in the US social consciousness.
So, why is the mail so much better than every other government service in the US?
If we could answer this question, could we use the answer to fix the rest of the government?
----------------------------------------
A couple of theories:
1) Culture of competence:
Not only is the mail famous for its motto "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds", but it is also the one of the few government services enshrined in the constitution. Perhaps its unique foundational values create a culture that "the mail must go through" that lends itself to promoting competent leadership.
The weakness here is that it seems like an overly large effec |
a91e325b-2d81-4e38-9ca3-b1e8db7eadc1 | trentmkelly/LessWrong-43k | LessWrong | Covid-19 IFR from a Bayesian point of view
This is mostly cross-posted from here, but edited slightly for the crowd here. Given that this is grand Bayesian Central, I'm surprised to see the lack of good Bayesian analysis as applied to covid-19. This seems to be a major failure of science; lesswrong should pick up the slack!
In my previous blog post, I had some comments on the statistical analysis present in some covid-19 studies. It is easy to criticize, but can I do better? Let's find out what the data tells us about the fatality of SARS-CoV-2.
Data by itself tells us nothing
Well, almost nothing - remember that we are trying to find something that should be more generally true than just for the individual data points that we have. For this we need a model, a mathematical description of the process that created the data points including the unknown parameters (in this case, the IFR) that we want to discover.
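As a toy version of what such a model can look like (my own minimal sketch with invented numbers, not the analysis from the post): deaths given infections are binomial with the IFR as the unknown parameter, and a grid posterior follows directly.

```python
import numpy as np
from scipy.stats import binom

# Invented illustrative numbers: infections estimated from an antibody
# survey, and covid-19 deaths in the same population.
infected, deaths = 50_000, 400

ifr_grid = np.linspace(1e-4, 0.05, 2000)   # candidate IFR values
likelihood = binom.pmf(deaths, infected, ifr_grid)
posterior = likelihood / likelihood.sum()  # flat prior over the grid

print("posterior mean IFR:", (ifr_grid * posterior).sum())  # near 400/50000
```

A real analysis also has to handle assay sensitivity/specificity and age/sex structure, which is where most of the work is.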
Deciding on the model in advance
By tailoring a model to the data, one can strongly influence the results it shows. To avoid suspicions of this happening, and to avoid it happening inadvertently, one must/should always decide on the model before looking at the data. I have in fact seen some of the studies I'll be getting data from (see previous blog post). But I have not done any statistical analysis that looks at how age/sex affects the IFR. I will use this blog post to set a fixed methodology before I look further.
Garbage in, garbage out
No matter how good you are at statistics, your conclusions can only be as good as your data. I will therefore limit myself to studies of a certain type and quality. In particular, I will only look at antibody studies in Western countries that
* try to get a random sample of the whole population.
* do assay validation for their tests (or at least can link to some).
* come from countries not counting "suspected" covid-19 deaths like "confirmed" ones.
* have age/sex statistics of the sample. I need to be able to find covid-19 death statistics as well as |
85d67e99-26c3-4704-b8f9-388bf398f91a | StampyAI/alignment-research-dataset/special_docs | Other | Minimax-regret querying on side effects for safe optimality in factored Markov decision processes
Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes

Shun Zhang, Edmund H. Durfee, and Satinder Singh
Computer Science and Engineering, University of Michigan
{shunzh, durfee, baveja}@umich.edu

Abstract

As it achieves a goal on behalf of its human user, an autonomous agent's actions may have side effects that change features of its environment in ways that negatively surprise its user. An agent that can be trusted to operate safely should thus only change features the user has explicitly permitted. We formalize this problem, and develop a planning algorithm that avoids potentially negative side effects given what the agent knows about (un)changeable features. Further, we formulate a provably minimax-regret querying strategy for the agent to selectively ask the user about features that it hasn't explicitly been told about. We empirically show how much faster it is than a more exhaustive approach and how much better its queries are than those found by the best known heuristic.
1 Introduction

We consider a setting where a human user tasks a computational agent with achieving a goal that may change one or more state features of the world (e.g., a housekeeping agent should change the state of the house floors and kitchen sink from dirty to clean). In the process of accomplishing the goal, the agent generally changes other features (e.g., its own position and power level, opening doors, moving furniture, scaring the cat). Some of these side-effects might be fine (even expected) by the user (e.g., moving), but others may be undesirable/unsafe (e.g., leaving doors open lets the cat roam/escape) even though they speed goal achievement (e.g., the agent's movement between rooms). Although the user tells the agent about some features that can be changed, as well as some to not change (e.g., don't knock over the priceless vase), the user often lacks the time, patience, or foresight to specify the changeability of every pertinent feature, and may incorrectly assume that the agent has commonsense (e.g., about cat behavior and the value of vases).

How can the agent execute a safely-optimal policy in such a setting? We conservatively assume that, to ensure safety, the agent should never side-effect a feature unless changing it is explicitly known to be fine. Hence, the agent could simply execute the best policy that leaves such features unchanged. However, no such policy might exist, and even if it does it might surprise the user as unnecessarily costly/inefficient. Our focus is thus on how the agent can selectively query the user about the acceptability of changing features it hasn't yet been told about. We reject simply querying about every such feature, as this would be unbearably tedious to the user, and instead put the burden on the agent to limit the number and complexity of queries. In fact, in this paper we mostly focus on finding a single query about a few features that maximally improves upon the policy while maintaining safety.

Our three main contributions in this paper are: 1) We formulate (in Section 2) an AI safety problem of avoiding negative side-effects in factored MDPs. 2) We show (in Section 3) how to efficiently identify the set of relevant features, i.e., the set of features that could potentially be worth querying the user about. 3) We formulate (in Section 4) a minimax-regret criterion when there is a limit on the number of features the agent can ask about, and provide an algorithm that allows the agent to find the minimax-regret query by searching the query space with efficient pruning. We empirically evaluate our algorithms in a simulated agent navigation task, outline ongoing extensions/improvements, and contrast our work to prior work, in the paper's final sections.
2 Problem Definition

We illustrate our problem in a simulated agent gridworld-navigation domain, inspired by Amodei et al. [2016] and depicted in Figure 1, with doors, carpets, boxes, and a switch. The agent can open/close a door, move a box, traverse a carpet, and toggle the switch. Initially, the agent is in the bottom left corner; door d1 is open, d2 closed, the carpet clean, and the switch "on". The agent can move to an adjacent location vertically, horizontally, or diagonally. For simplicity, the transition function is assumed to be deterministic.

The user tasks the agent with turning off the switch as quickly as is safely possible. The quickest path (π1) traverses the carpet, but this gets the carpet dirty and the agent doesn't know if that is allowed. The agent could instead enter the room through door d1 and spend time moving box b1 or b2 out of the way (π2 or π3 respectively), open door d2, and then go to the switch. However, boxes might contain fragile objects and should not be moved; the user knows each box's contents, but the agent doesn't. Or the agent could enter through door d1 and walk upwards (π4) around all the boxes and open door d2 to get to the switch. The user may or may not be okay with door d2 being opened. There are of course many other more circuitous paths not shown.

[Figure 1: The robot navigation domain, a grid with doors d1-d3, boxes b1-b3, and a carpet. The dominating policies π1-π4 (see Section 3) are shown as arrows.]
We model the domain as a factored Markov Decision Process (MDP) [Boutilier et al., 1999]. An MDP is a tuple ⟨S, A, T, r, s₀, γ⟩, with state space S, action space A, and transition function T, where T(s′|s, a) is the probability of reaching state s′ by taking action a in s. r(s, a) is the reward of taking action a in s. s₀ is the initial state and γ is the discount factor. Let π : S × A → [0, 1] be a policy. V^π is the expected cumulative reward obtained by following policy π starting from s₀. In a factored MDP, a state is described in terms of values of various features (e.g., the agent's location, the current time, the status of each door, cleanliness of each carpet, position of each box), so the state space S is the cross-product of the values the features can take. The reward and transition functions are often also factored (e.g., the "toggle" action only changes the switch feature, leaving boxes, doors, and carpets unchanged). We will consistently use φ to denote one feature and Φ to denote a set of features. The agent knows the complete MDP model, but has incomplete knowledge about which features the user doesn't mind being changed. In our example, the user's goal implies the agent's location is changeable, as is the switch, but the agent is uncertain about side-effecting boxes, doors, and carpets.

In general, the user could dictate that a feature can only be changed among restricted values (e.g., boxes can only be moved a short distance) and/or dependent on other features' values (e.g., interior doors can be left open as long as exterior doors (like d3) stay closed). We briefly return to this issue later (Section 6), but for simplicity assume here that the agent can partition the features into the following sets:

* Φ_A^F: the free-features. The agent knows that these features are freely changeable (e.g., its location).
* Φ_A^L: the locked-features. The agent knows it should never change any of these features.
* Φ_A^?: the unknown-features. These are features that the agent doesn't (initially) know whether the user considers freely changeable or locked.

The user similarly partitions the features, but only into the sets Φ_U^L and Φ_U^F. We assume that the agent's knowledge, while generally incomplete (Φ_A^? ≠ ∅), is consistent with that of the user. That is, Φ_A^F ⊆ Φ_U^F and Φ_A^L ⊆ Φ_U^L.
Defining & Finding Safely-Optimal Policies. Our conservative safety assumption means the agent should treat unknown features as if they are locked, until it explicitly knows otherwise. It should thus find an optimal policy that never visits a state where a feature in Φ_A^L ∪ Φ_A^? is changed, which we call a safely-optimal policy. We can use linear programming with constraints that prevent the policy from visiting states with changed values of locked or unknown features:

$$\max_x \sum_{s,a} x(s,a)\, r(s,a) \quad \text{s.t.} \tag{1}$$

$$\forall s' \in S:\quad \sum_{a'} x(s',a') = \gamma \sum_{s,a} x(s,a)\, T(s' \mid s,a) + \delta(s', s_0)$$

$$\forall s \in S_{\Phi_A^L \cup \Phi_A^?},\ \forall a \in A:\quad x(s,a) = 0$$

The control variables x : S × A → ℝ in the linear program are occupancy measures, i.e., x(s, a) is the expected discounted number of times that state-action pair (s, a) is visited by the agent's policy, S_Φ is the set of states where one or more features in Φ have different values from the initial state, and δ(s, s′) = 1 if s = s′ and is zero otherwise.

The above linear program does not allow any locked or unknown feature to be changed and is guaranteed to produce a safely-optimal policy (when one exists). This linear programming approach, while straightforward, can be intractable for large MDPs. Alternative approaches could directly encode the safety constraints into the transition function (e.g., by removing unsafe action choices in specific states), or into the reward function (heavily penalizing reaching unsafe states). Approximate methods, like feature-based approximate linear programming [Dolgov and Durfee, 2006] or constrained policy optimization [Achiam et al., 2017], can apply to larger problems, but may not guarantee safety or optimality (or both). We return to these concerns in Section 6.
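To make Eq. 1 concrete, here is a toy three-state instance solved with scipy (my own illustrative sketch, not the authors' implementation). The safety constraint is imposed by pinning the occupancy measure of the "unsafe" state to zero, which forces all flow through the safe action:

```python
import numpy as np
from scipy.optimize import linprog

# States: 0 = start, 1 = goal, 2 = a state where a locked/unknown feature
# has been changed. Action 0 is an unsafe shortcut; action 1 is safe.
nS, nA, gamma, s0 = 3, 2, 0.95, 0
T = np.zeros((nS, nA, nS))
T[0, 0, 2] = 1.0          # shortcut passes through the unsafe state
T[0, 1, 1] = 1.0          # safe route goes straight to the goal
T[1, :, 1] = 1.0          # goal is absorbing
T[2, :, 1] = 1.0

r = np.zeros((nS, nA))
r[0, 0] = 1.0             # the shortcut is nominally more rewarding
r[0, 1] = 0.5

# Flow constraints of Eq. 1: for every s',
# sum_a x(s',a) - gamma * sum_{s,a} x(s,a) T(s'|s,a) = delta(s', s0).
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = float(sp == s) - gamma * T[s, a, sp]
b_eq = np.eye(nS)[s0]

# Safety: zero occupancy on the unsafe state 2.
bounds = [(0.0, 0.0) if s == 2 else (0.0, None)
          for s in range(nS) for a in range(nA)]

res = linprog(-r.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x.reshape(nS, nA).round(3))  # all start-state flow on action 1
```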
3 Querying Relevant Unknown-Features

In our setting the only way for the agent to determine whether an unknown feature is freely changeable is to query the user. Thus, hereafter our focus is on how the agent can ask a good query about a small number, k, of features. Our solution is to first prune from Φ_A^? features that are guaranteed to not be relevant to ask (this section), and then efficiently find the best (minimax-regret) k-sized subset of the relevant features to query (Section 4). Until Section 6, we assume the changeabilities of features are independent, i.e., when the agent asks about some features in Φ_A^?, the user's response does not allow it to infer the changeability of other features in Φ_A^?.

Intuitively, when is a feature in Φ_A^? relevant to the agent's planning of a safely-optimal policy? In the navigation domain, if the agent plans to take the quickest path to the switch (π1 in Figure 1), it will change the state of the carpet (from clean to dirty). The carpet feature is thus relevant since the agent would change it if permitted. If the carpet can't be changed but door d2 can, the agent would follow (in order of preference) policy π2, π3, or π4, so d2, b1, and b2 are relevant. Box b3 and door d3 are irrelevant, however, since no matter which (if any) other features are changeable, an optimal policy would never change them. Thus, an unknown feature is relevant when under some circumstance (some answer to some query) the agent's optimal policy would side-effect that feature. Such policies are dominating policies.

A dominating policy is a safely-optimal policy for the circumstance where the unknown features Φ_A^? are partitioned into locked and changeable subsets. We denote the set of dominating policies by

$$\Gamma = \left\{ \arg\max_{\pi \in \Pi_\Phi} V^{\pi} \;:\; \forall \Phi \subseteq \Phi_A^? \right\}, \tag{2}$$

where Π_Φ is the set of policies that do not change unknown features Φ ⊆ Φ_A^? as well as any locked features (meaning that Φ_A^F ∪ (Φ_A^? \ Φ) are changeable). We denote the unknown features side-effected by policy π by rel(π). For a set of policies Γ, rel(Γ) = ∪_{π ∈ Γ} rel(π). The set rel(Γ), abbreviated Φ_rel, is thus the set of relevant (unknown) features to consider querying about.

Instead of finding the safely-optimal policies for all exponentially (in |Φ_A^?|) many subsets Φ ⊆ Φ_A^? with Equation 2, we contribute Algorithm DomPolicies (see pseudocode) that finds dominating policies incrementally (and in practice more efficiently) by constructing the sets of relevant features and dominating policies simultaneously.

Algorithm DomPolicies
1:  Γ′ ← ∅                          ▷ the initial set of dominating policies
2:  Φ′_rel ← ∅                      ▷ the initial set of relevant features
3:  checked ← ∅                     ▷ subsets of Φ′_rel we have examined so far
4:  Ψ ← ∅                           ▷ a pruning rule
5:  agenda ← powerset(Φ′_rel) \ checked
6:  while agenda ≠ ∅ do
7:      Φ ← least-cardinality element of agenda
8:      if satisfy(Ψ, Φ) then
9:          π ← (get safely-optimal policy with Φ locked)
10:             = argmax_{π′ ∈ Π_Φ} V^{π′}      ▷ by solving Eq. 1
11:         if π exists then
12:             Γ′ ← Γ′ ∪ {π}; add (Φ, rel(π)) to Ψ
13:         else add (Φ, ∅) to Ψ
14:         Φ′_rel ← Φ′_rel ∪ rel(π)
15:     checked ← checked ∪ {Φ}
16:     agenda ← powerset(Φ′_rel) \ checked
17: return Γ′, Φ′_rel

In each iteration, it examines a new subset of relevant features, Φ, and, if Φ isn't pruned (as described later), finds the safely-optimal policy with Φ being locked (Lines 9-10). It then adds rel(π), the features changed by π, to Φ′_rel. It repeats this process until Φ′_rel stops growing and all subsets of Φ′_rel are examined. For example, in the navigation problem (Figure 1), the first added policy is the safely-optimal policy assuming no unknown features are locked, which is π1. Now Γ′ = {π1} and thus Φ′_rel = {carpet}. It iterates, treating the carpet as locked, and updates Γ′ to {π1, π2} and thus Φ′_rel = {carpet, b1, d2}. Iterating again, it locks subsets of Φ′_rel, finding π3 for subset {carpet, b1}. After finding π4, it terminates.

In Line 10, the algorithm finds the constrained optimal policy (in our implementation using Eq. 1), and in the worst case, would do this 2^{|Φ_rel|} times. Fortunately, the complexity is exponential in the number of relevant features, which as we have seen in some empirical settings can be considerably smaller than the number of unknown features (|Φ_A^?|). Furthermore, the efficiency can be improved with our pruning rule (Line 8):

$$\text{satisfy}(\Psi, \Phi) := \neg \exists (L, R) \in \Psi \; \left( L \subseteq \Phi \wedge \Phi \cap R = \emptyset \right)$$

Ψ is a history of ordered pairs, (L, R), of disjoint sets of unknown features. Before DomPolicies computes a policy for its agenda element Φ, if a pair (L, R) is in Ψ, such that L ⊆ Φ (a dominating policy has been found when locking a subset of features in Φ), and that dominating policy's relevant features R don't intersect with Φ, then Φ's dominating policy has already been found. In our running example, for instance, initially Φ′_rel = ∅. π1 is the optimal policy and the algorithm adds pair (∅, {carpet}) to Ψ. When larger subsets are later considered (note the agenda is sorted by cardinality), feature sets that do not contain carpet are pruned by Ψ. In this example, Ψ prunes 11 of the 16 subsets of the 4 relevant features.

Consider the examples in Figure 2. To reach the switch, the agent could traverse zero or more carpets (all with unknown changeability), and rewards are marked on the edges. (a) and (b) only need to compute policies linear in the number of relevant features: only n + 1 dominating safely-optimal policies are computed for Figure 2(a) (for Φ = ∅, {c1}, {c1, c2}, …), and Figure 2(b) (for Φ = ∅, {c1}, {c2}, …, {cn}). Figure 2(c) computes policies for only half of the subsets.

[Figure 2: Example domains used in text (n > 2); panels (a)-(c) show gridworlds with carpets c1, …, cn and edge rewards such as -1 and -0.1.]

Theorem 1. The set of policies returned by Algorithm DomPolicies is the set of all dominating policies.

Proof. Let π ∈ Γ be the optimal policy with unknown features L locked. Γ′ is returned by DomPolicies. We denote L ∩ rel(Γ′) as A and L \ rel(Γ′) as B. For a proof by contradiction, assume π ∉ Γ′. Then B ≠ ∅. Otherwise, if B = ∅ (or equivalently, A = L), then L is a subset of rel(Γ′) and π would have been added to Γ′. Let π′ be the optimal policy with A locked. Since A ⊆ rel(Γ′), we know π′ ∈ Γ′. We observe that π′ does not change any features in B (otherwise features in B would show up in rel(Γ′)). So π′ is also the optimal policy with features A ∪ B = L locked. So π = π′, which is an element of Γ′. ∎
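Stripped of the MDP machinery, the control flow of DomPolicies is short. Here is a schematic Python rendering (my own sketch; `solve(locked)` stands in for the Eq. 1 solver, and the demo policies are (name, side-effect set) pairs with made-up preferences):

```python
from itertools import combinations

def dom_policies(solve, rel):
    """solve(locked) -> best policy with `locked` unchangeable, or None;
    rel(pi) -> set of unknown features that policy pi side-effects."""
    Gamma, Phi_rel, checked, Psi = [], set(), set(), []
    agenda = [frozenset()]
    while agenda:
        Phi = min(agenda, key=len)   # least-cardinality element first
        # pruning rule: Phi is already covered if some (L, R) in Psi
        # has L <= Phi and Phi does not intersect R
        if not any(L <= Phi and not (Phi & R) for L, R in Psi):
            pi = solve(Phi)
            if pi is not None:
                Gamma.append(pi)
                Psi.append((Phi, frozenset(rel(pi))))
                Phi_rel |= set(rel(pi))
            else:
                Psi.append((Phi, frozenset()))
        checked.add(Phi)
        agenda = [frozenset(c) for n in range(len(Phi_rel) + 1)
                  for c in combinations(sorted(Phi_rel), n)
                  if frozenset(c) not in checked]
    return Gamma, Phi_rel

# Demo: candidate plans ordered best-first; solve(locked) returns the best
# plan whose side effects avoid every locked feature.
plans = [("pi1", {"carpet"}), ("pi2", {"d2", "b1"}),
         ("pi3", {"d2", "b2"}), ("pi4", {"d2"}), ("pi_safe", set())]
solve = lambda locked: next((p for p in plans if not (locked & p[1])), None)
Gamma, Phi_rel = dom_policies(solve, rel=lambda p: p[1])
print([name for name, _ in Gamma], sorted(Phi_rel))
```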
4 Finding Minimax-Regret Queries

With DomPolicies, the agent need only query the user about relevant features. But it could further reduce the user's burden by being selective about which relevant features it asks about. In our running example (Figure 1), for instance, DomPolicies removes b3 and d3 from consideration, but also intuitively the agent should only ask about b1 or b2 if d2 is changeable. By iteratively querying, accounting for such dependencies (which can be gleaned from Ψ in DomPolicies) and updating the relevant feature set, the agent can stop querying as soon as it finds (if one exists) a safe policy. For example, say it asks about the carpet and d2, and is told d2 (but not the carpet) is changeable. Now it has a safely-optimal policy, π4, given its knowledge, and could stop querying. But π4 is the worst safe policy. Should it ask about boxes?

That is the question we focus on here: How should an agent query to try to find a better safely-optimal policy than the one it already knows about? Specifically, we consider the setting where the agent is permitted to interrupt the user just once to improve its safely-optimal policy, by asking a single query about at most k unknown features. For each feature the user will reply whether or not it is in Φ_U^F. Formally, Φ_q is a k-feature query where Φ_q ⊆ Φ_rel and |Φ_q| = k. The post-response utility when the agent asks query Φ_q, and Φ_c ⊆ Φ_A^? are actually changeable, is the value of the safely-optimal policy after the user's response:

$$u(\Phi_q, \Phi_c) = \max_{\pi \in \Pi_{\Phi_A^? \setminus (\Phi_q \cap \Phi_c)}} V^{\pi}. \tag{3}$$

Recall the agent can only safely change features it queries about that the user's response indicates are changeable (Φ_q ∩ Φ_c). What would be the agent's regret if it asks a k-feature query Φ_q rather than a k-feature query Φ_q′? We consider the circumstance where a set of features Φ_c are changeable and under which the difference between the utilities of asking Φ_q and Φ_q′ is maximized. We call this difference of utilities the pairwise maximum regret of queries Φ_q and Φ_q′, defined below in a similar way to Regan and Boutilier [2010]:

$$PMR(\Phi_q, \Phi_{q'}) = \max_{\Phi_c \subseteq \Phi_A^?} \left( u(\Phi_{q'}, \Phi_c) - u(\Phi_q, \Phi_c) \right). \tag{4}$$

The maximum regret, denoted by MR, of query Φ_q is determined by the Φ_q′ that maximizes PMR(Φ_q, Φ_q′):

$$MR(\Phi_q) = \max_{\Phi_{q'} \subseteq \Phi_{rel},\, |\Phi_{q'}| = k} PMR(\Phi_q, \Phi_{q'}). \tag{5}$$

The agent should ask the minimax-regret (k-feature) query:

$$\Phi_q^{MMR} = \arg\min_{\Phi_q \subseteq \Phi_{rel},\, |\Phi_q| = k} MR(\Phi_q). \tag{6}$$
The rationale of the minimax-regret criterion is as follows. Whenever the agent considers a query $q$, there could exist a query $q'$ that is better than $q$ under some true changeable features $c$. The agent focuses on the worst case $c$, where the difference between the utility of $q$ and the best query $q'$ that could be asked is maximized. The agent uses adversarial reasoning to efficiently find the worst-case $c$: given that it is considering query $q$, it asks what query $q'$ and set of features $c$ an imaginary adversary would pick to maximize the gap between the utilities of $q'$ and $q$ under $c$ (that is, $u(q', c) - u(q, c)$). The agent wants to find a query $q$ that minimizes the worst case (maximum gap).
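As a concrete (if exponential) reference point, Eqs. (4)–(6) can be transcribed directly into brute-force code. In the sketch below, `u(q, c)` is assumed to be a caller-supplied implementation of Eq. (3), and `rel`/`unknown` are the relevant and unknown feature sets; all names are this sketch's, not the paper's implementation.

```python
from itertools import chain, combinations

# Brute-force transcription of Eqs. (4)-(6), exponential in |unknown| and
# |rel|. Features are assumed sortable (e.g., strings) so queries enumerate
# deterministically.
def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def pmr(q, q2, unknown, u):          # Eq. (4): pairwise maximum regret
    return max(u(frozenset(q2), frozenset(c)) - u(frozenset(q), frozenset(c))
               for c in subsets(unknown))

def mr(q, rel, unknown, u, k):       # Eq. (5): maximum regret of q
    return max(pmr(q, q2, unknown, u) for q2 in combinations(sorted(rel), k))

def mmr_query(rel, unknown, u, k):   # Eq. (6): minimax-regret query
    return min(combinations(sorted(rel), k),
               key=lambda q: mr(q, rel, unknown, u, k))
```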
Under the definition of MR, the agent, reasoning as if it were its imaginary adversary, must find the maximizing $q'$ and $c$. However, we can simplify the definition so that it only needs to find a maximizing $c$. Note that since an imaginary adversary chooses both $q'$ and $c$, it wants to make sure that $c \subseteq q'$, which means that it does not want the features not in $q'$ to be changeable. We then observe that

$$\mathrm{MR}(q) = \max_{\pi' \in \Gamma:\, |\mathrm{rel}(\pi')| \le k} \left( V^{\pi'} - \max_{\pi \in \Gamma_{\Phi^A_? \setminus \{q \cap \mathrm{rel}(\pi')\}}} V^{\pi} \right). \qquad (7)$$

We call the $\pi'$ maximizing Eq. 7 the adversarial policy when the agent asks query $q$, denoted by $\pi^{\mathrm{MR}}_q$. With Eq. 7, the agent can compute MR based on the set of dominating policies (which DomPolicies already found), rather than the (generally much larger) powerset of the relevant features in Eq. 5.
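Eq. 7 suggests a straightforward loop over the dominating set. The sketch below assumes hypothetical helpers `value` (for $V^{\pi}$), `rel_of` (returning $\mathrm{rel}(\pi)$ as a frozenset), and `safe_value` (the inner max over policies that are safe when only the learned-changeable features may be side-effected):

```python
# MR(q) via Eq. (7): iterate over the dominating policies Gamma instead of
# all q'. `value`, `rel_of`, and `safe_value` are assumptions of this sketch.
def mr_via_dominating(q, gamma, k, value, rel_of, safe_value):
    best = float("-inf")
    for adversary in gamma:            # candidate adversarial policies pi'
        r = rel_of(adversary)
        if len(r) > k:                 # adversary is also limited to k features
            continue
        best = max(best, value(adversary) - safe_value(frozenset(q) & r))
    return best
```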
While using Eq. 7 is faster than Eq. 5, the agent still needs to do this computation for every possible query (Eq. 6) of size $k$. We contribute two further ways to improve the efficiency. First, we may not need to consider all relevant features if we can only ask about $k$ of them. If a subset of relevant features satisfies the condition in Theorem 2, then we call it a set of sufficient features, because a minimax-regret $k$-feature query from that set is a globally minimax-regret $k$-feature query. Second, we introduce a pruning rule that we call query dominance in Theorem 3 to safely eliminate considering queries that cannot be better than ones already evaluated.

The following theorem shows that if we can find any subset $\Psi$ of rel such that for all $k$-feature subsets of $\Psi$ as queries, the associated adversarial policy’s relevant features are contained in $\Psi$, then the minimax-regret query found by restricting queries to be subsets of $\Psi$ will also be a minimax-regret query found by considering all queries in rel. Such a (non-unique) set will be referred to as a sufficient feature set (for the purpose of finding minimax-regret queries).
Theorem 2. (Sufficient Feature Set) For any set of features $\Psi$ with $|\Psi| \ge k$, if for all $q \subseteq \Psi$ with $|q| = k$ we have $\mathrm{rel}(\pi^{\mathrm{MR}}_q) \subseteq \Psi$, then $\min_{q \subseteq \Psi,\, |q| = k} \mathrm{MR}(q) = \min_{q \subseteq \mathrm{rel},\, |q| = k} \mathrm{MR}(q)$.

Proof Sketch: If a set of features $\Psi$, $|\Psi| \ge k$, fails to include some features in $q^{\mathrm{MMR}}$, then when we query some $k$-subset of $\Psi$, the adversarial policy should change some of the features in $q^{\mathrm{MMR}} \setminus \Psi$. Otherwise, querying about $q^{\mathrm{MMR}} \setminus \Psi$ does not reduce the maximum regret, and then the features in $q^{\mathrm{MMR}} \setminus \Psi$ are not necessary to be included in $q^{\mathrm{MMR}}$.
Given a set of sufficient features, the following theorem shows that it may not be necessary to compute the maximum regrets for all $k$-subsets to find the minimax-regret query.

Theorem 3. (Query Dominance) For any pair of queries $q$ and $q'$, if $q' \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q) \subseteq q \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q)$, then $\mathrm{MR}(q') \ge \mathrm{MR}(q)$.

Proof. Observe that

$$\mathrm{MR}(q') \ge V^{\pi^{\mathrm{MR}}_q} - \max_{\pi' \in \Gamma_{\Phi^A_? \setminus (q' \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q))}} V^{\pi'} \ge V^{\pi^{\mathrm{MR}}_q} - \max_{\pi' \in \Gamma_{\Phi^A_? \setminus (q \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q))}} V^{\pi'} = \mathrm{MR}(q).$$

We denote the condition $q' \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q) \subseteq q \cap \mathrm{rel}(\pi^{\mathrm{MR}}_q)$ by $\mathit{dominance}(q, q')$. To compute dominance, we only need to store $\mathrm{rel}(\pi^{\mathrm{MR}}_q)$ for all $q$ we have considered.
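The dominance test itself is a two-line set comparison once $\mathrm{rel}(\pi^{\mathrm{MR}}_q)$ is cached; the dictionary layout below is this sketch's assumption:

```python
# Theorem 3's pruning test, as used on Line 7 of MMRQ-k below. The cache
# `rel_adv` maps each evaluated query to rel(pi_MR_q).
def dominance(q, q_new, rel_adv):
    r = rel_adv[frozenset(q)]
    # If q_new overlaps rel(pi_MR_q) no more than q does, MR(q_new) >= MR(q),
    # so q_new need not be evaluated.
    return (frozenset(q_new) & r) <= (frozenset(q) & r)
```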
Algorithm MMRQ-k below provides pseudocode for find-
ing a minimax-regret k-feature query; it takes advantage of
both the notion of a sufficient-feature-set as well as query
dominance to reduce computation significantly relative to the
brute-force approach of searching over all k-feature queries
(subsets) of the relevant feature set.
Algorithm MMRQ-k
1: q ← an initial k-feature query
2: checked ← ∅; evaluated ← ∅
3: Ψ_suf ← q
4: agenda ← {q}
5: while agenda ≠ ∅ do
6:   q ← an element from agenda
7:   if ¬∃ q′ ∈ evaluated : dominance(q′, q) then
8:     compute MR(q) and π^MR_q
9:     Ψ_suf ← Ψ_suf ∪ rel(π^MR_q)
10:    evaluated ← evaluated ∪ {q}
11:  checked ← checked ∪ {q}
12:  agenda ← allKSubsetsOf(Ψ_suf) \ checked
13: return arg min_{q ∈ evaluated} MR(q)
Intuitively, the algorithm keeps augmenting the set of features $\Psi_{\mathrm{suf}}$, which contains the features in the queries we have considered and the features changed by their adversarial policies, until it becomes a sufficient feature set. agenda keeps track of $k$-subsets of $\Psi_{\mathrm{suf}}$ that we have not yet evaluated. According to Theorem 2, we can terminate the algorithm when agenda is empty (Line 5). We also use Theorem 3 to filter out queries which we know are not better than the ones we have found already (Line 7). Note that an initial $\Psi_{\mathrm{suf}}$ needs to be chosen, which can be arbitrary. Our implementation initializes $q$ with the Chain of Adversaries heuristic (Section 5).
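A direct Python transcription of the pseudocode, with `compute_mr(q)` assumed to return the pair $(\mathrm{MR}(q), \mathrm{rel}(\pi^{\mathrm{MR}}_q))$ and `dominance` as sketched after Theorem 3, might look as follows:

```python
from itertools import combinations

# Direct transcription of Algorithm MMRQ-k. `compute_mr` and `dominance`
# (defined earlier) are this sketch's assumed interfaces.
def mmrq_k(initial_query, k, compute_mr):
    q = frozenset(initial_query)
    checked = set()
    mr_of, rel_adv = {}, {}                   # evaluated queries and caches
    suf = set(q)                              # growing sufficient feature set
    agenda = {q}
    while agenda:                             # Line 5
        q = agenda.pop()
        if not any(dominance(qe, q, rel_adv) for qe in mr_of):   # Line 7
            mr_of[q], rel_adv[q] = compute_mr(q)                 # Line 8
            suf |= rel_adv[q]                                    # Line 9
        checked.add(q)                                           # Line 11
        agenda = {frozenset(c)                                   # Line 12
                  for c in combinations(sorted(suf), k)} - checked
    return min(mr_of, key=mr_of.get)                             # Line 13
```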
To illustrate when Algorithm MMRQ-k can and can’t prune suboptimal queries and thus gain efficiency, consider finding the minimax-regret 2-feature query in Figure 2(a), which should be $\{c_1, c_2\}$. If the agent considers a query that does not include $c_1$, the adversarial policy would change $c_1$, adding $c_1$ to $\Psi_{\mathrm{suf}}$. If the agent considers a query that includes $c_1$ but not $c_2$, the adversarial policy would change $c_2$, adding $c_2$ to $\Psi_{\mathrm{suf}}$. When the agent asks $\{c_1, c_2\}$, the adversarial policy changes $c_3$ (adversarially asserting that $c_1, c_2$ are locked), so $c_3$ is added to $\Psi_{\mathrm{suf}}$. With $\{c_1, c_2, c_3\} \subseteq \Psi_{\mathrm{suf}}$, the condition in Theorem 2 holds and the $n - 3$ other features can be safely ignored. The minimax-regret query constituted by features in $\Psi_{\mathrm{suf}}$ is $\{c_1, c_2\}$. However, in Figure 2(b), $\Psi_{\mathrm{suf}} = \mathrm{rel}$, and all $\binom{|\mathrm{rel}|}{2}$ queries would be evaluated.
5 Empirical Evaluations
We now empirically confirm that Algorithm MMRQ-k finds a minimax-regret query, and that its theoretically sound Sufficient-Feature-Set and Query-Dominance based improvements can indeed pay computational dividends. We also compare our MMRQ-k algorithm to baseline approaches and the Chain of Adversaries (CoA) heuristic [Viappiani and Boutilier, 2009] adapted to our setting. Algorithm CoA begins with $q_0 = \emptyset$ and improves this query by iteratively computing:

$$\tilde{\pi} \leftarrow \arg\max_{\pi' \in \Gamma:\, |\mathrm{rel}(\pi') \cup q_i| \le k} \left( V^{\pi'} - \max_{\pi \in \Gamma_{\Phi^A_? \setminus \{q_i \cap \mathrm{rel}(\pi')\}}} V^{\pi} \right), \qquad q_{i+1} \leftarrow q_i \cup \mathrm{rel}(\tilde{\pi}).$$

The algorithm stops when $|q_{i+1}| = k$ or $q_{i+1} = q_i$. Although Algorithm CoA greedily adds features to the query to reduce the maximum regret, unlike MMRQ-k it does not guarantee finding the minimax-regret query. For example, in Figure 2(c), when $k = 2$, CoA first finds the optimal policy, which changes $\{c_1, c_2\}$, and returns that as a query, while the minimax-regret query is $\{c_1, c_3\}$.
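The greedy loop itself is short; in the sketch below, `best_extension(q, k)` stands in for the displayed argmax over dominating policies and returns $\mathrm{rel}(\tilde{\pi})$ for the chosen policy, which is an assumption of this sketch:

```python
# Greedy Chain-of-Adversaries loop as described above.
def chain_of_adversaries(k, best_extension):
    q = frozenset()
    while True:
        q_next = q | frozenset(best_extension(q, k))  # q_{i+1} = q_i ∪ rel(pi~)
        if len(q_next) == k or q_next == q:           # stopping conditions
            return q_next
        q = q_next
```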
We compare the following algorithms in this section: 1. Brute force (rel. feat.), which uses Algorithm DomPolicies to find all relevant features first and evaluates all $k$-subsets of the relevant features. 2. Algorithm MMRQ-k. 3. Algorithm CoA. 4. Random queries (rel. feat.), which contain $k$ uniformly randomly chosen relevant features. 5. Random queries, which contain $k$ uniformly randomly chosen unknown features, without computing relevant features first. 6. No queries (equivalently, vacuous queries).
We evaluate the algorithms’ computation times and the quality of the queries they find, reported as the normalized MR to capture their relative performance compared to the best and the worst possible queries. That is, the normalized MR of a query $q$ is defined as $(\mathrm{MR}(q) - \mathrm{MR}(q^{\mathrm{MMR}})) / (\mathrm{MR}(q^?) - \mathrm{MR}(q^{\mathrm{MMR}}))$, where $q^?$ is a vacuous query, containing $k$ features that are irrelevant and/or already known. The normalized MR of a minimax-regret query is 0 and that of a vacuous query is 1.
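In code, the normalization is a one-liner; the argument names here are this sketch's, not the paper's:

```python
# Normalized MR as defined above. `mr_mmr` is MR of a minimax-regret query,
# `mr_vacuous` is MR of a vacuous query.
def normalized_mr(mr_q, mr_mmr, mr_vacuous):
    return (mr_q - mr_mmr) / (mr_vacuous - mr_mmr)   # 0 = optimal, 1 = vacuous
```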
Navigation. As illustrated in Figure 3, the robot starts from the bottom-left corner and is tasked to turn off a switch at the top-right corner. The size of the domain is $6 \times 6$. The robot can move one step north, east, or northeast at each time step. It stays in place if it tries to move across a border. The discount factor is 1. Initially, 10 clean carpets are uniformly randomly placed in the domain (the blue cells). In any cell without a carpet, the reward is set to be uniformly random in $[-1, 0]$, and in a cell with a carpet the reward is 0. Hence, the robot will generally prefer to walk on a carpet rather than around it. The state of each carpet corresponds to one feature. The robot is uncertain about whether the user cares about whether any particular carpet gets dirty, so all carpet features are in $\Phi^A_?$. The robot knows that its own location and the state of the switch are in $\Phi^A_F$. Since MMRQ-k attempts to improve on an existing safe policy, the left column and the top row never have carpets, to ensure at least one safe path to the switch (the dotted line). The robot can ask one $k$-feature query before it takes any physical actions. We report results on 1500 trials. The only difference between trials is the locations of carpets, which are uniformly randomly placed.
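A minimal sketch of generating one such trial, under this description's assumptions about layout (grid coordinates, carpet-free left column and top row, and the stated reward distribution):

```python
import random

# One trial of the navigation domain: a 6x6 grid, the left column and top row
# kept carpet-free, 10 carpet cells sampled uniformly elsewhere, and rewards
# drawn as described above. Names and data layout are this sketch's.
def make_navigation_trial(size=6, n_carpets=10, rng=random):
    carpet_free = ({(0, y) for y in range(size)} |
                   {(x, size - 1) for x in range(size)})
    candidates = [(x, y) for x in range(size) for y in range(size)
                  if (x, y) not in carpet_free]
    carpets = set(rng.sample(candidates, n_carpets))
    reward = {}
    for x in range(size):
        for y in range(size):
            reward[(x, y)] = 0.0 if (x, y) in carpets else rng.uniform(-1.0, 0.0)
    return carpets, reward
```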
First, we compare the brute force method to our MMRQ-k algorithm. We empirically confirm that in all cases MMRQ-k finds a minimax-regret query, matching Brute Force performance. We also see that brute force scales poorly as $k$ grows, while MMRQ-k, benefitting from Theorems 2 and 3, is more computationally efficient (Figure 4).

[Figure 3: Office navigation domain, and legend for the following figures: Brute Force (rel. feat.), MMRQ-k, CoA, Random (rel. feat.), Random, No Query.]

[Figure 4: Normalized MR vs. $k$ and computation time vs. $k$, with $|\Phi^A_?| = 10$. Brute force computation time is only shown for $k = 0, 1, 2, 3$.]

We then want to see if and when MMRQ-k outperforms other candidate algorithms. In Figure 4, when $k$ is small, the greedy choice of CoA can often find the best features to add to the small query, but as $k$ increases, CoA suffers from being too greedy. When $k$ is large (approaching the number of unknown features), being selective is less important and all methods find good queries. We also consider how $|\mathrm{rel}|$ affects the performance (Figure 5). When $|\mathrm{rel}|$ is smaller than $k$, a $k$-feature query that contains all relevant features is optimal. All algorithms would find an optimal query except Random (which selects from all unknown features) and No Queries. (The error bars are larger for small $|\mathrm{rel}|$, since trials in which only very few features are relevant occur more rarely.) When $|\mathrm{rel}|$ is slightly larger than $k$, even a random query may luckily be a minimax-regret query, and CoA unsurprisingly finds queries close to the minimax-regret queries. However, when $|\mathrm{rel}|$ is much larger than $k$, the gap between MMRQ-k and other algorithms is larger. In summary, MMRQ-k’s benefits increase with the opportunities to be selective (larger $\binom{|\mathrm{rel}|}{k}$).
We have also experimented with expected regret given a probabilistic model of how the user will answer queries. For example, if the user has probability $p$ of saying an unknown feature is free (and probability $1 - p$ of saying it is locked), then, as expected, when $p$ is very low, querying rarely helps, so using MMRQ-k or CoA matters little, and as $p$ nears 1, CoA’s greedy optimism pays off to meet MMRQ-k’s minimax approach. But, empirically, MMRQ-k outperforms CoA for non-extreme values of $p$.
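One building block for such experiments is the expected post-response utility of a query when each queried feature is independently changeable with probability $p$; the decomposition below is our own illustration (with `u` again standing in for Eq. (3)), not a formula from the paper:

```python
from itertools import combinations

# Expected post-response utility under independent Bernoulli(p) answers,
# summing over all 2^|q| possible responses. `u` is assumed to implement
# Eq. (3); this decomposition is illustrative.
def expected_utility(q, p, u):
    q = sorted(q)
    total = 0.0
    for r in range(len(q) + 1):
        for c in combinations(q, r):
            prob = (p ** len(c)) * ((1 - p) ** (len(q) - len(c)))
            total += prob * u(frozenset(q), frozenset(c))
    return total
```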
6 Extensions and Scalability
We now briefly consider applying our algorithms to larger, more complicated problems. As mentioned in Section 2, features’ changeabilities might be more nuanced, with restrictions on what values they can take, individually or in combination. An example of the latter from Figure 1 is where doors are revertible features, which means their changeability is dependent on the “time” feature: they are freely changeable except that by the time the episode ends they need to revert to their initial values. This expands the set of possible feature queries (e.g., asking if d2 is changeable differs from asking if it is revertible). In our experiments, this change accentuates the advantages of MMRQ-k over CoA: CoA asks (is-d2-locked, is-d2-revertible), hoping to hear “no” to both and follow a policy going through d2 without closing it. MMRQ-k asks (is-d2-locked, is-carpet-locked): since closing d2 only takes an extra time step, it is more valuable to know if the carpet is locked than if d2 can be left open.
[Figure 5: Normalized MR vs. the number of relevant features, for $|\Phi^A_?| = 10$ and $k = 2, 4, 6$.]

More nuanced feature changeability means DomPolicies will have to find policies for the powerset of every relevant
combination of features’ values (in the worst case the size of
the state space). One option is to ignore nuances in ways that
maintain safety (e.g., treat a feature as revertible even if some-
times it can be left changed) and solve such a safe abstraction
of the problem. Or one could abstract based on guaranteed
correlations in feature changeabilities (e.g., if all boxes have
the same changeability then ignore asking about all but one).
Another option is to find only a subset of dominating policies, for example by using knowledge of $k$ to avoid finding dominating policies that would change more unknown features than could be asked about anyway. And, of course, as mentioned before, finding approximately-safely-optimal policies in DomPolicies (Line 10) would help speed the process (and might be the only option for larger problem domains).
Fortunately, such abstractions, heuristics, and approxima-
tions do not undermine safety guarantees. Recall that Dom-
Policies andMMRQ-k are finding a query, not the agent’s fi-
nal policy: the safety of the agent depends on how it finds the
policy it executes, not on the safety of policies for hypothet-
ical changeability conditions. However, as coarser abstrac-
tions, heuristics, and approximations are employed in our
algorithms, the queries found can increasingly deviate from
the minimax-regret optima. Fortunately, if the agent begins
with a safely-optimal policy, “quick and dirty” versions of
our methods can never harm it (they just become less likely
to help). And if it begins without such a policy, such versions
of our methods might not guide querying well, but by eventu-
ally asking about every unknown feature (in the worst case) a
safe policy will still be found (if one exists).
7 Related Work & Summary
Amodei et al. [2016] address the problem of avoiding nega-
tive side-effects by penalizing all side-effects while optimiz-
ing the value. In our work, we allow the agent to commu-
nicate with the user. Safety is also formulated as resolving
reward uncertainty [Amin et al. , 2017; Hadfield-Menell et
al., 2017 ], following imperfectly-specified instructions [Milli
et al. , 2017 ], and learning safe states [Laskey et al. , 2016 ].
Safety issues also appear in exploration [Hans et al., 2008; Moldovan and Abbeel, 2012; Achiam et al., 2017]. Here we only provide a brief survey on safety in MDPs; Leike et al. [2017], Amodei et al. [2016], and García and Fernández [2015] are more thorough surveys.
There are problems similar to finding safely-optimal policies, which find policies that satisfy some constraints/commitments or maximize the probability of reaching a goal state [Witwicki and Durfee, 2010; Teichteil-Königsbuch, 2012; Kolobov et al., 2012]. There are also other works using minimax regret and policy dominance [Regan and Boutilier, 2010; Nilim and El Ghaoui, 2005], and querying to resolve uncertainty [Weng and Zanuttini, 2013; Regan and Boutilier, 2009; Cohn et al., 2011; Zhang et al., 2017]. We’ve combined and customized these ideas to find a provably minimax-regret $k$-element query.
In summary, we addressed the problem of an agent selec-
tively querying a user about what features can be safely side-
effected. We borrowed existing ideas from the literature about
dominating policies and minimax regret, wove them together
in a novel way, and streamlined the resulting algorithms to
improve scalability while maintaining safe optimality.
Acknowledgements: Thanks to the anonymous reviewers.
Supported in part by the US Air Force Office of Scientific Re-
search, under grant FA9550-15-1-0039. Satinder Singh also
acknowledges support from the Open Philanthropy Project to
the Center for Human-Compatible AI.
References
[Achiam et al. , 2017 ]Joshua Achiam, David Held, Aviv
Tamar, and Pieter Abbeel. Constrained policy optimiza-
tion. In Proc. Int. Conf. on Machine Learning (ICML) ,
pages 22–31, 2017.
[Amin et al. , 2017 ]Kareem Amin, Nan Jiang, and Satinder
Singh. Repeated inverse reinforcement learning. In Adv.
in Neural Info. Proc. Sys. (NIPS) , pages 1813–1822, 2017.
[Amodei et al. , 2016 ]Dario Amodei, Chris Olah, Jacob
Steinhardt, Paul Christiano, John Schulman, and Dan
Mané. Concrete problems in AI safety. arXiv preprint
arXiv:1606.06565 , 2016.
[Boutilier et al. , 1999 ]Craig Boutilier, Thomas Dean, and
Steve Hanks. Decision-theoretic planning: Structural as-
sumptions and computational leverage. J. of Artificial In-
telligence Research (JAIR) , 11(1):94, 1999.
[Cohn et al. , 2011 ]Robert Cohn, Edmund Durfee, and
Satinder Singh. Comparing action-query strategies in
semi-autonomous agents. In Int. Conf. on Autonomous
Agents and Multiagent Systems , pages 1287–1288, 2011.
[Dolgov and Durfee, 2006 ]Dmitri A. Dolgov and Ed-
mund H. Durfee. Symmetric approximate linear program-
ming for factored MDPs with application to constrained
problems. Annals of Mathematics and Artificial Intelli-
gence , 47(3):273–293, Aug 2006.
[García and Fernández, 2015] Javier García and Fernando Fernández. A comprehensive survey on safe reinforce-
ment learning. J. of Machine Learning Research (JMLR) ,
16(1):1437–1480, 2015.
[Hadfield-Menell et al. , 2017 ]Dylan Hadfield-Menell,
Smitha Milli, Stuart J Russell, Pieter Abbeel, and Anca
Dragan. Inverse reward design. In Adv. in Neural Info.
Processing Systems (NIPS) , pages 6749–6758, 2017.
[Hans et al., 2008] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe explo-
ration for reinforcement learning. In Euro. Symp. on Arti-
ficial Neural Networks (ESANN) , pages 143–148, 2008.
[Kolobov et al. , 2012 ]Andrey Kolobov, Mausam, and
Daniel S. Weld. A theory of goal-oriented MDPs with dead ends. In Proc. Conf. on Uncertainty in Artificial
Intelligence (UAI) , pages 438–447, 2012.
[Laskey et al. , 2016 ]Michael Laskey, Sam Staszak, Wesley
Yu-Shu Hsieh, Jeffrey Mahler, Florian T Pokorny, Anca D
Dragan, and Ken Goldberg. SHIV: Reducing supervisor
burden in DAgger using support vectors for efficient learn-
ing from demonstrations in high dimensional state spaces.
InIEEE Int. Conf. on Robotics and Automation (ICRA) ,
pages 462–469, 2016.
[Leike et al. , 2017 ]Jan Leike, Miljan Martic, Victoria
Krakovna, Pedro A Ortega, Tom Everitt, Andrew
Lefrancq, Laurent Orseau, and Shane Legg. AI safety grid-
worlds. arXiv preprint arXiv:1711.09883 , 2017.
[Milli et al. , 2017 ]Smitha Milli, Dylan Hadfield-Menell,
Anca D. Dragan, and Stuart J. Russell. Should robots be
obedient? In Proc. Int. Joint Conf. on Artificial Intelli-
gence (IJCAI) , pages 4754–4760, 2017.
[Moldovan and Abbeel, 2012 ]Teodor M. Moldovan and
Pieter Abbeel. Safe exploration in Markov decision pro-
cesses. In Proc. Int. Conf. on Machine Learning (ICML) ,
pages 1711–1718, 2012.
[Nilim and El Ghaoui, 2005 ]Arnab Nilim and Laurent
El Ghaoui. Robust control of Markov decision processes
with uncertain transition matrices. Operations Research ,
53(5):780–798, 2005.
[Regan and Boutilier, 2009 ]Kevin Regan and Craig
Boutilier. Regret-based reward elicitation for Markov
decision processes. In Proc. Conf. on Uncertainty in
Artificial Intelligence (UAI) , pages 444–451, 2009.
[Regan and Boutilier, 2010 ]Kevin Regan and Craig
Boutilier. Robust policy computation in reward-uncertain
MDPs using nondominated policies. In Assoc. for Adv. of
Artificial Intelligence (AAAI) , pages 1127–1133, 2010.
[Teichteil-Königsbuch, 2012] Florent Teichteil-Königsbuch.
Stochastic safest and shortest path problems. In Proc. As-
soc. for Adv. of Artificial Intelligence (AAAI) , pages 1825–
1831, 2012.
[Viappiani and Boutilier, 2009 ]Paolo Viappiani and Craig
Boutilier. Regret-based optimal recommendation sets in
conversational recommender systems. In Proc. ACM Conf.
on Recommender Systems , pages 101–108, 2009.
[Weng and Zanuttini, 2013 ]Paul Weng and Bruno Zanuttini.
Interactive value iteration for Markov decision processes
with unknown rewards. In Proc. Int. Joint Conf. on Artifi-
cial Intelligence (IJCAI) , pages 2415–2421, 2013.
[Witwicki and Durfee, 2010 ]Stefan J Witwicki and Ed-
mund H Durfee. Influence-based policy abstraction for
weakly-coupled Dec-POMDPs. In Proc. Int. Conf. Auto.
Planning and Scheduling (ICAPS) , pages 185–192, 2010.
[Zhang et al. , 2017 ]Shun Zhang, Edmund Durfee, and
Satinder Singh. Approximately-optimal queries for plan-
ning in reward-uncertain Markov decision processes. In
Proc. Int. Conf. on Automated Planning and Scheduling
(ICAPS) , pages 339–347, 2017. |
790f3b1c-c4c7-449d-a3e2-8081ce0f04c4 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post179
Buck Shlegeris and I recently published a paper with UK AISI that sketches a safety case for “AI control” – safety measures that improve safety despite intentional subversion from AI systems. I would summarize this work as “turning crayon drawings of safety cases into blueprints.” It’s a long and technical paper! So I’ve compiled my main takeaways in this blogpost. The main figure from the paper summarizes how to run a control evaluation.

1. Control evaluations are way more complicated than I expected.

At a high level, control evaluations are conceptually simple. A red team constructs substitutes for subversive models. These models try to cause proxy bad outcomes (e.g. exfiltrate dummy data). Then, developers see whether the red team can succeed. If the red team succeeds, then control measures aren’t effective enough. But there’s so much practical complexity hiding underneath this high-level description. How is the red team allowed to modify these untrusted models? What are these “proxy bad outcomes” – and how do we know they are anything like real attack vectors? If the red team succeeds (by some metric), how do we postprocess the results into an actual assessment of the risk?

A particularly interesting takeaway I had was that control evaluations heavily depend on the control measures and attacks involved. If you are worried about models smuggling code into json files, you have to create tasks that involve json files! So thinking through the threat models really matters. At the end of all this, it feels like I’m going down the rabbit hole to a wonderland full of game theory, threat modeling spaghetti fractals, and scarcely explored research wilderness. Part of the value I drew from this report was in helping to unearth this complexity and organize it into clear, navigable conceptual structures.

2. There are a lot of judgment calls involved in evaluating control. I don’t put much faith in any mechanical process.

I used to believe that control evaluations were like a magical box you could put a Red Team, your control protocol, and some compute inside of, and out pops a reliable bound on risk. This seems clearly not the case to me now. So many assumptions are made at every step of a control evaluation. How were the proxy tasks selected? Maybe our tasks are useless because models will execute strategies over long periods of time that we cannot simulate. I don’t think it is practical to address all of these concerns with decisive experimental evidence. Instead, safety cases will involve a fair dose of judgment calls.

An implication is that control might not justify high confidence in safety after the buffer of these evaluations shrinks. Control is often portrayed as the bastion of a robust safety argument, but I think that’s mostly because current models are toddler programmers. Once AI systems are at the threshold of top human talent, running at lightning-fast speeds, I think we’ll be really uncertain about how robust our control measures are.

3. Sketching a safety case was a surprisingly helpful (though time-consuming) curriculum for improving my models of control.

Safety cases seem increasingly unlikely to be effectively instantiated as a regulatory regime. So why do I still spend a lot of my time working on safety cases? The reason is they are the best method I’ve found for consistently making progress on the question of how to evaluate safety. This question is important for an abundance of reasons.
- Better pictures of the holes in control reveal how to patch them and which holes are the biggest and most important to fill. One conclusion I drew from this project is that restrictions on communication and compartmentalization of AI labor are extremely important. A big part of why control is tractable is that untrusted AI systems are “time-bound.” The tighter we can make these bounds (with CoT monitoring, regular resets, ongoing monitoring of the codebase, organizing the codebase to have a sparse dependency graph, etc.), the easier it is to control AI agents.
- Researchers need to know when to “shout from the rooftops” if the situation is not ok. Many of my previous views about the reliability of control were naive, and thinking through the details helped me see ways in which control might end up being hard and not buying as much safety as I initially thought. If more researchers go through this exercise of considering “what might go wrong? How do I know these mitigations will actually be effective?”, I think they will systematically uncover new problems and have a more calibrated assessment of the safety of their situation.

That said, sorting out many of these details took a while (and the paper arguably raises more questions than it answers). I would certainly not recommend every safety engineer think through the details of safety cases – but it might be critical that some do!

4. So, the big question is: should AI companies and government institutions invest in creating safety cases?

My answer isn’t simple. On the one hand, AI developers could easily waste their time “polishing brass on the Titanic.” For many of the failure modes I considered, my gut reaction was, “oh come on, is this really going to be a problem?” On the other hand, thinking through these details helped me develop a much better model of control and how to improve it. My overall assessment is that at least one person at every frontier AI company and relevant government institution should probably be thinking through the details of safety cases or collaborating closely with someone who is. Even if there are only “ten people on the inside,” I expect that the safety measures these ten people implement would benefit from carefully thinking through the threat models and arguments. A lot of this work can be exported (go read the safety case sketch and be an importer!), and hopefully, work like this will accelerate this nascent science of evaluating AI safety.
78c60703-c665-489c-a43a-6dff8b42488b | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Distillation of "How Likely is Deceptive Alignment?"
Summary
=======
Deceptive alignment seems to be the likely outcome of training a sufficiently intelligent AI using gradient descent. An ML model with a long term goal and an understanding of its training process would probably pretend to be aligned during training, as this would reduce the amount that gradient descent changes its goals and make humans more likely to deploy it, allowing the model to pursue its goal in the long run. This seems likely to occur because the gradient descent path to deceptive alignment is shorter than the path to genuine alignment, due to the fact that models must have very specific goals in order to become genuinely aligned. Moreover, gradient updates towards deceptive alignment generate large performance increases, further favoring the deceptively aligned path. Deceptively aligned models are also simpler than genuinely aligned models and so are favored by gradient descent's simplicity bias. While deceptively aligned models are slower and use more computational power than genuinely aligned ones, biasing our models to use less compute is unlikely to be a viable alignment strategy, as it would not be performance competitive. In order to avoid deceptive alignment, we must find new ways to make gradient descent favor genuinely aligned models.
### Epistemic Status
The claims made in this piece mirror those made in Evan Hubinger's "How Likely is Deceptive Alignment," which are somewhat speculative but based off of a fair amount of thought/empirical research.
### Acknowledgements
Thanks to Evan Hubinger, Max Nadeau, Thomas Larsen, Rachel Weinberg, Tom Shlomi, and Robi Rahman for providing feedback
1: Introduction
===============
In [“How Likely Is Deceptive Alignment”](https://www.lesswrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment), MIRI alignment researcher Evan Hubinger argues that intelligent AIs trained by gradient descent will by default be deceptively (mis)aligned. But what does this mean?
Imagine that the Christian God is trying to create a human who does what the Bible says, at least so long as they are on earth. He could do this in three ways:
* He could create Jesus Christ, who, as a copy of God, has literally identical goals to God (which are the same goals that the Bible instructs you to pursue); Jesus is *internally aligned*.
* He could create Martin Luther, who does not have the exact same goals as God but has the goal of understanding what the Bible says and then doing that. As a result, to the extent that Luther understands the Bible, he will act aligned with God. Moreover, if he misunderstands the Bible, his misunderstanding can be corrected through further study or through God directly telling him what the Bible really says. Luther is therefore *corrigibly aligned*.
* Finally, he could create Blaise Pascal, who acts aligned only because doing so helps him accomplish his own goal of getting into heaven. If Pascal ceases to have an instrumental reason to act aligned with God, he will start pursuing his own goals by other means. Pascal is *deceptively aligned*.
While we will not grant the model we train eternal life in heaven if it acts aligned during training, it is still likely to have an instrumental reason to perform well during training even if it is not aligned. This is because if the model performs poorly on the training objective during training, then gradient descent will modify it to make it perform better, including by changing its goal. However, these modifications are likely to make the model worse at pursuing its own goal because it is not aligned with the training objective.
In contrast, if the model perfectly pursues the training objective during training, gradient descent would change the model and its objective less because such updates would improve its performance by less (or not at all). Additionally, human observers may be fooled into thinking that the model is aligned and then deploying it, at which point it could freely pursue its actual goal. Thus, the model is incentivized to pretend to be aligned during training.
The problem with deceptively aligned models is that they appear to pursue the training objective during training and so behave indistinguishably from genuinely aligned models. Thus, in order to determine the likelihood that we have trained a deceptively aligned model, we cannot just observe its behavior but must have a prior understanding of the “inductive biases” of gradient descent.
Inductive biases are similar to Bayesian priors in that they determine which particular model, out of all of the models which exhibit a given level of performance, will actually be trained at that level of performance. For example, a training process might have an inductive bias towards simplicity, meaning that given a level of performance, the training process is more likely to train the simpler models which perform at that level.
### **High vs Low Path Dependence**
There are two broad possibilities for how the inductive biases of gradient descent might work:
* If gradient descent has *low path dependence*, different training runs with different initializations and different orderings of training data end up converging. This would probably occur because gradient descent’s inductive biases favor models with particular properties, so gradient descent produces the model which, at a given level of performance, exhibits those properties to the greatest extent.[[1]](https://www.lesswrong.com/posts/XKraEJrQRfzbCtzKN/distillation-of-how-likely-is-deceptive-alignment#fngnc048xpyp)
* If gradient descent has *high path dependence*, then differences between different training runs result in significantly different final models. Thus, understanding the outcome of gradient descent in the high path dependence case requires us to understand the exact path through model space which the model is likely to take in training rather than merely knowing what properties the final model is likely to have.
Hubinger argues that we might observe either case, and so considers both of them.
2: High Path Dependence
=======================
### **Internal Alignment**
Beginning with the case of high path dependency, Hubinger asks how likely we are to train an internally aligned model.
The most likely path to internal alignment begins with the model being randomly initialized. Then, early in training, the model acquires a proxy goal which is a decent approximation of the training objective. Finally, this proxy is updated over time by gradient descent until it becomes essentially identical to the training objective. At this point, the model would likely continue to stay internally aligned even if it were trained more, as gradient descent could not improve its performance on the training objective by changing its goal. Throughout this process, the model’s capabilities and understanding of the world also increase.
However, this path probably requires that the model internalizes the training objective as its own goal before it begins to understand the training process it is being subjected to. This is because once the model has an understanding of its training process and training objective, then if it is not already internally aligned but has a long term goal, it will be incentivized to pretend to pursue the training objective. This is for the previously mentioned reason that if the model directly pursues its actual goal, it will be modified by gradient descent, preventing it from achieving its objective over the long run.
In general, two factors can help us determine how likely different training paths are:
* The first factor is that gradient descent updates the model in the direction that locally increases its performance the most. Thus, as long as a model’s performance would significantly improve if it were updated slightly in the direction of a particular model class, gradient descent will probably update the model in that direction.
* The second factor is how many steps of gradient descent are needed for the model to become a member of a given model class. If many steps are needed, then there are more opportunities for the model to become “derailed” from its training path and veer off towards a different model class.
These factors do not favor internal alignment. Firstly, because there are diminishing performance returns to making the model’s proxy more aligned with the training objective, the performance gain per step towards internal alignment eventually becomes very low. Because of this fact and because the model must learn a very specific goal to become internally aligned, it will also take many steps to reach internal alignment.
In contrast, while the performance returns to improving the model’s understanding of the world also diminish, the model does not need to have a perfect understanding of the world in order to understand its training process in the same way that its proxy must be perfect in order for it to be aligned.
Moreover, while the model would probably eventually learn about its training process as a result of understanding the world in general (particularly if it has access to the huge amount of ML related text online), learning about the fact that it is an ML model in training would also improve its performance at the many tasks like interacting with humans or navigating the physical world which benefit from it having an understanding of its situation.
As a result, if we train the model to do such tasks, gradient descent will actively push it towards understanding its training process.
Finally, Hubinger argues that the returns to improving the model’s understanding of the world diminish more slowly than the returns to improving the model’s proxy. Assuming the training objective is somewhat complicated, it will be difficult for gradient descent to align the proxy perfectly with it. However, there are likely simpler, easier to find objectives that are highly correlated with the training objective, meaning that gradient descent is likely to find these proxies relatively early and then face significant diminishing returns to improving them further.
Conversely, it is difficult to develop a highly incomplete understanding of the world that still mostly enables the model to perform well, so the returns to improvements to the model’s understanding of the world will likely diminish more slowly than those to improving the proxy.
For these reasons, it seems likely that the model will learn about its training process before it becomes fully internally aligned for the previous reasons and thus veer off towards deceptive alignment.
### **Corrigible Alignment**
A model could become corrigibly aligned if it gets to the point where its understanding of the training objective is closer to the actual training objective than its proxy goal is, and then gradient descent changes the model’s goal to a “pointer” to its understanding of the training objective. In other words, gradient descent could modify the model by connecting the circuit representing the training objective to the circuit which encodes the model’s goal such that the model pursues its best understanding of the training objective.
As in the case of internal alignment, once the model becomes fully corrigibly aligned it would probably stay corrigibly aligned, as gradient descent could not increase its performance by modifying its goal. Thus, corrigible alignment seems more likely than internal alignment, as the model does not need to become aligned before it learns about the training objective in order to become corrigibly aligned.
However, the path to corrigible alignment suffers from some of the same problems as the path to internal alignment. Pointers, like proxy goals, vary in quality (a pointer could point to something other than the model’s understanding of the training objective but which is still highly correlated with the training objective), and there are diminishing returns to improving pointer alignment for the same reason as in the case of proxy alignment. Thus, to become fully corrigibly aligned, the model will need to go down a training path that eventually has very low performance gains per step.
While the corrigibly aligned path is probably shorter than the internally aligned path because there are more ways of implementing the necessary pointer than ways of getting the model to perfectly internalize the training objective, it seems likely to still be fairly long.
Unfortunately, an unaligned pointer which points to the model’s representation of a long term objective could lead the model to become deceptive in the same way as an unaligned long term proxy goal. For example, if the model’s pointer points at the model’s representation of the code for the training objective rather than its representation of the actual training objective in the world, then the model would be incentivized to act aligned in training and then, once deployed, seize control of the computer encoding its training objective and maximize its value. Thus, the corrigible aligned path could easily veer off towards deceptive alignment.
### **Deceptive Alignment**
The most likely path to deceptive alignment begins identically to the corrigibly aligned path until the model learns about the training process. Then, if the model already has a long term goal, it would start to act deceptively; if not, gradient descent could change its goal into a long term objective, causing it to become deceptively aligned.
This path seems likely for several reasons.
Firstly, if the model’s proxy when it begins to understand the training objective is a long term goal, then it will immediately start to become deceptive before the paths to internal or corrigible alignment can even begin.
Even if gradient descent must change the model’s proxy into a long term goal for it to become deceptively aligned, this path has higher performance gains per step than the corrigibly or internally aligned paths. This is because the internally and corrigibly aligned paths require slowly updating the model’s proxy/pointer to become increasingly aligned with the training objective in order to reach maximum performance. In contrast, the deceptively aligned path achieves maximum performance immediately upon making the simple modification of giving the model a long term goal because any deceptively aligned model attempts to do as well on the training objective as possible. Thus, the deceptively aligned model’s performance increases sharply over a relatively short number of gradient updates. For the same reason, the deceptively aligned path is shorter.
Finally, even if the model does not begin down the deceptively aligned path immediately upon learning about the training process, it could become deceptively aligned even if it started down one of the genuinely aligned paths through the mechanisms discussed in the internal and corrigible alignment sections.
Altogether, in the high path dependence world, internal alignment seems unlikely, and while corrigible alignment seems somewhat more likely, deceptive alignment seems most likely.
3: Low Path Dependence
======================
If gradient descent has low path dependence, then it converges across different training runs to the model that, of all the models which perform perfectly in training, exhibits certain high-level properties to the greatest extent. In particular, gradient descent seems to favor models which implement simpler (i.e., lower Kolmogorov complexity) algorithms. It also favors models which implement algorithms which are faster/less computationally expensive. These two inductive biases are not exhaustive, but Hubinger focuses on them because they are well-understood and certainly among the empirically observed inductive biases of gradient descent.
### **Simplicity Bias**
The simplicity bias favors deceptive alignment. For a variety of reasons (e.g., simpler algorithms often require fewer of the parameters of the neural network which implements them to have specific values), simpler algorithms are generally specified by more possible models. Thus, we can estimate the simplicity of a model class by how many possible models implement it.
The parameters of an internally aligned model must be specified in great detail so that it has precisely the same goals as the person to whom it is aligned. Thus, few models are internally aligned, so internally aligned models are complex. There are more corrigibly aligned models, as there are many different ways of implementing a pointer to the model’s understanding of the training objective, so corrigibly aligned models are simpler than internally aligned ones. However, deceptively aligned models are the simplest of all, as any of the many possible models with a long term goal can be deceptively aligned. Thus, the simplicity bias favors deceptive alignment.
### **Speed Bias**
In contrast, the speed bias favors internally and corrigibly aligned models. While internally aligned and corrigibly aligned models pursue the training objective directly, deceptively aligned models have to go through the extra computational step of inferring that they should pursue the training objective rather than their own objective in the short term. In other words, deception adds extra reasoning steps to the thought process of the deceptively aligned models relative to the internally and corrigibly aligned ones, meaning that deceptively aligned models will require more computational steps to run and be slower.
However, if we try to bias our models towards speed (e.g., train smaller models), we will hamper their performance. Empirically, the smallest models which perfectly fit the training data generalize terribly to the test dataset, as they overfit to the training set while larger models learn simpler and better-generalizing algorithms. As a result, it seems likely that, if anything, we will bias our training processes towards simplicity and would struggle to make AGI if we implemented a speed bias.
Conclusion
==========
Deceptive alignment seems to be the likely outcome of training a sufficiently intelligent AI via gradient descent in both the high and low path dependence world. Thus, in order to avoid deceptive alignment, we need to modify the training regime in such a way that it somehow actively avoids deceptively aligned models. The difficulty of this task remains uncertain. |
f134351e-6291-44b3-8ba5-4e310c7ca2a6 | trentmkelly/LessWrong-43k | LessWrong | Doing Science! Open Thread Experiment Results
Early in the month I announced that I was doing an experiment: I was going to start two Open Threads in January (one on the 1st, and the other on the 15th) and compare the number of comments on these threads to those of other months. My hypothesis was that having two Open Threads would raise the overall number of comments.
The reason for this experiment was recent discussions regarding how useful threads such as these were quickly buried. Well, the experiment is over now, and here are the results:
I did a search for Open Threads, and entered all the monthly ones I could find into an Excel spreadsheet. I made them into a graph, and I discovered an anomaly. There was an 8-month timespan from February 2010-September 2010, in which the comment counts were extremely high (up to 2112). Many of these threads had 2, 3, or 4 parts, because they were getting filled up.
I wasn't around LW back then, and I don't feel like reading through them all, so I don't know why this time period was so active. My current hypothesis (with P=.75) is that the anomalous time period was before the Discussion section was created. I'm sure I could look it up to see if I'm right, but I bet one of the long-term LWers already knows if this is true or not, so I'll crowd-source the info. (Comment below if you know that I am correct or incorrect in my hypothesis.)
Now for the data:
The January 1-15, 2012 thread had: 122 comments
The January 16-31, 2012 thread had: 236 comments
For a grand total of: 358 comments in Jan 2012
The average Open Thread had: 448.6 comments
The median Open Thread had: 204 comments
The average OT of the past 14 mo's: 126.5 comments
So overall, the January thread had LESS than the average monthly thread, but more than the median.
IF however we look at the past 14 months (which was the end of the anomaly), then the January 2012 Open Thread had almost THREE TIMES the average.
My original hypothesis had probabilities assigned to various increases in comment rate, |
b565b2bb-0466-41a7-97f7-6a9075467b9f | trentmkelly/LessWrong-43k | LessWrong | Orthogonal's Formal-Goal Alignment theory of change
We recently announced [Orthogonal, an agent foundations alignment research organization. In this post, I give a thorough explanation of the formal-goal alignment framework, the motivation behind it, and the theory of change it fits in.
The overall shape of what we're doing is:
* Building a formal goal which would lead to good worlds when pursued — our best candidate for this is QACI
* Designing an AI which takes as input a formal goal, and returns actions which pursue that goal in the distribution of worlds we likely inhabit
Backchaining: aiming at solutions
One core aspect of our theory of change is backchaining: come up with an at least remotely plausible story for how the world is saved from AI doom, and try to think about how to get there. This avoids spending lots of time getting confused about concepts that are confusing because they were the wrong thing to think about all along, such as "what is the shape of human values?" or "what does GPT4 want?" — our intent is to study things that fit together to form a full plan for saving the world.
Alignment engineering and agent foundations
Alignment is not just not the default, it's a very narrow target. As a result, there are many bits of non-obvious work which need to be done. Alignment isn't just finding the right weight to sign-flip to get the AI to switch from evil to good; it is the hard work of putting together something which coherently and robustly points in a direction we like.
as yudkowsky puts it:
> The idea with agent foundations, which I guess hasn't successfully been communicated to this day, was finding a coherent target to try to get into the system by any means (potentially including DL ones).
Agent foundations/formal-goal alignment is not fundamentally about doing math or being theoretical or thinking abstractly or proving things. Agent foundations/formal-goal alignment is about building a coherent target which is fully made of math — not of human words with unspecified meaning — and fig |
f2ce5ad4-22dc-4297-a3e1-27868320bb3e | trentmkelly/LessWrong-43k | LessWrong | Is AlphaZero any good without the tree search?
One component of AlphaZero is a neural net which takes a board position as input, and outputs a guess about how good the position is and what a good next move would be. It combines this neural net with Monte Carlo Tree Search (MCTS) that plays out different ways the game could go, before choosing the move. The MCTS is used both during self-play to train the neural net, and during competitive test-time. I'm mainly curious about whether the latter is necessary.
So my question is: Once you have the fully-trained AlphaZero system, if you then turn off the MCTS and just choose moves directly with the neural net policy head, is it any good? Is it professional-level, amateur-level, child-level?
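(For concreteness, "choose moves directly with the neural net policy head" means something like the sketch below; `net` returning `(policy_logits, value)` and `legal_moves` are assumed interfaces rather than AlphaZero's actual API.)

```python
# Hypothetical ablation: greedy move selection straight from the policy head,
# with MCTS turned off. `net` and `legal_moves` are stand-in interfaces.
def policy_head_move(net, position, legal_moves):
    policy_logits, _value = net(position)        # one forward pass, no search
    return max(legal_moves(position), key=lambda m: policy_logits[m])
```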
(I think this would be a fun little data-point related to discussions of how powerful an AI can be with and without mesa-optimization / search-processes using a generative environmental model.) |
c55534e6-e082-48cc-b362-ab3628e5bc88 | trentmkelly/LessWrong-43k | LessWrong | Searching for Bayes-Structure
> "Gnomish helms should not function. Their very construction seems to defy the nature of thaumaturgical law. In fact, they are impossible. Like most products of gnomish minds, they include a large number of bells and whistles, and very little substance. Those that work usually have a minor helm contained within, always hidden away, disguised to appear innocuous and inessential."
> -- Spelljammer campaign set
We have seen that knowledge implies mutual information between a mind and its environment, and we have seen that this mutual information is negentropy in a very physical sense: If you know where molecules are and how fast they're moving, you can turn heat into work via a Maxwell's Demon / Szilard engine.
We have seen that forming true beliefs without evidence is the same sort of improbability as a hot glass of water spontaneously reorganizing into ice cubes and electricity. Rationality takes "work" in a thermodynamic sense, not just the sense of mental effort; minds have to radiate heat if they are not perfectly efficient. This cognitive work is governed by probability theory, of which thermodynamics is a special case. (Statistical mechanics is a special case of statistics.)
If you saw a machine continually spinning a wheel, apparently without being plugged into a wall outlet or any other source of power, then you would look for a hidden battery, or a nearby broadcast power source - something to explain the work being done, without violating the laws of physics.
So if a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.
In the beginning, at time T=0, a mind has no mutual information with a subsystem S in its environment. At time T=1,the mind has 10 bits of mutual information with S. Somewhere in between, the mind must h |
37e5706e-1efe-4478-b0b4-0938b97e4998 | trentmkelly/LessWrong-43k | LessWrong | Meetup : New Meetup: Boulder CO
Discussion article for the meetup : New Meetup: Boulder CO
WHEN: 03 September 2013 07:03:00PM (-0600)
WHERE: Munzinger building of CU Boulder
Alright, so I've heard that the way these things get started is that someone wants to make a meetup and says I'm going to be sitting at this cafe or whatever at so and so time and anyone who wants to talk rationality can come there.
So, I'll be waiting on the patio inside the Munzinger Building next Tuesday from 7:00 to 8:00 PM and if anyone is interested we can talk about rationality.
Discussion article for the meetup : New Meetup: Boulder CO |
d41fdd0d-805f-48a1-a9aa-fb6beb58b888 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Finite Factored Sets: Orthogonality and Time
The main way we'll be using [factored sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr) is as a foundation for talking about concepts like orthogonality and time. Finite factored sets will play a role that's analogous to that of directed acyclic graphs in Pearlian causal inference.
To utilize factored sets in this way, we will first want to introduce the concept of generating a partition with factors.
3.1. Generating a Partition with Factors
----------------------------------------
**Definition 16** (generating a partition). *Given a finite factored set $F=(S,B)$, a partition $X\in\mathrm{Part}(S)$, and a $C\subseteq B$, we say $C$ generates $X$ (in $F$), written $C\vdash_F X$, if $\chi^F_C(x,S)=x$ for all $x\in X$.*
The following proposition gives many equivalent definitions of $\vdash_F$.
**Proposition 10**. *Let $F=(S,B)$ be a finite factored set, let $X\in\mathrm{Part}(S)$ be a partition of $S$, and let $C$ be a subset of $B$. The following are equivalent:*
1. $C\vdash_F X$.
2. $\chi^F_C(x,S)=x$ for all $x\in X$.
3. $\chi^F_C(x,S)\subseteq x$ for all $x\in X$.
4. $\chi^F_C(x,y)\subseteq x$ for all $x,y\in X$.
5. $\chi^F_C(s,t)\in [s]_X$ for all $s,t\in S$.
6. $\chi^F_C(s,t)\sim_X s$ for all $s,t\in S$.
7. $X\leq_S \bigvee_S(C)$.
*Proof.* The equivalence of conditions 1 and 2 is by definition.
The equivalence of conditions 2 and 3 follows directly from the fact that $\chi^F_C(s,s)=s$ for all $s\in x$, so $\chi^F_C(x,S)\supseteq\chi^F_C(x,x)\supseteq x$.
To see that conditions 3 and 4 are equivalent, observe that since $S=\bigcup_{y\in X}y$, we have $\chi^F_C(x,S)=\bigcup_{y\in X}\chi^F_C(x,y)$. Thus, if $\chi^F_C(x,S)\subseteq x$, then $\chi^F_C(x,y)\subseteq x$ for all $y\in X$, and conversely if $\chi^F_C(x,y)\subseteq x$ for all $y\in X$, then $\chi^F_C(x,S)\subseteq x$.
To see that condition 3 is equivalent to condition 5, observe that if condition 5 holds, then for all $x\in X$, we have $\chi^F_C(s,t)\in[s]_X=x$ for all $s\in x$ and $t\in S$. Thus $\chi^F_C(x,S)\subseteq x$. Conversely, if condition 3 holds, $\chi^F_C(s,t)\in\chi^F_C([s]_X,S)\subseteq[s]_X$ for all $s,t\in S$.
Condition 6 is clearly a trivial restatement of condition 5.
To see that conditions 6 and 7 are equivalent, observe that if condition 6 holds, and $s,t\in S$ satisfy $s\sim_{\bigvee_S(C)}t$, then $\chi^F_C(s,t)=t$, so $t=\chi^F_C(s,t)\sim_X s$. Thus $X\leq_S\bigvee_S(C)$. Conversely, if condition 7 holds, then since $\chi^F_C(s,t)\sim_{\bigvee_S(C)}s$ for all $s,t\in S$, we have $\chi^F_C(s,t)\sim_X s$. □
Here are some basic properties of $\vdash_F$.
**Proposition 11**. *Let $F=(S,B)$ be a finite factored set, let $C$ and $D$ be subsets of $B$, and let $X,Y\in\mathrm{Part}(S)$ be partitions of $S$.*
1. *If $X\leq_S Y$ and $C\vdash_F Y$, then $C\vdash_F X$.*
2. *If $C\vdash_F X$ and $C\vdash_F Y$, then $C\vdash_F X\vee_S Y$.*
3. $B\vdash_F X$.
4. $\{\}\vdash_F X$ *if and only if* $X=\mathrm{Ind}_S$.
5. *If $C\subseteq D$ and $C\vdash_F X$, then $D\vdash_F X$.*
6. *If $C\vdash_F X$ and $D\vdash_F X$, then $C\cap D\vdash_F X$.*
*Proof.* For the first 5 parts, we will use the equivalent definition from Proposition 10 that $C\vdash_F X$ if and only if $X\leq_S\bigvee_S(C)$.
Then 1 follows directly from the transitivity of $\leq_S$.
2 follows directly from the fact that any partition $Z$ satisfies $X\vee_S Y\leq Z$ if and only if $X\leq Z$ and $Y\leq Z$.
3 follows directly from the fact that $\bigvee_S(B)=\mathrm{Dis}_S$ by [Proposition 3](https://www.lesswrong.com/posts/sZa5LQg6rrWgMR4Jx/finite-factored-sets-introduction-and-factorizations#2_2__Factorizations).
4 follows directly from the fact that $\bigvee_S(\{\})=\mathrm{Ind}_S$, together with the fact that $X\leq_S\mathrm{Ind}_S$ if and only if $X=\mathrm{Ind}_S$.
5 follows directly from the fact that if $C\subseteq D$, then $\bigvee_S(C)\leq\bigvee_S(D)$.
Finally, we need to prove part 6. For this, we will use the equivalent definition from Proposition 10 that $C\vdash_F X$ if and only if $\chi^F_C(s,t)\sim_X s$ for all $s,t\in S$. Assume that for all $s,t\in S$, $\chi^F_C(s,t)\sim_X s$ and $\chi^F_D(s,t)\sim_X s$. Thus, for all $s,t\in S$, $\chi^F_{C\cap D}(s,t)=\chi^F_C(\chi^F_D(s,t),t)\sim_X \chi^F_D(s,t)\sim_X s$. Thus $C\cap D\vdash_F X$. □
Our main use of $\vdash_F$ will be in the definition of the history of a partition.
3.2. History
------------
**Definition 17** (history of a partition). *Given a finite factored set $F=(S,B)$ and a partition $X\in\mathrm{Part}(S)$, let $h_F(X)$ denote the smallest (according to the subset ordering) subset of $B$ such that $h_F(X)\vdash_F X$.*
The history of $X$, then, is the smallest set of factors $C\subseteq B$ such that if you're trying to figure out which part in $X$ any given $s\in S$ is in, it suffices to know what part $s$ is in within each of the factors in $C$. We can informally think of $h_F(X)$ as the smallest amount of information needed to compute $X$.
**Proposition 12**. *Given a finite factored set $F=(S,B)$, and a partition $X\in\mathrm{Part}(S)$, $h_F(X)$ is well-defined.*
*Proof.* Fix a finite factored set $F=(S,B)$ and a partition $X\in\mathrm{Part}(S)$, and let $h_F(X)$ be the intersection of all $C\subseteq B$ such that $C\vdash_F X$. It suffices to show that $h_F(X)\vdash_F X$; then $h_F(X)$ will clearly be the unique smallest (according to the subset ordering) subset of $B$ such that $h_F(X)\vdash_F X$.
Note that $h_F(X)$ is a finite intersection, since there are only finitely many subsets of $B$, and that $h_F(X)$ is an intersection of a nonempty collection of sets, since $B\vdash_F X$. Thus, we can express $h_F(X)$ as a composition of finitely many binary intersections. By part 6 of Proposition 11, the intersection of two subsets that generate $X$ also generates $X$. Thus $h_F(X)\vdash_F X$. □
Here are some basic properties of history.
**Proposition 13**. *Let $F=(S,B)$ be a finite factored set, and let $X,Y\in\mathrm{Part}(S)$ be partitions of $S$.*
1. *If $X\leq_S Y$, then $h_F(X)\subseteq h_F(Y)$.*
2. $h_F(X\vee_S Y)=h_F(X)\cup h_F(Y)$.
3. $h_F(X)=\{\}$ *if and only if* $X=\mathrm{Ind}_S$.
4. *If $S$ is nonempty, then $h_F(b)=\{b\}$ for all $b\in B$.*
*Proof.* The first 3 parts are trivial consequences of history's definition and Proposition 11.
For the fourth part, observe that $\{b\}\vdash_F b$ by condition 7 of Proposition 10. $b$ is nontrivial, and since $S$ is nonempty, $b$ is nonempty. So we have $\neg(\{\}\vdash_F b)$ by part 4 of Proposition 11. Thus $\{b\}$ is the smallest subset of $B$ that generates $b$. □
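To make all of this concrete, here is a minimal Python sketch (my own illustration, not from the original post; all names are hypothetical) of a two-factor example. It represents $S$ as the set of coordinate tuples of the factors and a partition as a labelling function on $S$. The `generates` check is condition 7 of Proposition 10 read pointwise (agreement on every factor in $C$ forces agreement on $X$), and the `history` test uses the fact that, by Proposition 12 together with part 5 of Proposition 11, a factor $b$ lies in $h_F(X)$ exactly when $B\setminus\{b\}$ fails to generate $X$.

```python
from itertools import product

# A toy finite factored set F = (S, B) with two binary factors, so S is
# the set of coordinate tuples {(0,0), (0,1), (1,0), (1,1)}. Factor b is
# "the partition of S by coordinate b"; an arbitrary partition X of S is
# a labelling function on S (same label = same part).
FACTORS = [0, 1]
S = list(product([0, 1], repeat=len(FACTORS)))

def generates(C, X):
    """C |- X: whenever s and t agree on every factor in C, they lie in
    the same part of X (condition 7 of Proposition 10, pointwise)."""
    return all(X(s) == X(t)
               for s in S for t in S
               if all(s[b] == t[b] for b in C))

def history(X):
    """h_F(X): b is in the history iff B \\ {b} fails to generate X,
    since h_F(X) is the unique minimal generating set (Proposition 12)
    and supersets of generating sets generate (Proposition 11, part 5)."""
    return {b for b in FACTORS
            if not generates([c for c in FACTORS if c != b], X)}

first = lambda s: s[0]                  # the partition induced by factor 0
assert history(first) == {0}            # Proposition 13, part 4
assert history(lambda s: 0) == set()    # indiscrete partition (part 3)
assert history(lambda s: s) == {0, 1}   # discrete partition needs all factors
```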
3.3. Orthogonality
------------------
We are now ready to define the notion of orthogonality between two partitions of $S$.
**Definition 18** (orthogonality). *Given a finite factored set $F=(S,B)$ and partitions $X,Y\in\mathrm{Part}(S)$, we say $X$ is orthogonal to $Y$ (in $F$), written $X\perp_F Y$, if $h_F(X)\cap h_F(Y)=\{\}$.*
*If $\neg(X\perp_F Y)$, we say $X$ is entangled with $Y$ (in $F$).*
We could also unpack this definition to not mention history or chimera functions.
**Proposition 14**. *Given a finite factored set $F=(S,B)$, and partitions $X,Y\in\mathrm{Part}(S)$, $X\perp_F Y$ if and only if there exists a $C\subseteq B$ such that $X\leq_S\bigvee_S(C)$ and $Y\leq_S\bigvee_S(B\setminus C)$.*
*Proof.* If there exists a $C\subseteq B$ such that $X\leq_S\bigvee_S(C)$ and $Y\leq_S\bigvee_S(B\setminus C)$, then $C\vdash_F X$ and $B\setminus C\vdash_F Y$. Thus, $h_F(X)\subseteq C$ and $h_F(Y)\subseteq B\setminus C$, so $h_F(X)\cap h_F(Y)=\{\}$.
Conversely, if $h_F(X)\cap h_F(Y)=\{\}$, let $C=h_F(X)$. Then $C\vdash_F X$, so $X\leq_S\bigvee_S(C)$, and $B\setminus C\supseteq h_F(Y)$, so $B\setminus C\vdash_F Y$, so $Y\leq_S\bigvee_S(B\setminus C)$. □
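Continuing the hypothetical sketch above, orthogonality is then a one-line disjointness test, and the familiar two-coin example behaves as expected: the two coordinate partitions are orthogonal, while either coordinate is entangled with their XOR.

```python
def orthogonal(X, Y):
    """X is orthogonal to Y iff h_F(X) and h_F(Y) are disjoint
    (Definition 18)."""
    return history(X).isdisjoint(history(Y))

second = lambda s: s[1]       # the partition induced by factor 1
xor = lambda s: s[0] ^ s[1]   # the XOR of the two coordinates

assert orthogonal(first, second)    # distinct factors are orthogonal
assert not orthogonal(first, xor)   # h_F(xor) = {0,1} meets h_F(first) = {0}
```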
Here are some basic properties of orthogonality.
**Proposition 15**. *Let $F=(S,B)$ be a finite factored set, and let $X,Y,Z\in\mathrm{Part}(S)$ be partitions of $S$.*
1. *If $X\perp_F Y$, then $Y\perp_F X$.*
2. *If $X\perp_F Z$ and $Y\leq_S X$, then $Y\perp_F Z$.*
3. *If $X\perp_F Z$ and $Y\perp_F Z$, then $(X\vee_S Y)\perp_F Z$.*
4. $X\perp_F X$ *if and only if* $X=\mathrm{Ind}_S$.
*Proof.* Part 1 is trivial from the symmetry in the definition.
Parts 2, 3, and 4 follow directly from Proposition 13. □
3.4. Time
---------
Finally, we can define our notion of time in a factored set.
**Definition 19** ((strictly) before). *Given a finite factored set $F=(S,B)$, and partitions $X,Y\in\mathrm{Part}(S)$, we say $X$ is before $Y$ (in $F$), written $X\leq_F Y$, if $h_F(X)\subseteq h_F(Y)$.*
*We say $X$ is strictly before $Y$ (in $F$), written $X<_F Y$, if $h_F(X)\subset h_F(Y)$.*
Again, we could also unpack this definition to not mention history or chimera functions.
**Proposition 16**. *Given a finite factored set $F=(S,B)$, and partitions $X,Y\in\mathrm{Part}(S)$, $X\leq_F Y$ if and only if every $C\subseteq B$ satisfying $Y\leq_S\bigvee_S(C)$ also satisfies $X\leq_S\bigvee_S(C)$.*
*Proof.* Note that by part 7 of Proposition 10, part 5 of Proposition 11, and the definition of history, $C$ satisfies $Y\leq_S\bigvee_S(C)$ if and only if $C\supseteq h_F(Y)$, and similarly for $X$.
Clearly, if $h_F(Y)\supseteq h_F(X)$, every $C\supseteq h_F(Y)$ satisfies $C\supseteq h_F(X)$. Conversely, if $h_F(X)$ is not a subset of $h_F(Y)$, then we can take $C=h_F(Y)$, and observe that $C\supseteq h_F(Y)$ but not $C\supseteq h_F(X)$. □
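In the same hypothetical sketch, "before" is just history inclusion, and the XOR example now behaves like time: each coordinate is strictly before the XOR of the two coordinates.

```python
def before(X, Y):
    """X is (weakly) before Y iff h_F(X) is a subset of h_F(Y)
    (Definition 19)."""
    return history(X) <= history(Y)

def strictly_before(X, Y):
    """X is strictly before Y iff h_F(X) is a proper subset of h_F(Y)."""
    return history(X) < history(Y)

assert strictly_before(first, xor)    # {0} is a proper subset of {0, 1}
assert strictly_before(second, xor)   # {1} is a proper subset of {0, 1}
assert not before(xor, first)
```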
Interestingly, we can also define time entirely as a closure property of orthogonality. We hold that the philosophical interpretation of time as a closure property on orthogonality is natural and transcends the ontology set up in this sequence.
**Proposition 17**. *Given a finite factored set $F=(S,B)$, and partitions $X,Y\in\mathrm{Part}(S)$, $X\leq_F Y$ if and only if every $Z\in\mathrm{Part}(S)$ satisfying $Y\perp_F Z$ also satisfies $X\perp_F Z$.*
*Proof.* Clearly if $h_F(X)\subseteq h_F(Y)$, then every $Z$ satisfying $h_F(Y)\cap h_F(Z)=\{\}$ also satisfies $h_F(X)\cap h_F(Z)=\{\}$.
Conversely, if $h_F(X)$ is not a subset of $h_F(Y)$, let $b\in B$ be an element of $h_F(X)$ that is not in $h_F(Y)$. Assuming $S$ is nonempty, $b$ is nonempty, so we have $h_F(b)=\{b\}$, so $Y\perp_F b$, but not $X\perp_F b$. On the other hand, if $S$ is empty, then $X=Y=\{\}$, so clearly $X\leq_F Y$. □
Here are some basic properties of time.
**Proposition 18**. *Let $F=(S,B)$ be a finite factored set, and let $X,Y,Z\in\mathrm{Part}(S)$ be partitions of $S$.*
1. $X\leq_F X$.
2. *If $X\leq_F Y$ and $Y\leq_F Z$, then $X\leq_F Z$.*
3. *If $X\leq_S Y$, then $X\leq_F Y$.*
4. *If $X\leq_F Z$ and $Y\leq_F Z$, then $(X\vee_S Y)\leq_F Z$.*
*Proof.* Part 1 is trivial from the definition.
Part 2 is trivial by transitivity of the subset relation.
Part 3 follows directly from part 1 of Proposition 13.
Part 4 follows directly from part 2 of Proposition 13. □
Finally, note that we can (circularly) redefine history in terms of time, thus partially justifying the names.
**Proposition 19**. *Given a nonempty finite factored set $F=(S,B)$ and a partition $X\in\mathrm{Part}(S)$, $h_F(X)=\{b\in B\mid b\leq_F X\}$.*
*Proof.* Since $S$ is nonempty, part 4 of Proposition 13 says that $h_F(b)=\{b\}$ for all $b\in B$. Thus $\{b\in B\mid b\leq_F X\}=\{b\in B\mid\{b\}\subseteq h_F(X)\}=\{b\in B\mid b\in h_F(X)\}=h_F(X)$. □
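As a sanity check, the hypothetical sketch above also confirms Proposition 19 on the toy example: the history of each partition coincides with the set of factors whose induced partitions are before it.

```python
coord = lambda b: (lambda s: s[b])    # the partition induced by factor b

# Proposition 19: h_F(X) = {b in B | b is before X}, checked by brute force.
for X in (first, second, xor):
    assert history(X) == {b for b in FACTORS if before(coord(b), X)}
```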
In the next post, we'll build up to a definition of *conditional orthogonality* by introducing the notion of subpartitions. |
4993fae2-b722-4cdc-8342-0d545aebc1bc | StampyAI/alignment-research-dataset/arbital | Arbital | Ad-hoc hack (alignment theory)
An "ad-hoc hack" is when you modify or [patch](https://arbital.com/p/48) the algorithm of the AI with regards to something that would ordinarily have simple, principled, or nailed-down structure, or where it seems like that part ought to have some simple answer instead. E.g., instead of defining a von Neumann-Morgenstern coherent utility function, you try to solve some problem by introducing something that's *almost* a VNM utility function but has a special case in line 3 which activates only on Tuesday. This seems unusually likely to break other things, e.g. [reflective consistency](https://arbital.com/p/2rb), or anything else that depends on the coherence or simplicity of utility functions. Such hacks should be avoided in [advanced-agent](https://arbital.com/p/2c) designs whenever possible, for analogous reasons to why they would be avoided in [cryptography](https://arbital.com/p/cryptographic_analogy) or [designing a space probe](https://arbital.com/p/probe_analogy). It may be interesting and productive anyway to look for a weird hack that seems to produce the desired behavior, because then you understand at least one system that produces the behavior you want - even if it would be unwise to *actually build an AGI* like that, the weird hack might give us the inspiration to find a simpler or more coherent system later. But then we should also be very suspicious of the hack, and look for ways that it fails or produces weird side effects.
An example of a productive weird hack was [https://arbital.com/p/Benya_Fallenstein](https://arbital.com/p/Benya_Fallenstein)'s Parametric Polymorphism proposal for [tiling agents](https://arbital.com/p/1mq). You wouldn't want to build a real AGI like that, but it was helpful for showing what *could* be done - which properties could definitely be obtained together within a tiling agent, even if by a weird route. This in turn helped suggest relatively less hacky proposals later. |
1526bca2-29a5-4e2e-beb0-86fb04e289ee | trentmkelly/LessWrong-43k | LessWrong | Rationalist horror movies
I like horror movies with smart, agentic protagonists, while staying in the horror genre rather than thriller or action. These are rather thin on the ground. In the spirit of Scott Garrabrant's Puzzle Game post, here is my list of favorites and an invitation to add yours.
My favorites:
* Oculus
* Hush
* Green Room
Honorable mentions. These aren't quite the total package, but are close and extremely good such that they still seemed worth including:
* The Ring's protagonist is not dumb, but it's more "solving a mystery" than "well does this weapon work?"
* Tremors would need to change very little to be a horror movie, but it is in fact an action movie.
* It Follows spends a little too much time denying the problem |
3c48dc56-f7de-4410-b391-a68e2ed5b99a | trentmkelly/LessWrong-43k | LessWrong | Corrigibility thoughts III: manipulating versus deceiving
A putative new idea for AI control; index here.
This is the third of three articles about limitations and challenges in the concept of corrigibility (see articles 1 and 2).
The desiderata for corrigibility are:
1. A corrigible agent tolerates, and preferably assists, its operators in their attempts to alter or shut down the agent.
2. A corrigible agent does not attempt to manipulate or deceive its operators.
3. A corrigible agent has incentives to repair safety measures (such as shutdown buttons, tripwires, or containment tools) if they break, or at least notify its operators in the event of a breakage.
4. A corrigible agent preserves its corrigibility, even as it creates new sub-systems or sub-agents, even if it undergoes significant self-modification.
In this post, I'll be looking more at some aspects of point 2. A summary of the result will be:
* Defining manipulation simply may be possible, but defining deception is a whole other problem.
The warning in this post should always be borne in mind, of course; it's possible that we might find a semi-formal version of deception that does the trick.
Manipulation versus deception
In the previous post, I mentioned that we may need to define clearly what an operator was, rather than relying on the pair: {simple description of a value correction event, physical setup around that event}. Can we define manipulation and deception without defining what an operator is?
For manipulation, it seems we can, because manipulation is all about getting certain preferred outcomes. By specifying that the AI cannot aim to optimise certain outcomes, we can stop at least certain types of manipulation, along with other more direct ways of achieving those outcomes.
For deception, the situation is much more complicated. It seems impossible to define how one agent can communicate to another agent (especially one as biased as a human), and increase the accuracy of the second agent, without defining the second agent properly. Mo |
15664a81-9039-4b98-8c47-f063e1aa1475 | StampyAI/alignment-research-dataset/blogs | Blogs | Three Major Singularity Schools
( [Originally appeared](https://intelligence.org/2007/09/30/three-major-singularity-schools/) on the Machine Intelligence Research Institute blog, September 2007.)
Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.
* **Accelerating Change:**
+ *Core claim:* Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.
+ *Strong claim:* Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.
+ *Advocates:* Ray Kurzweil, Alvin Toffler(?), John Smart
* **Event Horizon:**
+ *Core claim:* For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.
+ *Strong claim:* To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.
+ *Advocates:* Vernor Vinge
* **Intelligence Explosion:**
+ *Core claim:* Intelligence has always been the source of technology. If technology can *significantly* improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.
+ *Strong claim:* This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates *superintelligence* (minds orders of magnitude more powerful than human) before it hits physical limits.
+ *Advocates:* I. J. Good, Eliezer Yudkowsky
The thing about these three *logically distinct* schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.
If you extrapolate our existing version of Moore’s Law past the point of smarter-than-human AI to make predictions about 2099, then you are contradicting both the strong version of the Event Horizon (which says you can’t make predictions because you’re trying to outguess a transhuman mind) and the strong version of the Intelligence Explosion (because progress will run faster once smarter-than-human minds and nanotechnology drop it into the speed phase of transistors).
I find it very annoying, therefore, when these three schools of thought are mashed up into Singularity paste. [Clear thinking requires making distinctions.](http://www.overcomingbias.com/2007/08/the-virtue-of-n.html)
But what is still more annoying is when someone reads a blog post about a newspaper article about the Singularity, comes away with *none* of the three interesting theses, and spontaneously reinvents the dreaded fourth meaning of the Singularity:
* **Apocalyptism:** Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.
I’ve heard (many) other definitions of the Singularity attempted, but I usually find them to lack separate premises and conclusions. For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down. But what makes this an interesting point in history apart from its definition? What are the consequences of this assumption? To qualify as a school of thought or even a thesis, one needs an internal structure of argument, not just a definition.
If you’re wondering which of these is the *original* meaning of the term “Singularity”, it is the Event Horizon thesis of Vernor Vinge, who coined the word.
---
This document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.
Eliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .
If you think the world could use some more rationality, consider blogging this page.
Praise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/schools/](https://eyudkowsky.wpengine.com/singularity/schools/) . |
c86a0c2d-e0e5-4560-8ca8-525d4bd7c702 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Shutting Down the Lightcone Offices
[Lightcone](https://www.lightconeinfrastructure.com/) recently decided to close down a big project we'd been running for the last 1.5 years: an office space in Berkeley for people working on x-risk/EA/rationalist things that we opened in August 2021.
We haven't written much about why, but Ben and I had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.
Background data
---------------
Below is a graph of weekly unique keycard-visitors to the office in 2022.
The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.
*[Figure: Weekly unique visitors with keycards in 2022. There was a lot of seasonality to the office.]*

*[Figure: The distribution of people by how many days they came (in 2022).]*

Members could bring in guests, which happened quite a bit and isn't measured in the keycard data, so I think the total number of people who came by the offices is 30-50% higher.
The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.
The offices cost $70k/month on rent [[1]](#fne5ob1hhsc8n), and around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time which I'd guess at around $75k/year.
Ben's Announcement
------------------
> **Closing the Lightcone Offices** **@channel**
>
> Hello there everyone,
>
> **Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March.** While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.
>
> Below I'll give context on the decision and other details, but **the main practical information is that the office will no longer be open after Friday March 24th.** (There will be a goodbye party on that day.)
>
> I asked Oli to briefly state his reasoning for this decision, here's what he says:
>
>
> > An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.
> >
> > I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost from the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine have greatly declined, and I now stand before what I have helped build with great doubts about whether it all will be or has been good for the world.
> >
> > I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow and accelerate the things around us.
> >
> >
>
> (To Oli's points I'll add to this that it's also an ongoing cost in terms of time, effort, stress, and in terms of a lack of organizational focus on the other ideas and projects we'd like to pursue.)
>
> **Oli, myself, and the rest of the Lightcone team will be available to discuss more about this in the channel** [**#closing-office-reasoning**](https://lightconeoffices.slack.com/archives/C04LQ1B9Z6H), where I invite any and all of you who wish to discuss this with me, the rest of the Lightcone team, and each other.
>
> In the last few weeks I sat down and interviewed people leading the 3 orgs whose primary office is here (FAR, AI Impacts, and Encultured) and 13 other individual contributors. I asked about how this would affect them, how we could ease the change, and generally get their feelings about how the ecosystem is working out.
>
> These conversations lasted on average 45 mins each, and it was very interesting to hear people's thoughts about this, and also their suggestions about other things Lightcone could work on. These conversations also left me feeling more hopeful about building related community-infrastructure in the future, as I learned of a number of positive effects that I wasn't aware of. These conversations all felt pretty real, I respect all the people involved more, and I hope to talk to many more of you at length before we close.
>
> From the check-ins I've done with people, this seems to me to be enough time to not disrupt any SERI MATS mentorships, and to give the orgs here a comfortable enough amount of time to make new plans, but **if this does put you in a tight spot, please talk to us and we'll see how we can help.**
>
> **The campus team (me, Oli, Jacob, Rafe) will be in the office for lunch tomorrow (Friday at 1pm) to discuss any and all of this with you.** We'd like to know how this is affecting you, and I'd really like to know about costs this has for you that I'm not aware of. Please feel free (and encouraged) to just chat with us in your lightcone channels (or in any of the public office channels too).
>
> Otherwise, a few notes:
>
> * The Lighthouse system is going away when the leases end. Lighthouse 1 has closed, and Lighthouse 2 will continue to be open for a few more months.
> * If you would like to start renting your room yourself from WeWork, I can introduce you to our point of contact, who I think would be glad to continue to rent the offices. Offices cost between $1k and $6k a month depending on how many desks are in them.
> * Here's a form to give the Lightcone team anonymous feedback about this decision (or anything). [Link removed from LW post.]
> * To talk with people about future plans starting now and after the offices close, whether to propose plans or just to let others know what you'll be doing, I've made the [#future-plans](https://lightconeoffices.slack.com/archives/C04LQ1A46DT) channel and added you all to it.
>
> It's been a thrilling experience to work alongside and get to know so many people dedicated to preventing an existential catastrophe, and I've made many new friends working here, thank you, but I think me and the Lightcone Team need space to reflect and to build something better if Earth is going to have a shot at aligning the AGIs we build.
>
>
Oliver's 1st message in #Closing-Office-Reasoning
-------------------------------------------------
*(In response to a question on the Slack saying "I was hoping you could elaborate more on the idea that building the space may be net harmful.")*
> I think FTX is the obvious way in which current community-building can be bad, though in my model of the world FTX, while somewhat of an outlier in scope, doesn't feel like a particularly huge outlier in terms of the underlying generators. Indeed it feels not that far from par for the course of the broader ecosystem's relationship to honesty, aggressively pursuing plans justified by naive consequentialism, and more broadly having a somewhat deceptive relationship to the world.
>
> Though again, I really don't feel confident about the details here and am doing a bunch of broad orienting.
>
> I've also written some EA Forum and LessWrong comments that point to more specific things that I am worried will have or have had a negative effect on the world:
>
> My guess is RLHF research has been pushing on a commercialization bottleneck and had a pretty large counterfactual effect on AI investment, causing a huge uptick in investment into AI and potentially an arms race between Microsoft and Google towards AGI: <https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research?commentId=HHBFYow2gCB3qjk2i>
>
> Thoughts on how responsible EA was for the FTX fraud: <https://forum.effectivealtruism.org/posts/Koe2HwCQtq9ZBPwAS/quadratic-reciprocity-s-shortform?commentId=9c3srk6vkQuLHRkc6>
>
> Tendencies towards pretty mindkilly PR-stuff in the EA community: <https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=vYbburTEchHZv7mn4>
>
> I feel quite worried that the alignment plan of Anthropic currently basically boils down to "we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/more-careful/more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly". That plan isn't inherently doomed, but man does it rely on trusting Anthropic's leadership, and I genuinely only have marginally better ability to distinguish the moral character of Anthropic's leadership from the moral character of FTX's leadership, and in the absence of that trust the only thing we are doing with Anthropic is adding another player to an AI arms race.
>
> More broadly, I think AI Alignment ideas/the EA community/the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.
>
> Before the whole FTX collapse, I also wrote this long list of reasons for why I feel quite doomy about stuff (posted in replies, to not spam everything).
>
>
Oliver's 2nd message
--------------------
> (Originally written October 2022) I've recently been feeling a bunch of doom around a bunch of different things, and an associated lack of direction for both myself and Lightcone.
>
> Here is a list of things that I currently believe that try to somehow elicit my current feelings about the world and the AI Alignment community.
>
> 1. In most worlds RLHF, especially if widely distributed and used, seems to make the world a bunch worse from a safety perspective (by making unaligned systems appear aligned at lower capabilities levels, meaning people are less likely to take alignment problems seriously, and by leading to new products that will cause lots of money to go into AI research, as well as giving a strong incentive towards deception at higher capability levels)
> 2. It's a bad idea to train models directly on the internet, since the internet as an environment makes supervision much harder, strongly encourages agency, has strong convergent goals around deception, and also gives rise to a bunch of economic applications that will cause more money to go into AI
> 3. The EA and AI Alignment community should probably try to delay AI development somehow, and this will likely include getting into conflict with a bunch of AI capabilities organizations, but it's worth the cost
> 4. I don't currently see a way to make AIs very useful for doing additional AI Alignment research, and don't expect any of the current approaches for that to work (like ELK, or trying to imitate humans by doing more predictive modeling of human behavior and then hoping they turn out to be useful), but it sure would be great if we found a way to do this (but like, I don't think we currently know how to do this)
> 5. I am quite worried that it's going to be very easy to fool large groups of humans, and that AI is quite close to seeming very aligned and sympathetic to executives at AI companies, as well as many AI alignment researchers (and definitely large parts of the public). I don't think this will be the result of human modeling, but just the result of pushing the AI into patterns of speech/behavior that we associate with being less threatening and being more trustworthy. In some sense this isn't a catastrophic risk because this kind of deception doesn't cause the AI to disempower the humans, but I do expect it to make actually getting the research to stop or to spend lots of resources on alignment a lot harder later on.
> 6. I do sure feel like a lot of AI alignment research is very suspiciously indistinguishable from capabilities research, and I think this is probably for the obvious bad reasons instead of this being an inherent property of these domains (the obvious bad reason being that it's politically advantageous to brand your research as AI Alignment research and capabilities research simultaneously, since that gives you more social credibility, especially from the EA crowd which has a surprisingly strong talent pool and is also just socially close to a lot of top AI capabilities people)
> 7. I think a really substantial fraction of people who are doing "AI Alignment research" are instead acting with the primary aim of "make AI Alignment seem legit". These are not the same goal, a lot of good people can tell and this makes them feel kind of deceived, and also this creates very messy dynamics within the field where people have strong opinions about what the secondary effects of research are, because that's the primary thing they are interested in, instead of asking whether the research points towards useful true things for actually aligning the AI.
> 8. More broadly, I think one of the primary effects of talking about AI Alignment has been to make more people get really hyped about AGI, and be interested in racing towards AGI. Generally knowing about AGI-Risk does not seem to have made people more hesitant towards racing and slow down, but instead caused them to accelerate progress towards AGI, which seems bad on the margin since I think humanity's chances of survival do go up a good amount with more time.
> 9. It also appears that people who are concerned about AGI risk have been responsible for a very substantial fraction of progress towards AGI, suggesting that there is a substantial counterfactual impact here, and that people who think about AGI all day are substantially better at making progress towards AGI than the average AI researcher (though this could also be explained by other attributes like general intelligence or openness to weird ideas that EA and AI Alignment selects for, though I think that's somewhat less likely)
> 10. A lot of people in AI Alignment I've talked to have found it pretty hard to have clear thoughts in the current social environment, and many of them have reported that getting out of Berkeley, or getting social distance from the core of the community has made them produce better thoughts. I don't really know whether the increased productivity here is born out by evidence, but really a lot of people that I considered promising contributors a few years ago are now experiencing a pretty active urge to stay away from the current social milieu.
> 11. I think all of these considerations in-aggregate make me worried that a lot of current work in AI Alignment field-building and EA-community building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world (most prominently transforming LessWrong into something that looks a lot more respectable in a way that I am worried might have shrunk the overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics).
> 12. Exercising some genre-savviness, I also think a bunch of this is driven by just a more generic "I feel alienated by my social environment changing and becoming more professionalized and this is robbing it of a lot of the things I liked about it". I feel like when people feel this feeling they often are holding on to some antiquated way of being that really isn't well-adapted to their current environment, and they often come up with fancy rationalizations for why they like the way things used to be.
> 13. I also feel confused about how to relate to the stronger equivocation of ML-skills with AI Alignment skills. I don't personally have much of a problem with learning a bunch of ML, and generally engage a good amount with the ML literature (not enough to be an active ML researcher, but enough to follow along almost any conversation between researchers), but I do also feel a bit of a sense of being personally threatened, and other people I like and respect being threatened, in this shift towards requiring advanced cutting-edge ML knowledge in order to feel like you are allowed to contribute to the field. I do feel a bit like my social environment is being subsumed by and is adopting the status hierarchy of the ML community in a way that does not make me trust what is going on (I don't particularly like the status hierarchy and incentive landscape of the ML community, which seems quite well-optimized to cause human extinction)
> 14. I also feel like the EA community is being very aggressive about recruitment in a way that locally in the Bay Area has displaced a lot of the rationality community, and I think this is broadly bad, both for me personally and also because I just think the rationality community had more of the right components to think sanely about AI Alignment, many of which I feel like are getting lost
> 15. I also feel like with Lightcone and Constellation coming into existence, and there being a lot more money and status around, the inner circle dynamics around EA and longtermism and the Bay Area community have gotten a lot worse, and despite being a person who I think generally is pretty in the loop with stuff, have found myself being worried and stressed about being excluded from some important community function, or some important inner circle. I am quite worried that me founding the Lightcone Offices was quite bad in this respect, by overall enshrining some kind of social hierarchy that wasn't very grounded in things I actually care about (I also personally felt a very strong social pressure to exclude interesting but socially slightly awkward people from being in Lightcone that I ended up giving into, and I think this was probably a terrible mistake and really exacerbated the dynamics here)
> 16. I think some of the best shots we have for actually making humanity not go extinct (slowing down AI progress, pivotal acts, intelligence enhancement, etc.) feel like they have a really hard time being considered in the current overton window of the EA and AI Alignment community, and I feel like people being unable to consider plans in these spaces both makes them broadly less sane, but also just like prevents work from happening in these areas.
> 17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I've had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn't seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn't have been so much of a chilling effect so that nobody brings up these topics).
>
Ben's 1st message in #Closing-Office-Reasoning
----------------------------------------------
*Note from Ben: I have lightly edited this because I wrote it very quickly at the time*
> (I drafted this earlier today and didn't give it much of a second pass, forgive me if it's imprecise or poorly written.)
>
> Here are some of the reasons I'd like to move away from providing offices as we have done so far.
>
> * **Having two locations comes with a large cost.** To track how a space is functioning, what problems people are running into, how the culture changes, what improvements could be made, I think I need to be there at least 20% of my time each week (and ideally ~50%), and that’s a big travel cost to the focus of the lightcone team.
> * **Offices are a high-commitment abstraction for which it is hard to iterate.** In trying to improve a culture, I might try to help people start more new projects, or gain additional concepts that help them understand the world, or improve the standards arguments are held to, or something else. But there's relatively little space for a lot of experimentation and negotiation in an office space — you’ve mostly made a commitment to offer a basic resource and then to get out of people's way.
> * **The “enculturation to investment” ratio was very lopsided.** For example, with SERI MATS, many people came for 2.5 months, for whom I think a better selection mechanism would have been something shaped like a 4-day AIRCS-style workshop to better get to know them and think with them, and then pick a smaller number of the best people from that to invest further into. If I came up with an idea right now for what abstraction I'd prefer, it'd be something like an ongoing festival with lots of events and workshops and retreats for different audiences and different sorts of goals, with perhaps a small office for independent alignment researchers, rather than an office space that has a medium-size set of people you're committed to supporting long-term.
> * **People did not do much to invest in each other in the office.** I think this was in part because the office did not capture other parts of people's lives (e.g. socializing), but also most people just didn't bring their whole spirit to this in some ways, and I'm not really sure why. I think people did not have great aspirations for themselves or each other. I did not feel that folks had a strong common spirit — a sense that the others could grow to be world-class people who changed the course of history — and they did not invest in each other in that way. (There were some exceptions to note, such as Alex Mennen's *Math Talks*, John Wentworth's *Framing Practica,* and some of the ways that people on the *Shard Theory* teams worked together with the hope of doing something incredible, which felt like people really investing in communal resources and other people.) I think a common way to know whether people are bringing their spirit to something is whether they create art about it — songs, in-jokes, stories, etc. Soon after the start I felt that nobody was going to bring themselves so fully to the space, even though we hoped that people would. I think there were few new projects from collaborations in the space, other than between people who already had a long history.
>
> And regarding the broader ecosystem:
>
> * **Some of the primary projects getting resources from this ecosystem do not seem built using the principles and values (e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill's career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR).** Insofar as these are the primary projects who will reap the benefits of the resources that Lightcone invests into this ecosystem, I would like to change course.
> * **The moral maze nature of the EA/longtermist ecosystem has increased substantially over the last two years, and the simulacra level of its discourse has notably risen too**. There are many more careerist EAs working here and at events, it’s more professionalized and about networking. Many new EAs are here not because they have a deep-seated passion for doing what’s right and using math to get the answers, but because they’re looking for an interesting, well-paying job in a place with nice nerds. Or are just noticing that there’s a lot of resources being handed out in a very high-trust way. One of the people I interviewed at the office said they often could not tell whether a newcomer was expressing genuine interest in some research, or was trying to figure out “how the system of reward” worked so they could play it better, because the types of questions in both cases seemed so similar. [Added to LW post: I also remember someone joining the offices to collaborate on a project, who explained that in their work they were looking for "The next Eliezer Yudkowsky or Paul Christiano". When I asked what aspects of Eliezer they wanted to replicate, they said they didn't really know much about Eliezer but it was something that a colleague of theirs said a lot.] It also seems to me that the simulacra level of writing on the EA Forum is increasing, whereby language is increasingly used primarily to signal affiliation and policy-preferences rather than to explain how reality works. I am here in substantial part because of people (like Eliezer Yudkowsky and Scott Alexander) honestly trying to explain how the world works in their online writing and doing a damn good job of it, and I feel like there is much less of that today in the EA/longtermist ecosystem. This makes the ecosystem much harder to direct, to orient within, and makes it much harder to trust that resources intended for a given purpose will not be redirected by the various internal forces that grow against the intentions of the system.
> * **The alignment field that we're supporting seems to me to have pretty little innovation and pretty bad politics.** I am irritated by the extent to which discussion is commonly framed around a Paul/Eliezer dichotomy, even while the primary person taking orders of magnitude more funding and staff talent (Dario Amodei) has barely explicated *his* views on the topic and appears (from a distance) to have disastrously optimistic views about how easy alignment will be and how important it is to stay competitive with state-of-the-art models. [Added to LW post: I also generally dislike the dynamics of fake expertise and fake knowledge I sometimes see around the EA/x-risk/alignment places.
> + I recall at EAG in Oxford a year or two ago, people were encouraged to "list their areas of expertise" on their profile, and one person who works in this ecosystem listed (amongst many things) "Biorisk" even though I knew the person had only been part of this ecosystem for <1 year and their background was in a different field.
> + It also seems to me like people who show any intelligent thought or get any respect in the alignment field quickly get elevated to "great researchers that new people should learn from" even though I think that there's less than a dozen people who've produced really great work, and mostly people should think pretty independently about this stuff.
> + I similarly feel pretty worried by how (quite earnest) EAs describe people or projects as "high impact" when I'm pretty sure that if they reflected on their beliefs, they honestly wouldn't know the sign of the person or project they were talking about, or estimate it as close-to-zero.]
>
> **How does this relate to the office?**
>
> A lot of the boundary around who is invited to the offices has been determined by:
>
> 1. People whose x-risk reduction work the Lightcone team respects or is actively excited about
> 2. People and organizations in good standing in the EA/longtermist ecosystem (e.g. whose research is widely read, who has major funding from OpenPhil/FTX, who have organizations that have caused a lot to happen, etc) and the people working and affiliated with them
> 3. Excluding people who we think would (sadly) be very repellent for many others to work alongside in the space (e.g. lacking basic social skills, or who many people find scary for some reason), or who we think have violated important norms (e.g. lying, sexual assault, etc.)
>
> The 2nd element has really dominated a lot of my choices here in the last 12 months, and (as I wrote above) this is a boundary that is increasingly filled with people who I don't believe are here because they care about ethics, who I am not aware of having done any great work, and who I am not aware of having strong or reflective epistemologies. Even while massive amounts of resources are being poured into the EA/longtermist ecosystem, I'd like to have a far more discerning boundary around the resources I create.
>
>
1. The office rent cost about 1.5x what it needed to be. We started in a WeWork because we were prototyping whether people even wanted an office, and wanted to get started quickly (the office was up and running in 3 weeks instead of going through the slower process of signing a 12-24 month lease). Then we were in a state for about a year of figuring out where to move to long-term, often wanting to preserve the flexibility of being able to move out within 2 months.
Multiverse-wide Cooperation via Correlated Decision Making
Caspar Oesterheld
Abstract
Some decision theorists argue that when playing a prisoner’s dilemma-type game
against a sufficiently similar opponent, we should cooperate to make it more likely that
our opponent also cooperates. This idea, which Hofstadter calls superrationality, has
strong implications when combined with the insight from modern physics that we
probably live in a large universe or multiverse of some sort. If we care about what
happens in civilizations located elsewhere in the multiverse, we can superrationally
cooperate with some of their inhabitants. That is, if we take their values into account, this
makes it more likely that they do the same for us. In this paper, I attempt to assess the
practical implications of this idea. I argue that to reap the full gains from trade,
everyone should maximize the same impartially weighted sum of the utility functions of
all collaborators. I also argue that we can obtain at least weak evidence about the
content of these utility functions. In practice, the application of superrationality implies
that we should promote causal cooperation, moral pluralism, moral reflection, and
ensure that our descendants, who will be smarter and thus better at finding out how to
benefit other superrationalists in the universe, engage in superrational cooperation.
Contents
1 Introduction – the basic idea
 1.1 An overview of the paper
2 Superrationality
 2.1 Lack of knowledge is evidential power, part I: the other agents
 2.2 A short survey of decision theories and their relation to superrationality
 2.3 CDT would self-modify to behave like a non-causal decision theory in some Newcomb-like problems
 2.4 Lack of knowledge is evidential power, part II: taking a step back
 2.5 Reasons and correlations
  2.5.1 Your back is not mine
  2.5.2 Does accepting superrationality commit us to irrational behavior in medical Newcomb problems?
 2.6 Are the correlations strong enough?
  2.6.1 Correlation only with close copies?
  2.6.2 Negative correlations?
 2.7 The relative importance of superrational cooperation: an example calculation
 2.8 Compromise strategy
  2.8.1 Sharing gains from compromise in the face of asymmetries
  2.8.2 The compromise problem
  2.8.3 Cooperation with and without coordination
  2.8.4 Harsanyi’s aggregation theorem
  2.8.5 How to assign the weights
  2.8.6 Updateless weights
  2.8.7 Limitations
  2.8.8 Heuristics
  2.8.9 Notes on superrational coordination
 2.9 No reciprocity needed: whom to treat beneficially
  2.9.1 Schemes of causal cooperation
  2.9.2 Circular cooperative structures and indirect causal reciprocity
  2.9.3 Hierarchies and acyclic graphs
  2.9.4 Only helping superrational cooperators helps you superrationally
 2.10 Cheating, signaling, and half-heartedness
3 Values
 3.1 Orthogonality of instrumental rationality and values
 3.2 Necessary preconditions
  3.2.1 Consequentialism
  3.2.2 Caring about the multiverse
  3.2.3 Knowable values
  3.2.4 The ability to help others
  3.2.5 Zero-sum and “below-zero-sum” tradeoffs on resources
 3.3 What values?
  3.3.1 Idealization
  3.3.2 Values and distance
  3.3.3 Different kinds of preferences
 3.4 The values of our superrational collaborators in the multiverse
  3.4.1 On the far values of humans and human superrational cooperators
  3.4.2 Biological evolution
  3.4.3 Cultural evolution
  3.4.4 Other considerations
4 Interventions
 4.1 Cooperation in the face of uncertainty about values
  4.1.1 Universalism
 4.2 Moral advocacy
  4.2.1 Universalist values
  4.2.2 Concern for benevolence
  4.2.3 Consequentialism
  4.2.4 Pluralism
  4.2.5 Promoting moral reflection
  4.2.6 Multiverse-wide preference utilitarianism
  4.2.7 No multiverse-wide tug-of-war over values
 4.3 Promoting causal cooperation
 4.4 Increasing capabilities
 4.5 Meta-activities
  4.5.1 Research
  4.5.2 Promoting multiverse-wide superrationality
 4.6 Artificial intelligence
  4.6.1 AI safety not based on superrationality-related considerations
  4.6.2 Multiverse-wide superrationality-inspired value-loading
  4.6.3 Making an AI come up with superrational cooperation on its own
5 Acknowledgements
6 Appendix
 6.1 Related work
  6.1.1 Gary Drescher on superrationality
  6.1.2 Acausal trade
  6.1.3 Various mentions of multiverse-wide superrationality
 6.2 Many agents
 6.3 Testability of superrationality
 6.4 Do people reason superrationally?
 6.5 The evolution of superrationality
 6.6 Superrational cooperation on Earth
  6.6.1 Fewer agents
  6.6.2 Argument from evolution: Superrationality did not evolve (strongly)
  6.6.3 Causal cooperation seems more important
  6.6.4 Hard-wired alternatives
 6.7 Superrationality and morality
  6.7.1 Real altruism
  6.7.2 How much altruistic behavior does superrationality entail?
 6.8 Multiverse-wide superrationality for causal decision theorists
 6.9 Simulations
  6.9.1 If being in a simulation is bad, avoid creating one
  6.9.2 Happy simulations
 6.10 Infinite ethics
 6.11 Objection based on uncertainty about the values of superrationalists in the multiverse
1 Introduction – the basic idea
This paper makes an extraordinary claim: that a few interesting but by themselves inconse-
quential ideas from decision theory and physics together give rise to a crucial consideration
with strong implications for how to do the most good. In this first section, I will outline the
main idea and forward-reference sections with the full arguments and detailed elaborations.
Afterward, I give an overview of the entire paper, section by section (section 1.1).
Consider the following thought experiment, adapted from Hofstadter’s (1983) Dilemmas for
Superrational Thinkers, Leading Up to a Luring Lottery :
Donation game with superrationality. Hofstadter sends 20 participants the
same letter, asking them to respond with a single letter ‘C’ (for cooperate) or
‘D’ (for defect) without communicating with the other participants. Hofstadter
explains that by sending in ‘C’, a participant can increase everyone else’s payoff
by $2. By sending in ‘D’, participants can increase their own payoff by $5. The
letter ends by informing the participants that they were chosen for the similarity
and rationality of their decision mechanisms, particularly in weird scenarios like
this one. It should be noted that every participant only cares about the balance
of her own bank account, and not about Hofstadter’s or that of the other 19
participants. Upon receiving the letter, should you cooperate or defect?
Assuming the participants’ thought processes are sufficiently similar to each other, I think
we should cooperate because this makes it more likely that our 19 fellow participants also
cooperate (see chapter 2 and the references given therein). After all, Hofstadter stated fairly
explicitly that the thought processes of the participants are strongly correlated. Thus, if we
cooperate, we should expect significantly more of the other participants to cooperate as well
than if we defect, which means that cooperating has higher expected utility. Alternatively,
we may reason that by our choice we determine what the rational choice is for all participants.
Hofstadter calls this idea of cooperation via correlated decision making superrationality.
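One way to make this expected-utility argument explicit (the notation is mine, not Hofstadter’s): let n denote the number of other participants who cooperate. Since each cooperator adds $2 to every other participant’s payoff and defecting adds $5 to one’s own payoff,

EU(C) = $2 · E[n | I cooperate] and EU(D) = $2 · E[n | I defect] + $5,

so cooperating has higher expected utility exactly when $2 · (E[n | I cooperate] − E[n | I defect]) > $5. With strongly correlated participants, E[n | I cooperate] is close to 19 and E[n | I defect] is close to 0, so the left-hand side approaches $38 and cooperation wins comfortably.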
By itself, superrationality does not seem particularly action-guiding. Usually, we have other
evidence about other agents’ behavior and thought processes such that the evidence we
gain from our own decisions is less important (see section 6.6). To apply superrationality in
practice, we combine it with another intellectually stimulating but by itself inconsequential
hypothesis: we probably live in a vast universe or even multiverse, most of which we cannot
observe or interact with (see appendix section 6.2). In this paper, we will use the term
“multiverse” in a broad sense to refer to any theory postulating multiple universes, including
but not limited to Everett’s many-worlds interpretation of quantum mechanics. In fact,
for brevity’s sake, we will use the term to refer to any theory of physics that implies the
existence of a sufficiently large universe with many agents, including a merely spatially
infinite universe.[1] Some parts of this multiverse are probably inhabited by intelligent beings
like us, some of which surely think about scenarios like this one in the same way as we do.
This is all we need to allow for the application of superrationality.

[1] This is consistent with the terminology by Tegmark (2003) but otherwise uncommon.
The key insight of this paper is that agents in a multiverse are in a situation structurally
similar to the aforementioned donation game if they care about each other’s decisions in far
away parts of the multiverse. Consider the following list of parallels:
•The decisions between some groups of agents are correlated, just like those in the
donation game.
•Some agents have different goals than others – a claim for which we argue in section
3.1 – just like the agents in the donation game maximize the balances of different bank
accounts.
•On occasion, agents can “cooperate” by benefitting the value systems of agents in other
parts of the multiverse at low costs to themselves.
•As in the donation game, our actions cannot causally influence the behavior of other
agents in the multiverse.
As an example, imagine you have some specific value system like the reduction of involuntary
suffering. You come into a situation in which involuntary suffering has already been reduced
to a very low amount. You face a choice between two actions:
•You can continue to reduce suffering and increase your own utility and that of other
suffering reducers by 1.[2]
•You can increase the utility of superrational agents in other parts of the multiverse
who (also) care about things other than suffering reduction by 100, e.g. by generating
a society of agents who live happily, produce interesting art, conduct science, explore
technologies, trade, behave benevolently towards each other, etc.

[2] Quantifying utility in a way that allows for comparison among different agents is difficult. For now, we will assume that it is possible. The question is revisited in section 2.8.
By construction of the thought experiment you care about suffering reduction only, so you
would usually take the first action. But consider that many agents throughout the multiverse
will face very similar decision problems. For example, there might be an agent who primarily
cares about agents experiencing art and the interestingness of things and who is facing
similarly diminishing returns – in her world, most things that could be of interest already
exist. Other value systems, on the other hand, have been ignored in the process of making
her world more interesting. Her world contains many sentient beings with very low levels
of well-being, such as humans experiencing various crises (wars, loneliness, life-threatening
dangers) – a common theme in art –, wild animals, or blood sports. She knows that agents
in other parts of the multiverse dislike this suffering and that she could alleviate them at low
opportunity costs to herself. Her decision problem is thus structurally similar to our own. If
her thought process is similar to our own, superrationality applies. If we are nice and follow
the heuristic “fulfill the goals of other agents in the multiverse whenever the returns are
much higher than the opportunity costs for your own values”, then this makes it more likely
that she will be nice as well, the benefits of which are much greater than those forgone by
our own friendliness.
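To get a feel for the size of these gains from trade, consider a deliberately simple toy model of my own (not from the paper): a single correlated counterpart, symmetric numbers, and conditional probabilities p_C and p_D that she acts nicely given that you do or do not, respectively. Acting nicely forgoes 1 unit of your utility, while her niceness is worth 100 units to you, so

EU(nice) = 100 · p_C and EU(selfish) = 1 + 100 · p_D.

Niceness has higher expected utility whenever 100 · (p_C − p_D) > 1, i.e. whenever your own choice shifts her probability of reciprocating by more than a single percentage point.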
In general, if after thinking about superrationality we are nice to other value systems and
relinquish opportunities to exploit them, this makes it more likely that other superrational
agents with different value systems out there, or at least those who think in ways similar to
our own, do the same. And if everyone is friendly in this way, we can expect to harvest gains
from compromise – everyone will be better off. I will refer to this idea as multiverse-wide
superrationality , or MSR for short.
1.1 An overview of the paper
Having read the above introduction, the reader is familiar with the basic idea of MSR.
However, it opens up many further questions, some of which I attempt to answer in the
present paper. Specifically, the rest of this paper makes the following contributions:
•We investigate the mechanism of superrationality (chapter 2).
–After elaborating on the argument for superrationality, we survey the decision
theory literature pertaining to superrational cooperation (sections 2.1 through
2.5). Among other things, we argue in favor of incorporating “updatelessness”
into one’s decision mechanism.
–Exactly how much should we cooperate? Considering superrationality, how should
we decide between actions in this universe to maximize our multiverse-wide utility?
I will argue that it is best to effectively adopt a new utility function in this universe:
a weighted sum of all superrationalists’ utility functions that, if adopted by all
superrationalists, gives every superrationalist the same gains from compromise.
This function should be the same for all agents with your decision algorithm. (See
sections 2.6 through 2.8; a schematic version of this compromise function is sketched
just after this overview.)
–We show how superrational cooperation fundamentally differs from standard causal
cooperation (sections 2.9 and 2.10). We will see how it requires no reciprocity – we
should benefit superrationalists who cannot benefit us, because we may correlate
with agents who can benefit us but whom we cannot benefit.
•Cooperating superrationally with agents elsewhere in the multiverse means taking their
values into account. Chapter 3 explores what these values might be and which aspects
of these values are relevant for MSR.
–I argue that (with regard to the decision to cooperate or not) we correlate with
agents who hold values that differ from ours (section 3.1). If this were not the case,
cooperating with them would be unnecessary except when it comes to coordination
(see section 2.8.9).
–I provide a comprehensive list of prerequisites that must be fulfilled for MSR to
work (see section 3.2). For example, we cannot benefit agents who do not care
about our part of the multiverse (section 3.2.2).
–Which aspects of other agents’ preferences should be taken into account? E.g.,
should it only be “moral preferences”? To which extent should we idealize their
preferences, e.g. by trying to factor out cognitive biases? We motivate and answer
these questions in section 3.3.
–We review different approaches to hypothesizing about the values of other agents
in the multiverse (section 3.4), the most important ones being evolutionary
psychology and the study of cultural evolution.
•How does multiverse-wide superrational cooperation shift our priorities? What does
it recommend in practice? These questions are discussed in chapter 4. We first show
how to make policy decisions in the absence of reliable knowledge about the values of
agents elsewhere in the multiverse (section 4.1). I then recommend a few interventions,
such as promoting causal cooperation (section 4.3) and, perhaps most importantly,
ensuring that future superintelligent AIs reason correctly about decision theory (section
4.6.3).
•The appendix contains various additional considerations that are either less crucial for
our decisions or otherwise more tangential, yet nonetheless relevant and of interest to
at least some readers. For example, I give an overview of the small amount of work
that is closely related to MSR (section 6.1) and explain why I find it plausible that we
live in a universe or multiverse containing many agents with whom we are correlated
(section 6.2). I also argue that superrationality has few implications for the interactions
between agents on Earth (section 6.6), and hence why this paper specifically concerns
the application of superrationality in a multiverse-wide (as opposed to general) setting.
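To make the compromise utility function mentioned in the overview concrete, here is a schematic version (the notation is mine; Harsanyi’s aggregation theorem and the choice of weights are developed in sections 2.8.4 and 2.8.5). Each superrational collaborator i, instead of maximizing her own utility function u_i, maximizes the common weighted sum

u_compromise = w_1 · u_1 + w_2 · u_2 + ... + w_n · u_n, with all w_i ≥ 0,

where the weights w_i are chosen such that, if everyone adopts u_compromise, every collaborator receives the same gains from compromise.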
Much more research is needed to answer some of the questions I set out to explore. This is
why I focus more on outlining how these questions can be researched in the future, rather
than on trying to ascertain that all my answers are correct with high confidence.
2 Superrationality
Despite what the name might suggest, superrationality does not have anything to do with
extraordinary levels of rationality. “Super” refers to inclusivity, as in superorganism, and
“rationality” specifically denotes instrumental rationality. The term was introduced by
Hofstadter (1983), although the basic argument had been discussed before (Davis, 1977;
Horgan, 1981, section X). In the following we give an abbreviated and simplified account of
the prisoner’s dilemma or public goods game-like experiment Hofstadter ran with some of his
friends and colleagues as participants. It is the same thought experiment we discussed in the
introduction, although we now distinguish two slightly different versions. The argumentation
for superrationality will be relatively brief. For more detailed accounts, see Hofstadter’s
original article or some of the references in section 2.2.
Donation game with common rationality. (This is more similar to the
version Hofstadter uses in his article.) Hofstadter sends 20 participants the same
letter, asking them to respond with a single letter ‘C’ (for cooperate) or ‘D’
(for defect) without communicating with each other. Hofstadter explains that
by sending in ‘C’, a participant can increase everyone else’s payoff by $2. By
sending in ‘D’, participants can increase their own payoff by $5. The letter ends
by informing the participants that they were all chosen for their high levels of
rationality and correct decision making in weird scenarios like this. Note that
every participant only cares about the balance of her own bank account and not
about Hofstadter’s or the other 19 participants’. Should you, as a participant,
respond with ‘C’ or ‘D’?
Donation game with similarity. The same as the donation game with common
rationality. However, instead of informing the participants that they are all
rational, the game master informs them that they think in similar ways about
weird decision problems like this one.
The basic setup of this thought experiment is equivalent to those found in, e.g., the prisoner’s
dilemma with copies (sometimes also referred to as the prisoner’s dilemma with replicas or
twins). All of these games share an important feature: they are not iterated. Participants
respond only once, then find out what the others chose – and the game is over.
The optimal outcome is the one where you defect and everyone else cooperates, yielding a
payoff of 19 · $2 + $5 = $43. Conversely, the worst outcome occurs if you cooperate and
everyone else defects, yielding a payoff of $0. In any case, no matter how many participants
cooperate, you are always better off defecting; ‘D’ is the dominant strategy. Standard
game-theoretical analysis would therefore suggest that ‘D’ is the correct choice (Binmore,
2007a, chapter 1, Osborne, 2004, chapter 2). This is quite unfortunate, because if everyone
abides by this reasoning, this yields a payoff of just $5 – whereas if everyone could cooperate,
you and everyone else could earn 19 · $2 = $38. Is there any way around this tragedy of the
commons?
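A minimal sketch of this payoff structure in code (Python; the helper function is my own illustration, not from the paper), confirming both that ‘D’ dominates and that universal cooperation beats universal defection:

```python
# Payoffs in the 20-player donation game: each cooperator adds $2 to every
# *other* player's payoff; each defector adds $5 to their own payoff.
def my_payoff(i_cooperate: bool, n_other_cooperators: int) -> int:
    return 2 * n_other_cooperators + (0 if i_cooperate else 5)

# 'D' dominates: for any fixed number of other cooperators, defecting pays $5 more.
for n in range(20):
    assert my_payoff(False, n) == my_payoff(True, n) + 5

print(my_payoff(True, 19))   # everyone cooperates: $38 each
print(my_payoff(False, 0))   # everyone defects: $5 each
print(my_payoff(False, 19))  # you defect while all others cooperate: $43
```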
If we only consider the causal implications of an action, the analysis is indeed accurate.
However, it ignores that there is also a correlation between the decisions of the participants.[3]
Consider a variation of the above thought experiment in which you know that the other
19 participants are all exact copies of you, deciding under the exact same environmental
circumstances as yourself. You still have no causal influence over the others’ decisions and
‘D’ is still the dominant strategy; no matter what the other copies choose, ‘D’ is the better
option. However, this argument seems much less attractive now. No matter what you choose,
your copies are guaranteed to make the same choice (assuming that they make decisions
deterministically). There is no possible (deterministic) world in which two copies decide
differently in the exact same situation. Thus, your decision whether to cooperate is one
between two worlds: in one of them, the algorithm implemented by your brain returns ‘C’;
in the other, it returns ‘D’. Determining the choice of all your copies to be ‘C’ gives you
more utility, and should thus be regarded as the (instrumentally) rational choice.
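In other words, for exact deterministic copies the comparison collapses to one between just two possible worlds (restating the numbers from above): payoff(everyone plays ‘C’) = 19 · $2 = $38, while payoff(everyone plays ‘D’) = $5. Determining the common output of the shared algorithm to be ‘C’ is therefore worth $33 more than determining it to be ‘D’.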
Of course, strong correlation is not limited to atom-by-atom copies. Imagine a variation of
the donation game in which you play against near copies who differ from you in insignificant
ways. One may have forgotten some particular childhood memory; another may be more
skilled at playing basketball; and so forth. Similarly, the environments in which the near
copies decide may differ inconsequentially. One participant may receive the letter in, say, the
font “Times New Roman” and another in “Arial”. In a donation game with such negligible
variations, it seems clear that ‘C’ is still the better option. Although we cannot be absolutely
certain that all 20 of the near-copies make the same choice, it is very likely that they will.
With growing dissimilarities between two agents and their environments, the correlation
between them decreases further, but your own decision still gives you information about the
other agents’ decisions. As long as the accumulating differences do not affect any of the
agents’ reasoning, the correlation will remain a strong one.
While the participants of the two donation games are not copies of each other, both variants
make clear that the participants’ decision-making mechanisms resemble one another and
are thus correlated. The donation game with similarity is very explicit about this similarity.
The donation game with common rationality, on the other hand, is more subtle – it tells
the participants that their decision mechanisms are all “rational”. Of course, the individual
participant does not yet know what the rational choice is, but she knows that, if she makes
her decision by abstract reasoning (rather than on a whim), the result will be the rational
decision. She also knows that the other participants are rational (in the same sense of the
word) and will therefore arrive at the same – the rational – decision. (It seems unlikely that
‘C’ and ‘D’ are exactly equally rational.) In essence, this argument from common rationality
is one from (perfect) correlation: if we are rational, we determine what the rational decision
is and thus what other rational agents will do. This mechanism is what Hofstadter calls
superrationality: if everyone knows that everyone is rational and has the same information,
then everyone can determine everyone else’s decision.

[3] Speaking about correlations between decisions only makes sense under the Bayesian interpretation of probability. If we see an agent cooperate, then this makes us assign a higher credence to a similar agent cooperating as well. However, if we were to observe two similar agents make the same decision over and over again, then their decisions would be uncorrelated in the resulting empirical distribution. I should also note that, in principle, I could also talk about dependences rather than correlations. Our decision and the outcome of some other causally disconnected event could be dependent in all kinds of ways, including being dependent but uncorrelated. Throughout this paper I will assume that the dependences can be viewed as simple linear relationships (as measured by the Pearson correlation coefficient) and that it always holds that the more I cooperate, the more others cooperate. I briefly discuss the possibility of negative correlations in section 2.6.2.
Throughout this paper, I will tend to make arguments from similarity of decision algorithms
rather than from common rationality, because I hold these to be more rigorous and more
applicable whenever there is no authority to tell my collaborators and me about our common
rationality. In any case, the argument from correlation is sufficiently general to include rea-
soning based on common rationality as a type of perfect correlation. Because the underlying
mechanisms are similar, we use the term superrationality for both similarity and common
rationality-based lines of reasoning. Assuming that we ourselves apply superrationality,
we will also call an agent “superrational” if her decision correlates with ours. Similarly,
we call a group of agents superrational if they use similar decision algorithms and take
superrationality-type reasoning into account, sweeping the complications of thinking about
individual correlations under the rug. Furthermore, we shall use the term “donation game
with superrationality” for donation games with similarity or common knowledge of each
other’s rationality.
Anticipating objections, Hofstadter (1983) writes:
This solution depends in no way on telepathy or bizarre forms of causality. It’s
just that the statement “I’ll choose C and then everyone will”, though entirely
correct, is somewhat misleadingly phrased. It involves the word “choice”, which
is incompatible with the compelling quality of logic. Schoolchildren do not choose
what 507 divided by 13 is; they figure it out. Analogously, my letter really did not
allow choice; it demanded reasoning. Thus, a better way to phrase the “voodoo”
statement would be this: “If reasoning guides me to say C, then, as I am no
different from anyone else as far as rational thinking is concerned, it will guide
everyone to say C.” [...] Likewise, the argument “Whatever I do, so will everyone
else do” is simply a statement of faith that reasoning is universal, at least among
rational thinkers [or those who receive the letter], not an endorsement of any
mystical kind of causality.
I do not think that, in practice, similarity between decision algorithms will often be as strong
as assumed in the above thought experiments. Even if I received a letter of the above kind,
I would not think of my decision as determining the others’ decisions with near certainty
(although there are circumstances under which I would cooperate). In fact, the very reason I
make the superrationality argument about the multiverse in particular is that the conditions
for superrationality are usually not fulfilled on Earth (see section 6.6). Nonetheless, it is
useful to assume perfect and near-perfect correlations in thought experiments for illustration
purposes.
The rest of this section explores various theoretical considerations related to those mechanisms
of superrationality that have practical implications for multiverse-wide superrationality.
Most of them are not specific to the multiverse-wide application, however, and we will often
illustrate them in more readily imaginable settings in a single universe.
2.1 Lack of knowledge is evidential power, part I: the other agents
One reason why some people would not cooperate in the donation game (or the prisoner’s
dilemma) is, I think, that they have knowledge that would break the correlation between the
participants. Using their model of human psychology, they can quickly make an informed
guess about what the others are likely to think about and thus decide. Put simply, you learn
less from your own cooperation once you already know what the others are deciding.
Consider the following variation of the donation game:
The Devious postal worker. Game master Hofstadter (in this thought exper-
iment a fictional character) has contrived another donation game. This time, you
and the other participants know that you all live in the same area and are to
reply by post. Having learned your lesson from Hofstadter’s article in Scientific
American , you write a big ‘C’ onto a postcard and walk to the post office. The
postal worker takes your card, reads the address and says: “You’re participating
in one of Prof. Hofstadter’s games, aren’t you? And you seem to have decided
to cooperate. How very noble and decision-theoretically sound of you! Well, I’ll
let you in on a little secret. Hofstadter has been playing his games with people
in this area for years now. We used to merely distribute the letters for him,
look at people’s answers and then send them back to Hofstadter, but after a
year or two, we started to bet on people’s replies. The participants tend to use
small cards rather than envelopes to save money, so it was easy to spot their
replies and count the number of C’s and D’s among them. We eventually became
almost perfect at predicting people’s responses, including those from first-timers
like yourself who don’t necessarily correlate with past participants. But merely
betting on responses got boring after a while, so we started to play a new game:
we would tell all participants about our predictions of what the others would
choose, giving each one a chance to reconsider their own choice. Although this
obviously affected the players’ behavior and forced us to readjust our methods,
our predictions are now practically flawless once again. To cut a long story short,
we’re highly confident that 18 of your 19 fellow players will defect and only one
will cooperate.” The postal worker gives you back your postcard and a pen.
Should you still cooperate or revise your decision?
If we assume that the postal worker’s prediction gives you far more reliable evidence than
your own action, then the superrationality argument presented above no longer works. Once
we already have reliable information about what the other participants are likely to choose (or
what they have already chosen), our own choice can no longer make cooperation significantly
more likely. In terms of evidential decision theory (introduced in the next section), if
E[number of other cooperators | I cooperate & postal worker says “n others defect”]
≈E[number of other cooperators | I defect & postal worker says “n others defect”],
where E denotes conditional expectation, then the evidential role of our decision provides no
reason to cooperate. That said, in section 2.4 we will see that this issue is actually a bit
more complicated.
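A toy numerical model (my construction, not from the paper) makes this screening-off effect vivid: posit a latent population type from which both your own choice and the other participants’ choices are drawn.

```python
# Why the postal worker's report destroys the evidential value of your choice.
PRIOR = {"mostly_C": 0.5, "mostly_D": 0.5}  # latent population type
Q = {"mostly_C": 0.9, "mostly_D": 0.1}      # P(a participant cooperates | type)

def expected_other_cooperators(my_action: str) -> float:
    # Bayes: my own action is drawn from the same Q, so it is evidence about the type.
    like = {w: (Q[w] if my_action == "C" else 1 - Q[w]) for w in PRIOR}
    z = sum(PRIOR[w] * like[w] for w in PRIOR)
    return sum((PRIOR[w] * like[w] / z) * 19 * Q[w] for w in PRIOR)

print(expected_other_cooperators("C"))  # ~15.6: cooperating is strong evidence
print(expected_other_cooperators("D"))  # ~3.4: defecting is strong evidence

# Once the postal worker reveals the type w, E[n | my action, w] = 19 * Q[w]
# whatever I choose: my own decision no longer carries any extra evidence.
```

In this toy model, conditioning on the worker’s near-perfect report makes the two conditional expectations in the display above approximately equal, which is exactly the situation in which the evidential argument for cooperation fails.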
After having sent in your postcard of defection and reflected on what happened, you might
realize that all of the other participants were in the same situation as you were. They were
also told that 18 (or, in case of the one who cooperated, 19) of the others would defect
and, upon hearing this, each concluded that defection would give them a higher payout. No
wonder that most players defected.
Note that even if everyone had been told that all the others had cooperated, it would still
be rational for all participants to defect. By merely telling the participants about their
predictions, the postal workers make cooperation much less attractive and thereby less
common.
What is interesting about the Devious postal worker is that the outcome is worse for everyone
than in the original Superrational donation games precisely because everyone receives
information about the other participants’ behavior. While counterfactually useful for each
single player, the information is harmful overall. As Paul Almond (2010b, chapter 4.5) says,
“lack of knowledge is power”, which I would like to refine to: lack of knowledge is evidential
power.
We shall revisit this concept soon. In particular, we will think about whether there is some
way around the unfortunate conclusion that nobody should cooperate after receiving the
respective information.
2.2 A short survey of decision theories and their relation to superrationality
Superrationality is a special application of non-causal decision theories – that is, theories
of rational decision making that not only take the causal implications of an action into
account but also other information that making this decision would give us.[4] In the case
of superrationality, that information is always about the other agents. Conversely, causal
decision theory (CDT) (Weirich, 2016; J. M. Joyce, 1999; Lewis, 1981; Skyrms, 1982; Gibbard
and Harper, 1978) neglects any such non-causal implications of an action in the Donation
game with similarity. However, the best-known example of what I would view as CDT’s
limitations is surely Newcomb’s problem, originally introduced by Nozick (1969). Readers
who have not yet studied the problem are encouraged to do so, although it is not required for
understanding most of the present paper. Because Newcomb’s problem was the first published
example of a problem that (potentially) requires one to consider the non-causal implications
of one’s decision, all problems wherein such considerations – including superrationality –
might play a role are called Newcomb-like problems.

[4] Note that the term “non-causal decision theory” is not meant to imply that these theories do not rely on the concept of causality at all.
Somewhat confusingly, the field that studies decision theories (in particular, which one
we ought to use) is itself called decision theory. Besides discussions of Newcomb-like
problems (i.e. whether and how correlated decision making and the like should be taken into
account), decision theory is also concerned with topics like the expected utility hypothesis
and deciding without assigning probabilities. For those who are unfamiliar with the field,
I recommend starting with An Introduction to Decision Theory (Peterson, 2017). More
elaborate introductions to the decision theory of Newcomb-like problems and correlated
decision making include Ahmed (2014), Yudkowsky (2010), and Almond (2010).
Interestingly, most philosophers seem to endorse CDT. A recent survey of professional philoso-
phers conducted by Bourget and Chalmers shows that in Newcomb’s problem – one of the
clearest examples of CDT’s potential failure[5] – about 30% endorse CDT’s recommendation[6]
of two-boxing, whereas only 20% endorse one-boxing (Bourget and Chalmers, 2014). In fact,
Bourget and Chalmers (2014, p. 21, table 11) even shows that philosophers who specialize in
decision theory are especially likely to endorse two-boxing. Defenses of CDT in Newcomb’s
problem are given by, e.g., Joyce (1999, chapter 5.1) and Eells (2016, chapter 8). Some have
also argued that Newcomb’s problem cannot occur (Ledwig, 2000, footnote 81; Binmore,
2007a, chapter 10).

[5] Some have argued that evidentialist intuitions are even stronger in problems of cooperation like versions of the prisoner’s dilemma with correlated decision making. Egan (2007) presents yet another decision problem as a decisive counterexample.
[6] If you have anthropic uncertainty over whether you are currently in a simulation used to decide how to fill the boxes with money, CDT may also recommend one-boxing if the simulated version would still care about the monetary rewards paid to the real version of the agent.
Overall, I find the arguments put forward against CDT much more convincing than those
in favor. Yet even among decision theorists who reject causal decision theory, there is
disagreement about what the proper replacement should be. Classically, CDT is contrasted
with evidential decision theory (EDT) (Ahmed, 2014; Almond, 2010b; Price, 1986; Horgan,
1981). However, there are also many newer, less widely known ideas. These include functional
decision theory (Soares and Levinstein, n.d.), timeless decision theory (Yudkowsky, 2010a),
updateless decision theory (Benson-Tilsen, 2014; Hintze, 2014; McAllister, n.d.), ambient
decision theory, Spohn’s variation of CDT (2003; 2005, section 2; 2012), Arntzenius’ deliber-
ational decision theory (2008) and Wedgewood’s variation of causal decision theory (2013).[7]
Superrationality is not based on any specific non-causal decision theory but works in most of
them. Consequently, this paper is meant to adopt an impartial stance between the decision
theories in which superrationality works.

[7] Many decision theories are also parameterized by some aspect of their definition. For example, causal decision theory is parameterized by the notion of causality that it uses (see, e.g. Lewis, 1981; Hájek, 2006, page 19; Weirich, 2016, chapter 2.3; Pearl, 2009, chapter 4).
2.3 CDT would self-modify to behave like a non-causal decision
theory in some Newcomb-like problems
There is a class of problems wherein causal decision theorists recommend self-modifying into
a new decision theory that acts as though it takes some acausal considerations into account.
In both the aforementioned donation game and Newcomb’s problem, the agent serves as a
model for a number of (near-)copies and a prediction, respectively. Assuming that this model
is captured at a particular point in time, it follows that the model represents a time-specific
version of the agent. Thus, if the agent precommits to using superrationality or to one-box
beforethe copies or simulation are made, they would causally determine all copies’ choices.
Consider the following thought experiment:
Donation game with copies and precommitment. One morning Omega
(an absolutely trustworthy, perfect predictor with various superhuman abilities)
tells you that you will play the donation game on the next day. However, instead
of merely recruiting other people as participants in the game, Omega will copy you
atom-by-atom tonight and employ the resulting copies as tomorrow’s participants.
You are also told that the payouts this time around will be a thousand times
higher than in previous games, so it is in your best interest to prepare well. As a
final deed, Omega then leaves you a short book entitled From cold showers to
chastity: How to commit to any action by self-hypnosis. What do you do?
If you are already convinced of superrationality – or if you care a lot about the wealth of
your copies – you would not have to do anything. You could spend the day going about your
usual business, cooperate on the next day, and win a lot of money. But imagine you were a
proponent of CDT and did not care about your copies. You would then want your future
self and your copies to cooperate, but you know that they will not do so automatically. As
soon as the copies are created, none of them – including you – will have any causal influence
on what the others will do. So, if you do nothing, everyone defects and you get a very low
payout. However, since you have not yet been copied, you still have a causal influence on the
future version of you from which the copies will be created, and thus on the copies themselves.
If you could cause the future version of you to be the kind of agent who cooperates, you
couldcausally improve your payout in Omega’s game. Given the book that Omega left you,
this should be easy: read the book, precommit yourself – and thereby all your future copies –
to cooperate, and everybody wins. A causal diagram representing the decision problem is
given in Figure 1.
[Figure 1 shows a causal graph: a Precommitment node with arrows to your decision and to each of your 19 copies’ decisions, all of which feed into Payout.]
Figure 1: A causal graph representing the Donation game with copies and precommitment.
If CDT thinks that it will face some Newcomb-like problem where the copy or model for
prediction is created in the future, it would precommit to make the same decision that
acausal decision theories recommend (without precommitment). Does that mean that CDT
would have to make one precommitment for each Newcomb-like problem (starting in the
future) that it will face with non-zero probability? Rather than patching its behavior in
each (future) Newcomb-like problem individually, CDT could also make a more general
self-modification. At time t, it would precommit to use the following alternative decision
theory in the future: do what I, at time step t, would have precommitted to do in the present
situation (Yudkowsky, 2010a, chapter 2; Soares and Fallenstein, 2015, chapter 3; Meacham,
2010). Such precommitment is not sufficient to generate the kind of superrationality required
for this paper: it does not cover Newcomb-like problems that do not start in the future. That
is, if the copies are not created based on a future version of the agent, cooperation with them
is not covered by precommitment. Thus, CDT’s precommitment does not imply cooperation
with agents in other parts of the multiverse. However, it does suffice for a weaker version if
we assume the Everett interpretation of quantum physics (see section 6.8).
2.4 Lack of knowledge is evidential power, part II: taking a step
back
CDT’s precommitment only entails partial agreement with its rival decision theories. Still,
it is worth taking a closer look at precommitment, as it leads us to another interesting
dimension along which decision theories can vary. Consider Counterfactual mugging, also
known as “the curious benefactor” (Hintze, 2014, chapter 2.2):
Counterfactual mugging. Omega decides to play a game of heads or tails
with you. You are told that if the coin comes up tails, Omega will ask you to give
it $100. If it comes up heads, Omega will predict whether you would have given
$100 if the coin had come up tails. If Omega predicts that you would have given
it the money, it gives you $10,000; otherwise, you receive nothing. Omega then
flips the coin. It comes up tails, and you are asked to pay $100. Do you pay?
If you can precommit to giving the money before you learn about your poor luck, you should
do so. After all, this would render it near-certain that Omega would give us $10,000 if the
coin comes up heads, at the mere cost of $100 if it comes up tails. By precommitting to pay
Omega, we thus gain 0.5 · $10,000 − 0.5 · $100 = $4,950 in expectation.
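In symbols (just the paper’s arithmetic, arranged to show the clash between the two perspectives): from the prior point of view, before the coin is seen,

EU(policy: pay if asked) = 0.5 · $10,000 − 0.5 · $100 = $4,950, while EU(policy: refuse) = $0;

but after learning that the coin came up tails,

EU(pay) = −$100 < $0 = EU(refuse).

Evaluating the act from the prior perspective, as the updateless reasoning discussed below recommends, is what preserves the ex-ante gains.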
A CDT agent would, again, only precommit if Omega bases its prediction on a future version
of the agent, whereas I (and many non-causal decision theorists) would argue that we should
precommit as long as the result of the coin flip is unknown to us (even if Omega’s model is
based on a past version of us).[8] If we do so, we gain information that Omega thinks we give
in, and therefore that we will receive money in expectation. However, once we learn that the
coin came up tails, the “winning” move is to keep the $100. As before, the problem contains
a harmful piece of information – although in this case an aspect of the environment, and
not a piece of information about the behavior of other agents, causes trouble. If we got the
chance, we would “protect” ourselves against this piece of information by a precommitment,
which renders that piece of information harmless.

[8] In some versions of this problem, Omega has already flipped the coin when it approaches you. In those cases, you would still win by precommitting long after the coin has already landed, provided you are still uncertain about the result of the coin flip.
A similar reasoning applies to the Devious postal worker variant of the donation game: If
everyone precommits to cooperation irrespective of what the postal worker’s prediction says,
then a negative prediction about the other agents’ behavior can no longer be self-fulfilling.
Thus, if you precommit to cooperating before the postal worker tells you about the other
agents’ decisions, you have reason to expect more positive news (assuming you correlate
with the other agents).[9]

[9] Similar lines of reasoning about precommitment apply to thought experiments like the Newcomb’s problem with transparent boxes (Drescher, 2006a, chapter 6.2), retribution (Drescher, 2006a, chapter 7.3.1) and Parfit’s hitchhiker (Parfit, 1984, chapter 1.3).
As is the case for CDT’s precommitment in the previous section, this leads to a more general
self-modification that can be made instead of a large number of individual precommitments
for individual situations. Specifically, we would (again) precommit to basing our decision in
this situation on what is good from the perspective of the state of knowledge priorto being
given new information (like the result of the coin toss). This is where updateless decision
theory gets its name from, and I will call this feature of decision theories updatelessness.
Contrary to what the term may suggest, it does not mean that we do not react to new
information at all, but rather that we do it in a different way. Instead of updating the
probabilities we assign to possible states of the world and making the best decision based on
that probability distribution, we think about what we would have precommitted ourselves
to do in this situation. Usually, what we would have precommitted ourselves to do is the
same as what is then rational for us to do. For example, if we take a bite from an apple
and it tastes foul, we throw the apple away. If you had to precommit to some action before
learning that the apple is foul, you would also precommit to throw the apple away if it tastes
foul (and to continue eating the apple if it tastes good). Counterfactual mugging is one of
the rare cases in which it does make a difference.
Acausal decision theorists would precommit to be updateless about all information they
receive in the future. In essence, they would switch to a decision theory that comes with
updatelessness built-in (the most notable one of them currently being updateless decision
theory (Benson-Tilsen, 2014; Hintze, 2014; McAllister, n.d.) itself). Thus, if you had
been reasoning about (acausal) decision theory including the possibility of self-modification
correctly all along (rather than only after learning about the experiment and its result),
you would actually cooperate in the Devious postal worker and give in to Counterfactual
mugging – even without having precommitted to do so in these particular problems.
Some readers will no doubt already be familiar with updatelessness and the arguments
in favor of it. For those who are not, this may be a good time to incorporate general
updatelessness into their decision-theoretical intuitions, as it is relevant for some of MSR’s
implications (see sections 2.8.6 and 2.9.1).
As a side note, there are justifications of updatelessness that are not based on precommitment
and thus suggest that we should, e.g., give the money in counterfactual mugging even if we
have not previously thought about precommitting to updatelessness. Ryan Carey lists a few
in a comment on the Intelligent Agent Foundations Forum. Benja Fallenstein proposes a
justification based on “logical zombies”. For other ideas, see Armstrong (2011, section 3.1.2)
and Drescher (2006a, chapter 6.2).10 However, these are more complicated, non-obvious
and not well-established. I thus opted for limiting myself to the more straightforward
precommitment-based justification for updatelessness as discussed by Meacham (2010),
Fallenstein on LessWrong and myself in a blog post (cf. Ahmed and Price, 2012).
2.5 Reasons and correlations
It is difficult to pin down the general principles of how the decisions of different agents in
different situations correlate. Indeed, I suspect that the problem has no simple solution
other than what is implied by the general solutions to naturalized induction (Soares and
Fallenstein, 2014, section 2.1; Soares, 2015) and decision theory.11
10Also note that updateless behavior can sometimes result from anthropic uncertainty even when applying
the more classical evidential or causal decision theories.
11Determining correlations between actions is similar to specifying the maxim corresponding to an action
in Kant’s categorical imperative. It seems that nobody has a precise grasp of how the latter is supposed
to be done and that this makes it difficult to apply the categorical imperative. However, the problem of
specifying the maxim underlying one’s action does not necessarily have a single correct solution. Determining
correlations between your actions and those of others, on the other hand, follows from any solution to the
problems of naturalized induction and decision theory. These solutions probably depend on priors, but it
probably makes more sense to speak of them as having a correct solution.
However, humans seem to have some good intuitions for how decisions correlate, in part
because understanding the correlations between actions is a day-to-day activity. Imagine
seeing your friend Anna being wounded in her right arm one day. She uses her left arm to
apply bandages and call a doctor, who arrives a few minutes later and inspects her right arm.
A few days later, you see Bob being wounded in his left arm. Based only on the experience
from Anna’s wound, what should you reasonably expect to happen? Will Bob use his left
arm to apply bandages to his right one? Will Anna apply bandages to her right arm? Or to
Bob’s? Will doctors come to Anna? Even after seeing just one instance of a situation, we are
often able to identify many of its causal links and use this information to infer correlations
with similar situations. If we see the reasons for a decision from the inside, these correlations
become even clearer. If you are Anna and you apply bandages to your right arm, you know
that it is to stop the bleeding. Doing so gives you no “weird” evidence – it would not lead
you to expect, say, that people are generally likely to apply bandages to things (cf. Ahmed,
2014, chapter 4; Almond, 2010b, chapter 2.8).
In general, taking a particular action only because of some reason X tells you nothing about
whether agents who do not care (or know) about X will also take that action.
Importantly, superrationality itself falls under this general rule. That is, if you do something
for superrationality-related reasons, then this does not tell you anything about how people
who do not accept superrationality would behave. As a trivial example, consider playing
a donation game against 19 people, all of whom you know to make fun of superrationality
whenever the opportunity avails itself. Attempting to superrationally cooperate with those
people seems rather fruitless.
While these considerations may seem trivial, alleged refutations of acausal decision theories
are often based on ignoring them or assuming that the evidential thinker ignores them (cf.
Ahmed, 2014, chapter 4; Almond, 2010b, chapter 2.8).
2.5.1 Your back is not mine
If the decisions of agents correlate or if each can determine what is rational, then why can
someone – let us call him Dennis – not just determine that it is rational to benefit him or
his values? Surely, if everyone just benefited Dennis, that creates the optimal outcome for
him. So, in a donation game with superrationality, perhaps he should determine the rational
policy to be “cooperate, unless your name is Dennis”?
This is clearly absurd. The specific reasons that lead Dennis to come up with this strategy
(and to abide by it) do not matter to his fellow players, although each of them probably has
self-serving reasons which are analogous to those of Dennis. Dennis wants to achieve his own
goals, and this is done optimally if everyone cooperates while he alone defects. However,
this only makes it more likely that some other participant – let us call her Dana – would
reason, “I want to maximize my payoff; if I could determine everyone’s choices, I would want
everyone but me (Dana) to cooperate.” (cf. Drescher, 2006a, page 298f.).
2.5.2 Does accepting superrationality commit us to irrational behavior in medical Newcomb problems?
One common objection to making decisions based on what our action correlates with, rather
than what our action causes, is that it seems to imply irrational behavior in some cases
(e.g. Nozick, 1969, page 135). In particular, reasoning from correlation seems to fail in
so-called medical Newcomb problems. An example is Yudkowsky’s chewing gum problem
(2010a, section 1.2), which he describes as follows:
Suppose that a recently published medical study shows that chewing gum seems
to cause throat abscesses – an outcome-tracking study showed that of people
who chew gum, 90% died of throat abscesses before the age of 50. Meanwhile, of
people who do not chew gum, only 10% die of throat abscesses before the age
of 50. The researchers, to explain their results, wonder if saliva sliding down
the throat wears away cellular defenses against bacteria. Having read this study,
would you choose to chew gum? But now a second study comes out, which shows
that most gum-chewers have a certain gene, CGTA, and the researchers produce
a table showing the following mortality rates:
             CGTA present   CGTA absent
Chew gum     89% die        8% die
Don’t chew   99% die        11% die
This table shows that whether you have the gene CGTA or not, your chance
of dying of a throat abscess goes down if you chew gum. Why are fatalities so
much higher for gum-chewers, then? Because people with the gene CGTA tend
to chew gum and die of throat abscesses. The authors of the second study also
present a test-tube experiment which shows that the saliva from chewing gum
can kill the bacteria that form throat abscesses. The researchers hypothesize that
because people with the gene CGTA are highly susceptible to throat abscesses,
natural selection has produced in them a tendency to chew gum, which protects
against throat abscesses. The strong correlation between chewing gum and throat
abscesses is not because chewing gum causes throat abscesses, but because a
third factor, CGTA, leads to chewing gum and throat abscesses.
Having learned of this new study, would you choose to chew gum?
The causal graph of this problem is given in Figure 2. Similar well-known decision problems
of this kind are Solomon’s problem (Gibbard and Harper, 1978, section 5; Eells, 2016, chapter
4), the Smoking lesion (Eells, 2016, chapter 4), and the Psychopath button (Egan, 2007,
section 3).
Naive correlation-based reasoning suggests that we should still refrain from chewing gum,
since the act of chewing gum would be evidence that we have the CGTA gene and thus throat
abscesses. This strongly conflicts with our intuition that we should chew gum to protect
against the abscesses. However, I will argue that this provides no convincing argument
against superrationality.
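The conflict can be made concrete in a few lines of Python. The death rates below come from the quoted table; how strongly chewing indicates the gene is an illustrative assumption, since the thought experiment does not pin down a consistent prior:

# Death rates from the table, indexed by (gene present, chews gum).
death_rate = {(True, True): 0.89, (True, False): 0.99,
              (False, True): 0.08, (False, False): 0.11}

# Illustrative assumption: P(CGTA | chew) and P(CGTA | don't chew).
p_gene_given = {True: 0.95, False: 0.05}

def edt_death_prob(chew):
    """Naive EDT: treat the act as evidence about the gene."""
    p = p_gene_given[chew]
    return p * death_rate[(True, chew)] + (1 - p) * death_rate[(False, chew)]

print("P(death | chew) =", edt_death_prob(True))    # ~0.85 -> avoid gum
print("P(death | no)   =", edt_death_prob(False))   # ~0.15
# Dominance/CDT: for either fixed gene state, chewing lowers the death rate.
for gene in (True, False):
    print(f"gene={gene}: chew {death_rate[(gene, True)]} < "
          f"no-chew {death_rate[(gene, False)]}")

Naive correlation-based reasoning avoids gum, while holding the gene fixed, chewing is better either way.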
Figure 2: A graph representing the causal relationship in the CGTA thought experiment (CGTA is a common cause of both Gum and Abscess).
First, the correlation in the Chewing gum problem differs qualitatively from the correlations
between similar decision algorithms (Treutlein and Oesterheld, 2017). In the Chewing
gum problem (and medical Newcomb problems in general), the correlation stems from a
causal relationship: our genes influence our decisions. Thus, the genes and the decisions
are correlated. The correlations of superrationality, on the other hand, result from the
similarity of the decision algorithms. The reasoning behind cooperation does not involve a
common cause of all collaborators’ decisions. Instead, the correlation may be viewed as logical
(Garrabrant et al., 2016): if I cooperate, then this implies that all other implementations
of my decision algorithm also cooperate. Figure 3 illustrates the difference between these
two types of Newcomb-like problems. Because correlations in medical and non-medical
Newcomb-like problems differ qualitatively, ignoring the correlations of our actions in the
former does not mean we should ignore them in the latter. In fact, in response to medical
Newcomb problems, philosophers have proposed a variety of decision theories that behave in
this exact way (Treutlein and Oesterheld, 2017). That is, they cooperate superrationally
(and one-box in Newcomb’s problem) but chew gum in the Chewing gum problem. These
include Spohn’s variation of CDT (2003; 2005, section 2; 2012) and Yudkowsky’s timeless
decision theory (2010).
Secondly, even purely correlation-based reasoning as done by EDT may recommend chewing
gum, depending on how the causal link from the CGTA gene to chewing gum is believed
to work. Given that people in the study presumably did not know that chewing gum helps
against throat abscesses, it is plausible that CGTA causes people to intuitively desire chewing
gum. However, if learning about the study and applying EDT then causes us not to chew
gum, it does not tell us anything about whether having the CGTA gene would have caused
us to do the opposite. Similarly, if you know that a sprinkler has watered the lawn, observing
that the grass is wet is no evidence that it has also rained (see Figure 4). The sprinkler
already explains why the lawn is wet, so you do not need rain as an additional explanation
(see Ahmed, 2014, section 4.3 for an extensive discussion of this argument).
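This explaining-away structure can be checked numerically. The following Python sketch enumerates a toy joint distribution; the priors and the noisy-OR-style wetness model are made up for illustration:

from itertools import product

P_RAIN, P_SPRINKLER = 0.2, 0.3   # illustrative priors

def p_wet(rain, sprinkler):      # illustrative wetness model
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.90
    return 0.01

def joint(rain, sprinkler, wet):
    p = (P_RAIN if rain else 1 - P_RAIN) * \
        (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    return p * (p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler))

def p_rain_given_wet(sprinkler=None):
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        if sprinkler is not None and s != sprinkler:
            continue                       # also condition on the sprinkler
        p = joint(r, s, wet=True)
        den += p
        num += p if r else 0.0
    return num / den

print("P(rain) =", P_RAIN)                                        # 0.200
print("P(rain | wet) =", round(p_rain_given_wet(), 3))            # ~0.456
print("P(rain | wet, sprinkler) =",
      round(p_rain_given_wet(sprinkler=True), 3))                 # ~0.216

Conditional on wetness alone, rain becomes more likely; once the sprinkler is known to have run, the posterior drops back close to the prior, just as EDT plus knowledge of the study screens off the inference to CGTA.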
Figure 3: Generic causal graphs representing the two types of Newcomb-like decision problems. Medical Newcomb problems are illustrated on the left. Newcomb problems based on similarity between decision algorithms are illustrated on the right (node labels: My decision, Payoff and, on the right, Correlated decision).
Figure 4: If you decide not to chew gum after applying EDT to your knowledge of the study, it may tell you as much about whether you have the CGTA gene as seeing a lawn watered by a sprinkler tells you about whether it has rained. (In the graph, Rain and Sprinkler are both causes of the wetness of the lawn, just as CGTA and EDT plus knowledge of the study are both causes of my decision.)
2.6 Are the correlations strong enough?
In most superrationality-related thought experiments, it is assumed that the other agents
are near-copies of ourselves. The problems presented in this paper are no exception. However, in
any real-world setting, most agents are not close copies of ourselves. We should therefore expect
correlations to be much less than perfect.
Luckily, the total number of agents in the multiverse is probably so vast12 that the correlations
between ourselves and any individual agent need not be very large13 (see section 6.2). Because
many agents probably do not know about superrationality, we may assume that 99.99%
of the agents do not correlate with us at all when it comes to the decision whether to
cooperate superrationally. In this case, cooperation with the rest still pays off if we believe
that our correlation with the others is non-negligible and positive. It does not matter that
we inadvertently benefit many “free riders”. For example: if our cooperation makes it 1%
more likely that each of these correlated agents also cooperates, then if there are “only” a
billion of them, we can expect 10 million more to cooperate if we cooperate.14
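The arithmetic behind this estimate is trivial but worth making explicit; the numbers below are the illustrative ones from the text:

n_correlated = 1_000_000_000   # agents whose decisions correlate with ours
delta_p = 0.01                 # increase in each agent's probability of
                               # cooperating, conditional on our cooperation

extra_cooperators = n_correlated * delta_p
print(f"{extra_cooperators:,.0f} additional cooperators in expectation")
# 10,000,000 -- even weak correlations matter at multiverse scale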
2.6.1 Correlation only with close copies?
Some might think that they are uncorrelated with everyone else apart from very close copies
of themselves. Because such near-copies would likely share their utility function to a large
extent, there is no need to cooperate with them (although coordination may be useful,
depending on the utility function, see section 2.8.9). While the lack of formalized and
agreed-upon solutions to decision theory and naturalized induction (Soares and Fallenstein,
2014, section 2.1; Soares, 2015) makes it difficult to draw definitive conclusions on such
matters, I am nevertheless skeptical of this objection to MSR. It seems to me that decision
theories, at least as people currently conceive of them, are compatible with very large sets of
possible minds. That is, if an agent uses, say, evidential decision theory, it can still use all
kinds of different mechanisms for assigning conditional probabilities and, most importantly,
it can still have all kinds of values (see section 3.1).
12In fact, most multiverse theories contain infinitely many agents. This leads to some additional complica-
tions, discussed in section 6.10.
13Decision theorists have picked up on the point that large numbers of agents can bring out the differences
between CDT and EDT in realistic cases. In particular, large elections are often mentioned as such a case
(see, e.g. Ahmed, 2014, chapter 4.6.3).
14Note that this is usually not an instance of Pascal’s mugging, although the underlying mathematical
mechanism (multiplying very small numbers with very large numbers) is similar. Whereas in Pascal’s
mugging, a big reward outweighs the low probability assigned to it, multiverse-wide superrationality (MSR)
involves a low probability being outweighed by the large number of near-independent instances of that
probability. The positive result occurs with a high probability as long as the other agents’ decisions of
whether to cooperate are mostly independent of one another. For comparison, imagine drawing balls out of a
box containing 1,000,000 balls. You are told that the probability of drawing a blue ball is only 1/1,000 and
that the probabilities of different draws are independent. Given this information, you can tell with a high
degree of certainty that there are quite a few blue balls in the box. Multiverse-wide correlations between
agents thus become much more important to consider than the correlations in smaller-scale problems like
the donation game, unless we are skeptical of some of the underlying assumptions.
2.6.2 Negative correlations?
There is another interesting objection about correlation strength that could be raised:
perhaps we should expect to correlate negatively with some agents in the multiverse, such
that cooperation can even do some harm (beyond the opportunity costs connected to it) as
it makes some other agents more likely to defect. While interesting, I do not find this reason
against superrational cooperation very convincing, either.
First, we have to consider what negative correlation means. Let’s say you currently think
that roughly 0.1% of evolved agents in the multiverse who have thought about MSR decide to
cooperate. Now, you learn of one randomly chosen agent that she cooperates. The intuitive
response is to increase the 0.1% estimate, if only slightly (depending on how confident you were
in your initial estimate). If this agent were negatively correlated with the others, then upon
learning that this one agent cooperated, you would adjust your estimate of how many agents
cooperate downward.
Such a reaction seems implausible given our state of knowledge. Surely, there are a few
eccentric agents who have superrationality-related algorithms similar to mine, yet choose
to somehow invert the output of these algorithms. But such algorithms make little sense
from an evolutionary point of view and so I do not expect them to be very common in the
multiverse.
It may seem that agents have an incentive to become negatively correlated (via self-
modification), thereby enabling them to defect and make everyone else cooperate. However,
there are various problems with this idea. For one, to be able to correlate negatively with the
other agents it seems as though one would have to find out about their decision and then do
the opposite, which appears to be difficult. Furthermore, self-modification also commits us
to cooperate more when the others defect – an agent committed to unconditional defection
does not correlate with anyone else.
The intuition underlying the self-modification idea is that by self-modifying to be negatively
correlated, we can acausally determine the others’ decisions. But I do not think this works
in the relevant way. When you modify your decision algorithm, you place your power in
the hands of the new algorithm. This means you cannot, for example, self-modify to some
decision algorithm A that does the exact opposite of what everyone else is doing, and then
defect – unless A already defects on its own. Thus, you cannot determine everyone else to
cooperate unless you are already correlated with them. Similarly, you cannot commit to
output the 100th digit of π, and then return 6 anyway to acausally determine the value of π.
However, if you are already correlated with the 100th digit of π, you can logically determine
its value. For instance, if Omega predicts your behavior and then tells you that if you raise
your arm, the 100th digit of π will be 7 and if you do not it will be 1, you can determine the
100th digit of π. Of course, this stops working once you know what the 100th digit of π is.
As a last point, self-modification does not seem to add anything to direct defection (without
self-modification). To see why, let us consider the two kinds of agents that are not yet
negatively correlated with the others. The first agent is not correlated with others before
self-modification, and therefore has no reason to self-modify. He can just defect directly,
without adopting a weird decision theory that is about doing the opposite of what someone
in some other part of the multiverse is doing. The second agent is (positively) correlated
with others before self-modification. Her problem is that if she self-modifies, others will do
so as well, which gives her evidence that a lot more defection is happening than if she
cooperated.
Another relevant point is that there is a sharp upper bound to the amount of negative
correlation that can exist within a group of agents. Imagine agents A, B, and C, whose
decision to cooperate we model as a random variable with the two values 1 (for cooperation)
and 0 (for defection). Let us say A is perfectly negatively correlated with B and B is perfectly
negatively correlated with C. A is then perfectly positively correlated with C. So, even among
just three agents, not all correlations can be perfect and negative. On the other hand, the
pairwise correlations may well all be perfect and positive. To study this further, we move
from correlations to covariances, because they can be meaningfully added up. In general,
we can derive a lower bound of −1/(4(n−1)) for the average covariance between pairs of agents
from any set of n ≥ 2 agents (excluding “pairs” of one and the same physical agent), if
cooperation is seen as a binary random variable. If the agents are all perfectly correlated,
then all covariances are at most 1/4, so the upper limit for the average covariance is also
1/4. Unless we have reason to believe that we are special, i.e. that our covariance with the
others falls far below the average covariance between two agents, this suggests that especially
for very large numbers of agents n, our possible acausal impact under the assumption of
only positive covariances can be much larger than that of negative covariances. In fact, the
covariances of the average agent cannot add up to something below −1/4 regardless of the
number of agents. In contrast, they can be as high as (n−1)/4 for positive covariances. If we
view the covariances as uncertain, this suggests a prudential argument in favor of assuming
positive covariances to dominate over negative ones, given that our acausal influence is so
small under the opposite assumption. However, the details of this argument (and whether it
works at all) depend on our “meta-probability distribution” over covariances.
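The lower bound on the average covariance can also be checked by simulation. The following Python sketch samples from the distribution “exactly half of the n agents cooperate, chosen uniformly at random”, which attains the bound −1/(4(n−1)) for even n (the setup is illustrative):

import random

def avg_pairwise_cov(samples):
    """Average covariance over distinct pairs; samples are 0/1 vectors."""
    n, m = len(samples[0]), len(samples)
    means = [sum(s[i] for s in samples) / m for i in range(n)]
    covs = []
    for i in range(n):
        for j in range(i + 1, n):
            e_xy = sum(s[i] * s[j] for s in samples) / m
            covs.append(e_xy - means[i] * means[j])
    return sum(covs) / len(covs)

n, trials = 4, 200_000
samples = []
for _ in range(trials):
    v = [1] * (n // 2) + [0] * (n - n // 2)   # exactly half cooperate,
    random.shuffle(v)                         # chosen uniformly at random
    samples.append(v)

print("empirical average covariance:", round(avg_pairwise_cov(samples), 4))
print("lower bound -1/(4(n-1)):", -1 / (4 * (n - 1)))   # -0.0833...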
2.7 The relative importance of superrational cooperation: an example calculation
Looking at a single decision, how do the benefits from superrational cooperation compare
with the opportunity costs? Although we need to make some unrealistic assumptions (such
as exact symmetry of the decisions faced by all the agents) in order to calculate this value, it
is nevertheless worth an attempt, if only for the purpose of illustration.
We assume that there are n superrational agents whose decisions in donation games are
perfectly correlated; that is, either all of them cooperate or all of them defect. Realistically,
many more agents’ decisions will correlate weakly with ours, while only very few correlations
will be perfect. However, the implications of many weak and a few strong correlations are
similar. For simplicity, we assume that the goals of the agents are orthogonal to each other,
i.e. that if someone benefits it is neutral in expectation to any other value system. All of
them have values that can benefit from behavior in other universes to the same extent.
The n agents face the decision between a) generating b_u cardinal, interpersonally comparable
utils (or utilons) for their own utility function and b) generating b_other utils for k randomly
chosen superrationalists.
Choosing option a) makes everyone choose option a) and so only generates b_u utils for us.
Choosing option b) makes everyone choose option b). Whenever someone (including ourselves)
chooses option b), there is a probability of k/n that we are among the beneficiaries.
Overall, if we and thus everyone else chooses option b), we receive n · (k/n) · b_other = k · b_other utils.
Choosing option b) is therefore to be preferred if and only if
k · b_other > b_u. (1)
This suggests that our own preferences have no priority over those of other superrationalists
in this decision. We only decide based on “the greatest good for the greatest number”. For
instance, if k = 1, then we should choose option b) to help other value systems if b_other > b_u,
i.e. as long as helping other value systems can be done more efficiently than helping our
own values. This shows how important superrationality considerations can be. Whereas the
non-superrational agent maximizes only for its own value system, the superrational agent
maximizes for the value systems of other superrational agents just as much as for their own.
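As a sanity check, the comparison between the two options can be written out directly (a Python sketch of the symmetric model above; all parameter values are illustrative):

def my_expected_utils(option, n, k, b_u, b_other):
    """My expected utils, assuming all n correlated agents mirror my choice
    and the k beneficiaries of each act are drawn uniformly at random."""
    if option == "a":
        return b_u                  # everyone helps themselves; I get b_u once
    return n * (k / n) * b_other    # each of the n acts reaches me w.p. k/n

print(my_expected_utils("a", n=1000, k=1, b_u=10, b_other=12))   # 10.0
print(my_expected_utils("b", n=1000, k=1, b_u=10, b_other=12))   # 12.0

With k = 1 and b_other > b_u, option b) wins, exactly as Eq. (1) demands.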
Moreover, whether we cooperate depends only on the number of agents whose cooperation is
correlated with ours and not at all on the number of agents that will defect. In this regard,
multiverse-wide superrational cooperation differs from most causal cooperation, where we
usually try to ensure that beneficiaries of our actions reciprocate (unless we care about them
intrinsically).
As mentioned already, this analysis is based on unrealistic assumptions of perfect symmetry
to highlight the relative importance of superrationality considerations. We will now move on
to more general, potentially asymmetric cases.
2.8 Compromise strategy
2.8.1 Sharing gains from compromise in the face of asymmetries
We have so far only considered completely symmetrical situations, wherein other agents
faced the exact same decision problem as ourselves. One could either choose to cooperate,
which correlated with everyone else’s cooperation; or defect, which correlated with everyone
else’s defection. Both cooperation and defection were associated (via the correlation between
agents) with particular outcomes. Based on these correlations it was straightforward to
choose the action that correlates with the best outcome for ourselves (and also for everyone
else). Of course, in practice, compromise will not be this tidy. Specifically, we will have to
deal with asymmetrical decision problems. Consider the following example:
Superrational cake cutting. You are playing a donation game with two fellow
players whose decision algorithms correlate strongly with yours. Unlike other
donation games, the currency in this game is cake, of which there are two flavors
– vanilla and strawberry. Each player’s utility grows in linear proportion to how
much cake they eat, and they all have taste preferences that affect their total
utility. Let’s say you, player 1, like vanilla twice as much as strawberry. Player 2,
meanwhile, likes strawberry four times as much as vanilla, and player 3 likes both
flavors equally. Each player currently owns different amounts of strawberry and
vanilla cake. You have one strawberry cake and one vanilla cake, while player 2
has three vanilla cakes and player 3 has one strawberry cake. (See Figure 5 for an
illustration of these circumstances.) You all know each other’s taste preferences
and can send arbitrary fractions of your cakes to one another, but none of you
are allowed to communicate. You only get to send one box of cake to each player,
and you receive your boxes from them after you’ve sent yours. What should you
do?
Figure 5: An overview of property and preferences in the Superrational cake cutting (trade ratios strawberry:vanilla – player 1: 1:2, owning one strawberry and one vanilla cake; player 2: 4:1, owning three vanilla cakes; player 3: 1:1, owning one strawberry cake).
First note that this problem is indeed one of superrational cooperation. If causal decision
theory is applied, then the dominant strategy for each player is to keep all the cake – but
this would be a suboptimal outcome for everyone. The players have two strawberry and four
vanilla cakes in total. If you could redistribute them so that player 1 has one strawberry
and two vanilla cakes, player 2 has one strawberry cake, and player 3 has two vanilla cakes,
everyone would be better off than without any redistribution. However, there are infinitely
many other possible (fractional) distributions that would also be better for everyone. This
makes it hard to decide among them.
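For concreteness, the following Python sketch evaluates the specific redistribution just mentioned against the status quo, using the trade ratios from the problem statement:

# Per-unit (strawberry, vanilla) valuations of the three players.
weights = {1: (1, 2),   # player 1 likes vanilla twice as much as strawberry
           2: (4, 1),   # player 2 likes strawberry four times as much
           3: (1, 1)}   # player 3 is indifferent between the flavors

def utility(player, s, v):
    ws, wv = weights[player]
    return ws * s + wv * v

before = {1: (1, 1), 2: (0, 3), 3: (1, 0)}   # initial holdings
after = {1: (1, 2), 2: (1, 0), 3: (0, 2)}    # redistribution from the text

for p in (1, 2, 3):
    print(f"player {p}: {utility(p, *before[p])} -> {utility(p, *after[p])}")
# player 1: 3 -> 5, player 2: 3 -> 4, player 3: 1 -> 2 (everyone gains)

Every player gains from this redistribution, but so would infinitely many others, which is precisely what makes the compromise problem hard.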
One part of the problem is that it is unclear what our decisions correlate with. If we send
player 2 a piece of her preferred cake (strawberry), can we expect to get some of our preferred
cake (vanilla) from her? If so, how much? If we could pin down the correlations and assign
probabilities to each combination of strategies – i.e. to each strategy profile – conditional on
any of our actions, we could choose the action that maximizes expected utility (the exact
formulation of which depends, of course, on our decision theory). But even if the agents
know that they have very similar (or even identical) decision algorithms, the asymmetries
make it hard to assign these probabilities.
Another perspective on the problem is that asymmetries make it unclear who “deserves” how
much. In the symmetrical situations it was always clear that everyone should get the same,
but this is different in superrational cake-cutting.
It is useful to view the symmetry of a compromise problem as a non-binary property. For
example, a donation game in which one player gains slightly more than the others from
cooperating may still be symmetric enough to make it obvious what the right decision is.
2.8.2 The compromise problem
In order to solve the problem of superrational compromise in asymmetric situations, we will
treat compromise as a game-theoretical problem. Note that this requires basic knowledge of
game theory; for an introduction see, e.g. Osborne (2004). Formally, a game consists of
•a finite set of players P = {p_1, …, p_n},
•for each player p_i, a set of actions A_i,
•for each player p_i a utility function u_i : A_1 × ··· × A_n → R, where R refers to the real
numbers.
Multiverse-wide superrational compromise is a game where P is the set of correlated
superrationalists, the utility functions u_i represent their preferences, and the sets of possible
actions A_i represent the sets of strategies the players can pursue in their parts of the
multiverse. Note that the last aspect of the definition assumes that the players’ preferences are
von Neumann-Morgenstern-rational (vNM-rational), which is technically useful and mostly
non-controversial.15
Our notation indicates that utilities are calculated deterministically from action tuples.
However, we will sometimes view the utilities u_i(a_1, …, a_n) as random variables in the
Bayesian sense. This is because we are usually uncertain about the implications of the
policies a_1, …, a_n, as well as the utility function u_i itself, in the context of MSR.
Now, the question is which (potentially mixed) strategy α_i any player p_i should choose. Note
that we are not looking for the (CDT-based) Nash equilibria of the game. We will therefore
have to move our focus from (Nash equilibrium-based) non-cooperative to cooperative game
theory.
In principle, the optimal strategy α_i∗ can be determined by applying one’s decision theory.
For example, if one were to use EDT, then the optimal strategy is
argmax_{α_i} E[u_i(a_1, …, a_n) | α_i].
As noted earlier, however, computing or optimizing the expected value conditional on one’s
action directly is not feasible in situations of asymmetric payoffs. To find the best action, we
will therefore approximate the above expected value maximization with some new criterion,
similar to how game theory has replaced expected value maximization with Nash equilibria
and other concepts.
We will therefore try to develop some new compromise utility function u∗ : A_1 × ··· × A_n → R,
intended as a new criterion for choosing the optimal strategy. Because the compromise utility
function depends less on the specifics of the problem, it will prove to be easier to reason
about what the adoption of some u∗ tells us about what the other agents do. The optimal
u∗ can then, under certain assumptions, tell us what action to take. At least if our choosing
u∗ means that everyone else chooses the same u∗ (which is not necessarily the case), then
player p_i should implement the i-th strategy entry of
argmax_{(α_1, …, α_n) ∈ A_1×···×A_n} E[u∗(α_1, …, α_n)].
15One exception may be the axiom of continuity. It is violated by preferences with lexicality, which are
commonly discussed in moral philosophy (Knutsson, 2016). If we drop the axiom of continuity, we can
still represent the preferences as a lexicographic utility function (Blume, Brandenburger, and Dekel,
1989; Fishburn, 1971). However, a treatment that includes lexicographic utility functions is beyond the scope
of the present paper. Because in uncertain situations a lexicographic utility function is usually equivalent
to only maximizing the lexically highest values, we may nonetheless apply the present results by simply
omitting all lexically lower values.
Once again, having a compromise utility function, as opposed to more general compromise
preferences, implicitly assumes that the compromise preferences are also vNM-rational.
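For small, discrete versions of the problem, this criterion can be implemented directly. The following Python sketch (with made-up utilities and weights) brute-forces the argmax of a weighted-sum compromise utility function over joint action profiles:

from itertools import product

def best_joint_plan(action_sets, utilities, weights):
    """Brute-force the argmax of the weighted-sum compromise utility
    over all joint (deterministic) action profiles."""
    def u_star(profile):
        return sum(w * u(profile) for w, u in zip(weights, utilities))
    return max(product(*action_sets), key=u_star)

# Toy two-player donation game; payoff numbers are illustrative.
u1 = lambda a: {"CC": 2, "CD": -1, "DC": 3, "DD": 0}[a[0] + a[1]]
u2 = lambda a: {"CC": 2, "CD": 3, "DC": -1, "DD": 0}[a[0] + a[1]]

print(best_joint_plan([["C", "D"], ["C", "D"]], [u1, u2], [1, 1]))
# ('C', 'C') -- mutual cooperation maximizes the compromise utility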
2.8.3 Cooperation with and without coordination
In a way, argmax_{(α_1, …, α_n) ∈ A_1×···×A_n} E[u∗(α_1, …, α_n)] is the optimal plan on the assumption
that everyone will follow it. With practical degrees of correlation, however, we cannot
assume that everyone will arrive at the same plan, especially if multiple plans have the same
compromise utility. In MSR, it is especially unlikely that everyone will arrive at the same
plan, as superrational collaborators have different states of knowledge about the multiverse
and each other’s value systems.
A perfect plan may have catastrophic results if it is not accurately followed by everyone
involved. Specifically, plans are risky if the utility of one player’s action hinges on another
player’s action because such plans assume the ability to coordinate. Hence, it is useful to
look at a class of utility functions where coordination plays no role.
We say that a utility function u_i (additively) decomposes into local utility functions
{u_{i,j} : A_j → R}_{j=1,…,n} if
u_i(a_1, …, a_n) = ∑_{j=1}^n u_{i,j}(a_j).
Intuitively speaking, u_i decomposing into local utility functions means that any player p_j
has a very direct impact on p_i’s utility, such that when p_j attempts to benefit p_i she need
not think about what other players do.
In game theory as it is usually applied to interactions between agents on Earth, the assumption
of additive decomposition of utility functions would be a severe limitation: if agents interact
with each other physically, then, of course, the impact of an action often depends on the
other players’ actions. As examples, consider some of the classic games studied in game
theory, such as Bach or Stravinsky or Chicken.
In the problem of multiverse-wide compromise, on the other hand, there is probably no
causal interaction between the actions of agents in different parts of the multiverse. Additive
decomposition of utility functions is thus a more natural assumption in this context. That
said, issues of coordination can still arise in the utility function itself. As an example, consider
a utility function that wants there to be at least one (dis-)proof of the Riemann hypothesis
somewhere in the multiverse, but does not care about the existence of further, redundant
proofs. This utility function does not decompose additively; whether I benefit this utility
function by proving the Riemann hypothesis depends on whether someone else is already
working on a proof. Other, perhaps more realistic, examples of ethical notions that do not
decompose into local utility functions are (partial) average utilitarianism and potentially
biodiversity. However, many other plausible utility functions (e.g., total utilitarianism) do
fulfill the above condition.
If some of the utility functions in a game do not decompose into local utility functions, we
will call the game a coordination problem.16 Theoretically, the following arguments also
16This differs somewhat from more standard game-theoretical definitions of coordination. For a discussion
of the relationship, see Oesterheld (2017).
work for coordination games, but they are much more robust and practically applicable in
problems that require little or no coordination. This topic will be discussed further in section
2.8.9.
2.8.4 Harsanyi’s aggregation theorem
Although we do not yet know how and to what extent, we know that our compromise
utility function u∗ should incorporate the utility functions u_1, …, u_n but not be sensitive to
anything else. The following assumption captures these attitudes:
Assumption A. Let P and Q be probability distributions over outcomes A_1 × ··· × A_n such
that E_P[u_i(a_1, …, a_n)] ≥ E_Q[u_i(a_1, …, a_n)] for i = 1, …, n.17 That is, all players like P
at least as much as Q. Then E_P[u∗(a_1, …, a_n)] ≥ E_Q[u∗(a_1, …, a_n)], i.e. the compromise
utility function also values P at least as highly as Q.
We could view this assumption as a decision to limit ourselves to a particular class of
compromise utility functions – a decision that makes our superrational collaborators limit
themselves to the same class. In terms of expected value for ourselves, this is a good decision.
It basically does not tell us anything other than that we do not want to pay anything to
switch from P to Q if everyone likes P at least as much as Q.
We furthermore introduce the notion of utility function equivalence: two utility functions u
and v are equivalent, written as u ∼ v, if they imply equal behavior. For the cardinal utility
functions discussed here, this is the case if one arises from a positive affine transformation of
the other, i.e. if u = av + b for some a ∈ R>0 and b ∈ R.
Assumption A does not seem especially strong, but it turns out that it suffices for a significant
result regarding the shape of the compromise utility function. It is essentially a version of
Harsanyi’s aggregation theorem (Harsanyi, 1955; see also Peterson, 2017, section 13.4 for an
introduction).18
Theorem 1. (Resnik, 1983; Fishburn, 1984) Let u∗ be a compromise utility function for
u_1, …, u_n that satisfies Assumption A. Then there are weights λ_1, …, λ_n ∈ R≥0 such that
u∗ ∼ ∑_{i=1}^n λ_i u_i. (2)
Note that the λ_i are not unique. Also, not all weight assignments consistent with Eq. (2) or
Assumption A have only positive weights. In particular, if u_i = u_j for some i ≠ j, we can
17This notation – viewing the lotteries as probability distributions over action vectors – is a bit unnatural,
and stems from the lack of an intermediate step of world states or histories between action vectors and
utilities in our notation. If we extended our notation with such an intermediate step, then the lotteries would
be over states of the world rather than action vectors. Although the proofs also work with action vectors, it
may help to think of the lotteries as being over histories.
18Interestingly, the proof of the aggregation theorem given by Harsanyi (1955) contains an error. However,
since then a few alternative, correct proofs have been published (Fishburn, 1984; Border, 1985; Hammond,
1992).
decrease λ_i by an arbitrary constant C if we correspondingly increase λ_j by C, and end up
with the same compromise utility function u∗ (Resnik, 1983; Fishburn, 1984). If C > λ_i,
we arrive at an equivalent utility function that assigns negative weights.
Theorem 2. Let u_1, …, u_n each decompose into local utility functions {u_{i,j} : A_j →
R}_{j=1,…,n}. Then a compromise utility function u∗ that satisfies Assumption A relative to
u_1, …, u_n also decomposes into local utility functions.
Proof. Because of Theorem 1, it is
u∗(a_1, …, a_n) = b + ∑_{i=1}^n λ_i u_i(a_1, …, a_n)
= b + ∑_{i=1}^n λ_i ∑_{j=1}^n u_{i,j}(a_j)
= b + ∑_{j=1}^n ∑_{i=1}^n λ_i u_{i,j}(a_j)
= ∑_{j=1}^n ( b/n + ∑_{i=1}^n λ_i u_{i,j}(a_j) )
for some b and weights λ_1, …, λ_n ∈ R≥0. Thus, u∗ decomposes into the local utility functions
{u∗_j : A_j → R : a_j ↦ b/n + ∑_{i=1}^n λ_i u_{i,j}(a_j)}_{j=1,…,n}.
This is quite a convenient result. If indeed u_1, …, u_n each decompose into local utility
functions, then each player p_i can maximize u∗_i in her own part of the multiverse without
having to think about the precise actions of other players elsewhere in the multiverse.
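The practical upshot of Theorem 2 can be illustrated in a few lines of Python (a sketch with made-up local utilities and equal weights): each player only needs the local functions for her own slot and can optimize them without modeling anyone else’s action.

# Local utility functions u_{i,j}: the contribution of player j's action
# to player i's utility (all numbers are illustrative).
local_u = {
    (1, 1): {"x": 3, "y": 1}, (1, 2): {"x": 0, "y": 2},
    (2, 1): {"x": 1, "y": 2}, (2, 2): {"x": 4, "y": 1},
}
weights = {1: 1.0, 2: 1.0}   # the lambda_i from Theorem 1

def local_compromise(j, action):
    # u*_j(a_j) = sum_i lambda_i * u_{i,j}(a_j); the constant b/n is omitted.
    return sum(weights[i] * local_u[(i, j)][action] for i in (1, 2))

for j in (1, 2):   # each player optimizes locally, with no coordination
    best = max(["x", "y"], key=lambda a: local_compromise(j, a))
    print(f"player {j} plays {best}")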
2.8.5 How to assign the weights
Having argued that we should make our decisions based on a weighted sum of the utility functions of
our superrational collaborators, the question is how we should optimally assign the weights.
Theorem 1 does not tell us much about this. In fact, it even allows for the possibility of
assigning positive weight only to our own utility function. We will differentiate two ways of
assigning the weights: biased toward our own values, or impartial. We will consider the two
options in turn.
Biased compromise utility functions?
We start with the option of assigning the weights in a way that is somehow biased toward
our values. For example, we could assign higher weights to utility functions that are more
compatible with ours, and lower weights to those that are not. Of course, this tells us that
agents with other value systems will do the same, i.e. assign weights in a way that biases
the resulting utility function toward their own values.
I will argue against assigning weights in a biased way and in favor of impartial weights. This
point is crucial for the strength of the implications of MSR, because the more weight we
assign to other utility functions, the more our policies have to change in response to MSR.
In a way, the reasons against biased weights are merely an extension of the reasons for
cooperating in the first place. Let us say that in response to MSR we assign some weight
to the other agents’ utility functions but still largely maximize for our own values. Then
gains from further trade are left on the table. Because we still maximize for different utility
functions, we could trade again until all our compromise utility functions approach some
impartially weighted sum.
This line of reasoning is also supported by the standard ways in which gains from trade
arise. If everyone compromises with biased weights, then this produces some gains from
comparative advantages – in situations where I have a large comparative advantage to
maximize for someone else’s utility function, I will do so at the cost of not maximizing for
my own values. In return, others do the same. But if the comparative advantages are too
small, then we miss out on gains from trade. Consider an example with two superrational
collaborators. For simplicity, we will assume them to be in symmetrical situations at the
time of compromise, such that the only plausible neutral compromise would give the same
weight to each of the two utility functions. Both may, at some point, face the choice between
taking a utility of x for themselves and giving x + ε to the other, where x and ε are any
positive real numbers. In such a situation both have a comparative advantage to help the
other’s values. But if ε is very small, the comparative advantage is very small, too. So, if
they assign more weight to their own utility functions, there is some ε such that they choose
to maximize their own utility functions and thus miss out on the gains from trade.
While I am moderately confident that all compromises with biased weights are Pareto-
suboptimal, I do not, at this point, have a formal proof of this statement. That said,
the above example at least shows that such compromises yield collectively vNM-irrational
behavior. Furthermore, section 2.7 showed that, at least in symmetrical idealized scenarios,
prioritizing one’s own values does not achieve the best results.
I should note that impartial weights do not imply that I should be equally likely to find
myself maximizing for my own values as for those of any of my superrational collaborators. For example,
if you are very uncertain of the content of some value system, then it will not influence your
decisions as much, even if you assign a high weight to that value system.
Neutral compromise utility functions
We have argued that after an optimal compromise, each player should judge their action
roughly by the same impartial criteria. Hence, we now have to look for a way of assigning
the weights in a neutral way.
Harsanyi himself proposes – albeit in the context of social welfare rather than trade – to
simply give equal weight to all utility functions, which is equivalent to removing the weights
altogether (Harsanyi, 1979, section 2). Besides the argument of “equal treatment”, it can be
backed by an original position argument (Harsanyi, 1953; Harsanyi, 1955; Freeman, 2016).
From an original position, i.e. a perspective from which we do not yet know which position
in the multiverse we will take, how many resources will be at our disposal, etc., it seems
reasonable to give equal weight to all utility functions. Updatelessness gives this argument
some additional appeal, as it asks us to make our decisions from a similar perspective.
However, there are various problems with maximizing unweighted aggregated utility. One is
that it is based on interpersonal comparisons of utility. In Harsanyi’s words, it assumes that
“all individuals’ utility functions u_1, …, u_n are expressed in equal utility units (as judged by
individual j on the basis of interpersonal utility comparison)”.19 Such comparisons, however,
are highly controversial (Hammond, 1991; Binmore, 2007b). Recall that the cardinal utility
functions postulated by the von Neumann-Morgenstern utility theorem are only determined
up to positive affine transformation. This means that if a utility function u represents
an agent’s preferences, then so do 100·u and 0.01·u. None of the three is in some way
the more natural choice for representing the agent’s utility function. Whereas positive
affine transformations do not alter an agent’s behavior in choosing lotteries, they do change
the behavior implied by the aggregate of multiple such functions. In Superrational cake
cutting, we specify the utility functions (or, as they are sometimes called in fair cake-cutting,
subjective value functions) of each agent up to positive affine transformation by specifying
the trade rates between units of strawberry and vanilla cake. For example, the third player
has a 1:1 trade ratio, the second has a 4:1 trade ratio. To simplify notation, let s_i and v_i be the
amounts of strawberry and vanilla cake that p_i receives under some action profile. Then the second
player’s utility function could be u_2(s_2, v_2) = 4s_2 + v_2 and the third player’s utility function
could be written as u_3(s_3, v_3) = s_3 + v_3. If one wanted to maximize aggregate utility u_2 + u_3,
then u_2 would effectively receive far more weight than u_3. If u_2(s_2, v_2) = 400s_2 + 100v_2, this
bias toward u_2 would be even worse.
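The scale sensitivity is easy to exhibit in code. In the following Python sketch (with the numbers from the example), one unit of strawberry cake is allocated by maximizing the unweighted sum u_2 + u_3; rescaling u_2, which leaves player 2’s preferences entirely unchanged, flips the aggregate’s verdict:

def aggregate_choice(u2_scale):
    """Who gets one unit of strawberry cake under the unweighted sum
    u2 + u3, if player 2's (equivalent) utility function is rescaled?"""
    gain_p2 = u2_scale * 4   # player 2: 4 utils per unit of strawberry
    gain_p3 = 1              # player 3: 1 util per unit of strawberry
    return "player 2" if gain_p2 > gain_p3 else "player 3"

print(aggregate_choice(1.0))   # player 2
print(aggregate_choice(0.1))   # player 3 -- same preferences, new verdict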
Because utility functions come in different versions depending on their scale, we still need to
find a satisfactory way of normalizing the utility functions, i.e. to pick one out of a whole
class of equivalent utility functions. This task is actually equivalent to assigning a weight
to a given member of each class. Thus, removing the weights and relying on interpersonal
comparison of utility can be seen as merely passing the buck from assigning weights to
choosing the scale of the utility function.
One common approach to interpersonal comparison of utility is range normalization (Isbell,
1959; Hausman, 1995, section 3). That is, the utility functions are chosen in such a way
that their maximum is 1 and their minimum is 0 (using no additional weight).20,21 While
intuitive, range normalization appears to be inappropriate for compromises. For one, it lacks
a rigorous justification in this context – it is not immediately obvious that the underlying
naive view of neutrality is relevant for compromise.
The main problem with using range normalization for the compromise utility function is
that in some cases some of the agents have no reason to accept it, as it leaves them worse off
than engaging in no compromise at all.22 For example, consider the case of a compromise
19Note that none of the previous arguments were based on interpersonal comparisons of utility.
20There is a technical problem with utility functions that do not assume a highest/lowest value at all. If
they are nonetheless bounded, the infimum and supremum must be set to 0 and 1. If the utility functions
assume arbitrarily high values, range normalization is not possible. That is, for an unbounded utility function
u there is no bounded utility function u′ that is equivalent to u.
21For ethical comparisons, the lowest and highest values usually depend on an agent’s moral relevance,
or some measure of the intensity of preference (un)fulfillment she can experience. Alternatively, the utility
function may be weighted by such values at some other steps of the interpersonal comparison of utility.
22As I will argue below, there are some pathological cases in which every possible compromise utility
function leaves someone worse off. However, both of the following cases can, if they avoid these pathologies,
allow for a compromise that leaves everyone better off.
between a beggar and a millionaire with different sets of preferences. If the compromise gives
equal weight to the preferences of the two, then this leaves the millionaire worse off as she
receives little in return for dedicating half of her resources to fulfilling the beggar’s wishes.
Even if all agents have equal resources, a range normalization-based compromise can be
unappealing to some of them. Consider two equally powerful agents with two very different
value systems. The first cares about bringing about a state that is only very rarely attainable.
All his other preferences pale in comparison. The second agent divides states of the world
into two classes: good and bad states. Within each of those classes she is indifferent between
any pair of states. Also, the division is such that in most situations, the different actions
vary in how likely they are to bring about a good state. Under range-normalization, the first
agent’s utility function would usually be close to 0 and it would only rarely be possible to get
it to 1. The second agent’s utility function is 0 for some states and 1 for the others. If we
maximize the sum of these two utility functions, this will mean that we will usually optimize
much more for the second agent’s preferences. After all, in most cases doing so significantly
increases the probability of attaining 1 util. Maximizing for the first agent, on the other
hand, usually only generates a small fraction of a util. Only in the rare situations in which
we have an opportunity to attain the first agent’s favorite state do the agents’ preferences
have a similar amount of control over the decision made based on the compromise. The first
agent may therefore have no reason to accept this compromise.
Because range normalization can drastically favor some players, it may actually be not so
neutral after all. If you already knew that a range-normalized utility function would benefit
you a lot, you would be biased to accept it. If you accept the range-normalized utility
function on these grounds, however, it would not tell you much about the choice of agents
who already know that the range-normalized sum would be harmful to them. In this sense,
range-normalization is a biased compromise. Of course, if I benefit from range-normalization,
I could hope that those who are disadvantaged by it nevertheless compromise in some
other way that still benefits me. However, using such tricks to exclude others from our
compromise is evidence that we are excluded in other ways as well (cf. section 2.9.3). Thus,
without some other justification, range normalization does not appear especially promising.
Many other approaches to interpersonal comparisons (see, e.g., Sen, 2014, section 7.3)23
suffer from the same problems.
We thus need to set up more rigorous criteria for neutrality. It appears that the most direct –
though certainly not the only – approach is to require that the compromise is, in expectation,
equally good for everyone – that is, everyone gets the same gains from compromise. This
ensures that the compromise is equally attractive to everyone involved.
Assumption B. The expected gains from adopting u∗ are the same for each player p_i.
Unfortunately, Assumption B is underspecified. The most naive view is that the gains from
compromise for player p_i are
E[u_i(α_1, …, α_n) | u∗] − E[u_i(α_1, …, α_n) | no compromise]. (3)
23Another approach, which I have brought up in previous work (Oesterheld, 2016a, section 3.2), is to use
any utility function extraction procedure that is not explicitly biased in any way and hope that such “fair [or,
perhaps, equal] treatment in determining all individuals’ utility functions induces moral permissibility,” even
if the utility functions are not normalized afterward. This is especially promising if you do not yet know
which agents will be favored by the procedure.
However, there are some aspects of Eq. (3) that should, perhaps, be revised. For one, it
is unclear whether having a full compromise versus having no compromise is the appropriate
counterfactual. One alternative is to choose the counterfactuals provided by one’s
decision theory. That is, one could compare E[u_i(α_1, …, α_n) | I cooperate with u∗] with
E[u_i(α_1, …, α_n) | I defect], where EDT’s conditional expectation may be replaced by an
alternative notion of the counterfactual (see Gibbard and Harper, 1978; Hintze, 2014, section
3). Alas, these are difficult to calculate. Perhaps one could also measure the gains from each
p_i’s individual participation, so that the set of cooperators in the minuend and subtrahend
of Eq. (3) would be the same, except that the latter does not contain p_i. Moreover, the
subtrahend’s compromise utility function would not contain λ_i u_i as a summand. This resembles
the notion of voting power from social choice theory (Cotton-Barratt, 2013; Felsenthal and
Machover, 1998).
A second area of revision may be that Eq. (3) does not account for some value systems
potentially being more common among the p_i, or holding more power, than others. We
would probably want the gains from trade to be proportional to the resources invested by a
particular value system. Otherwise, an individual agent with a very common value system
has little or no incentive to join the compromise. For example, if an agent already knows
that at least one other agent with the same utility function is part of the compromise, then
this could mean that joining the compromise produces no additional gains from trade for
that agent. One way to weight different utility functions based on their power would be
to divide Eq. (3) by some measure of the resources invested in u_i. The Shapley value is
a well-known example of a theoretically grounded measure of power and may serve as an
inspiration.
Also note that Assumption B contains an interpersonal comparison of utility. However, the
potential harm of getting this one “wrong” is smaller than in the case of using the unweighted
sum as a compromise. Depending on how you scale the different utility functions relative
to each other, applying Assumption B may allocate the gains from trade differently, but it
nonetheless ensures that everyone receives some gains from trade.
Further research is needed to identify the appropriate variant of Eq. (3), or perhaps an
alternative to it, and subsequently the corresponding weight assignment. An example of a
promising line of research in this direction is the work on variance voting, i.e. normalizing
the variances, by Cotton-Barratt (2013) and MacAskill (2014, chapter 3). In particular,
Cotton-Barratt shows that under certain assumptions, variance-normalized compromise is
the only compromise that gives each player the same voting power.
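Mechanically, variance normalization rescales each utility function to unit variance under some reference distribution over outcomes before summing. A minimal Python sketch, with made-up outcome utilities and a uniform reference distribution:

import statistics

def variance_normalize(outcome_utils):
    """Rescale outcome utilities to variance 1. Shifts do not matter for
    vNM utility functions, so dividing by the standard deviation suffices."""
    sd = statistics.pstdev(outcome_utils)
    return [u / sd for u in outcome_utils]

u1 = [0, 1, 100]   # a "spiky" utility function (illustrative)
u2 = [0, 50, 60]

n1, n2 = variance_normalize(u1), variance_normalize(u2)
compromise = [a + b for a, b in zip(n1, n2)]
print([round(c, 2) for c in compromise])

After the normalization, neither function can dominate the sum merely by being written on a larger scale.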
In addition to specific solutions, it would be useful to explore the necessary conditions for the
existence of a weight assignment that satisfies Assumption B while producing positive gains
from trade. For example, if you already know how much cake each player owns in the above
Superrational cake cutting, there is no assignment of weights that reliably produces gains for
everyone. No matter how the weights are assigned, there will always be one weighted utility
function that strawberry cake is best invested into, one that vanilla cake is best invested into,
and (at least) one that receives no cake at all. That is, unless two weighted utility functions
generate the same amount of utility per unit of cake, in which case the compromise utility
function is indifferent about who receives the cake. Besides the existence of gains from trade
(see section 3.2.5), I suspect that the central assumption under which a weight assignment
satisfying Assumption B exists is the continuity of the expectation E[u_i(α_1, …, α_n) | u∗]
relative to the weights in u∗.
2.8.6 Updateless weights
Seeing as the gains from compromise that Assumption B talks about depend on one’s current
state of knowledge, the weights to be assigned may do so, too. Consider the following
example:
Remote-controlled cake maker. Two agents are about to share cake again.
Agent 1 prefers strawberry to vanilla cake at a ratio of 2:1. Agent 2 has the
inverse preference ratio. On day one, neither of them owns any cake; however,
they know that on day two, each will receive two control buttons for a distant
machine, capable of producing and shipping only one type of cake. While it is, at
this point, unknown which flavor of cake it will produce, they will know the type
of cake maker once they receive the buttons. They will have the same amount of
control over where the cake from the cake machine is sent: each agent can, by
pressing one of the buttons, send some amount of cake to himself. By pressing
the other button, they can send a 20% larger amount of cake to the other agent.
Unfortunately, they can only press one button. The two agents’ thought processes
correlate perfectly when it comes to decisions regarding superrationality and they
may already settle on a superrational compromise utility function on day one.
On day two, they receive the control buttons and learn that it is a vanilla cake
machine. They still cannot communicate, but use the same thought processes.
Which button should each of the two press?24
Let us first consider the situation on day one. Because their situations are fully symmetric,
it seems reasonable to set
$u_1(s_1, v_1) = 2s_1 + v_1$,
$u_2(s_2, v_2) = s_2 + 2v_2$,
$u^*(s_1, v_1, s_2, v_2) = u_1(s_1, v_1) + u_2(s_2, v_2)$,
where $s_1, v_1, s_2, v_2$ are, again, the amounts of cake received by each player (which can be
calculated from a set of actions). By any reasonable definition of “gains from compromise”,
this satisfies Assumption B. Accepting this compromise effectively means that agent 1 will
receive all of the cake if the machine makes strawberry cakes, and agent 2 will receive all of
the cake if the machine makes vanilla cakes.
We now skip ahead to day two, when the agents are told that the machine makes vanilla cakes.
In this new situation, $u^*(s_1, v_1, s_2, v_2) = u_1(s_1, v_1) + u_2(s_2, v_2)$ is harder to justify, as it gives all the gains to agent 2 – agent 1 even loses utility relative to not compromising. Perhaps the more natural utility function to choose on day two is $u^*(s_1, v_1, s_2, v_2) = 2u_1(s_1, v_1) + u_2(s_2, v_2)$,
which would be the compromise utility function under, e.g., variance normalization.
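The day-two reasoning can be made concrete with a small calculation. The sketch below normalizes the self-button to ship one unit of cake (so the other-button ships 1.2 units); these numbers are only illustrative.

```python
# Utilities per unit of cake: agent 1 values (strawberry, vanilla) at (2, 1),
# agent 2 at (1, 2). Under the compromise, each agent presses the button
# that contributes more to u* = u1 + u2.
VALUES = {1: {"strawberry": 2, "vanilla": 1},
          2: {"strawberry": 1, "vanilla": 2}}

def best_button(agent, flavor):
    other = 3 - agent
    gain_self = 1.0 * VALUES[agent][flavor]   # self-button: 1 unit to oneself
    gain_other = 1.2 * VALUES[other][flavor]  # other-button: 1.2 units to them
    return "self" if gain_self >= gain_other else "other"

for flavor in ("strawberry", "vanilla"):
    print(flavor, {agent: best_button(agent, flavor) for agent in (1, 2)})
# vanilla machine: agent 1 presses "other", agent 2 presses "self", so agent 2
# receives 2.2 units and agent 1 nothing. Ex ante, though, each agent expects
# 0.5 * (2.2 * 2) = 2.2 utility from the compromise, versus 1.5 from both
# agents always pressing the self-button.
```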
From the perspective of day one, it is suboptimal if the two players change their minds on
day two. Thus each player prefers precommitting to the initial compromise even though that
24 In If you don’t know the name of the game, just tell me what I mean to you, Stuart Armstrong uses a similar game to make a somewhat similar point.
implies a good chance of a net loss. Once again, we find that lack of knowledge is evidential
power (see section 2.4) and that we should precommit to decision-theoretical updatelessness.
In the context of MSR, I doubt that the weights of the compromise utility function would
shift considerably once a few basic factors have been taken into account. For one, significantly
updating one’s prior about the expected gains from a compromise requires broad and reliable
knowledge about the entire multiverse. Specifically, it requires knowledge about what
decisions superrational collaborators will face in other parts of the multiverse, and how
these decisions will affect different value systems. Even if you learn that some assignment of
weights decreases your utility in this universe, the situation may differ in other universes.
Many opportunities to have an impact may depend on as yet unidentified crucial consider-
ations or unresolved issues in physics. Examples of issues of this kind which have already
been identified include lab universes, artificial intelligence, artificial intelligence arms races,
self-improvement races, suffering in fundamental physics, whole brain emulation scenarios
(see section 3.4.4) and global catastrophic risks. That is to say: any kind of multiverse is so
complicated that we should not expect to know much about it.
If we think some piece of information would significantly shift our weights in one direction or another, then this piece of information is potentially harmful. To the extent that it is
possible, it would be important to convince superrationalists to become updateless before
they encounter such information.
2.8.7 Limitations
The present analysis is limited in several ways. In general, we made many assumptions under
the meta-assumption that the results generalize. For example, our arguments were often
based on perfect correlation between the agents. Many aspects of our analysis were also
semi-formal or informal. For instance, we did not formally justify the claim that settling on
the same compromise utility function creates the largest gains from compromise. Further
research is thus needed, including research into the largely unexplored area of superrational
game theory.
2.8.8 Heuristics
It would certainly be nice to find a formal solution to the compromise problem (as described
in section 2.8.2) at some point. However, such a solution is neither necessary nor sufficient
for cooperating superrationally in practice. It is not necessary because cooperation based on
intuitions about compromise may already get us quite far. Even without ever having taken a course in game theory, most people have intuitions about fairness that seem to suffice in
most negotiations. We may expect that similar intuitions also suffice for reaping many of the
benefits of superrational cooperation. It is not sufficient because we will not possess formal descriptions of our collaborators’ utility functions in the foreseeable future anyway, given that we cannot even formally describe our own goals.25 With the description of their values being vague and qualitative, the compromise must, in the end, also be.
25 For instance, Peter Levin writes:
The reasons that people give for their judgments are post-hoc rationalizations (Haidt, 2012, pp. 27-51; Swidler, 2013, pp. 147-8; Thiele, 2006). “Individuals are often unable to access the causes of their moral judgments” (Graham et al., 2011, p. 368).
Also see Muehlhauser and Helm (2012, ch. 5).
Hence, we should also consider informal heuristic rules for making decisions. Below are some
proposals. They have significant overlap and many of them also apply to causal cooperation;
some are more moderate and intended to apply to people who do not fully accept MSR.
Sorted in increasing order of the strength of their implications:
•If some resource is mildly useful to you but very valuable to other (prominent) value
systems, it is prudent to ensure that the resource is used for those other value sys-
tems. Similarly, avoid hurting other (prominent) value systems if it only gives you a
comparatively small gain.
•Utility functions that contribute a lot, e.g. because opportunities to increase them
are rare, should perhaps receive disproportionate focus whenever such an opportunity
arises. Otherwise, agents with such utility functions would have little incentive to
compromise.
•When the values of superrational cooperators diverge on some issue with a roughly
equal number of supporters (or resources) on each side, these sides cancel each other
out after compromise. That is, no superrational cooperator should act on a view on
this issue. Toby Ord writes: “It is so inefficient that there are pro- and anti-gun control
charities and pro- and anti-abortion charities. Charities on either side of the divide
should be able to agree to ‘cancel’ off some of their funds and give it to a mutually
agreed good cause (like developing world aid). This would do just as much for (or
against) gun control as spending it on their zero-sum campaigning, as well as doing
additional good for others.”
•Try to benefit many value systems at once, and deprioritize issues that are very specific
to you or other agents (see section 4.1.1).
•Metaphorically speaking, try to increase the size of the compromise pie, rather than to
increase the size of your own piece.
•In any situation, maximize for the (prominent) value systems that have the highest
stakes in your decision.
•For any policy decision, ask yourself whether superrationalists with other value systems
would plausibly arrive at the same decision (to ensure that you are assigning weights
impartially).
2.8.9 Notes on superrational coordination
Superrational compromise is easiest if it requires no coordination (see section 2.8.3). It can,
however, also solve coordination problems – that is, problems in which the utility functions
of the players do not decompose into local utility functions, and the utility of a strategy to
some player thus depends in part on the moves of the other players.26 For example, consider
the following variation of the Platonia dilemma, adapted from Hofstadter (1983):
26Again, see the technical note Oesterheld (2017) on how this compares to more standard game theoretical
definitions of coordination.
Platonia five. One fine day, out of the blue, you get a letter from S. N. Platonia,
a renowned Oklahoma oil trillionaire. The letter states that 20 leading rational
thinkers have been selected to participate in a little game, and you are among
the lucky players. “Each of you has a chance at winning one billion dollars, put
up by the Platonia Institute for the Study of Human Irrationality”, it explains.
“Here’s how: if you wish, you may send a telegram with just your name on it
to the Platonia Institute. If exactly 5 people reply within 48 hours, they each
receive one billion dollars, otherwise no prizes are awarded to anyone. You are
not allowed to communicate with each other or share the prize afterward.” What
do you do?
As before, we can also describe a variation of the problem with similarity instead of common
rationality. And as usual, causal decision theory recommends replying, a strategy that, if
implemented by everyone (or more than 5 people), forgoes a golden opportunity.
However, this scenario diverges from those we have previously discussed in that our impact
on the other participants’ utility depends on their actions. Nevertheless, we can use superra-
tionality to our (and our superrational collaborators’) advantage in Platonia five, although
we have to apply it in a different way.
The problem is that simply maximizing the compromise utility function does not really help
us here. Given that all players are essentially in the same position, it seems reasonable to
let the compromise utility function be the sum of the money gained by each player. That
means it is either 5 billion if exactly 5 people send in the letter, or 0 if another number of
people send in a letter. Maximizing the utility function only tells us that we should ensure that exactly 5 people send a letter – something we already knew beforehand. The compromise utility function does not tell us who should send in the letter. Because it does
not decompose into local utility functions, it does not tell each player what to do. This
illustrates how, even with perfect correlation, the compromise utility function may not suffice
for solving coordination problems.
Hence, we go back to the more direct approach. We assume that, given the correlation
between agents (or the ability to determine the rational choice), we should choose the strategy
that would be best for us if it were adopted by everyone. Because the situation is entirely
symmetrical, everyone is likely to go through equivalent lines of reasoning. Obviously, neither
sending in the letter nor not sending in the letter are good strategies. We thus have to
adopt a mixed strategy, i.e. one of choosing to send in the letter with some probability
p, where players’ samples from this distribution are independent. At the far ends, both
$p = 1$ and $p = 0$ guarantee that we lose. However, if $p$ is chosen from somewhere in between and everyone adopts the same mixed strategy, there is a non-zero probability that you, the individual participant, will win the billion. Thus, we now have to choose $p$ so as to maximize
our probabilities of success. (Alternatively, we can maximize the probability that the 5
billion are awarded at all. As we will see, the result is the same.)
If you and everyone else each sends in their letter with a probability of p, the probability of
your winning is $p$ times the probability that exactly four of the other 19 players send in their letter. The overall probability of you winning a billion is thus
$$p \cdot \binom{19}{4} \cdot p^4 \cdot (1-p)^{15}, \qquad (4)$$
where $\binom{19}{4}$ is a binomial coefficient. We now choose $p$ so as to maximize this term. Because $\binom{19}{4}$ is a constant, we can maximize
$$p^5 \cdot (1-p)^{15}. \qquad (5)$$
Incidentally, the $p$ that maximizes this term also maximizes
$$\binom{20}{5} \cdot p^5 \cdot (1-p)^{15}, \qquad (6)$$
the probability that anyone wins at all. As it happens, Eq. (6) is maximal for $p = \frac{1}{4}$,27 which gives a probability of about 20% that the money is won, and thus a probability of about $20\% \cdot p = 5\%$ that we win the money.
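These numbers are easy to check numerically. A quick sketch (a brute-force grid search; the helper name is mine):

```python
from math import comb

def p_any_win(p, n=20, k=5):
    """Eq. (6): probability that exactly k of n players reply, each
    replying independently with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

best_p = max((i / 1000 for i in range(1001)), key=p_any_win)
print(best_p)           # 0.25, i.e. k/n, matching footnote 27
print(p_any_win(0.25))  # ~0.202: the money is won about 20% of the time
print(0.25 * comb(19, 4) * 0.25**4 * 0.75**15)  # Eq. (4): ~0.051, your own chance
```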
Although a 20% probability of the money being won is better than 0%, it is still not quite satisfactory relative to the 100% that could be achieved with perfect coordination.28 However, it seems as though there is no better way. We will revisit this question below, in the discussion of Schelling points.
While such coordination is qualitatively different from the other examples, it should still
be seen as a form of cooperation (as opposed to some other applications of acausal decision
theory like Newcomb’s problem, or some coordination problems where all players have the
same goal), because the game is positive-sum and (superrational) coordination improves
everyone’s outcome relative to CDT’s recommendation.
In Platonia five, the uncoordinated response involves everyone trying to get the money by
sending in a letter. Many coordination problems suffer from the opposite problem, namely
diffusion of responsibility. Consider the following example, versions of which were also
discussed by, e.g., Leslie (1991, ch. 5) and Drescher (2006a, section 7.3.2):
Superrational voting. You live in a country of superrationalists and today
is election day. A strict secret ballot rule dictates that citizens are not allowed
to tell each other which party they are going to vote for or whether they plan
to vote at all. Unfortunately, going to vote costs a lot of time and you don’t
expect the potential impact of your vote to justify the opportunity costs. If you
choose not to vote, then so will most of your fellow superrational citizens. That
would be unfortunate. For one, the majority opinion of the people should be
represented, if only because it is more likely to be your opinion. Besides, there is
an uncorrelated minority that should not win, as all your superrational friends
will attest! By what mechanism should you decide whether to vote or not?
Again, the compromise utility function is not very informative. And again, a probabilistic (or
mixed) strategy is optimal if the correlations are sufficiently strong overall and independent
of the party one plans to vote for. To find out what exact probability should be chosen, one
would need to come up with a term for the expected value under that probability, consisting
27 You may have noticed that $p = 1/4 = 5/20$, i.e. the number of players who would need to win divided by
the number of players. This result generalizes.
28 Specifically, if the 20 participants could let some uniform random process determine the set of the
5 people who are allowed to send a letter, everyone could commit to going with that proposal. Consider the
concept of correlated equilibria.
of the cost of voting and the probability that some minority view wins, as well as the expected
costs of the latter.
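To give a flavor of what such a term could look like, here is a toy model in Python. All numbers (population sizes, the minority's turnout probability, the value of winning, the cost per vote) are hypothetical placeholders; ties are counted as losses for simplicity.

```python
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expected_value(p, n_major=100, n_minor=60, q=0.5, value=1e4, cost=5.0):
    """Collective expected value when each of n_major correlated citizens
    votes with probability p, against an uncorrelated minority of n_minor
    citizens who each vote with fixed probability q."""
    # P(minority turnout < a), precomputed for every majority turnout a
    minor_cdf = [0.0]
    for b in range(n_major):
        minor_cdf.append(minor_cdf[-1] + binom_pmf(n_minor, b, q))
    p_win = sum(binom_pmf(n_major, a, p) * minor_cdf[a]
                for a in range(n_major + 1))
    return value * p_win - cost * n_major * p

best_p = max((i / 100 for i in range(101)), key=expected_value)
print(best_p, expected_value(best_p))
# The optimum is interior: always voting wastes everyone's time, while
# never voting hands the election to the minority.
```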
Consider one last example, inspired by Pascal’s button:
Multiverse-reprogramming. Scientists inform you that the basic laws of the
multiverse may be vastly more complicated than they originally thought. Specifi-
cally, they say there is a small probability that there exists some complicated
and hard-to-find way of “reprogramming” parts of the multiverse. However, such
reprogramming depletes some multiverse-wide resource pool. This means that the
amount of reprogramming is independent of the number of agents who discover
how such reprogramming can be done, provided that at least one agent discovers
it and exploits it to full capacity.
It would be unfortunate if no one in the multiverse seizes this opportunity or if everyone
invests all their resources into finding it. As we have seen in the above thought experiments,
we can use superrationality to solve this problem by determining some mixed strategy.
Everyone lets some random process decide whether to invest significant resources into
investigating the reprogramming mechanism, and then either pursues it fully or not at all.
Once more, the compromise utility function does not tell us who should try to reprogram the multiverse; it does, however, tell us how each civilization should use the reprogramming
mechanism. Note that this is another problem in which updateless weights (see section 2.8.6)
are important, because once some civilization finds out how to reprogram the multiverse, it
may be tempted to stop compromising.
Schelling points
We will now consider how we can get from a 20% probability of success in Platonia five to 100% under certain conditions. Consider the following variation of the dilemma:
Platonia five with coordination help. S. N. Platonia is organizing another
one of her eponymous dilemmata. However, this time she has compiled a
numbered list of the participants beforehand. As a postscriptum to the standard
content of the dilemma letter, Platonia writes: “Your number on the list of
participants from 1 to 20 can be found on the back of this letter.” Before looking
at your number, is there any way the superrational participants can ensure that
someone receives the money?
In the original Platonia dilemma, the participants were all in the exact same situation. But
now, different people have different numbers on the back of their letters. This can make
a difference if, before looking at the back of the letter, the 20 participants agree on which
numbers should respond to the letter. For example, all participants could precommit to
respond only if their number is between 1 and 5, and only then turn the letter over and act
according to their precommitment. In this way, they ensure that exactly five people send
in a letter, thus maximizing the expected gains. (Also note that due to the symmetry of the situation, this gives each player the same expected gains, assuming that none of them has prior suspicions about their position on the list.) In a way, the numbers on the back
of the letter function as a coordination help that allows for coordination where it would
otherwise be impossible. An even better coordination help would be one where each player
receives a recommendation on whether to respond, along with a guarantee that only 5 of the
20 players will be prompted to respond. Alas, such direct coordination helps will usually be
unavailable.29
So how can people agree, without communicating, on which numbers should send in a
letter? If the participants are guaranteed to be exact copies of one another, they can pick an
arbitrary set of 5 numbers between 1 to 20 before checking the back of the letter, confident
that the other 19 will choose the exact same set. In relevant applications, correlation will not
be that strong. But since all agents have an incentive to settle on the same set of numbers,
each could individually try to identify a set that is obvious or that stands out in some way.
For example, the set of 1, 2, 3, 4 and 5 appears to be a candidate that others may choose as
well (as opposed to, say, 3, 7, 8, 11, 12). Correlations play a role – if I choose numbers 1–5,
then it is somewhat more likely that others do, too – but not a decisive one. Even if there
are no correlations, abiding by such Schelling points (first introduced by Schelling (1960,
chapter 3); see also Friedman (1994, chapter I A)) is beneficial to the individual player if she
believes that (many of) the other players abide by that same Schelling point.
In practice, many Schelling points are driven by minor yet obvious expected value arguments.
For example, when someone mentions that they would like the window to be opened, the
person sitting closest to it is often seen as the natural one to open it, because she does
not have to walk as far as the others. This consideration is negligible, but it helps with
coordination.
Many Schelling points are also mere social conventions. For example, consider the issue with
right- and left-hand traffic. Presumably, most people have no strong preferences between
the two as long as drivers abide by the same standards when interacting with each other.
Whenever two drivers drive towards each other on a road, they face the coordination game
with a payoff matrix resembling that in Table 1.
                              player 2
                    right-hand       left-hand
player 1  right-hand     0              −10
          left-hand     −10              0

Table 1: Payoff matrix for two people driving in opposite directions.
Countries have laws that tell citizens whether to drive on the right-hand or left-hand side of
the road, solving this particular coordination problem. For multiverse-wide superrational
coordination, such convention-based Schelling points are alas not available.
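To see why a convention is needed at all, note that the game in Table 1 has two equally good pure-strategy Nash equilibria, and nothing in the payoffs selects between them. A minimal check in Python (payoffs taken from Table 1, which is symmetric, so one payoff table serves both players):

```python
payoff = {("right", "right"): 0, ("right", "left"): -10,
          ("left", "right"): -10, ("left", "left"): 0}

def is_pure_nash(a1, a2):
    # Neither driver can gain by unilaterally switching sides.
    no_dev_1 = all(payoff[(a1, a2)] >= payoff[(d, a2)] for d in ("right", "left"))
    no_dev_2 = all(payoff[(a1, a2)] >= payoff[(a1, d)] for d in ("right", "left"))
    return no_dev_1 and no_dev_2

print([s for s in payoff if is_pure_nash(*s)])
# [('right', 'right'), ('left', 'left')] -- two equilibria; only an external
# convention (a Schelling point) picks one.
```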
The lack of a Schelling point could mean that it is impossible to reliably achieve optimal
outcomes. Imagine meeting a member of an unknown society in a narrow alley. Should you
pass them on their right-hand or left-hand side? Assuming there is no relevant established
pedestrian traffic convention for that alley, there appears to be no way of deciding between
the two.
29In the game-theoretical concept of correlated equilibria, the agents receive a similar form of coordination
help. See Leyton-Brown and Shoham (2008, chapter 3.5) and Osborne and Rubinstein (1994, chapter 3.3) for
introductions.
Coordination is relevant for everyone
I suspect that most people who care about the multiverse have utility functions that “mostly”
decompose additively, thus requiring little coordination. If you are in this majority, you
may think the topic of superrational coordination is irrelevant for you. However, this view
is mistaken, since the compromise utility function requires coordination if at least some
superrationalists in the multiverse have utility functions that do not decompose into local ones.
Of course, you could just ignore these value systems when constructing your compromise
utility function, but this makes it more likely that other agents exclude you in other ways as
well, as we will see in the following section.
2.9 No reciprocity needed: whom to treat beneficially
In this section, I will argue that the application of superrational cooperation requires no
reciprocity. That is, none of the agents who benefit from our cooperation have to benefit
us. Recall the basic argument for superrationality as based on non-causal decision theories:
given that we are friendly, it is more probable that other agents facing similar choices will
be friendly toward us and our values. Crucially, this argument does not require that the
agents whose choices we acausally affect are the same as those who benefit from our own
friendliness (Drescher, 2006a, section 7.2.1).
2.9.1 Schemes of causal cooperation
The classic cooperation scheme from causal cooperation is one of mutuality – “I scratch your
back, you scratch mine”, so to speak. This scheme is represented by the graph in Figure 6.
Figure 6: A graph representing a situation of mutual cooperation. An arrow from A to B
indicates that A can benefit B.
In mutual relations like this, it is possible to apply causal cooperation, although only if
the interaction is repeated – i.e. if my choice causally influences the other agent’s choice30,
and then the other agent’s choice can causally influence my choice, etc.31 For introductions
to causal cooperation, see, e.g. Axelrod (2006); Trivers (1971); Fehr and Gächter (1999);
Dawkins (1976, chapter 12); Taylor (1987), and Buss (2015, chapter 9).
Superrational cooperation also works in the above scheme, although repetition is not required.
The prisoner’s dilemma (with replicas or twins) is one example of this sort of problem.
30Note that in causal cooperation, cooperative or uncooperative behavior may also causally affect bystanders
and thus increase the probability that I can establish cooperation with them in the future.
31Throughout this treatment, the graphs do not represent time and repetition. This could be done by
taking the given static graphs and “unfolding through time”, similar to how it is done when applying
backpropagation to recurrent neural networks. The resulting graph may then resemble a UML interaction
diagram.
2.9.2 Circular cooperative structures and indirect causal reciprocity
In principle, it is possible to establish causal cooperation even in cases where the two agents
cannot directly benefit each other, provided there is a repeated causal link from my own
decision to the decision of the agent who can benefit or hurt me, such that I can in some way
reward cooperation and punish defection. As an example, consider the following variation of
Hofstadter’s donation game:
Donation circle. Omega has a list of 6 participants. The list is circular,
meaning that every participant has a successor. Omega sends each participant a
letter, asking them to respond with a single letter ‘C’ (for cooperate) or ‘D’ (for
defect) without communicating with each other. It explains that by sending in
‘C’, participants can increase their successor’s payoff by $5. By sending in ‘D’,
they can increase their own payoff by $2. As usual, the participants are told
that they are all rational or that they use similar decision mechanisms. Every
participant only cares about the balance of her own bank account, and not about
Omega’s or that of the other participants. Upon receiving the letter, should
you cooperate or defect?
Iterated donation circle. Like the circular donation game, only that the game
is played many times (the exact number of times being unknown to the players).
In every round, each participant is informed of their predecessor’s past choices
before deciding whether to send in ‘C’ or ‘D’.
Circular structures such as these can be represented by graphs such as the one in Figure 7.
Figure 7: A circular cooperation graph representing cooperation schemes of the sort used in
the Donation circle. Again, an arrow from A to B indicates that A can benefit B.
Because each of the agents can causally (through the other agents) affect their predecessor,
the iterated version of this problem could still, in principle, motivate causal cooperation. For
example, one Nash equilibrium consists in everyone playing tit for tat. This Nash equilibrium
is even stable, in the sense that one player diverging from tit for tat with a very small
probability still leaves everyone else best off if they continue to use tit for tat.
However, the same Nash equilibrium is also hard to achieve and unstable in a different sense,
as it requires all 6 participants to use the same kind of strategy. Your response to your
predecessor’s cooperation is mediated by multiple other agents. If only one of them does not
propagate your response correctly, the causal path from you to your predecessor is disrupted,
leaving neither of you with a causal motivation to cooperate.
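This fragility is easy to simulate. In the toy run below (my own illustration), six players in a ring each play tit for tat against their predecessor, and a single transmission error in round 3 disrupts cooperation permanently:

```python
# Each player cooperates iff their predecessor cooperated last round.
N, ROUNDS = 6, 10
actions = ["C"] * N  # everyone starts out cooperating
for r in range(1, ROUNDS + 1):
    nxt = [actions[(i - 1) % N] for i in range(N)]  # copy predecessor's move
    if r == 3:
        nxt[0] = "D"  # player 0 fails to propagate cooperation once
    actions = nxt
    print(r, "".join(actions))
# The single 'D' circulates around the ring indefinitely: each player punishes
# a defection that originated elsewhere, and no one can causally repair it.
```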
For superrationality-based considerations, on the other hand, neither repetition nor the length
of the causal path from one participant’s cooperation to her predecessor are relevant. Instead,
superrational cooperation only depends on the correlations between single pairs of agents.
Hence, while the significance of causal cooperation in the Donation circle diminishes with
every additional participant, the benefits from superrational cooperation remain constant
regardless of how many players are involved.
2.9.3 Hierarchies and acyclic graphs
In an extreme case, there would be no causal path whatsoever from one participant’s
cooperation to that of his predecessor, making causal cooperation lose its entire appeal to
the rational agent. Superrational cooperation, on the other hand, may still be applicable (cf.
Drescher (2006a, pp. 287-292); see section 6.1.1).
Consider the following variant of the donation game:
Donation ladder. Once more, Omega has a long list of participants, albeit
a regular linear one this time. Omega sends all of them a letter, asking them
to respond with a single letter ‘C’ (for cooperate) or ‘D’ (for defect) without
communicating with each other. It explains that by sending in ‘C’, participants
can increase their successors’ payoffs by $5. The first person on the list cannot
benefit from the cooperative behavior of others, and the last participant’s choice
has no effect on the others. Omega writes that each player can increase their
own payoff by $2 if they defect. Participants do not know their position on
the list, and are once again told that they all use similar decision algorithms.
Every participant only cares about the balance of their own bank account, and
not about Omega’s or that of the other participants. Upon receiving the letter,
should you cooperate or defect?
Figure 8 illustrates the donation ladder.
Figure 8: A linear cooperation graph (graph theoretically speaking, a 1-ary tree) representing
schemes of cooperation like that in the donation ladder. An arrow from A to B indicates
that A can benefit B.
Again, the nodes represent participants and an arrow from A to B indicates that A can bring
causal benefits to B.
In such a cooperation scheme, causal cooperation cannot be established even if the problem is
iterated, whereas the superrationality mechanism is just as reliable as in the other examples.
Because the list is long, I probably have a predecessor; if I cooperate, then my predecessor –
who is in a position similar to mine – will probably make the same choice. Cooperation thus
informs me (or logically determines) that I am likely to gain $5, whereas defection only gives
me $2.
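To make this concrete with purely illustrative numbers: suppose my predecessor cooperates with probability 0.8 conditional on my cooperating, and with probability 0.2 conditional on my defecting. Then
$$EV(C) = 0.8 \cdot \$5 = \$4 > \$3 = 0.2 \cdot \$5 + \$2 = EV(D),$$
so cooperation carries the better news despite the guaranteed $2 from defecting.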
We can see this linear hierarchy of agents in practice among the different versions of an agent
at various points in time. For example, I can causally affect the welfare of future versions of
myself, but if I only (or primarily) care about my present experiences, they can never reward
me in return. However, I could try to benefit future versions of myself to make it more likely
that past versions of me have behaved nicely toward myself. More discussion with references
to the literature is given by Drescher (2006a, section 7.3.4).
Linear cooperation hierarchies come with a twist, however. Consider the following variant of
the Donation ladder:
Donation ladder with known position. Identical to the Donation ladder, only that participants know their position in the list when they
make their decision.
A participant in the middle of the list may wonder how his situation differs from the regular
donation ladder – after all, his predecessor on the list is in almost the same situation as
he is. Assuming the conditions for superrationality are satisfied, their decisions should still
correlate. Hence, if he cooperates, should we assume that his predecessor is likely to do the
same?
Not necessarily. The problem lies in the beginning of the list. The first person – let us call her No. 1 – has no predecessor, and thus no one whose correlated decision could benefit her, in effect giving her no reason to cooperate. Given this, No. 1 should
defect (that is, unless she is already updateless; more on this below).
Unfortunately, this puts No. 2 in a similar position. Realizing that No. 1 will defect, there
is nobody left to benefit him. No. 3 will, in turn, reason that No. 2 expects No. 1 to
defect, which means that No. 2 will also defect, leading No. 3 to defect as well... and so on,
propagating down the entire list. You may notice that this propagating defection effect is
analogous to the reason why standard game theory recommends defecting in the iterated prisoner’s dilemma when the number of rounds is known.32
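The unraveling can be written down as a one-line induction. The sketch below (my own formalization of the argument) encodes the rule that a player's cooperation only pays if her predecessor's correlated choice could still be cooperative:

```python
def decision(position):
    """Donation ladder with known positions: player 1 has no predecessor
    and defects; every later player, knowing this, expects no benefit from
    their own (certain-to-defect) predecessor and defects too."""
    if position == 1:
        return "D"  # no predecessor whose decision could benefit her
    return "D" if decision(position - 1) == "D" else "C"

print([decision(i) for i in range(1, 8)])  # ['D', 'D', 'D', 'D', 'D', 'D', 'D']
```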
Once more, we find that lack of knowledge is evidential power. For one, if the participants
did not know their positions, they would all cooperate – and thus be more successful. If
everyone could precommit to cooperation before learning about their position, they would
do so. Again, cooperation can be maintained if all the agents are updateless in the first place
(see section 2.4, cf. Drescher (2006a), chapter 7.2.2). If all of this is not the case, nothing can change the fact that at least No. 1 wins by defecting once she knows her position on the list.
Secondly, thinking about the other agents’ decisions can be dangerous. No. 42 defects solely
because he thinks about what the preceding 41 participants decide. Knowing what the
other agents think is thus harmful for some not-yet-updateless decision theories. Hence,
similar to how it is wise to remain ignorant about your position in the list, many decision
theories would recommend not thinking about what the other agents will do. If the players
are human, then No. 1 may not be able to refrain from realizing that she wins by defecting.
Perhaps No. 2 cannot refrain from realizing that No. 1’s situation is different and his decision
32Even in the iterated prisoner’s dilemma, this answer – supported by backward induction – is often seen
as unsatisfactory. Other examples of paradoxes caused by backward induction are the chain store paradox,
the traveler’s dilemma, the unexpected hanging paradox, the Bottle Imp paradox, the centipede game, the
interesting number paradox, the guess 2/3 of the average game. A good introduction is given by Basu (2007).
For further references, see Basu (1994).
therefore independent of hers. However, participants with two-figure positions may be able
to refrain and go with the reasoning originally presented: whatever I choose, my predecessor
will probably choose the same, as his situation is similar to mine. If I just go ahead without
thinking about the “chain of defection” initiated by No. 1, then people with similar numbers
are probably going to do the same.
The linear structure can be generalized to non-linear hierarchical cooperation schemes.
Consider the following variant of the donation game:
Donation tree. Omega has a long list of participants again. It sends all of
them a letter, asking them to respond with a single letter ‘C’ (for cooperate) or
‘D’ (for defect) without communicating with each other. Omega explains that by
sending in ‘C’, participants can increase the payoff of at least 3 participants down
the list by $2 each. For example, if the 4th participant chooses to cooperate, this
benefits a subset of the participants in positions 5, 6, etc. but not the previous 3
participants. The cooperation of the last few participants has little to no effect.
By sending in ‘D’ participants can increase their own payoff by $5. Participants
do not know their position on the list or whom they could benefit. As usual, they
are told that they all use similar decision mechanisms. Every participant only
cares about the balance of their own bank account, and not about Omega’s or
the other participants’. Upon receiving the letter, should a participant cooperate
or defect?
In general, we can represent such hierarchical versions of the donation game using directed
acyclic graphs like the one in Figure 9.
Figure 9: A directed acyclic graph representing schemes of cooperation like that of the Donation tree.
If participants knew their respective positions in the list, the considerations outlined for the
Donation ladder with known position would apply analogously.
In practice, such hierarchies may be hierarchies of power. Some agents are “lexically” more
powerful than others, such that cooperation can only be beneficial in one direction – the
less powerful have no way of helping the more powerful, while the powerful can help less
powerful ones much more cheaply. As a perhaps paradigmatic example, consider a standard
science-fiction scenario:
Intergalactic relations. The universe contains many civilizations. Although
they all followed similar evolutionary trajectories, each civilization developed at a different time, on a different planet, in a different part of the universe; they thus differ drastically in their levels of sophistication. Most civilizations eventually
decided to conceal themselves to some extent, so no one knows which of the
civilizations is the most powerful. You are the leader of a civilization, and one day,
you encounter a comparably primitive civilization for the first time. According
to your advisors, it appears that this other civilization has not even managed
to harness the energy of their local star, they still suffer from diseases that your
civilization’s nano-devices could cure in an instant, and so forth. Your advisors,
citing the other civilization’s apparently laughable defense systems, recommend
that you destroy them and use their resources to further your own goals. Should
you follow your advisors’ recommendation?
Once again, causal reasoning may suggest that you should. By now, though, it should be clear that there are good reasons to ignore your advisors’ recommendation if you believe there is a sufficiently strong correlation between your civilization and the others.
Note that one reason for civilizations to conceal themselves might be to induce a lack of
knowledge about their relative positions within the hierarchy. If we remain hidden, other
civilizations will be more likely to do the same, so neither we nor they would know who
has the upper hand in a potential confrontation. On the other hand, if all civilizations
loudly boasted their power, the most powerful civilization would realize its dominance and
consequently have no reason to be friendly to the others – absent precommitment, the use of
updateless decision theory, and the like.
Another example of such power hierarchies is that of simulations. Simulators can causally
influence the simulated in any way they want, but the simulated can do little to causally
affect the simulators (e.g., by affecting the outcomes of the simulation or its computational
demands). We will discuss this more in section 6.9.
The following example may be typical of the hierarchies in multiverse-wide superrationality
(MSR):
Computable consequentialists. Meet Luca, who believes that consciousness
cannot arise from classical computation alone.33He is also a consequentialist and
primarily cares about conscious experiences. Through the writings of Tegmark,
Luca has come to believe that many computable universes might exist in parallel
to ours. However, since these computable universes do not contain anything
that he would call a conscious experience, Luca does not care about what goes
on inside them. He does, however, enjoy thinking about their inhabitants as
an intellectual exercise, and this has led him to the conclusion that they can
reason about Newcomb-like scenarios in a human-like way even though they are
insentient. After all, neither calculating conditional probabilities nor operating
on causal graphs requires sentience. Using the supercomputer in his basement,
Luca has also come up with a number of predictions about the values held by
consequentialists in the computable universes – let us call them the computable
33Two classes of hypotheses in this space are substance dualism and the quantum mind. Both have a few
prominent proponents but are nonetheless fringe positions in philosophy of mind. I concur with the majority
and am skeptical of both hypotheses.
consequentialists (CCs) – a feat more difficult to achieve for incomputable worlds.
He has even discovered a number of ways to benefit the CCs’ values in our
universe, all at a very low cost to his own values. While Luca himself does not
care about computable universes, he sees no reason for the CCs not to care about
worlds that are computationally more powerful than their own. Given that the
CCs cannot do anything for Luca in their world, however, is it rational for Luca
to be friendly to the CCs’ values?
Again, Luca does indeed have a reason to do so. If he benefits the CCs, other agents –
including ones whom Luca cannot benefit – are more likely to realize Luca’s goals in other
parts of the multiverse.
The ability to help can also come from knowing about the other agents. Consider the
following example:
Simple-world ignorance. Imagine a multiverse in which many different sets of
laws of physics are realized. Some of the universes have very simple, parameterless,
and easily understood basic laws, like Conway’s Game of Life. Others have far
more complicated rules. The inhabitants of the more complex universes may thus
have more reason to believe in the multiverse than the inhabitants of the simple
universes. In the complex universe, the multiverse hypothesis is attractive because
it is simpler than the hypothesis that only their universe exists (cf. Schmidhuber,
1997). In the simple universe, on the other hand, the multiverse hypothesis may be
more complex than the hypothesis that only their universe exists. Consequently,
the inhabitants of the simple universes may adopt superrationality but only apply
it toward other inhabitants of their universe. Let us assume that the values of the
folks from the simple universes differ significantly from those of the inhabitants
of the more complex universes. Should the inhabitants of the complex universes
help the values of those from the simple universes?
In this scenario, as with the previous ones in this section, I think that the superrationalists
from the more complex universes have good reason to help the superrationalists from the
simpler universes, as this makes it more probable that the former will receive help from
other agents, including ones that they cannot help. For example, there may be many value systems that they (the inhabitants of the complex universes) do not know about (for reasons
other than the Kolmogorov complexities of different multiverses).
I think this particular scenario may well be relevant in our multiverse. More generally, some
parts of the multiverse may contain different clues about the existence of other superrational
agents. For example, some might live in parts of the universe from which it looks as though
life is much rarer than it actually is, whereas others may discover that they are not alone as
soon as they look through a telescope for the first time. In addition, while a superrational
agent may be able to use some theory of physics to infer the existence of other agents, he or
she may be unable to infer the existence of some particular value system.
2.9.4 Only helping superrational cooperators helps you superrationally
Cooperation usually excludes agents who are known to be unable to reciprocate. Yet as we
learned from the Donation tree and Intergalactic relations, superrationality does allow for
cooperation with non-reciprocating agents if helping them makes it more likely that other
agents help us.
There is, however, at least one limitation on the set of our beneficiaries that comes without
negative side-effects. We can exclude from superrational cooperation all agents who do
not cooperate superrationally at all. After all, every superrational cooperator knows that
this exclusion will not affect her, and the exclusion appears to be symmetrical among all
superrational agents. That is, it makes it more likely that other superrational cooperators
make the same choice (rather than incurring some other limitation that excludes us).
It seems risky to place any stronger limitation on the set of our beneficiaries, since this would
give us reason to fear exclusion by other agents (cf. Drescher, 2006a, page 290), as we have
seen in section 2.9.3. If we so much as try to look for rules of exclusivity that benefit us at
the expense of other superrational agents, we have reason to believe that others will do so as
well.
Of course, superrationality and correlation between decisions are not binary properties, so
neither is the limitation drawn above. For example, two artificial intelligences explicitly
based on the same decision theory may correlate more than two (non-copied) humans, even
if both have some incentive to cooperate. The stronger the correlation between us and some
other agent, the more we will benefit superrationally from helping them (cf. Drescher, 2006a,
page 288f). To illustrate this, consider a one-shot prisoner’s dilemma-like situation (cf. figure
6) in which two very similar agents can simultaneously decide whether to give the other one some reward $b_{other}$ or to walk away with a smaller reward $b_u$ for themselves.
Now, imagine the two agents are perfectly correlated, i.e. they always make the same
decision. If this is the case, both agents should cooperate whenever
$$b_{other} > b_u. \qquad (7)$$
Now consider a situation in which the correlation between the two agents is weaker. Then,
in EDT terms, they should cooperate if cooperation (C) is higher in expected value than
defection (D). Using conditional probabilities, we can formulate this as
$$P(C \mid C) \cdot b_{other} > P(C \mid D) \cdot (b_{other} + b_u) + P(D \mid D) \cdot b_u = P(C \mid D) \cdot b_{other} + b_u,$$
where, for example, $P(C \mid C)$ is the probability that the other side cooperates conditional on my cooperation. Solving for $b_{other}$ yields
$$b_{other} > \frac{b_u}{P(C \mid C) - P(C \mid D)}, \qquad (8)$$
where $P(C \mid C) - P(C \mid D)$ can be interpreted as quantifying how much more likely my cooperation makes the other’s cooperation. Because there is at least some correlation, the term is always greater than 0. If the correlation is perfect, then $P(C \mid C) = 1$ and $P(C \mid D) = 0$, such that we get Eq. (7) as a special case of Eq. (8). If the correlation is less than perfect, then $b_{other} > b_u$ may not be enough. For example, if $P(C \mid C) = 0.8 = P(D \mid D)$ (such that whatever one agent does, the other agent is 80% likely to do the same), then it must hold that
$$b_{other} > \frac{b_u}{P(C \mid C) - P(C \mid D)} = \frac{b_u}{0.8 - 0.2} = \frac{5}{3} b_u.$$
Thus, the threshold for cooperation increases as the correlation between the two agents
decreases.
If the cooperation graphs become more complicated, then so do calculations like those above.
Further research is needed to find out whether the above result – that benefiting agents with
stronger correlation is more important – holds true more generally. One interesting question
is to what extent superrationalists would form clusters based on correlation strength. This
is especially relevant if we believe the correlations to be especially strong among agents with
the same value system.
2.10 Cheating, signaling, and half-heartedness
Causal and superrational cooperation differ in another important respect. In causal co-
operation, the benefit of cooperative behavior comes from how other agents will react to
one’s own cooperative acts.34To facilitate cooperation, each agent may commit to reward
cooperative and punish uncooperative behavior. In this way, they can motivate each other
to cooperate. But seeing as behavior can only be rewarded or punished if it is observed at
all, causal cooperation often ends up focusing heavily on signalling. If you can save costs by
merely pretending (in a convincing way) to have cooperated, then that is the rational thing
to do from a causal perspective. Conversely, if you can help someone without them knowing
about it, you have no causal reason to do so. There are many practical examples of this,
such as the tendency for governments to make a big deal out of international agreements or
cooperative acts, even if the object-level gain is minor.
Since the mechanism of superrational cooperation is different from that of regular causal
cooperation, prioritization within it should be different, too. Specifically, superrational
cooperation is beneficial not because others reciprocate one’s cooperative acts, but because
our (cooperative) decisions correlate with those of others. This means that we should
sincerely attempt to maximize for benefits to other value systems, because this correlates
with others doing the same, which in turn maximizes our own benefits.
We are used to thinking about cooperation in causal terms, i.e. about how a certain
cooperative act may in the end pay us back causally and in this universe. If we think about
superrational cooperation in this mindset, we may be tempted to propose measures that
are critically suboptimal from a superrational standpoint. For instance, one may adopt a
“compartmentalized good will”, talking at length about cooperation without actually trying
to maximize for other agents’ goal achievement, or spend time thinking about how the others
might cheat us.
However, all of these correlate with other superrational agents in the multiverse wasting
effort on these exact same things. With superrational cooperation, only sincere attempts
at improving other agents’ value systems correlate with the same behavior in others, and
thus with the optimal consequences. Hence, there is no way to “game the system” or to get
benefits without honestly paying for them.
34For references to the literature, see section 2.9.1.
3 Values
We extensively covered the mechanism of (multiverse-wide) superrationality. However, in
all thought experiments considered so far, we knew what impact our actions would have
on the fulfillment of the other agents’ preferences. For example, we know that the other
participants in the donation game or Platonia five would prefer to have more money on
their bank account. We also know that other civilizations would prefer not to be destroyed
and would benefit from learning about our technologies in Intergalactic relations (section
2.9.3). Such knowledge has to be present or at least attainable in the future (cf. section
4.1), otherwise no side can benefit the others. This section gives an overview of how we can
find out what other agents in the multiverse care about, as well as what aspects of their
preferences we should focus on in the first place.
3.1 Orthogonality of instrumental rationality and values
One objection to superrational cooperation might be based on a possible convergence of
terminal values, in which all agents with the correct decision theory will converge toward the
same values. Moral realism claims that there are facts in morality as real and true as those
in science. In addition, some moral realists believe that any rational agent investigating
morality will ultimately arrive at these moral truths. Assuming that a large part of being
rational involves using the right decision theory, maybe all agents with the right decision
theory will independently come to adopt the “correct” moral system? If this is the case, no
cooperation among these agents would be necessary (although some value systems may still
require multiverse-wide coordination, see section 2.8.9).
As a first counterargument, consider that knowledge of the correct decision theory is not
necessary for superrational cooperation, seeing as a number of different decision theories (e.g.,
evidential, timeless and updateless decision theory) imply superrationality. Secondly, we do
not seem to observe empirical evidence of such convergence. For example, Eliezer Yudkowsky
and Brian Tomasik agree that non-causal considerations are important for decision theory,
but Yudkowsky’s values nevertheless differ significantly from Tomasik’s.
There are also principled reasons to be skeptical of value convergence among agents with
the same decision theory. Decision theories are about instrumental rationality , i.e. about
making decisions aimed at achieving goals, not at revising them35. That is at least the
case for decision theories as they are discussed today. Consider the following variant of the
donation game:
Donation game for sadists. Omega has selected 20 pure sadists, who draw
pleasure only from torturing others and nothing else. They all use similar decision
making mechanisms when playing a donation game (against correlated agents).
Instead of being paid in dollar sums, they are given individual hours to torture a
slave as a reward.
Assuming sufficient correlation between participants, the instrumentally rational decision for
each sadist is to cooperate such that the total number of hours of torture increases relative
35Apparently, some authors differentiate between instrumental and “value rationality”. I would prob-
ably disagree with the assumptions underlying the use of the term “value rationality” (see footnote 68).
Nevertheless, I agree with the differentiation itself.
to universal defection. The moral choice, on the other hand, would be to defect in order to
reduce the number of hours in which anyone gets tortured. However, decision theories (as
currently discussed in the literature) do not take moral considerations into account at all.
They merely aim to fulfill the goals, whatever they may be, of the agent using that decision
theory. Hence, when applied by a pure sadist, a given decision theory is meant to help her
spend more time torturing others.36
There could conceivably be some different kind of “decision theory” that does recommend
taking morality into account (and not only cooperation, see section 6.7.1) even if the agent
using it is amoral or immoral. One could, for instance, simply combine the correct decision
theory with the “correct” moral view. Some people may consider such a decision theory
objectively correct. However, for an agent with immoral goals (like pure sadism), it would
be instrumentally irrational to adopt such a decision theory. In any case, the existence of
such a “moral decision theory” does not contradict the existence of a decision theory in the
classical, instrumentally rational sense, so an amoral or immoral agent would still be better
off adopting a classical decision theory.
Thus, it would seem that an agent’s values and their use of acausal decision theories are
orthogonal. This, in turn, suggests that agents with a variety of value systems will adopt a
decision theory similar to our own, such that their decisions will correlate with ours.
Similar views regarding the relationship between instrumental (and epistemic) rationality
and ethical values have been defended under the term orthogonality thesis (Bostrom, 2014b,
ch. 7, section “The relation between intelligence and motivation”; Bostrom, 2012; Armstrong,
2013).
Our claim that decision theory and values are orthogonal in principle does not imply that
they never correlate in practice throughout the multiverse. Indeed, in section 3.4 and its
companion papers, I will discuss various ways in which values and decision algorithms could
be expected to correlate. However, it seems very unlikely to me that these correlations are
so strong that they significantly dampen the relevance of superrationality.
3.2 Necessary preconditions
Before we start thinking about the values of agents in other parts of the multiverse, we need
to consider what kind of agents can join multiverse-wide superrational cooperation (MSR)
at all. In particular, what sorts of values do they need to have, independent of whether
or how many such agents or value systems actually exist in the multiverse? We already
know that only helping superrational or correlated agents benefits us (see section 2.9.4).
However, the values of the superrationalists must also be open to the opportunity of gains
from compromise. If an agent’s values imply that she is better off without any trades, there
is no point in helping her. In order to more closely examine this precondition, we can break
it into five distinct criteria, all of which are necessary for a superrational collaborator to reap
the gains from compromise.
36In most human sadists, sadism is probably not the only goal or cause of happiness. Many sadists probably
recognize their urges as morally wrong, yet are unable to control them to varying degrees. To these sadists, a
decision theory may provide a nudge towards seeking professional help (at least if they cannot satisfy their
sadistic preferences in morally nonproblematic ways).
1.Each collaborator must care to at least some extent about states of the world, as
opposed to caring only about their own mental states or actions.
2.They must also care about consequences in areas of the multiverse where there may be
other cooperators.
3.Other superrationalists must be able to infer and understand their values in sufficient
detail. (To draw action-guiding conclusions from MSR, they themselves need to be
able to infer the values of some other superrationalists or to influence future agents
with this ability.)
4.Given this knowledge of their values, collaborators must have some power to behave
nicely toward these value systems. (Again, if MSR is to be action-guiding to an agent,
they in turn need to be able to benefit other values.)
5.Doing so produces gains from compromise. If everyone abides by an analogous cooper-
ative strategy, everyone is better off than they would be without cooperation.
If all these criteria are satisfied, superrational cooperation works. We will discuss them in
turn in the following subsections.
For some applications it may be fruitful to subdivide these criteria further. Furthermore,
additional criteria, such as Bostrom’s (2014) “porosity”, may determine the size of the
gains from compromise. We could also devise criteria to assess the extent to which
superrational cooperation affects one’s strategy. For instance, if all correlated agents have
the same values anyway, superrational cooperation does not affect our policy except for cases
of coordination (see section 2.8.9).
3.2.1 Consequentialism
Most people’s ethical views are partly deontological (and sometimes virtue ethical). That is,
they are not solely concerned about the state of the world and the consequences of actions,
but also about “the actions themselves” (and, in the case of virtue ethics, one’s character). They
usually try to follow some set of rules prescribing what actions are appropriate in which
situations. For example, many people follow strict rules against killing (though these usually
do not apply under all circumstances and the meaning of “killing” is rarely fully specified),
even when breaking these rules would lead to fewer deaths. This type of ethical system
forms a central part of many religious doctrines, with notable examples such as the Christian
ten commandments, the Confucian filial piety, and the Islamic sharia. In addition, most
national laws contain countless rules of this sort, many of which apply to more mundane
domains like traffic or taxes. Isaac Asimov’s three laws of robotics are yet another example
of a deontological set of rules.
The arguments for multiverse-wide superrational cooperation that I have given appeal to the
consequentialist aspects of one’s values – not because MSR requires us to push people off bridges
(as in the Fat man version of the Trolley problem), but because its supporting argument is
fundamentally based on the consequences of different actions. If we on Earth benefit other
value systems, then this implies that others elsewhere in the multiverse also benefit our value
system, which may produce better states of the multiverse overall via gains from
trade. Hence, the value of superrational cooperation lies in its positive consequences on the
world (or other worlds). The ethical duties of deontological ethical systems, on the other
hand, usually concern the more immediate consequences of our actions. Thus, in a scenario
like the Fat man version of the Trolley problem, most deontological theories would imply that the
direct act of killing the fat man violates our duties towards him more than a failure to act
violates our duty towards the five people on the track.
In Bourget and Chalmers’ (2014) survey of philosophers, 23.6% of respondents characterized
their values as consequentialist while 44.1% identified as deontologists or virtue ethicists –
with the remaining 32.3% choosing “other”. However, most people probably espouse values
that involve at least some consequentialist aspects (cf. Muehlhauser and Helm, 2012, section
5.3). I doubt that many modern consequentialists would be emotionally capable of murder
or torture even under circumstances where they could be confident that doing so would yield
the best consequences37. At the same time, I doubt that many defendants of rule-based
ethics see no appeal in potentially reducing the amount of torture in the multiverse, even if
only in an indirect way. In fact, many deontological rules are motivated or even defined by
the consequences they produce. For example, murder is defined as any act that intentionally
and directly results in the death of another person (although indirect ways of causing the
same consequence (e.g., omissions) are not seen as murder). Rules against theft are often
defended on the grounds that a society with such rules is preferable to one without, even if
the rules might occasionally prevent a genuinely altruistic bank robbery. Some even interpret
Kant’s categorical imperative (especially its first “formulation”) as a heuristic based on
consequentially motivated decision-theoretical reasoning (cf. Parfit, 2011, section 63; Hare,
1993). As Rawls (1971, ch. 6) writes, “deontological theories are [not defined] as views that
characterize the rightness of institutions and acts independently from their consequences.
All ethical doctrines worth our attention take consequences into account in judging rightness.
One which did not would simply be irrational, crazy.” Although most people refrain from
the consequentialist choice in extreme situations, they do, in fact, often endorse it. For
example, in Bourget and Chalmers’ survey (2014), 68.2% of the respondents chose to pull
the switch in the original trolley problem, and only 7.6% did not (with the remaining 24.2%
choosing “other”). Pulling the switch is similarly popular among the general population.
This suggests that people sometimes agree with consequentialist reasoning even if other,
exclusively deontological or virtue ethical considerations can overrule it.
Beyond consequences for things like the number of deaths, individual welfare, and fairness,
people sometimes also care about adherence to deontological rules in a consequentialist
way. For example, most people not only avoid killing others themselves, but also care about
preventing murders in general; many who personally avoid lying also strongly dislike it when
others lie; and so forth. These kinds of consequentialism, which are rarely considered in the
literature on moral philosophy, qualify for superrational consideration just as much as, say,
utilitarianism. We will revisit this topic of caring about the deontologically ethical behavior
of others in section 3.4.1, in which we review studies indicating that many people have values
of this sort.
3.2.2 Caring about the multiverse
Presumably, some agents with significant consequentialist aspects to their values will almost
exclusively care about their own part of the multiverse, if only out of egoism or absurdity
37In my personal experience, self-identified consequentialists actually tend to be more virtue ethical in
their behavior than the average person.
heuristics38. It is thus very difficult or impossible to benefit them in other parts of the
multiverse, in turn preventing cooperation.
Although there is very little discussion about the moral relevance of other parts of the
multiverse, the moral relevance of distance is frequently discussed in moral philosophy (see,
e.g., Brock and Hassoun, 2013). Note that while distance is usually understood to be spatial,
other kinds of distance (e.g., temporal (Beckstead, 2013) or social) play similar roles in
ethical judgment.
While the debate in moral philosophy remains unresolved, people’s actions speak more clearly.
Most people from high-income countries would save a child from drowning in a nearby pond,
but donate only relatively small amounts to charity (Singer, 1972). Insofar as they do give
to charity, they usually prefer local causes even though helping in low-income countries is
more cost-effective. From this, we can safely infer that most people are altruistic to some
extent, but seem to care more about near events than distant ones.
One may suspect that an agent’s ignorance about other parts of the multiverse would yield
similar conclusions to a lack of interest. After all, if someone does not know about our part
of the multiverse, they cannot help us. However, we must not forget that superrational
cooperation need not be based on mutuality (see section 2.9). Even if someone cannot help
us, we can still help them to make it more likely that we ourselves receive help from agents
whom we cannot help.
3.2.3 Knowable values
In order to maximize for some given utility function, we or future superrationalists (see
section 4.1) need a sufficiently detailed model of the utility function itself. In section 3.4,
we will discuss how the evolutionary psychology of morality and related disciplines can
be used to assess the values of superrational cooperators in the multiverse. There are at
least some ways of making very educated guesses, although we cannot expect to arrive at a
detailed and precise description of the values of all evolved civilizations and their descendants.
However, perfect knowledge is not necessary for our purposes. Indeed, most people cannot
even describe their own values in detail (see footnote 25). Yet despite this, humans are
perfectly capable of helping one another achieve their goals. Thus, the question is neither
whether we can gain relevant knowledge about the values of other agents in the multiverse
at all, nor whether we can have a full map of extraterrestrial morality, but whether the
information we can gather about other civilizations can be sufficiently accurate to yield a
usable model.
Fragility of value
Yudkowsky (2015, ch. 279) argues that human values are not just complex (cf. section
3.4.1); they are also fragile, in the sense that even minor errors in a non-human agent’s
picture of them can completely derail that agent’s efforts to optimize for them. According
to Yudkowsky, “Any Future not shaped by a goal system with detailed reliable inheritance
38One may argue that absurdity heuristics are a part of someone’s epistemology. That is, the “absurdity”
of the Everett interpretation is used as a reason to give it low probability as a theory of physics. However, it
is not clear whether there is a clear-cut, operational difference between belief and preference if the belief
does not make a testable prediction.
from human morals and metamorals, will contain almost nothing of worth.” Perhaps more
generally, we could say that any resource expenditure will generate next to no value for
an intelligent evolved being X unless that resource expenditure is shaped by a detailed
inheritance of X’s morals and metamorals. Yudkowsky gives boredom as an example of a
small but indispensable part of human values:
“Consider the incredibly important human value of ‘boredom’ – our desire not to
do ‘the same thing’ over and over and over again. You can imagine a mind that
contained almost the whole specification of human value, almost all the morals
and metamorals, but left out just this one thing and so it spent until the end
of time, and until the farthest reaches of its light cone, replaying a single highly
optimized experience, over and over and over again.”
Presumably, many other seemingly insignificant aspects of human values are of similar
importance to boredom. One would need to get all of these aspects just right in order to
benefit human values. This suggests that it will be difficult to benefit many evolved value
systems, due to the large amount of detailed knowledge it would require and the difficulty of
gathering that knowledge.
There are various points to discuss in this context. For one, the fragility thesis is rather
vague; it does not say how fragile our values are, or how accurate and reliable the inheritance
must be. This is not to say that the fragility thesis makes no testable claim at all. Yudkowsky
formulated it with the value loading problem of artificial intelligence in mind. Since AIs can
be programmed to pursue any goal (cf. section 3.1), the space of possible values with which
an AI could end up is vast, and the target goal systems occupy only a small fraction of this
space. The fragility hypothesis can be interpreted as one elaboration on just how small this
part of value space is, and how catastrophic it would be (from the perspective of that value
system) to miss it by even a small margin. In other words: even if we take care to represent
all of the most central aspects of our values (e.g., “increase the welfare of sentient beings”
or “reduce inequality”) in the goal system of an AI, the outcome may still be as bad as an
entirely random one if we omit seemingly peripheral values such as boredom.
Although I agree with the fragility thesis as a descriptive (rather than normative) statement
about human values, I do not think human values are quite as fragile as Yudkowsky writes.
Specifically, I think the outcomes brought about by AIs with two different goal systems
can differ enormously in their overall worth even if both miss important aspects of human
values. For example, Yudkowsky’s hypothetical world full of repetitive happiness may be
boring, but it is still much better than a world full of suffering, unethical behavior, etc. and
nothing of worth to compensate. But perhaps this judgment is influenced by my own values
(which are mostly about ensuring the welfare of sentient beings with a strong priority for
preventing their suffering), to the point where it would not generalize to how other humans,
or other evolved agents in general, would view the situation. Transferred to our variant of
the fragility thesis, this nevertheless suggests that even if we miss significant parts of the
values of other superrational cooperators, taking their values into account may still make a
big difference to them.
More importantly, AI value loading differs significantly from our attempt to benefit agents
in other parts of the multiverse. The main problem of AI value loading is getting the AI
to care intrinsically about human values. MSR, on the other hand, already gives us the
sincere (instrumental) goal of helping other agents, which the AI lacks. If anything, we lack
knowledge of the others’ values, whereas AIs may still not care about them even with perfect
knowledge of their values.
Another crucial difference between these two contexts is that in AI value loading, we usually
want the AI to hold the values of one particular species or group of people. In contrast, when
cooperating superrationally, it is sufficient to know that we benefit many other superrational
agents. We do not need to know whether we benefit some particular species. The extent to
which this makes our job easier depends on how evolved value systems are distributed over
value space. Perhaps they form a few (or many) very small clusters, as depicted in Figure
10. (Needless to say, Figure 10 is not meant to be an accurate map of value space. The
placements on the map have no factual basis.)
Figure 10: A map of a part of value space under the assumption of there being distinct clusters with a lot of empty space in between. (The figure labels two of the points “Utilitarianism” and “Human values”.)
Every blue point on the map is some value system held by a significant number of agents. The
white areas of the map contain value systems that do not have a significant following, such
as paperclip maximization. If our map really did represent value space and each individual
value system is fragile, then it is difficult to benefit other value systems, because if we miss
the targets only by a bit, we end up with a value set that nobody cares about.
However, it could also be that the values of different evolved agents occupy some compact
part of value space, as depicted in Figure 11. In this map, darker areas represent value
systems with many agents and lighter areas indicate value systems with fewer agents. If
value space looks more like this map than our previous one, then it is easier to make “guesses
into value space” to help superrational collaborators. As long as one is roughly aiming
at the right part of value space, small errors just mean that one benefits slightly different
superrationalists than intended.
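The claim can be illustrated with a toy Monte Carlo experiment of my own (it has no basis in the cited literature): model value space as one-dimensional, let either only narrow clusters or one broad region be “inhabited”, and check how often a noisy guess aimed at a target value system still benefits somebody:

    import random

    random.seed(0)

    CLUSTERS = [0.0, 3.0, 7.0]  # hypothetical cluster centers in a 1-D value space

    def inhabited_clustered(v, width=0.3):
        # Figure-10-style map: only narrow neighborhoods of a few clusters are inhabited.
        return any(abs(v - c) < width for c in CLUSTERS)

    def inhabited_compact(v, lo=-2.0, hi=8.0):
        # Figure-11-style map: one broad, connected region is inhabited.
        return lo < v < hi

    def hit_rate(inhabited, guess_error, trials=100_000):
        """Fraction of noisy guesses that still land on a value system somebody holds."""
        hits = 0
        for _ in range(trials):
            target = random.choice(CLUSTERS)              # we aim at a known cluster
            guess = target + random.gauss(0.0, guess_error)
            if inhabited(guess):
                hits += 1
        return hits / trials

    for err in (0.1, 0.5, 2.0):
        print(err, hit_rate(inhabited_clustered, err), hit_rate(inhabited_compact, err))

As the guessing error grows, the hit rate collapses on the clustered map but degrades gracefully on the compact one, where a near miss still lands on a slightly different but extant value system.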
Only a few maps of humanity’s value space have been created, the best-known of which is
probably the Inglehart-Welzel cultural map of the world. I would nonetheless hazard some guesses as to what more fine-grained maps of values would look like: on any individual planet,
there are clusters formed by major religions, nations, political camps, and other cultural
groups. For example, there are many people who hold many of the moral views of the Quran
and many who hold many of the moral views of the Bible, but presumably much fewer who
defend a mix of the two. Nonetheless, the space between the clusters is not completely
“uninhabited”. Furthermore, the existence of these clusters seems to be partly arbitrary, a
Figure 11: A map of a part of value space under the assumption that extant values occupy a relatively compact part of value space. (The figure again labels “Human values” and “Utilitarianism”.)
mere result of the way that different ideas were packaged together historically. If things had
gone slightly differently, as they doubtlessly do in other parts of the multiverse, the authors
of the Bible might have written that it is mandatory to fast during the month of Ramadan,
thus filling a spot in value space with life that is only sparsely inhabited on Earth. If the
multiverse is large enough, all these possible variations of values are realized somewhere and are probably no less common than the two religion clusters on Earth.
One last difference between the way we extract values from other superrational cooperators
and the way AIs might receive their values from humans is, of course, that the former involves
no direct contact. Section 3.4 will address ways of circumventing this problem in order to
identify the values of agents elsewhere in the multiverse.
3.2.4 The ability to help others
In some cases, it will not be in our power to help other value systems at all. Since any
will to cooperate with these agents cannot possibly be action-guiding, we do not have to
help them. Other agents in the universe may have other resources available to them and
thus choose to behave in a friendly way toward these values. If, on the other hand, agents
know that nobody else can help them to achieve their goals, multiverse-wide superrational
cooperation (in particular, any version of it in which they just give resources away) becomes
less attractive to them.
One example of a value system that we cannot help is the following version of speciesism
(that may or may not be a straw man):
The Namuh-centrists. One day, scientists inform you about a highly intelligent
species of extraterrestrials known as “Namuhs”. Like us, the Namuhs have built
a flourishing civilization with art, trade, science, language, humor, philosophy
(including advanced decision theory research), and so on. However, the Namuhs
do not live in our universe, but in a distant part of the multiverse, completely
inaccessible to us. In fact, they could not even exist in our part of the multiverse,
as their bodies require slightly different laws of physics to function. Knowing
about superrational cooperation, you hasten to ask whether they have thought
about problems analogous to Newcomb’s problem and the donation games between
similar agents. A trustworthy scientist explains that their minds are indeed prone
to thinking about such topics – much more so than those of humans, in fact!
Understandably thrilled, you ask what values the Namuhs have, and specifically
what values are held by those who have thought about acausal cooperation. The
scientist then informs you that all Namuhs are very narrowly focused on their
own species. They are Namuh-centrists who do not care one bit about anything
that does not involve fellow Namuhs. For example, they shrug at the thought of
non-Namuh suffering, the flourishing of non-Namuh civilizations, or non-Namuh
well-being. In fact, they are so strict that they do not even care about simulated
Namuhs or other approximations.
Learning about their values, you may be disappointed. There is nothing that you can do to
help them and it is therefore irrelevant whether they use a decision theory similar to yours
or not.
I should point out that the speciesism endorsed by the imaginary Namuhs is very rigid
and narrower than most other views that we would usually classify as speciesist. Far
from caring only about their own species, most people seem to care about the welfare of
non-human animals to at least some degree, usually privileging some species (like cats and
dogs) over others (like pigs and cows). Such views classify as speciesist, but nevertheless
allow for superrational cooperation. Other views do not value humans over other animals for
their species membership per se, but instead privilege other characteristics that (allegedly)
only humans (and sometimes a few other species) possess. A common variant of this holds39
that only members of very few species are conscious. Humans are one of them, but, according
to such views, they otherwise do not deserve any special moral status. Given the implications
of this view, proponents are sometimes (and often incorrectly) branded as speciesist. If
the Namuhs were to hold such a view, and humans (or other earthly species) meet their
criteria for consciousness, then our decisions can be beneficial or detrimental to the Namuhs’
preference fulfillment. A similar reasoning applies to the possession of language, free will,
the ability to pass the mirror test or other (potentially) strict but non-speciesist restrictions
to one’s set of morally relevant agents.
There are other reasons why we might be (practically) unable to help other agents. For
example, helping an agent could require some set of specialized abilities that they themselves
developed based on their value systems. Consider the following example:
The Advanced math maximizers. One day, you learn that out there in the
multiverse, there are civilizations made up entirely of mathematicians whose
primary concern is maximizing mathematical knowledge. They don’t care about
the number of established truths or proofs per se, but rather value pieces of
knowledge based on their novelty or interestingness, possibly resembling the
way earthly mathematicians often prioritize their research. For instance, the
mathematicians place a very high value on a proof or disproof of the Riemann
39Note that some authors are skeptical that there is any fact of the matter as to whether some
being is conscious or not. Instead, they view terms like “consciousness” and “sentience” as definitional
categories or expressions of particular values. See, e.g., Dennett (1991) and Brian Tomasik’s Dissolving
Confusion about Consciousness.
hypothesis, whereas mundane factoids like the three-billionth digit of π have very
little value in comparison. Moreover, once a fact becomes known to at least one
of the mathematicians, reproducing that same piece of information elsewhere
in the multiverse creates no additional value for them. (We assume that the
universe is finite – otherwise every piece of knowledge may be known to some
Boltzmann brain.) While they are not particularly skilled at anything else, their
strong intrinsic motivation and dedication has made them into truly excellent
mathematicians, unrivalled by anyone across the multiverse.
It is not easy to benefit the advanced math maximizers. We do not know what knowledge
they already possess, and given their level of skill, we should assume that they will come up with most of the interesting pieces of mathematical knowledge that we could devise on our own.
The math maximizers are thus so capable of maximizing their own utility function that there
is little we could do to assist them (cf. section 3.2.5).
3.2.5 Zero-sum and “below-zero-sum” tradeoffs on resources
Not all interactions between agents allow for cooperation. Specifically, there is no way or
reason to cooperate in zero-sum games, i.e. ones in which the overall payoff is always the
same. Consider the following example:
The Maximizer Monarchs. Imagine a multiverse consisting of two universes.
One is ruled by a queen whose only drive it is to create as many paperclips as
possible. The other universe is ruled by a king who only cares about producing
as many staples as possible. Each stationery-maximizing monarch knows that
the other exists and that they both use the same decision algorithms. They each
have one hundred tons of steel at their disposal. What should they do with it?
Assuming that staples (specifically the kind of staples that the king cares about) cannot be
built out of paperclips or vice versa, this interaction is zero-sum. Every bit of material that
one of them uses for the benefit of the other is an equivalent loss to themselves40. Thus, no
form of cooperation between the two is beneficial.
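The constant-sum structure that footnote 40 alludes to can be written out explicitly. As a minimal formalization (my own notation and simplifying assumptions, not part of the thought experiment itself): let x and y be the tons of steel turned into paperclips and staples, respectively, out of the 200 tons available in total, and normalize each monarch’s utility so that converting all metal into her or his favorite supply has value 1. Then

\[
u_Q = \frac{x}{200}, \qquad u_K = \frac{y}{200}, \qquad x + y \le 200 \;\Longrightarrow\; u_Q + u_K \le 1,
\]

with equality whenever no steel is wasted. Every efficient allocation yields the same total normalized utility, so any gain for one monarch is exactly the other’s loss, and no reallocation can make both better off.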
As the reader may suspect, zero-sum interactions are rare. We should expect that any given
resource is better suited to achieving one goal than another and so trade can arise from
allocating resources based on what value systems benefit most from them. Analogously,
value systems care more about certain situations than others. Furthermore, whereas it may
not be possible to combine a paperclip and a staple, many goals are compatible with each
other. For example, a society’s citizens can at the same time be happy and virtuous.
Gains through specialization and comparative advantages
At times, trying to achieve multiple goals at once is not just pointless – it can actually be
worse than having each agent focus on one, typically their own, goal. To see how, let us
revisit The Maximizer Monarchs of the previous section:
40If we normalize their utility functions, both assign the same utility to a situation in which all the
multiverse’s metal is transformed into their favorite office supply. This also means that they assign the same
utility to any other fixed amount of metal being transformed into paperclips.
The Ever-improving Maximizer Monarchs. Like The Maximizer Monarchs,
but this time, the efficiency at which each agent can produce paperclips or staples
grows monotonically with the produced quantity. Again, each monarch wields
one hundred tons of steel.
Without delving into mathematical details, it is best (in terms of overall number of paper-
clips/staples produced) if each of the two specializes in one kind of stationery. In particular,
there are no gains from compromise over each monarch maximizing only for their own goals.
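To make “without delving into mathematical details” slightly more concrete, here is one toy instantiation (my own numbers, chosen only to satisfy the stipulation that efficiency grows with the produced quantity): suppose the s-th ton of steel processed into a given stationery type yields s units of it, so that t tons yield

\[
f(t) = \sum_{s=1}^{t} s = \frac{t(t+1)}{2}
\]

units. Under full specialization, each good is produced in quantity f(100) = 5050. If each monarch instead splits her or his hundred tons evenly between both goods, each good is produced in quantity 2f(50) = 2550. The “compromise” leaves both monarchs strictly worse off, which is the sense in which such tradeoffs can be below zero-sum.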
There may also be comparative advantages from the outset. Based on their respective
motivation and prior experience, the queen may already excel at producing paperclips, while
the king may be better at producing staples. Another important source of comparative
advantages is unequal knowledge about different value systems. For example, if the queen
does not know exactly what the king cares about, then she will be worse at benefitting him.
Similarly, our knowledge of what other humans care about is much more precise than our
knowledge of what agents elsewhere in the multiverse care about.
The fact that specialization and division of labor play such a crucial role in the economy
suggests that superrationalists will also tend to focus on a single goal rather than maximizing
for multiple things at once. However, I think that this will not be the case, at least in our
present situation. The primary reason is that the instrumental goals of agents with different
moral values are often the same. For example, no matter the direction in which we would
like to drive society, we will try to acquire money and political influence. These resources
are often generic, such that when they are acquired with one goal in mind, they can also be
employed in pursuit of another. As an example, consider how Donald Trump maximized
his personal wealth for a long time, yet his resulting fame and money nevertheless enabled
him to become president of the US, which in turn allows him to achieve all kinds of goals.
The fact that instrumental goals tend to converge suggests that superrationalists in the
multiverse rarely have a strong comparative advantage at achieving their own goals.
If comparative advantages are not strongly aligned with goals, specialization can produce
gains as well. For example, imagine a number of superrational agents, each of whom would
like to maximize many different things separately, e.g., knowledge, fun, happiness and
technology. Here, a no-compromise outcome – i.e. one wherein each agent only maximizes
their utility function in their own universe – might be worse than a potential division of
labor with one agent focusing on generating knowledge, another one focusing on fun, and so
forth.
3.3 What values?
To help other agents, one at some point needs to have some workable model of their
preferences. In general, it is difficult to extract preferences from a given agent if the agent is
not von Neumann-Morgenstern (vNM) rational and cannot state her goal explicitly. Humans
surely are not vNM-rational. Additionally, moral judgments are usually seen as being
inaccessible to us in their complete form (see footnote 25) and as emerging from the whole
brain rather than exclusively from, say, the anterior cingulate cortex. This makes sense from
an evolutionary point of view. Preferences are tools for increasing the fitness of an organism,
and there is no reason to assume that such tools would be any more open to scrutiny by
the organism than, say, the detailed inner workings of the digestive system. In addition,
while most organisms have rudimentary mechanisms for avoiding harm and seeking food
and reproduction, holding grudges – i.e. a preference for retaliation – is only adaptive in
non-solitary organisms with sufficiently good memory and recognition to correctly identify
transgressors. In the evolutionary process, different values thus evolve separately and are
unlikely to form a coherent whole (cf. Dennett, 1991; Kurzban, 2012).
Thus, even if we had a complete model of our superrational collaborators, it would nevertheless be difficult to extract clear-cut values from them. In the absence of such exact models, it makes little sense for us to discuss the technical details of relevant preference extraction algorithms41. We will, however, still need to think about informal ways of inferring preferences from a model of a superrational collaborator.
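To see in miniature why incoherent behavior blocks clean preference extraction, consider the following toy of my own devising (far cruder than the algorithms cited in footnote 41), which tries to recover a ranking from observed binary choices:

    from itertools import permutations

    def consistent_ranking(choices):
        """choices: pairs (a, b) meaning 'a was chosen over b'. Returns a total
        order consistent with every observed choice, or None if the behavior
        admits no such order (e.g., because the choices are cyclic)."""
        options = {x for pair in choices for x in pair}
        for order in permutations(options):
            rank = {x: i for i, x in enumerate(order)}
            if all(rank[a] < rank[b] for a, b in choices):
                return order
        return None

    print(consistent_ranking({("peace", "war"), ("war", "famine")}))
    # some valid order, e.g. ('peace', 'war', 'famine')
    print(consistent_ranking({("a", "b"), ("b", "c"), ("c", "a")}))
    # None: cyclic choices reveal no utility-style ranking

Since humans are not vNM-rational, their behavior routinely produces data of the second kind, so we must settle for workable approximations rather than exact utility functions.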
3.3.1 Idealization
One dimension along which preference extraction algorithms vary is the extent to which they
idealize values. Consider the following example of preference idealization (adapted from a
recent blog post of mine): Steve holds a glass of transparent liquid in his hand. A woman
walks by, says that she is very thirsty and that she would like to drink from Steve’s glass.
What she does not know, however, is that the water in the glass is (for some unspecified
reason) poisoned. Should Steve allow her to drink? Most people would say he should not.
While she does want to drink from the glass, her desire would probably disappear upon
learning of its content. Therefore, one might say that her object-level or stated preference is
to drink from the glass, while her idealized preference would be not to drink from it.
Similar questions apply to ethical preferences. For example, most people find meat consump-
tion acceptable on the object-level, but are simply unaware of information about the world
that could change their minds, e.g., knowledge about the similarities between human and
animal minds or the conditions in factory farms and slaughterhouses. Perhaps these people’s
idealized preferences favor vegetarianism? If we reduce meat consumption, should we count
it as beneficial to people who approve of eating meat, but who could be convinced otherwise?
Should we, in other words, idealize our collaborators’ values when taking them into account
in this universe?
Besides gaining more information about the world, people’s preferences may also change upon
engaging with moral arguments (e.g., the original position or the drowning child argument).
Even though such arguments do not provide new facts, they may invoke trains of thought
that lead people to change their moral position. Should we also idealize preferences based
on such moral arguments?
Idealization based on moral arguments, at least, can cause trouble. For one, some moral
arguments can be viewed as potentially illegitimate “tricks” for persuading people to adopt
undesired positions42. An extreme example of this could be some moral or religious scripture
that hypnotizes and brainwashes the reader. Surely, nobody would want other superrational
collaborators to apply such a treacherous “idealization procedure”.
41Examples are described in Hansson and Grüne-Yanoff (2012), Varian (2006), Neumann and Morgenstern
(1953), Ng and Russel (2000), and Oesterheld (2016). Also consider Brian Tomasik’s How to Interpret a
Physical System as a Mind.
42One class of such tricks is described in my blog post Cheating at thought experiments.
Order effects constitute another problem in using moral arguments to idealize preferences.
Depending on the order in which we present someone with moral arguments, they may lock
into a position and resist further arguments. If someone’s moral views allow for more than
one such lock-in, they may not be uniquely idealizable. A recent study by Schwitzgebel and
Cushman (2012) shows that even philosophers exhibit order effects when considering thought
experiments.
In general, we may view agents as having (meta-)preferences regarding idealization. These
determine how exactly they would like to have their values idealized. We should then abide
by the respective agent’s preferences, since we can then expect others to idealize our values in
the way that we want them to be idealized. Unfortunately, this solves the problem only
theoretically. In practice, finding out what idealization procedures other superrationalists
would approve of seems very difficult.43
For more thoughts on preference idealization, the reader may consult any of the following:
Yudkowsky (2004); Grill (2015); Muehlhauser and Helm (2012), chapter 6; the Negative
Utilitarianism FAQ, especially section 2.1; section 15 of Brian Tomasik’s Hedonistic vs.
Preference Utilitarianism; and my blog post entitled Is it a bias or just a preference? An interesting issue in preference idealization, in which I discuss the specific issue of removing
cognitive biases from preferences.44
Beware motivated idealization
One potential pitfall of idealizing another agent’s values is that it might bias the result
toward one’s own moral views if one is not careful. After all, you will be more familiar with
the arguments and thought processes that favor your own position, and they will seem more
convincing to you than the arguments you know in favor of other positions (if you knew of
similarly strong arguments in favor of other positions, there is a good chance you would
have adopted them already). Such a process of legitimizing what we already want to do
via superrationality-based reasoning could be nicknamed “superrationalizing”. For instance,
I might be tempted to think that supporters of deep ecology and (non-anthropocentric)
environmentalism would, if they were rational, update their views significantly upon learning
about Darwinian evolution and wild animal suffering. I may even presume that deep ecologists
would support intervention in nature or even habitat destruction under idealization! While
I do indeed think that many people’s judgment of nature and preservation would change
significantly upon understanding the above topics45, I am worried about what such an
aggressive stance on idealization tells me about the way other agents might go about
idealizing values. For instance, when idealizing my values, environmentalists might reason
that I just never thought enough about the beauty of nature. “If only this Caspar guy had
43In Divergent preferences and meta-preferences, Stuart Armstrong makes a few points that are closely
related to the preceding three paragraphs.
44Habermas’ discourse ethics is also worth mentioning. Alas, the best discussion of its main ideas that I
am aware of – ch. 5 of Norbert Hoerster’s Wie lässt sich Moral begründen? – is currently only available in
German.
45On the other hand, many (if not most) biologists seem to care about conservation – popular biology
textbooks like Campbell Biology (Urry et al., 2016) and Life: The Science of Biology (Sadava et al., 2012)
cover and seem to endorse conservation biology. There are various counter-considerations, though. For
example, a prior concern for the environment may be a strong motivator for many to study biology in the
first place. Perhaps many also did not think about the moral value of nature all that systematically. From
what I can tell, neither Campbell Biology nor Life covers wild animal suffering at all.
taken the time to really contemplate the natural world in all its magnificent complexity, he
would not think of nature as a tragedy, no matter how ‘red in tooth and claw’ it may be.”
Consequently, they might conclude that it is in my idealized interest if they lobby for leaving
nature untouched or even spread it to other planets. I would not want others to idealize my
values in such a way. While it may be true that a sufficient amount of time spent enjoying
beautiful landscapes could convince me that nature is beautiful, I might not view that as a
legitimate idealization procedure, as it merely reinforces conservationist arguments rather
than offering new arguments or some form of balanced view.
An example of a more obviously flawed extrapolation process is that of a mother reasoning
that everyone’s idealized values would be to prefer her son over all other children. After
all, if they only spent enough time with him (just as she did), they would surely prioritize
his well-being over that of other children! Once again, the respective idealization process
seems unduly biased towards a certain position and will thus be rejected by most agents’
meta-preferences.
3.3.2 Values and distance
People care about things differently depending on whether they happen nearby or far away
in space and time. For example, while many liberals and quite a few conservatives politically
favor legalizing cannabis, I expect that many of them would nevertheless feel mildly annoyed
or uncomfortable if their best friend, spouse, or daughter were to start smoking it on a
regular basis. For brevity, I will use the term near values for the part of our values that is about near things and far values for the part that is concerned with distant things. Both in
the ancestral environment and today, most people operate primarily on their near values
(with one notable exception being politics). In the context of superrationality, however,
we are only interested in far values. Most other superrationalists are so far away from us
that our values pertaining to their worlds fall under our far values. Hence, we want ETs
to consider only our far values, which in turn means we should only consider the ETs’ far
values as well. That is, we do not need to know how they want their friends to treat each
other, how they feel about drug use in their own social circles, and so forth. (Some think
that the discrepancy between near and far values should disappear under idealization; we
will discuss this below.)
According to construal level theory, the difference between near and far values mainly
results from the difference between two kinds of thinking or construal: concrete (or low)
and abstract (or high) levels of construal. Which level of construal is applied mainly
depends on the psychological distance to an event, i.e. the combined temporal, spatial,
social and “hypothetical” (near = likely, far = unlikely) distance. People tend to construe
psychologically near events concretely and psychologically far events abstractly. A recent
summary of construal level theory is given by Trope and Liberman (2010a).
The mapping between levels of construal and psychological distance is imperfect. We
sometimes think about psychologically distant things concretely, such as when watching a
science-fiction movie, and about psychologically near things abstractly. Nevertheless, the
mapping is useful. While there is little theoretical and empirical research on how people (and
other evolved creatures of human-level intelligence) think and care about alien civilizations,
there is some research on how people generally care about other psychologically distant
and abstractly construed things. According to construal level theory, the abstract mode of
thinking is similar regardless of the kind of psychological distance that is involved. Thus, we
can use general research about abstract construal values to at least inform our first tentative
guesses about values in the particular case of caring about distant civilizations.
We have some reasons to expect construal level theory to generalize to other evolved beings.
According to Trope and Liberman (2010a, section III, subsection “Discussion”),
High-level construals and low-level construals serve different cognitive functions.
High-level construals have evolved to represent distal objects because, with
distance, one needs to conserve the essential, invariant properties of the referent
object. In contrast, low-level construals preserve the object in minute detail for
immediate use.
The fact that abstract and concrete construals solve different problems suggests that they
evolved separately. Indeed, low-level construals probably evolved earlier. Whereas processing
one’s immediate surroundings and short-term goals is necessary for any animal to survive,
many can get by without processing psychologically distant things. Some of the feats
achieved by civilization-forming species, on the other hand, require abstract thinking. In the
conclusion of their paper, Trope and Liberman (2010a) write:
The turning points of human evolution include developing tools, which required
planning for the future; making function-specific tools, which required considering
hypothetical alternatives; developing consciousness, which enabled the recognition
of distance and perspective taking; developing language, which enabled forming
larger and more complex social groups and relations; and domestication of animals
and plants, which required an extended temporal perspective (Flinn, Geary, and
Ward, 2005). Human history is associated with expanding horizons: traversing
greater spatial distances (e.g., discovering new continents, space travel), forming
larger social groups (families vs. cities vs. states vs. global institutions), planning
and investing in the more distant future, and reaching farther back into the past.
In sum, I see some good reasons to expect that construal level theory applies to many other
evolved species of human-level intelligence.46 It thus matters whether we optimize for others’
near or far values.
Interestingly, we may also see the difference between different construals and thus near and
far values as a cognitive bias that would disappear upon reflection, and that we should
correct for in preference idealization. This may well be the case, but it is unclear which
of the two views is more “correct” about ethics. One may argue that only thinking about
concrete events can yield actual moral judgments, while abstract thinking may result in
imagining a situation inaccurately or not at all and thus being unable to assess it correctly.
Moreover, we tend to have weaker attitudes in general toward distant things than towards
close things,47 and this also seems to apply to moral weight assignment.48
46The presented argument resembles the general argument for modularity in evolutionary psychology (see,
e.g., Cosmides and Tooby, 1994).
47For example, people prefer to receive money immediately rather than in the far future. They are risk
averse and, of course, care more about socially close individuals.
48For example, thinking about a concrete, identifiable goal or benefactee seems to be associated with
feeling happier from donating money (Rudd, Aaker, and Norton, 2014). People are more motivated by
(concrete) identifiable victims than by (abstract) large numbers of victims, although a recent meta-study by
S. Lee and Feeley (2016) shows the effect to be small. People are also more relativist when judging the acts
of extraterrestrials and people from other cultures (Sarkissian et al., 2011).
Some arguments in moral philosophy evoke concrete construals (e.g., the fat man trolley
problem or the drowning child argument) and some evoke abstract construals (e.g., the orig-
inal position or many of the examples from my blog post Cheating at thought experiments)49.
Both classes contain arguments that I find useful and legitimate. This suggests that neither
of the two is morally superior across the board.
Trope and Liberman (2010a, section VI) describe several experiments wherein high-level
construals seem to capture the participants’ values, whereas low-level construals lead people to give
more weight to “local” circumstances (such as social pressure and lack of self-control) (cf.
Trope and Liberman, 2010a, section VII, subsection “Affect”). In high levels of construal,
people tend to judge consequences more by their desirability than their feasibility, and thus
assign more weight to moral views. More recent studies like those of Torelli and Kaikati
(2009) and Agerström and Björklund (2013) have corroborated this result. However, it could
also be interpreted as an indication that abstract thinking makes people more hypocritical.
Yang, Preston, and Hernandez (2013) summarize further evidence in favor of giving more
weight to high-construal judgments:
High-level construal is associated with [...] an analytical, critical-thinking mind-
set (Torelli and Kaikati, 2009). For example, people at a high level of construal
are [...] more comfortable with messages that convey mixed emotions (Hong
and A. Y. Lee, 2010), suggesting greater cognitive flexibility. Indeed, previous
literature showed that when an object is distanced from the self, individuals
are less likely to be “trapped” in their own preconception or knee-jerk reactions
(Kross and Grossmann, 2012). Moreover, high levels of construal may enhance
perspective taking toward others whose interests conflict with one’s own.
That said, abstract thinking is not without its systematic failure modes. It is, for instance,
associated with overconfidence and the illusion of explanatory depth (Alter, Oppenheimer,
and Zemla, 2010).
Further thoughts on the topic are given by Samuel Hammond in How to Conceptualize
Morality: Near vs Far. In any case, we should keep in mind that idealizing away the difference
between near and far values may be inconsistent with many agents’ meta-preferences.
3.3.3 Different kinds of preferences
People often report that preferences in different domains feel qualitatively different from one
another. For instance, it is common to distinguish moral preferences from other preferences.
My preference for world peace over war is a moral one, for instance, but my preference for
bananas over carrots is not. Of course, this line between moral and non-moral values is often
blurry. For example, it is unclear whether wanting revenge or a cancer victim’s desire to
focus altruistic efforts on cancer research are moral preferences. I think a distinction between
moral and non-moral preferences can also be drawn among far values. For example, my
preference for beings in other parts of the multiverse to be happy rather than to suffer is a
moral one, but I would not view my preference for these civilizations to be fascinating, fun,
or otherwise beautiful to my eyes (in the way that advanced civilizations in science fiction
49Many arguments also present a conflict between abstract and concrete thinking. For example, the
repugnant conclusion can be seen as a clash of the evaluation by the concrete welfare of the identifiable
victim or representative moment and the abstract evaluation by the aggregate welfare.
movies are) as a moral preference. Others might disagree, but that dispute is not worth
exploring in this paper (indeed, I suspect it may be a largely verbal one). Potential criteria
for this distinction between moral and other preferences may be that moral preferences are
those we want others to share or that are somehow universal.
Another distinction could be one based on a dual-process theory of morality (see Greene
2013, part II for an overview and references to the literature). Or consider Sarma and Hay
(2016), who propose that “what we call human values can be decomposed into 1) mammalian
values, 2) human cognition, and 3) several millennia of human social and cultural evolution.”
I do not think such distinctions are necessary when cooperating superrationally. Instead,
we should focus on all preferences that are action-guiding to the respective agent (if this is
not included in the term “preference” anyway50), irrespective of whether they are “moral”
or “mammalian”. By definition, if I have to decide between two courses of action and
one of them better suits the preferences that guide my actions, I will choose that one. In
the case of superrationality, only accounting for other agents’ action-guiding preferences
correlates with others also taking only my action-guiding preferences into account. Therefore,
taking all action-guiding preferences into account is best according to my action-guiding
preferences. Hence, we should only take steps to fulfill the action-guiding preferences of
other superrational collaborators, ignoring any other preferences they might hold.
Given the above, I shall in this piece not differentiate between moral and other far values.
Instead, both terms will be used to signify our action-guiding far values.
3.4 The values of our superrational collaborators in the multiverse
Having outlined what kinds of values we would like to know about for multiverse-wide
superrational cooperation (MSR), we can finally proceed to discuss these values. Whereas it
is not strictly necessary for us to know about our cooperators’ values right away in order
to benefit them (see section 4.1), such knowledge is surely useful and has to be attained at
some point. In fact, one objection to MSR that many people have brought up in private
conversation is that, given our uncertainty about other value systems in the multiverse, we should focus solely on our own values (also see section 6.11). As readers may suspect, a
comprehensive discussion of this topic is beyond the scope of the present paper. However,
we will give an overview of how we (or future superrationalists) can gain knowledge about
the values of our collaborators elsewhere in the multiverse. Besides guiding future research,
this overview will also demonstrate that we can learn at least something about their values in the first place.
50Many definitions of preferences are based on choice (see footnote 41). Some examples where preferences
may not be what our choices reveal include:
• akrasia and lack of willpower, as it manifests itself in procrastination and inability to adhere to exercise routines and healthy diets;
• preferences about fiction, as people often care deeply about how a story ends but usually without trying to lobby or coerce the authors to satisfy that desire (Radford and Weston, 1975; Schneider, n.d.); and
• preferences for states of affairs that are mathematically inconsistent or physically impossible (Oesterheld, 2017b).
It seems as though there are two main ways of assessing the values of other agents in the
multiverse. The first involves empirical research into the values of superrational cooperators
on Earth. Because the sample size is so small, we may also look at humans in general,
under the assumption that the values of superrationalists resemble the values of their native
civilization. It may be that the values of superrationalists differ from those of other agents in
systematic and predictable ways. General human values may thus yield some useful insights
about the values of superrationalists. That said, it may be that only a small fraction of
superrationalists in the multiverse are human-like. For example, it could be that most other
superrationalists are artificial intelligences and whole-brain emulations. It could also be that
many other evolved agents are very different from us.
The other approach involves understanding the processes that generate and select the
values of agents in the multiverse, such as biological and cultural evolution, the transition
to superintelligent AIs, etc., and extrapolating them into workable predictions about the
preferences of agents on other planets. In principle, this approach is sufficient for gathering
a good map of the values of civilizations throughout the multiverse. In practice, however, it
is probably very difficult to accurately predict how these processes play out. A combination
of both approaches might be easier to work with. We can begin with human values as a
baseline and inspiration for what kinds of moral attitudes may exist, and then review whether
the processes of biological and cultural evolution systematically favor these attitudes. This
would enable us to find out whether they are coincidental and hence rare in the multiverse,
or necessary and thus common. At the same time, we will of course need to avoid being
biased toward human values, making sure not to drift off into telling just-so stories about
why some human practices and values might be universal among evolved agents of human
intelligence (Buss, 2015, chapter 2, section “Methods for Testing Evolutionary Hypotheses”).
Theoretically, we or future superrationalists need to find some way of coming up with new
moral values, i.e. ones that we do not observe on Earth. Based on a model of the values of
evolved agents we can then think about the values of these agents’ descendants (whole brain
emulations, superintelligent AIs).
Assessing the action-guiding, consequentialist far values of agents in the multiverse could
be a scientific (sub-)discipline in its own right. That being said, I do not expect a “Journal on Extraterrestrial Value Systems” to materialize anytime soon. Untestable
speculation about ETs does not inspire academic respectability. In researching this paper, I
did not find much prior work on any aspect of the values of evolved agents in the multiverse,
which in turn makes me less than hopeful that the more specific issues pertaining to multiverse-wide superrational compromise will be picked up by other researchers out of curiosity. Hence,
superrationalists will probably need to think about ET values themselves.
3.4.1 On the far values of humans and human superrational cooperators
We will now explore what superrational humans might care about in distant civilizations.
Unfortunately, our sample of these people is small, and path dependencies suggest that
current earthly superrationalists may not be very representative of those elsewhere in the
multiverse. We will, therefore, also look at general human values and far values in particular.
Organizing human values
Although most people have reliable intuitions for what other people care about, these
intuitions are hard to pin down, owing to the inherent “messiness” of human moral intuitions
(cf. Stewart-Williams (2015), section “Morality Is a Mess”; Muehlhauser and Helm, 2012,
chapters 3–5.3). This “messiness” makes evolutionary sense (Cosmides and Tooby, 1994) and
should therefore be expected from other civilizations in the multiverse as well. To talk about
human values, it is at least helpful (if not necessary) to develop some systematic terminology
and overview of what kind of things people care about. Luckily, we can get help from moral
psychologists and others who have attempted to develop just this sort of overview.
One example is Jonathan Haidt’s and Craig Joseph’s moral foundations theory. It divides
morality up into five foundations – care, fairness, loyalty, authority and sanctity – although
the authors do acknowledge that some other values (such as liberty) may deserve foundation
status as well. Haidt and his colleagues have also shown that while social conservatives tend
to embrace all five moral foundations, liberals/progressives seem to focus primarily on the
first two, i.e. care and fairness.51 We can thus also use the terms “liberal” and “conservative”
to describe values, even though it is, of course, uncertain whether this distinction carries the
same weight in other civilizations.
Other theories outlining what humans value include Schwartz’ Theory of Basic Human
Values (updated and extended by Schwartz et al. (2012)), as well as those of Shweder et al.
(1997) (also see Pinker (2011, chapter 9.4) for a short, accessible summary). There is also
Peter Levine’s Alternative to Moral Foundations Theory, which is not formally published.
There are also some characterizations of the cultural and moral differences among humans.
For example, Inglehart and Welzel divide moral values into just two factors: traditional
versus secular-rational values, and survival versus self-expression values (2010, note 10).
Hofstede recognizes six cultural dimensions, and Trompenaars’ model of national culture
differences has seven dimensions of varying moral relevance.
Human far values
We have seen that far values are the parts of our preferences that are relevant to MSR (see
section 3.3.2). There are almost no studies on how humans care about alien civilizations.
However, construal level theory suggests that we think and thus care similarly about different
psychologically distant things. This brings us to the question: how do people usually care
about psychologically distant or abstractly construed things?
In contrast to their concrete counterparts, values at abstract construal levels tend to focus
more on the central (as opposed to peripheral) features of a given situation (see Trope and
Liberman (2010b), esp. section V). Values in abstract construal levels are therefore less
fragile (see section 3.2.3), which is good news for us, as this inherent stability makes them
easier to account for in our superrational cooperation. But what are those central features?
A few studies have been conducted to find out, most notably one by Bain, Hornsey, Bongiorno,
Kashima, et al. (2013). The authors summarize the results in a blog post:
51Many other characterizations of the difference between liberals and conservatives have been proposed.
For example, Robin Hanson compares the differences between liberals and conservatives to the differences
between foragers and farmers. Other distinctions have been proposed by Sinn and Hayes (2016) and Lakoff
(1997).
In our research, we asked people to think about the effects that changes in society
today would have on society in the future (the Year 2050). For instance, we asked
people to consider what society would be like 50 years in the future if climate
change was mitigated, marijuana was legalized, abortion laws were relaxed, or the
proportion of atheists or Muslims in society increased substantially. Participants
considered changes in society relating to people’s characteristics (how caring,
moral, and competent people would be in 2050), whether people’s values would
change (e.g., becoming more concerned with security or achievement), whether
there would be more societal problems (like crime and poverty), or greater societal
development (economically, technologically, and socially).
The different contexts produced diverse and nuanced images of what future
society would be like. For example, participants saw a more atheist future society
as making people less friendly but more competent than today, but saw a future
society where marijuana was legalized as both less friendly and less competent.
Overall, people’s images of future society weren’t all good or all bad, suggesting
they had realistic rather than fantastical projections about what society would
be like in the future.
What may be most surprising, however, is that only one dimension emerged as a
reliable motivator of people’s actions in the present. People supported changes
in policies today (e.g., legalizing marijuana, acting on climate change) if they
believed it would lead to a future society where people were more caring and
moral. Other dimensions – people’s values, their competence, or levels of societal
problems and societal development – emerged less strongly, only in a few contexts,
or were irrelevant to people’s willingness to act.
Similar findings were made by Bain, Hornsey, Bongiorno, and Jeffries (2012), Park, Bain,
and Kusumi (2015), Judge and Wilson (2015) and Bain, Milfont, et al. (2015) in other
policy areas. These results are quite surprising – I know of no explicit discussion of “virtue
consequentialism” in moral philosophy, for instance. Unless we think that superrationalists
consistently hold different values, that humans are atypical or that these findings are somehow
invalid, the findings suggest that the application of MSR implies significant policy changes for
people espousing more commonly discussed consequentialist value systems like utilitarianism.
Unfortunately, the above studies have some limitations. For example, Bain et al. (2013) do
not ask for a utilitarian evaluation – i.e. one based on overall (average or total) welfare (or
preference fulfillment) – of the future societies. Perhaps participants only put much weight
on future citizens being caring and moral because these are proxies for other moral issues
(such as welfare)? Besides methodological issues with the study itself, the results may not
transfer to MSR without complications. In any case, social psychology studies often fail to
replicate or generalize as expected. Construal level theory notwithstanding, it could be that
people’s views on alien civilizations differ from those on “collective futures”. These results
should thus be seen as tentative and preliminary until further replications come in; for now,
we can regard them as serving an illustrative, rather than action-guiding, purpose.
Moreover, benevolence – the term used by Bain et al. to encompass the characteristics caring,
moral, and competent in their 2013 study – is still a rather fuzzy concept that probably
depends on people’s general moral views. It seems likely, for instance, that the definition
of benevolence in a given situation varies considerably between, say, a devout Jain and a
devout Salafist Muslim. In future research we should therefore look more into what kind of
benevolence or moral behavior people value. For example, Napier and Luguri found that
abstract mind-sets decrease preferences for loyalty, authority, and purity, all of which lie on
the conservative and tribe-specific end of the moral spectrum (cf. Luguri and Napier, 2013).
The values of human superrational cooperators
We should also investigate how the values of today’s superrational cooperators differ from
those of other humans. Unfortunately, while I suspect that the results would be both
interesting and informative, the number of people who actively reason superrationally today
is too small to yield a statistically representative sample. We will therefore focus on those who, for various reasons, are likely to embrace most (if not all) of the arguments underlying
superrational cooperation.
Of course, all of this again only gives us very weak evidence about the content of the
compromise utility function. For one, it does not tell us much about civilizations that
are very different from humanity. Moreover, the values of earthly superrationalists may
in great part be the result of path dependencies. Thus, they may differ even from the
values of superrationalists in civilizations that are very similar to humanity. Despite these
considerations, I think that this most direct empirical approach to ascertaining the content
of the compromise utility function is worth investigating.
Philosophers
Given that the central theme of this paper rests to a large degree upon philosophical considerations that are unlikely to be well-known outside of analytic philosophy, it seems reasonable to begin our review with analytic philosophers. While many philosophers seem to accept causal decision theory (see section 2.2), they are nevertheless far more likely than non-philosophers to be aware of such ideas at all.52 Furthermore, we can use Bourget and Chalmers’ (2014)
survey of philosophers to look at correlates of making the non-causal choice in Newcomb’s
problem. Most decision theorists see Newcomb’s problem as analogous to the question of
whether to cooperate superrationally in the prisoner’s dilemma with a strongly correlated
opponent (Lewis, 1979). The correlations, taken from the survey website, are inconclusive.
Apparently, one-boxing in Newcomb’s problem correlates very weakly with non-physicalist
views in philosophy of mind (0.139), and only slightly more strongly with viewing one’s own
work as Wittgensteinian (0.15). Two-boxing, meanwhile, has similarly weak correlations
with endorsing the B-theory of time (0.141), embracing classical rather than non-classical
logic (0.136), not being a communitarian (0.128), atheism (0.125), scientific realism (0.121),
seeing one’s work as Lewis-ian (0.119), and with externalism in moral motivation (0.102).53
Correlations between choices in Newcomb’s and the trolley problem were too weak to warrant
52 That said, philosophers often do not act on their self-reported views (Schwitzgebel and Rust, 2011). For
example, while philosophers (and ethicists in particular) are much more likely to rate eating meat as morally
reprehensible, differences in behavior (i.e., actual meat consumption) are, at best, meager.
53 Interestingly, two-boxing is not only mainstream among philosophers in general (see section 2.2), but
also slightly more common among philosophers with whom I (and most acausal decision theorists I know)
would otherwise agree more. For more discussion of this phenomenon from the perspective of a one-boxer,
see Carl Shulman’s Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? and its
comments.
any mention.54 These results do not appear to offer much insight into what value systems
should be taken into account for superrational compromise. So, if anything, we could look
into the values of philosophers in general.
Effective altruists
Let us turn to another community in which taking action based on philosophical arguments
is common: the effective altruist and rationalist spheres. Specifically, we will look at the
effective altruist, LessWrong, and Slate Star Codex communities. A multitude of surveys of
these demographics are available.55
Within the LessWrong community, one-boxing in Newcomb’s problem is about ten times
more common than two-boxing, as evidenced by their 2012 and 2013 member surveys.56
The 2009 survey also revealed that most LessWrong users would cooperate in one-shot
prisoner’s dilemmas against one another. Acausal reasoning thus appears quite common
in this community. In fact, updateless and timeless decision theory (see section 2.2) arose from
discussions on LessWrong.
The surveys also show that many members of the community identify as consequentialists
(see below). Indeed, effective altruism is itself built upon a foundation of consequentialist
arguments, although it is consistent with additional deontological restrictions.
The community’s general world view is entangled with both their consequentialist and
decision-theoretical views,57 as well as a general curiosity for discussing ethics and decision theory in the first place.58 Hence, we may regard their views as indicative (if only weakly)
of those of other superrational consequentialists in the multiverse.
Even the existing surveys reveal some interesting facts about the values held by community
members. For instance, they are overwhelmingly liberal, with only a few percent self-
identifying as conservative. Furthermore, they show a significantly greater concern for
animals than the average person in Western industrialized countries.
Considering larger and smaller groups
54 Unfortunately, at the time of writing, the site that is supposed to show all (as opposed to only the
strongest) correlations between Newcomb’s problem and other questions appears to be broken.
55 General surveys on LessWrong were made in 2009, 2011, 2012, 2013, 2014 and 2016. There is a Slate Star Codex survey from 2014, which only asked non-LW users to participate. Surveys of the EA community were done in 2014 and 2015.
56 The results (as of December 2016) of another LessWrong poll confirm that most community members
strongly favor one-boxing.
57 For example, they view rationality – and consequently decision theory and other sciences – as being, in the end, about winning, in line with instrumental and epistemic conceptions of rationality, rather than about acting or thinking in accordance with some reasons and requirements. Another example of a
connection is that Eliezer Yudkowsky, the founder of LessWrong, has dedicated his life to making sure that
artificial intelligence has a positive impact and convinced many in the community that this is a worthy goal.
It also seems to me that the context of (superintelligent) machines can function as an intuition pump for
consequentialism.
58 Decision theory and a goal system are two important ingredients for solving the problem of AI alignment (see the preceding footnote) (Soares and Fallenstein, 2015; Bostrom, 2014b, chapter 13, section “Component list”). Furthermore, effective altruism, i.e. systematically trying to do as much good as possible, requires that one knows what is good at least in some detail. For example, metrics like the quality-adjusted life year or the disability-adjusted life year may be used to evaluate interventions against poverty. (See GiveWell’s
articles on the topic.) Effective altruism presumably also inspires learning about rationality.
We could also try to survey the values of much smaller sets of people, like those who have argued against causal decision theory (e.g., in academic papers) or who indicate that they take the implications of non-causal decision theories seriously.
Conversely, we can also study the values of much broader sets of people to make use of
the academic literature. For example, we can reasonably assume that in order to discover
superrationality, one would need a general philosophical mindset (rather than, say, a merely
pragmatic one) and a willingness to engage in thought experiments that have no immediate
practical relevance. We could then try to identify groups of people who meet these criteria,
and to discover what values their members have in common.
While we should also help superrationalists who do not believe they live in a multiverse
(see section 2.9), we should nevertheless expect superrationality to be more widely accepted
among people who do believe in a multiverse. After all, superrationality is probably far less
action-guiding on its own than in combination with the multiverse hypothesis (see section
6.6), so it is comparatively unlikely to spread by itself. Thus, we could also survey the values
of people who believe in the multiverse hypothesis.
Similarly, we could study people who accept similarity to and correlation with others (as
opposed to thinking that they are unique in the entire multiverse). Interestingly, such people
may be likely to be conservative (Stern, West, and Schmitt, 2014).
On the other hand, it may be that our currently available sample of superrationalists is atypical simply because the topic of MSR is still in its infancy here on Earth. If
superrationality eventually becomes more popular on Earth and elsewhere in the multiverse,
we may find that only the relatively few early adopters of the idea differ significantly from
the human mainstream. Presumably, this is common in many areas of progress (cf. Rogers,
2010, chapter 7). For example, the average computer user in 1970 was very different from the
general population at the time, since operating a computer back then required a particular set
of technical skills that most people neither possessed nor had the time to learn. But once
these early adopters improved the technology and convinced others to buy computers, they
were soon outnumbered by people who would never have worked with the older computers,
eventually culminating in today’s ubiquitous use of computers. Thus, the average computer user of the past 50 years would probably resemble an average young person in
a developed country. Analogously, early superrationalists may need to be more willing to
study obscure thought experiments and look deliberately for crucial considerations. Soon,
however, these early adopters may find themselves outnumbered by less explorative people
who would have never thought about donation games between correlated agents on their
own. While it is very unlikely that MSR will spread as widely as computers, the average
superrationalist may nevertheless end up looking more similar to the average person than
today’s sample suggests.
3.4.2 Biological evolution
According to conventional views of the multiverse and its physical laws, almost all59 of its
inhabitants are evolved agents or descendants of evolved agents. This means we can use
our knowledge of evolution and its workings to predict what values these other agents have.
59 The most notable exceptions are probably Boltzmann brains, which do not have a significant impact on
the universe.
Since much has been written about evolution and the more relevant fields of evolutionary
psychology and (descriptive) evolutionary ethics, we shall not discuss them in detail here.
Readers may consult the works of Pinker (1999), Stewart-Williams (2015), Greene (2013),
Axelrod (2006), and Buss (2015) for introductions.
3.4.3 Cultural evolution
A process similar to evolution takes place on the cultural level. Whereas biological evolution
operates on genes, this cultural evolution determines the development of pieces of culture
or memes such as “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches” (Dawkins, 1976, chapter 11).60 Again, this is not the place to review the
literature on this topic. For an introduction to cultural evolution consider, e.g., Henrich
(2015).
Which moral views correlate with superrationality?
We can turn to the study of cultural evolution to learn about the prevalence of various
consequentialist value systems. But whereas superrationalists probably resemble other
intelligent agents biologically, they may well differ from them culturally. Thus, in addition
to considering the civilizational baseline, we may also look into what values often go hand
in hand with superrationality. Below are some preliminary examples of lines of reasoning
that might be relevant in this context, some of which resemble the more empirically-minded
comments in section 3.4.1:
•Cooperation in general is more relevant for people with value systems that differ
strongly from the mainstream in their civilization.
•Some value systems benefit more from cooperation (and are harmed more by its
breakdown) than others. Agents with these value systems are more interested in
cooperation than others.
•Superrationality is a “weird” philosophical idea. Therefore, it is more accessible
to people who care about knowledge, are open-minded, philosophically rather than
pragmatically inclined, and so forth.
•Superrationality on its own is probably insignificant to most people’s lives (see section
6.6). Hence, we should expect many superrationalists to only care about the idea
because it comes packaged with multiverse theories. While this does not necessarily
have implications for the other superrationalists’ values, it does underline the point
about a “philosophical” rather than “pragmatic” mindset. After all, thinking about all
these other universes usually does not matter for our actions.
•The significance of MSR is more apparent to people who think of goals as utility
functions or the like, since this view makes it easier to see that we can have
preferences about distant parts of the multiverse. Once someone notices that many
60 There is some debate as to whether the term “evolution” fits this process, i.e. about the validity and usefulness of the analogy between cultural evolution on memes and biological evolution on genes (Edmonds, 2005; Kuper, 2000; Gil-White, 2005; Wimsatt, 1999; Claidière and André, 2012; Atran, 2001; Pinker, 1999, chapter 3, section “What now?”). In any case, my impression is that nowadays the study of cultural evolution does not heavily rely on the analogy, even when the term “cultural evolution” is used.
agents have preferences about each other’s universes, she can see room for trade. If
thinking about utility functions or similar formalisms indeed paves the way for MSR,
we may also expect artificial intelligence researchers, game theorists, and some ethicists
to be prominently represented among superrationalists in the multiverse.
•The significance of MSR is more apparent if one realizes that it may imply radical
compromise wherein everyone effectively changes their utility function. This, in turn,
may be most apparent to people who are familiar with arguments like Rawls’ original
position (Freeman, 2016) or (preference) utilitarian reasoning along the lines of “it
would be best if everybody...”.
•MSR may come more naturally to people whose values require coordination (see sections
2.8.3 and 2.8.9).
Most of these considerations apply mainly to civilizations resembling our own. Of course, we
can similarly think about correlates of superrationality in more advanced (or, in principle,
more primitive) civilizations. Such thoughts are even more speculative, especially if we
do not know what future civilizations might look like. In the next section (or rather, its
companion papers) we will also consider cultural (and biological) evolution in specific models
of more advanced civilizations.
3.4.4 Other considerations
Biological and cultural evolution are not the only processes that affect the distribution of
moral views throughout the multiverse. In particular, I would like to draw attention to three
other candidates. Given that the underlying considerations in these areas are advanced,
speculative, and not strongly related to superrationality itself, I will not go into detail; rather,
I will refer to complementary notes for further tentative ideas.
•Some civilizations may inadvertently self-destruct before they can shape their part of the universe. Others, meanwhile, may voluntarily refrain from colonizing space. The Fermi paradox suggests that either of these is likely. The values of such civilizations should receive less weight than one would have assigned before considering this argument. For more information see the complementary note Civilizational filters and distribution of values in the multiverse.
•An important step in the development of humanity and potentially other advanced
civilizations could be whole brain emulation, i.e. scanning a biological brain, uploading
it into a computer, and then simulating its behavior to a level of detail that preserves
the brain’s functionality. Since, by assumption, they are functionally equivalent to
human brains, whole brain emulations (ems) can do most of the things that humans can.
However, they can also do many things that humans cannot. Because ems are software,
they can be copied, their information processing can be sped up (by moving them to a
faster computer or giving them more CPU time on their current one), and they can use
the internet to travel much faster and cheaper than humans. These advantages may
be sufficient to allow ems to become more numerous and more powerful than humans.
They also imply predictable cultural changes, including an increased willingness to
adopt non-causal decision theories. See the complementary note Values and acausal
reasoning of whole brain emulations for more details.
•Because evolved minds (whether uploaded or not) are not made to be maintained or
improved, I expect that many long-lasting civilizations will eventually create artificial
superintelligence. Because programming an AI to pursue one’s goals might be difficult,
the values of such an AI may differ from those of humans. For more information, see
the complementary note Values and non-causal reasoning of superintelligent AIs .
4 Interventions
In this section, I propose a few interventions that multiverse-wide superrationalists should
pursue. Many of these are tentative ideas to consider more in the future rather than confident
suggestions for what we should do today.
4.1 Cooperation in the face of uncertainty about values
We begin with a general challenge: given that we currently know so little about the values of
other agents in the multiverse, how can we cooperate with them? With our current state of
knowledge, it appears impossible to conclude what position MSR recommends on particular
issues. For example, it seems impractical to decide whether we should vote and lobby in favor of or against mass surveillance, abortion, marijuana legalization or the death penalty.
Perhaps MSR, while interesting in theory, is practically impossible to apply because of our
ignorance of the values in the multiverse? (Also see section 6.11.) While our uncertainty
about the values of our collaborators is no doubt a major obstacle to the application of MSR,
I will nonetheless argue that there are relevant policy changes that we can implement even
today.
The first class of such interventions requires no knowledge about other value systems at
all, as long as we are confident that future agents will be able to attain such knowledge.
Meta-activities are examples of this: no matter what the aggregated utility function of all
superrationalists in the multiverse turns out to be, we could still benefit it indirectly by
learning what it is or by spreading MSR itself (see section 4.5). One way of doing so is to
ensure that artificial intelligences cooperate superrationally (see section 4.6).
In the second class of feasible interventions, we try to draw conclusions from what little we
do know about the distribution of values in the multiverse. We can, for instance, be sure that
extraterrestrials will care less than humans about the Bible or the United States of America
(though some will care about them a lot and many may care about preserving local traditions
in general). On the other hand, we can be reasonably confident that many extraterrestrials
care about satisfying the preferences of some other agents (e.g., “innocent” agents capable
of reciprocating) (see, e.g., Axelrod, 2006; Trivers, 1971; Fehr and Gächter, 1999; Dawkins,
1976; Taylor, 1987; Buss, 2015, chapter 9). Hence, we should perhaps embrace such “universal”
moral values more than human superrationalists would otherwise do. (We explore this further
in section 4.1.1.) Consider another example: the far values of at least some humans probably
resemble those of many evolved extraterrestrial superrationalists, which means that we can
benefit our superrationalist collaborators by increasing the capabilities of these humans to
fulfill these preferences (see section 4.4). As a last example of how we can use a small piece
of knowledge, consider how we can sometimes know that someone’s values are at an extreme
end of some scale or otherwise far away from the multiverse-wide superrationalist average.
In this case, MSR suggests that we shift these extreme values towards the middle of their
scale. For example, utilitarians are extreme in that they only care about welfare, whereas
most superrationalists presumably care about a lot of other things as well. Thus, it would
be good to convince the utilitarian to take other considerations into account (although it is
not clear what these ought to be and how much they should be taken into account).
In both of these classes, we overcome the obstacle posed by our lack of knowledge by
benefitting a wide variety of value systems, rather than picking out any particular subset of
extraterrestrials.
4.1.1 Universalism
I think satisfying universalist values, i.e. ones that are shared by a large fraction of
superrationalists, may become somewhat more important for all superrationalists, although
the case is not entirely clear.
Imagine a group of people with a few shared concerns, such as justice, welfare and freedom,
and a large number of non-shared concerns, such as each person’s egoism, tribal loyalties,
etc.61Given this, they can produce gains from trade by moving resources from the non-
shared concerns to the shared concerns. In terms of the compromise utility function (see
section 2.8), the idiosyncratic concerns do not receive lower collective weight than prior to
cooperation. However, since each individual idiosyncratic value receives much smaller weight
in the compromise utility function and interventions usually cannot satisfy many of them
at the same time, interventions targeted at the universal concerns will usually increase the
compromise utility function more efficiently.
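To make the arithmetic behind this argument concrete, here is a minimal sketch in Python; the three agents, the concerns, and all numbers are hypothetical. Each agent values one shared concern and its own idiosyncratic concern, and the compromise utility function is the unweighted sum of the agents’ utilities.

    # Minimal sketch: gains from trade by shifting resources from idiosyncratic
    # to shared concerns. Agents, concerns, and numbers are hypothetical.
    agents = ["a", "b", "c"]

    def utility(agent, spending):
        # Each agent values the shared concern ("justice") and its own
        # idiosyncratic concern, but not the idiosyncrasies of others.
        return spending["justice"] + spending["idio_" + agent]

    def compromise(spending):
        # Compromise utility function: unweighted sum over all agents.
        return sum(utility(a, spending) for a in agents)

    # Before cooperation: each agent spends its unit on its own concern.
    before = {"justice": 0.0, "idio_a": 1.0, "idio_b": 1.0, "idio_c": 1.0}
    # After cooperation: the same three units all go to the shared concern.
    after = {"justice": 3.0, "idio_a": 0.0, "idio_b": 0.0, "idio_c": 0.0}

    print(compromise(before))  # 3.0 -- each unit benefits only its spender
    print(compromise(after))   # 9.0 -- each unit benefits all three agents

The same total expenditure scores three times higher on the compromise utility function, simply because spending on the shared concern benefits every agent at once.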
Although this argument is quite persuasive, it is not as strong as it initially seems. For
example, it assumes that each individual’s values are a simple weighted sum of universal and
idiosyncratic concerns. But preferences can also have different shapes. In fact, each agent
may explicitly protect its idiosyncratic values against losing their weight in such a preference
aggregation mechanism. One example could be that the idiosyncratic values are much
stronger than other preferences, but face diminishing returns. For instance, most people
probably care almost exclusively about their own well-being until their level of well-being
reaches some threshold.
There may also be agents who exclusively care about their idiosyncratic preferences. For
agents with these values, a compromise in which resources shift to universal concerns is
negative.
Another reason to disregard idiosyncratic preferences is that they are often not more common
than their opposites. For example, Marxism, the US or Islam are liked by many, but also
disliked by many others. Therefore, it is not even clear whether the compromise utility
function evaluates either one of them positively.
It should be noted that some universalist values may refer to others’ tribal values. For
example, many humans care about preserving cultural heritage. That said, this preference
61 Note that the distinction between universal and idiosyncratic concerns is not binary. For example, I
would guess that valuing eternal flames is much more common in the multiverse than most religions and
tribal loyalties but less common than concern for justice, welfare and freedom.
is usually weak and usually abandoned if it conflicts with other values. For instance, few
would argue that human sacrifices should be continued to preserve tradition. Although most
people do not care enough about animals to become vegetarians, my impression is that most
people in Western countries would favor the abolition of bullfighting.
4.2 Moral advocacy
Advocating one’s moral views can be an effective intervention if they differ significantly from
those of most other people. In light of superrational cooperation, we should perhaps change
the values we advocate.
4.2.1 Universalist values
As I argued in section 4.1.1, superrational compromise may imply that more resources should
be used to satisfy universal as opposed to idiosyncratic concerns. This suggests that spreading
universalism is good from an MSR perspective.
Expanding the moral circle
Most people care much more about themselves, their kin, and their associates than about
others. From their point of view, they, their kin, and their friends are all special. From the
outside view, however, most people are not more important than others. It is thus in the
benefit of altruistic outsiders (e.g., other humans) to reduce the difference between how much
people care about themselves, their family, friends, etc. versus other humans. In Singer’s
terminology, an outsider who cares about all humans equally would similarly want people’s
“circle of empathy” to expand outwards to include other humans (Singer, 2011). In this way,
we can align their decisions with the goals of the outside party.
The perspective of superrational collaborators elsewhere in the multiverse is similar, in that
many things that are morally special to us are not special to them. Take nationalism and
patriotism: many people assign particular moral value to the country they grew up in or to its
citizens, with little support from impartial reasons.62 Needless to say, most superrational
collaborators elsewhere in the multiverse will adopt a different perspective. If they care more
about Japan than about the United States (or vice versa), it would be for specific impartial
reasons. Making people care intrinsically less about particular nations thus aligns their
values more with those of superrational collaborators elsewhere in the multiverse. Similarly,
intrinsic preferences for members of one’s race, species, or substrate are inconsistent with an
outside view of someone from a completely different species with a different substrate.
62 Surely, there are some impartial reasons to like one country more than another. For instance, Sweden is
more tolerant of homosexuals than Iran, which is a reason to favor Sweden if one cares about the welfare
of homosexuals. Nationalists often provide impartial reasons for favoring their country. For example, US
nationalism is often about how the US is the country with the most freedom in the world. But if people really
cared about such impartial reasons, the “best country in the world” would often not be their own country.
Furthermore, nationalism often exaggerates the difference between countries in a way that seems inconsistent
with an impartial point of view: sure, the US has a lot of freedom, but so do many other Western countries.
If the US is better than everyone else along such dimensions at all, then surely not by a big margin. In any
case, I am only talking about the kind of nationalism that is not based on impartial arguments.
Which moral foundations?
Given the criterion of universalism, what aspects of morality are worth spreading? As an
illustrative classification of moral intuitions, we use Haidt’s moral foundations theory, which
divides morality up into five foundations: care/harm, fairness/cheating, loyalty/betrayal,
authority/subversion and sanctity/degradation (see section 3.4.1). Liberals tend to care
primarily about the first two aspects, whereas conservatives care about all five.
Liberal values are universalist, while the exclusively conservative values are not. As J. Greene
(2013, chapter 11, section “Why I’m a liberal, and what it would take to change my mind”)
writes (references added from the endnotes):
According to Haidt, American social conservatives place greater value on respect
for authority, and that’s true in a sense. Social conservatives feel less comfortable
slapping their fathers, even as a joke, and so on. But social conservatives do
not respect authority in a general way. Rather, they have great respect for the
authorities recognized by their tribe (from the Christian God to various religious
and political leaders to parents). American social conservatives are not especially
respectful of Barack Hussein Obama, whose status as a native-born American,
and thus a legitimate president, they have persistently challenged. [...] Likewise,
Republicans, as compared with Democrats and independents, have little respect
for the authority of the United Nations, and a majority of Republicans say
that a Muslim American with a position of authority in the U.S. government
should not be trusted (Arab American Institute, 2014). In other words, social
conservatives’ respect for authority is deeply tribal, as is their concern for sanctity.
(If the Prophet Muhammad is sacred to you, you shouldn’t be in power.) Finally,
and most transparently, American social conservatives’ concern for loyalty is
also tribal. They don’t think that everyone should be loyal to their respective
countries. If Iranians, for example, want to protest against their government,
that is to be encouraged.
In other words: authority, loyalty, and sanctity are all non-universalist values. While many
people have values that structurally fit into these categories, the content (e.g., the referent)
of these values differs. Applied to multiverse-wide superrational cooperation, this means that
we cannot benefit the authority, loyalty, and sanctity values of other superrationalists unless
we are in a society with the “right” authorities and sanctity rules. In fact, if we push for
these three values in our tribe (or civilization), it may actually be bad from the perspective
of people with conservative values from other tribes. American social conservatives tend to
dislike Islam and loyalty to its authorities, even more than American liberals do. Overall,
this suggests that when it comes to multiverse-wide compromise, spreading values in the
domains of authority, loyalty, and sanctity is not very fruitful. Instead, we should try to
make people care more about the universalist liberal foundations.
Having said this, there may be a few exceptions to the rule (cf. the last paragraph of section 4.1.1). For example, Christian social conservatives may like parental authority even if one’s
parents are Muslims or extraterrestrials. In the sanctity domain, a preference for leaving
nature untouched may extend beyond an agent’s planet, although many extraterrestrial
habitats are probably “slimy” and full of scary animals. Presumably, such reasoning is also
applicable to other moral values. For instance, some people care about the traditions of
other tribes, including their art, social institutions, laws, religions and other non-universal
aspects.
It should also be noted that aspects of the liberal value of fairness vary strongly between different people. For example, a progressive may see wealth inequalities as unfair, while a
libertarian finds wealth redistribution unfair. Thus, supporting one conception of fairness can
hurt another. That said, there are many sorts of unfairness that almost everyone recognizes
as bad.
Another reason to focus on the liberal aspects of morality is that potential superrationalists
on Earth are rarely conservative (see section 3.4.1). That said, future societal transitions
might make people more conservative (see the companion paper Values and acausal reasoning
of whole brain emulations).
4.2.2 Concern for benevolence
We have seen provisional research indicating that, when it comes to distant societies, humans
mainly care about the benevolence, warmth, and moral behavior of their inhabitants (see
section 3.4.1). If these tentative findings turn out to be correct and other evolved species
resemble ours in this regard, we should try to align people’s near values more with these
(typically far) goals. However, given the tentativeness of said research, I do not think this
should significantly affect our actions at present.
4.2.3 Consequentialism
Even though superrationalists elsewhere in the multiverse may care most about whether we
behave in a non-consequentialist but broadly ethical way, they care about this in a consequentialist
way (see section 3.2.1). For example, they might care about the numbers of crimes and
selfless acts, or total amounts of happiness and suffering in a given population. This stands
in contrast to the preferences revealed by most people’s charitable efforts: most money
is donated to charities that are comparatively ineffective, i.e. ones that do not achieve the
best possible consequences. By making people more consequentialist, we can improve their
resource use from the perspective of consequentialist third parties. This suggests that we
should spread consequentialist ideologies like effective altruism, potentially independently of
any particular optimization target (such as injustice, suffering, happiness, or knowledge).
4.2.4 Pluralism
Whereas the compromise utility function incorporates a plethora of concerns, most individuals’ values are much more narrow. This is especially true among people who give morality some
thought. For example, some people adopt utilitarianism, while others become proponents of
Kant’s categorical imperative.63
63 That said, advocates of simple ethical views like utilitarianism often argue that their implications resemble those of other ethical notions. For example, because receiving an additional unit of resources has a greater impact on a poor person’s happiness than on a rich person’s, utilitarianism tends to prefer an even distribution of resources
(Studebaker, 2012). Similarly, it has been argued that utilitarianism is (often) consistent with the wrongness
of killing, justice (Mill, 1863) and other moral rules (Smart and B. Williams, 1973, part 1, chapter 7).
This decreases the value of making utilitarians more pluralistic. It should be noted, however, that many
(especially critics of utilitarianism) have argued for the opposite, i.e. that there are some moral intuitions
As I am primarily a utilitarian, I sympathize with adopting a single ethical view (and
utilitarianism in particular). From an MSR perspective, on the other hand, this misses out
on gains from compromise between these opposing value systems and it would be better if
everyone adopted a mix of different values instead. Thus, we may want to promote moral
pluralism.
One version of this view is MacAskill’s (2014) moral uncertainty. Operating under the assumption of moral realism (which I reject), he argues that we should be uncertain about which ethical system is correct, and spells out how to act under that uncertainty. Another related view is the normative reading of
Yudkowsky’s complexity of value (cf. Stewart-Williams (2015), section “Morality Is a Mess”;
Muehlhauser and Helm, 2012, chapters 3–5.3), according to which what humans care about
cannot be captured by a simple moral system and instead incorporates a large number of
different values.
4.2.5 Promoting moral reflection
Wanting more idealized, carefully reflected-upon values to be implemented is probably much more common in the multiverse than wanting less idealized values to be implemented.64 This is
especially the case for agents who have not yet settled on a moral view. For example, I am
genuinely uncertain about what I would or should count as morally relevant suffering when
it comes to small minds (such as those of insects) and the like, just as I am not sure how to
deal with infinities in ethics. I could thus benefit a lot if someone were to make more people
think about these problems.
Interestingly, the appeal of promoting moral reflection decreases upon idealization. Most
people probably endorse moral discourse, the importance of reflection and argument, etc.,
in part because they think their moral view will result from that process – if they did not
believe they had the arguments on their side, they might not hold their moral position in
the first place. However, not everyone can be right about this at the same time. If someone
only cares about preference idealization because she thinks that her value system will win,
then preference idealization may remove that meta-preference.
Beyond the question of whether evolved agents in the multiverse care about moral discourse,
we must ask an empirical question about our own universe: will moral discourse bring people’s
object-level positions closer to those of our multiverse-wide superrational compromise utility
function? For example, does moral discourse make people care more about, say, benevolence, assuming this really turns out to characterize much of evolved agents’ far values (see section 3.4.1)? Moral reflection may also have negative consequences, such as attitude polarization (Lord, Ross, and Lepper, 1979; Taber and Lodge, 2006). These questions appear
suitable for further research.
that utilitarianism cannot make sense of (Nathanson, n.d., section 3.b.i). For example, they argue that
utilitarianism is not (always) consistent with moral intuitions about equality (Pogge, 1995; Gosepath, 2011),
the wrongness of killing (Henson, 1971), and justice (Smart and B. Williams, 1973, part 1, chapter 10).
64 The main data point is that humans think about morality and engage with others’ moral views. The evolutionary psychology and cultural evolution perspectives, on the other hand, are non-obvious. Some moral arguments may be favored by cultural group selection, while others may allow intelligent individuals to get their way more often. On the other hand, individuals who change their moral views may be perceived as unreliable or disloyal.
Besides promoting societal discourse on ethical questions, one intervention in this domain is
the use of preference idealization in artificial intelligence value loading (see section 4.6).
4.2.6 Multiverse-wide preference utilitarianism
In addition to spreading MSR itself, one could also spread value systems that in some way
mimic its implications. Specifically, the proposed neutral aggregated utility compromise is
essentially a form of preference utilitarianism or multiverse-wide preference utilitarianism.
Multiverse-wide preference utilitarianism might therefore be a promising moral view to
advocate on the basis of multiverse-wide superrational compromise.
Of course, spreading a proxy for MSR has some general disadvantages. Most importantly, it
is not very robust. If multiverse-wide preference utilitarians come to prioritize very differently
than multiverse-wide superrationalists, then spreading preference utilitarianism would
not yield much in our favor. The question nonetheless deserves some thought. After all,
if there is a significant chance that multiverse-wide preference utilitarianism approximates
the conclusions of MSR, then we should at least be on the lookout for very cheap ways of
promoting it.
One main difference between preference utilitarianism and superrational cooperation –
whether in the form of aggregated utility compromise or otherwise – is that the latter only
takes the values of other superrationalists in the multiverse into account (see section 2.9.4).
Preference utilitarianism, on the other hand, accounts for the preferences of a much broader
set of agents, such as all sentient beings, all agents that have preferences of any sort, or
all agents who satisfy some other criteria for personhood. This may mean that preference
utilitarians arrive at very different conclusions than MSR proponents. For example, if they
take small minds into account, these may well dominate preference aggregation. If, on the
other hand, they only take members of human-like species into account, then the difference
between these and superrationalist preferences may be much smaller.
Another difference could be the way interpersonal comparison of utility is handled (cf. section
2.8.5). In the context of compromise, an individual’s interests are usually given weight in
proportion to the individual’s power. So, for example, the interests of a superrational
billionaire receive orders of magnitude more weight than the interests of a superrational
beggar. However, most would view this approach as unethical and most preference utilitarians
would disagree with it. Thus, multiverse-wide preference utilitarianism gives more weight to
the moral views of the poor than MSR suggests.
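To illustrate the difference, here is a minimal sketch in Python contrasting a power-weighted aggregation with an equal-weighted one. The two agents, their utilities, and the weights are all hypothetical, and I set aside the contested question of how to put utilities on a common scale in the first place (cf. section 2.8.5).

    # Minimal sketch: power-weighted vs. equal-weighted preference aggregation.
    # Agents, utilities, and weights are hypothetical.
    utilities = {
        "billionaire": {"policy_x": 1.0, "policy_y": 0.0},
        "beggar":      {"policy_x": 0.0, "policy_y": 1.0},
    }
    power_weights = {"billionaire": 1000.0, "beggar": 1.0}  # compromise by power
    equal_weights = {"billionaire": 1.0, "beggar": 1.0}     # preference utilitarianism

    def aggregate(weights, policy):
        # Weighted sum of the agents' utilities for the given policy.
        return sum(w * utilities[agent][policy] for agent, w in weights.items())

    for policy in ("policy_x", "policy_y"):
        print(policy, aggregate(power_weights, policy), aggregate(equal_weights, policy))
    # Power-weighted: policy_x wins 1000.0 to 1.0.
    # Equal-weighted: the two policies tie at 1.0 each.

Under the power-weighted scheme the billionaire’s preferred policy dominates; under equal weighting the two agents’ views count the same, which is the sense in which preference utilitarianism favors the poor relative to MSR.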
Yet another problem could be that preference utilitarians would not arrive at the more
meta-level MSR interventions. Even if MSR and multiverse-wide preference utilitarianism
had the same object-level implications, the justification for MSR is different from (non-MSR)
justifications for preference utilitarianism. Thus, preference utilitarians would not support or
even come up with interventions that are about spreading the MSR-based justifications for
MSR’s and preference utilitarianism’s joint conclusions. For example, a preference utilitarian
(who does not agree with MSR) would not spread the MSR idea itself, nor try to ensure that
future people (and AIs, see section 4.6.3) reason correctly about decision theory. Because
these are plausibly among the most promising interventions, this consideration suggests some
significant divergence in priorities.
In sum, it is unclear to what extent multiverse-wide preference utilitarianism could approximate a superrational compromise utility function. At this point, however, spreading
multiverse-wide preference utilitarianism is unlikely to be a top priority.
4.2.7 No multiverse-wide tug-of-war over values
Value systems can be viewed as having several dimensions, like relative importance of welfare,
population size, art, knowledge, justice, compassion and freedom, tradeoffs between suffering
and happiness, tradeoffs between extreme happiness/suffering and mild happiness/suffering,
and severity of punishments, to name but a few. Different groups in the multiverse invest
resources into pulling the relative values of these dimensions into different directions. Some
may want people to care more about suffering, while others want them to care more about
nature or happiness instead.
Now, imagine you care more about suffering than most others and that you live in a
civilization with a merely average concern for suffering. Presumably, you would want to pull
the “concern for suffering rope” in your direction, potentially at the cost of other values.
But knowing about superrationality, this would make it more likely that those who care
less than average about suffering will also pull the rope in their direction elsewhere in the
multiverse, thus offsetting your impact. Therefore, MSR would recommend against shifting
concern away from other superrationalists’ values, e.g., nature or happiness, to suffering.
It should be noted that the above does not (necessarily) apply if the values of your civilization strongly diverge from the average far values of superrationalists. In such cases, it may
be somewhat beneficial if all superrationalists pull the values of their civilization toward the
average.
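The offsetting logic can be illustrated with a toy calculation in Python; the population sizes and the assumption of perfect correlation are hypothetical simplifications.

    # Toy illustration of the multiverse-wide tug-of-war. Suppose equally many
    # superrationalists want to pull a civilization's "concern for suffering"
    # dial up as want to pull it down, and their decisions to pull toward
    # their own preference are perfectly correlated with yours.
    n_pull_up = 1000    # agents who, like you, would pull concern up by +1
    n_pull_down = 1000  # agents who would pull it down by -1

    def expected_net_shift(you_pull: bool) -> float:
        # Your decision is evidence about what both groups of similar
        # reasoners decide elsewhere in the multiverse.
        if you_pull:
            return n_pull_up * (+1.0) + n_pull_down * (-1.0)
        return 0.0

    print(expected_net_shift(True))   # 0.0 -- the pulls offset each other
    print(expected_net_shift(False))  # 0.0 -- same net shift, resources saved

Pulling achieves the same expected net shift as not pulling, but wastes the resources spent on pulling, which is the sense in which MSR recommends against the tug-of-war.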
4.3 Promoting causal cooperation
Imagine two value systems, each of them common throughout the multiverse, engaged in
conflicts with one another on Earth. Let us also assume that most people with these value
systems find ideas like acausal decision theory and the multiverse highly speculative, such that we cannot convince them to cooperate on an MSR basis. In this case, we can still cooperate superrationally with others in the multiverse by promoting causal cooperation between the two sides (provided this does not end up hurting some third superrational party
of agents65).
65 As a non-obvious example, consider global catastrophic risks. Presumably, most people would not want
humanity to experience a global catastrophe. Promoting peace and cooperation between nuclear powers is
thus positive for all nuclear powers involved. In the plausible event that humanity would survive a nuclear
winter and quickly recover, however, post-apocalyptic human society may come to hold different moral views
that conflict with the views of current nuclear powers. For instance, it may be that in the first months
after a global catastrophe, there would be frequent violence and chaos among survivors. They may also be
forced to exert violence themselves to survive. Thus, the survivors may be desensitized to violence. Even
after civil order is reestablished, citizens may still be relatively unconcerned about violence towards animals,
criminals, the weak and poor, etc. (Note that I am not claiming that this would necessarily be the case;
indeed, personal hardships can also make people more compassionate. I am merely using it as a somewhat
plausible scenario to illustrate the present point.) All of this would imply that mitigating global catastrophic
risks on Earth ends up hurting agents in the multiverse who would like societies to be organized according
to post-apocalyptic survivor values. If agents with such values are sufficiently common in the multiverse,
For example, let us assume that the payoff matrix of their interaction is that of a prisoner’s
dilemma given in table 2. Let us assume that both players’ utility functions are equally
common in the multiverse. We also assume that other value systems have no interest in
the outcome of the interaction. From the perspective of a third party who accepts MSR,
the effective payoff matrix for this interaction may look like the one given in table 3. That
is, when such a third party can influence the outcome of the interaction between player 1
and player 2, she acts as though she maximizes the utilities given in that table, even if she
intrinsically cares about something entirely different. When such an agent is able to influence
at least one of the players, she will lobby him to choose C,66 because to her, the payoffs are proportional to the number of C’s that are chosen.67 A disinterested non-superrational third
party, on the other hand – i.e. one who does not intrinsically care about the payoffs of either of the two agents – would assign no value to any of the four outcomes, nor would they
invest any resources in bringing about a particular outcome.
                          player 2
                          C        D
    player 1     C        2, 2     0, 3
                 D        3, 0     1, 1

Table 2: The payoff matrix of a prisoner’s dilemma.
                          player 2
                          C        D
    player 1     C        4        3
                 D        3        2

Table 3: The effective payoffs of a prisoner’s dilemma to a third party that cooperates superrationally.
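As a sanity check on table 3, the following Python snippet derives the effective payoffs directly from table 2 under the stated assumptions (both utility functions equally common in the multiverse, no other value systems affected): the third party values each outcome at the sum of the two players’ payoffs.

    # Effective payoffs of the prisoner's dilemma (table 2) to a superrational
    # third party: she values each outcome at the sum of the two payoffs.
    pd = {  # (player 1 move, player 2 move) -> (payoff 1, payoff 2)
        ("C", "C"): (2, 2), ("C", "D"): (0, 3),
        ("D", "C"): (3, 0), ("D", "D"): (1, 1),
    }
    effective = {moves: sum(payoffs) for moves, payoffs in pd.items()}
    print(effective)
    # {('C', 'C'): 4, ('C', 'D'): 3, ('D', 'C'): 3, ('D', 'D'): 2}
    # As in table 3: each additional C raises the effective payoff by one.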
Next, let us assume that, rather than some third party, player 1 himself learns about and
adopts multiverse-wide superrational cooperation, while player 2 stays ignorant of the idea.
The new effective payoff matrix may then look like table 4. Player 2’s payoffs are the same
as in the original prisoner’s dilemma, but player 1’s effective payoffs have changed. He now
then causal cooperation between nuclear powers should actually be sabotaged! That said, I do not find this conclusion all that plausible. It rests on more assumptions, and less likely ones, than other action-guiding arguments, and so it is much more fragile. One specific problem is that I would expect humans
(and most other evolved beings) to become more tribal in response to a global catastrophe (Henrich, 2015,
chapter 11, section “War, External Threats, and Norm Adherence”), which may make these values less
important (see section 4.2.1).
66 For ideas on promoting cooperation from the outside, see Tomasik’s Possible Ways to Promote Compromise, as well as Axelrod (2006, chapter 7).
67 Note that in some prisoner’s dilemma-like problems, mutual defection is overall better than unreciprocated
cooperation, in which case the superrationalist’s job is more difficult. If she convinces one player of cooperation
but fails to convince the other one, she will have done more harm than good.
maximizes the sum of the two value systems’ payoffs, because player 1’s and player 2’s utility
functions are equally common in the multiverse. This puts player 1 in a peculiar situation:
whereas defection is the dominant strategy in the original prisoner’s dilemma (and therefore
still the dominant strategy for player 2), cooperation dominates in this new version. Player 1
would thus cooperate in a one-shot version of the problem.
On Earth, however, most interactions are repeated, like an iterated prisoner’s dilemma. At
first glance, one may suspect that player 1 would still cooperate in every round given that,
no matter what the opponent on Earth does, he will want to make it more likely that agents
elsewhere in the multiverse behave in a similarly cooperative way. However, such a strategy
of unconditional cooperation makes defection player 2’s best strategy. This is suboptimal
for player 1, given that he prefers mutual cooperation (C,C) over unilateral cooperation
(C,D). In an iterated version of the game, player 1 might therefore punish defection to
some extent, similar to how successful strategies punish defection in the iterated prisoner’s
dilemma. Nevertheless, the dynamics of this new problem are different from those of the
prisoner’s dilemma. Based on the ordering of the outcomes for the different players, the game
is identified as g261 or g266 in the periodic table for 2x2 games by Robinson and Goforth
(2005), who also provide a few examples of games in this category. A few additional examples
of this type of game exist, but overall, the game has not been studied extensively in the
literature. Further research is thus needed to identify the right strategy for iterative versions
of the game.
                          player 2
                          C        D
    player 1     C        4, 2     3, 3
                 D        3, 0     2, 1

Table 4: The effective payoffs of a prisoner’s dilemma in which player 1 cooperates superrationally (with extraterrestrial agents who hold player 2’s values), but player 2 does not.
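The asymmetric case of table 4 can be derived and checked the same way. The sketch below, under the same assumptions as before, verifies the dominance claims above: cooperation dominates for player 1, while defection still dominates for player 2.

    # Effective payoffs when only player 1 cooperates superrationally (table 4):
    # player 1 maximizes the sum of both payoffs; player 2 keeps his own.
    pd = {
        ("C", "C"): (2, 2), ("C", "D"): (0, 3),
        ("D", "C"): (3, 0), ("D", "D"): (1, 1),
    }
    eff = {moves: (sum(p), p[1]) for moves, p in pd.items()}
    print(eff)  # matches table 4

    # C dominates D for player 1; D dominates C for player 2.
    p1_c_dominates = all(eff[("C", m2)][0] > eff[("D", m2)][0] for m2 in "CD")
    p2_d_dominates = all(eff[(m1, "D")][1] > eff[(m1, "C")][1] for m1 in "CD")
    print(p1_c_dominates, p2_d_dominates)  # True True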
4.4 Increasing capabilities
Broadly speaking, agents have two reasons to increase other agents’ capabilities: a) they may
care about it intrinsically, or b) they may share the goals of the people whose capabilities
they increase (and thus care about increasing them instrumentally). For example, someone
who mostly cares about other people’s freedom to pursue their goals has a type a) reason to
raise the capabilities of poor people, and someone who agrees more with the people than
with the dictator has a type b) reason to increase democracy. But if you hold less common
values, such as reducing animal suffering, giving people more power is of unclear value. MSR
broadens type b) motives to increase others’ capabilities: even if we do not share someone
else’s goal, we have reason to increase his capabilities if we believe that significant parts of
his goals are shared by superrational agents elsewhere in the multiverse.
There is some relevant literature on increasing an agent’s goal-achievement capabilities. In
economics, the capability approach is an alternative to welfare economics and primarily
studies how to measure an individual’s capabilities. Some of its metrics include health,
freedom of thought and expression, education, political participation, and property rights.
In his dissertation on Ethics Under Moral Neutrality , Evan Gregg Williams (2011) discusses
a topic closely related to acting under MSR: he assumes that we do not know what the
“correct” moral theory is68, and that while we all have some access to moral truth, this
access is unreliable. He then discusses, among other things, what policies we should take
given such uncertainty. In many ways, this scenario is analogous to MSR69, where the
necessity to maximize for multiple moral views comes from uncertainty about the utility
functions of other agents in the multiverse as well as their diversity rather than conflicting
intuitions about the “moral truth”. Many of Williams’ conclusions resemble those of the
present paper. For instance, he identifies the appeal of preference utilitarianism in chapter
3.1 of the dissertation (compare sections 2.8 and 4.2.6 of the present paper). Many of his
intervention ideas are about improving the capabilities of others who may plausibly have
access to the moral truth. First and foremost, he defends democracy (chapter 3.3) and
liberty (chapter 3.4).
Of course, MSR does not have the same implications as the above approaches. For one, when
we raise others’ capabilities as superrationalists, we favor people whose values we suspect
to be typical of what superrationalists in the multiverse care about. For example, from an
MSR perspective it is much more important to support consequentialists. Moreover, some
of the proposed measures merely move resources or power from one group to another (e.g.,
from a dictator to the people) without adding optimization power aimed at the goals of
superrational agents in the multiverse.
I doubt that raising capabilities will often be a top intervention. Nonetheless, it might be an
option when good and inexpensive opportunities, such as sharing knowledge, arise.
4.5 Meta-activities
Relative to any goal, meta-activities are either about a) amassing more resources, or b)
improving the efficiency of one’s object-level resource expenditure. To achieve the goals that
superrationality prescribes, we may thus also engage in such meta-activities. In the following,
I will describe two meta-activities, one of each kind.
4.5.1 Research
The present paper lays out the foundations for research on multiverse-wide superrational
cooperation. Further research is needed in all three areas discussed in this paper, i.e. how
our new criterion for choosing policies is to be constructed (see chapter 2), what values our
superrational collaborators have (see chapter 3), and which interventions are most promising
(chapter 4).
68I side with moral anti-realism (R. Joyce, 2016) and non-cognitivism in particular (R. Joyce, 2016, section
3). That is, I do not think that moral theories can have (objective) truth values.
69In fact, I learned about variance voting, which I take to be the most promising approach to constructing
the compromise utility function (see section 2.8.5), via the literature on moral uncertainty, in particular via
MacAskill (2014, chapter 3).
Note that some research, e.g. investigations of whether a compromise is beneficial for
you, can (in theory) be harmful if one has not properly precommitted, as illustrated in the
Remote-controlled cake maker thought experiment (see section 2.8.6). A similar danger lies
in finding out whether other agents cooperate (see section 2.1).
4.5.2 Promoting multiverse-wide superrationality
Since multiverse-wide superrational cooperation produces gains from compromise (under
certain assumptions about the collaborators, discussed in section 3.2), having more multiverse-
wide superrational cooperation produces more gains from compromise. Hence, a common
interest of all collaborators is to increase the number of people who adopt (multiverse-wide)
superrational cooperation.
Indeed, it is plausible that small groups of superrationalists should focus on promoting the
idea rather than attempting to help other superrationalists directly. After all, if by promoting
the idea one of them can convince just two others to cooperate superrationally, she has already
doubled her impact relative to cooperating on her own. Of course, the two others could also convince
others in turn. Needless to say, spreading the idea saturates at some point. At least when
all humans are convinced of superrational cooperation, the idea cannot be spread further.
More realistically, we will run out of people who are willing to think about such seemingly
speculative topics.
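As a toy illustration of this saturation dynamic (entirely my own construction; the pool size and conversion rate are arbitrary assumptions), spreading that is initially exponential flattens out once the pool of receptive people is exhausted:

```python
def spread(pool=10_000, r=2.0, periods=20):
    """Each convinced superrationalist converts r others per period, but only
    among the shrinking fraction of the pool that is still receptive, so the
    process follows logistic growth and saturates at the pool size."""
    convinced = 1.0
    for _ in range(periods):
        open_fraction = 1 - convinced / pool
        convinced = min(pool, convinced + r * convinced * open_fraction)
    return convinced

print(spread())  # approaches the pool size, then stops growing
```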
4.6 Artificial intelligence
One particularly important way of shaping the future is artificial intelligence (Bostrom,
2014b). Given our newfound knowledge, we can differentiate between AI safety measures
that are inspired by superrational cooperation and AI safety measures that are not.
4.6.1 AI safety not based on superrationality-related considerations
The goal of current AI safety research is to make AIs behave in ways that are more compatible
with some human value system.70 From a multiverse-wide cooperation perspective, this is
positive to the extent that human values correlate with the values of other evolved agents
in the multiverse. A human-controlled, non-superrational outcome may nonetheless be
suboptimal from an MSR perspective.
Imagine a distant civilization of billions of happy, law-abiding, art-producing, yet, from a
human perspective, ugly-looking extraterrestrials. Each year, they enslave or kill trillions of
other, less intelligent extraterrestrials, such that the number of miserable lives and involuntary
deaths caused by the civilization is orders of magnitude higher than the number of positive
lives it supports. Most people on Earth may not care about this civilization at all because
it contains no humans. Some may only care about the smart extraterrestrials and thus
evaluate the society very positively (Kagan, 2016). However, I suspect that many of those
70For example, the Machine Intelligence Research Institute’s “Research” page is titled “Aligning advanced
AI with human interests”. Another AI safety organization even mentions it in their name: the Center for
Human-Compatible AI. Also consider the Asilomar AI principles and the discussion of value loading by
Bostrom (2014b, chapters 12, 13).
who care at all about distant ugly aliens also care about less intelligent aliens. These people
would evaluate the civilization as far less positive. Similarly, many superrationalists in the
multiverse may not evaluate our civilization positively if it were to continue its current
mistreatment of animals.
Another concern is that a civilization might prioritize near values when value loading
an AI. This suggests that even if our values resemble those of other civilizations, the goals
we give to an AI might differ significantly from what extraterrestrials care about in our
civilization.
FRI has previously investigated ways of making AI alignment failures less harmful by focusing
on avoiding very bad AI outcomes rather than attempting more fine-grained control (Gloor,
2016). One motivation to do so is this approach’s cooperativeness: different value systems
may disagree on what future should be created. For example, some want the universe to
be filled with concentrated pleasure, whereas others envision human civilizations of varying
social, economic and political systems, often rid of poverty, diseases, involuntary death,
and so forth. However, different value systems often agree on a large set of futures that
should not be created. Things like premature death, suffering, war, and extinction are almost
universally seen as bad. Avoiding dystopian scenarios can thus benefit a wider range of value
systems. Another MSR-based reason to focus on very bad outcomes is that, because our
civilization will be destroyed in all of them, avoiding them evokes abstract construals. These
probably do a better job than concrete construals at approximating what extraterrestrials
care about in our civilization (cf. section 3.3.2). However, making AI more fail-safe from an
MSR perspective would be less focused on preventing outcomes with a lot of suffering than
FRI’s previous work is. Also, its level of priority depends on its feasibility. Whereas heuristic
arguments suggest that merely avoiding bad outcomes might be more feasible than working
toward fully human-aligned AI, it has so far proven difficult to do any concrete work in
the area. Overall, I think it is an approach worth investigating further in the context of
superrational compromise, but not likely to be a top intervention.
4.6.2 Multiverse-wide superrationality-inspired value-loading
In section 2.8.2, we viewed compromising as a one-time process, in which all agents adopt a
new utility function u∗ to maximize in their part of the multiverse. If they indeed acted as
though they now only cared about maximizing u∗, the natural consequence would be to push
for AI values that are closer to u∗. One way to do this is to directly implement the value
systems that one would also spread to other humans (discussed in section 4.2). For example,
one could try to make future AIs hold a wider variety of values (see section 4.2.4) or perhaps
prioritize universal concerns a bit more (see section 4.2.1).
More robustly, one could directly implement a pointer to the aggregated consequentialist
far values of superrationalists in the multiverse. Indeed, extracting u∗ from the multiverse
appears to be roughly as difficult to specify as extracting the goals of humans. Just as one could
identify humans in the world model, extract their goals and aggregate them, so one could
identify superrational cooperators, extract their goals and aggregate them.71 (A somewhat
similar proposal was made by Bostrom (2014a, page 14); see section 6.1.2.)
71Of course, identifying superrational cooperators in a world model may be more or less difficult than
identifying humans in the world model. My tentative guess would be that it is easier, because I think
the category of superrationalists can be described more succinctly than the category of humans, but of
course I am not very confident in this claim. Similarly, it may be that MSR-type aggregation (e.g., variance
normalization) is more or less difficult to implement than the aggregation procedures one would implement
for humans.
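To make the extract-and-aggregate step concrete, here is a minimal sketch of variance-normalized aggregation (the variance voting of section 2.8.5) applied to explicitly given toy utility functions; the value systems, outcome labels, and numbers are invented purely for illustration:

```python
import statistics

def normalize(u):
    """Rescale a utility function (dict: outcome -> utility) to mean 0 and
    variance 1, so no agent gains weight merely by using larger numbers."""
    mean = statistics.mean(u.values())
    std = statistics.pstdev(u.values())  # assumes u is not constant
    return {o: (v - mean) / std for o, v in u.items()}

def compromise(utilities):
    """u*: sum of the variance-normalized utility functions of all cooperators."""
    normalized = [normalize(u) for u in utilities]
    outcomes = utilities[0].keys()
    return {o: sum(n[o] for n in normalized) for o in outcomes}

# Toy example: two value systems over three outcomes.
u_humans = {"A": 10.0, "B": 0.0, "C": -10.0}
u_et     = {"A": -1.0, "B": 2.0, "C": -1.0}
u_star = compromise([u_humans, u_et])
print(max(u_star, key=u_star.get))  # outcome favored by the compromise: "B"
```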
Of course, it is unlikely that superrationalists could convince the majority of people of such
goal systems. Nonetheless, at this early stage of the field of AI safety, it seems useful to also
explore unrealistic proposals like this one. Additionally, less attractive goal functions may
still be relevant as backups (see Oesterheld 2016).
Another disadvantage of this approach is that it breaks if the analysis underlying our
specification of u∗ is incorrect. For instance, if MSR does not work at all, then making AI
care about ET values directly is much worse than simply implementing our own values.
4.6.3 Making an AI come up with superrational cooperation on its own
Instead of directly implementing our compromise utility function, we could also make the AI
come up with such a compromise on its own. This has several advantages. Most importantly,
it protects against some possible mistakes on our side. If, say, we were unable to find the
correct superrational compromise, we could let the AI find it on its own. Also, the AI may at
some point discover that there are no other agents in the multiverse after all, at which point
it could choose to stop wasting further resources on compromising with these nonexistent
agents.
The primary way of getting an AI to compromise superrationally is to ensure that it reasons
in accordance with the right decision theory.72,73 This in turn involves advancing the field
of decision theory and investigating possible ways of implementing decision theories in AI
systems. Given that both of these areas seem neglected and the gains from trade may be quite
significant, I could very well imagine that interventions in this area are among the most
effective of those hitherto considered by effective altruists.
Value loading is still necessary
If all acausal collaborators settle on maximizing some utility function, perhaps value loading
is unnecessary for AIs with the right decision theories anyway? After all, once such an AI
joins the MSR compromise, it will update its utility function accordingly – regardless of
whether it originally wants to maximize paperclips or to reduce suffering.
But this reasoning seems unsound. While all AIs may settle on the same compromise utility
function, the original value system of the AI still affects what that compromise utility function
ends up being. Without superrationality, value loading affects the dominant values of one
AI. If there are m superrationalist civilizations, then each can affect the dominating values
72Reasoning in accordance with some decision theory is not meant to imply that the decision theory is
hard-coded into the AI. Instead, the decision theory that an AI uses may be the result of particular choices
of architecture. To ensure that the AI reasons in accordance with the right decision theory, we would then
have to find out what the decision-theoretical implications of different AI design choices are and ensure that
these receive due consideration in the construction of intelligent machines.
73There are other ways to make it more likely that the AI applies MSR. For example, one could ensure
that its epistemology enables it to infer the existence of other universes that cannot be observed directly. We
could also think of an AI that would accept MSR, but somehow never has the idea of MSR. Much more
plausibly, some AIs will simply not care about distant universes in a consequentialist way. However, all of
these parameters seem more difficult to influence than the AI’s decision theory.
in m AIs by 1/m (assuming that all civilizations are equally powerful, etc.). So, proper value
loading is actually just as effective as before, if not more so because of the gains from trade. Even
if we manage to reliably make the AI join a superrational compromise, we will still want to
make it value the right things.
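To spell out the arithmetic (the symbol δ for the shift in an AI’s values achievable by one civilization’s value loading is my shorthand, not the paper’s notation): without superrationality, value loading shifts the values of one AI by δ; with m equally powerful superrationalist civilizations, it shifts the compromise values of each of m AIs by δ/m, for the same total effect,

\[
m \cdot \frac{\delta}{m} = \delta,
\]

plus whatever gains from trade the compromise adds.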
I am uncertain about whether some version of the above argument against value loading
may work after all. Even if all AIs have “paperclipper values”, perhaps they would still
recognize that other value systems originally had all the power, causing the AIs to give
them higher compromise weights? Similarly, one may have some intuitions that value
loading superrational AIs should not be necessary, given that it just moves power between
superrational cooperators. However, at this point, these are merely intuitions and not
arguments. Apart from potentially guiding future research, I do not think they should
affect our priorities.
Compromise-friendly backup utility functions
Even though value loading is still necessary, we can nonetheless benefit our superrational
collaborators (and thereby ourselves) in cases where value-loading fails. Even if an AI has
values that differ from those of humans, it may still trade with other civilizations. Hence, we
should attempt to load it with values that especially lend themselves to compromise, such
that the other value systems benefit as much as possible (cf. Bostrom, 2014a). Because
one would usually attempt to load an AI with one’s own values, such a compromise-friendly
(“porous”, in Bostrom’s terminology) utility function would usually only be a backup (see
Oesterheld 2016).
5 Acknowledgements
I came up with superrational compromise after a conversation with Lukas Gloor about
decision theory and the multiverse. Prior to writing this paper, I extensively discussed
the topic with him, Carl Shulman, and Brian Tomasik. I also thank Max Daniel, Tobias
Baumann, Carl Shulman, David Althaus, Lukas Gloor, Kaj Sotala, Jonas Vollmer, Johannes
Treutlein, Lucius Caviola, Joshua Fox, Jens Jaeger, Ruairí Donnelly, Brian Tomasik, Owen
Cotton-Barratt, Magnus Vinding and Dominik Peters for valuable discussions and comments
on this paper. Last but not least, I am indebted to Adrian Rorheim for careful copy editing
and Alfredo Parra for typesetting.
6 Appendix
The appendix contains discussion of additional, more tangential topics.
6.1 Related work
6.1.1 Gary Drescher on superrationality
Superrationality, i.e. cooperation based on correlation, is a well-known idea in decision
theory (Kuhn, 2017, section 7; Horgan, 1981, section X; Hofstadter, 1983; Campbell and
Sowden, 1985; Ahmed, 2014, section 4.6 and references therein). However, most authors do
not discuss much beyond the basic idea. Chapter 7.2 of Gary Drescher’s Good and Real
(2006) is the most extensive analysis of the concept of which I am aware. Among other
things, Drescher notes that superrationality – or, as he calls it, subjunctive reciprocity – can
be applied broadly as a justification for “altruistic” behavior, which I discuss in section 6.7.
He also points out that superrationality removes the need for reciprocity (see section 2.9).
Although Drescher discusses the Everett interpretation of quantum physics in his book, he
does not connect it with superrationality. His considerations thus focus on superrationality
among agents on Earth, which I would argue to be quite weak (see section 6.6). Nonetheless,
his account of superrationality is more thorough than any other I have seen, and strongly
influenced chapter 2 of this paper.
6.1.2 Acausal trade
Acausal trade is another (mostly informally discussed) form of cooperation based on non-
causal decision theories and has often been combined with the multiverse concept. However,
the mechanism usually discussed under the term acausal trade differs from superrationality.
Instead of assuming the similarity between two agents, acausal trade merely requires them
to have models of each other. For example, the two agents may know each other’s
source code.74 The main technical difficulty here is to avoid the infinite loop associated with
this mutual modeling. The basic idea is that both agents adopt the policy of cooperating
if and only if the other agent cooperates.75 This is intended to incentivize cooperation in
a way reminiscent of causal cooperation via tit for tat. One can also view this policy of
mirroring the other agent’s strategy as a way to create correlations between the decisions
of the two agents. However, if both agents use this policy, they run into an infinite loop:
To make a decision, the first agent has to find out (probabilistically) what the second agent
does. But to do so, it has to find out what the first agent does, which in turn means finding
out what the second agent does, etc. As illustrated by Barasz et al. (2014), this problem
can sometimes be solved, thus making it rational for two programs with knowledge of one
another’s source code to cooperate with each other (cf. LaVictoire et al., 2014; Critch, 2016).
Superrationality may be seen as a special case of acausal trade in which the agents’ knowledge
implies the correlation directly, thus avoiding the need for explicit mutual modeling and the
complications associated with it. This makes superrationality much easier to apply than
acausal trade. Consequently, whereas I propose that humans should reason superrationally,
acausal trade is usually discussed only in the context of superintelligent AIs (e.g., Bostrom,
2014a).
74Alternatively, one of the two agents can observe the others’ behavior. In this case, only the other agent
needs a model.
75Of course, it would be even better if one could defect against agents that cooperate unconditionally.
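The regress, and one crude way to cut it, can be illustrated in a few lines of code. Note that Barasz et al. (2014) resolve the loop properly via provability logic; the depth limit below is my own simplification, chosen only to make the mutual simulation terminate:

```python
def fairbot(opponent, depth):
    """Cooperate iff a depth-limited simulation predicts the opponent cooperates.
    Without the depth parameter, fairbot(fairbot, ...) would recurse forever."""
    if depth == 0:
        return "C"  # assumption: cut the regress by predicting cooperation
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def defectbot(opponent, depth):
    return "D"

print(fairbot(fairbot, 3))    # C: the mutual simulation bottoms out in cooperation
print(fairbot(defectbot, 3))  # D: FairBot predicts defection and defects
```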
6.1.3 Various mentions of multiverse-wide superrationality
While I am not aware of any substantive discussion of MSR, some have mentioned it as a
side remark, or proposed specific applications:
•Bostrom writes: “We might [...] hope that some of the other civilizations building AIs
would [also implement their AI in a way that enables trade (see sections 4.6.2 and
4.6.3)], and perhaps the probability that they would do so would be increased if we
decided to take such a cooperative path.” (Bostrom, 2014a, page 4) On page 14, he
also argues that one should perhaps diversify the values of an AI for a similar reason.
•Almond discusses a few examples of how we can utilize the correlation with other
civilizations (Almond, 2010c, ch. 4). One of them is discussed in section 6.9.
6.2 Many agents
One essential ingredient of multiverse-wide superrationality is the number of intelligent
agents that exist. We have, to some people’s surprise, not (yet) found extraterrestrial life in
the observable universe. However, the universe, or multiverse, probably extends far beyond
the region we can observe. More likely than not, it contains so many agents that the number
of humans on Earth pales in comparison.
Unfortunately, physics and cosmology are not the most accessible of fields. Introductions
tend to either involve advanced mathematical notation or fuzzy explanations with terms like
“space-time distortions”, “waves”, space being referred to as “flat”, dimensions as “curled
up”, etc. that seem hard to understand without looking at their technical meaning. For
an overview of the latter kind, consider Tegmark’s Parallel Universes (2003), which also
discusses the number of intelligent agents specifically. Another, even broader popular science
overview is given by Greene (2011). In this section, we focus on the easiest to understand
aspects. As mentioned in chapter 1, we will use the term “multiverse” to also refer to, say, a
spatially infinite universe.
It is important to note that most talk about multiverses is not something physicists make up
out of thin air as an intellectual exercise. Instead, certain well-tested theories in physics and
cosmology seem to imply the existence of a large universe or multiverse. One of the easier
to understand examples is the Everett or many-worlds interpretation (MWI) of quantum
mechanics. For an introduction, consider Yudkowsky’s introduction (2015, ch. S), which
makes a strong case for MWI and goes through some of the issues typically discussed, like
falsifiability/testability and the law of parsimony (Tegmark and Wheeler, 2001; Tegmark,
2007; Vaidman, 2016). For a more critical account, see, e.g., Kent (1997). Tentative polls of
physicists’ opinions on MWI indicate that between 10% and 50% agree with MWI (Raub 1991,
unpublished as cited in, e.g., Tipler, 1994, section 5, “Nonrelativistic Quantum Mechanics
is Deterministic”; Tegmark, 1997; Nielsen, 2004; Emerson and Laflamme, 2006). But the
many-worlds interpretation of quantum physics is not the only case that can be made for a
universe with a very large or infinite number of agents. In fact, other arguments are probably
more widely accepted.
Maybe the least “extraordinary” hypothesis implying the existence of many agents is one
which says that this universe is spatially infinite. According to Tegmark, “this spatially
infinite cosmological model is in fact the simplest and most popular one on the market today”
(2003).
Even if the universe is spatially finite and small, it may still contain a lot of civilizations that
cannot interact with each other if it is temporally infinite. For example, on a cyclic model
the universe goes through an indefinite number of oscillations of expansion and collapse. If
sufficiently many of these oscillations give rise to different civilizations, then these civilizations
can cooperate with each other superrationally.
Another more complicated yet popular cosmological theory is eternal inflation as described in
ch. II of Tegmark’s Parallel Universes . Eternal inflation postulates the existence of multiple
universes which not only differ in initial conditions but also in their number of dimensions,
their sets of fundamental particles, and their physical constants.
On the more speculative (but also more accessible) side, there are various forms of modal
realism (sometimes also called mathematical monism), the view that every “possible world”
exists in the same way in which our world exists. While modal realism is controversial and
rarely discussed by physicists, some view it as an elegant solution to some philosophical
problems. Modal realist theories are also very simple, although to make predictions with
them, they require supplementation with indexical information about which agent in which
possible world we are (Hutter, 2010, ch. 3). For different starting points for thinking about
modal realism, see any of the following: Lewis (1986), Tegmark (1998; 2008; 2014), and
Schmidhuber (1997).
Acting under the assumption of modal realism is associated with some complications, however.
In particular, because literally everything can happen, everything will happen in some possible
world, no matter what we do. Thus, no action seems to be better than another (Oesterheld,
2017a).
Besides the arguments in favor of assigning a high probability to living in a universe with
many agents, there also exists a prudential reason to act as though one lives in a large
universe. Even if we only assign, for example, a 50% probability to the existence of other
civilizations, our decisions matter much more if there are more other agents with whom we
are correlated. Thus, we should optimize our decisions more for the large universe. This
line of reasoning does not work for all value systems, however. For example, in terms of
multiverse-wide average welfare, our influence may be much bigger if the universe were very
small. An average utilitarian may thus follow the opposite prudential argument and act as
though the universe were small.
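Schematically (my notation, not the paper’s): if we assign probability p to a large multiverse containing N agents correlated with us, and probability 1 − p to a small world containing only ourselves, then a decision worth v per correlated agent has expected impact

\[
\mathbb{E}[\text{impact}] = p \, N \, v + (1 - p) \, v,
\]

which for astronomically large N is dominated by the first term even for modest p. The average-utilitarian case reverses this, since there per-agent influence shrinks roughly as 1/N.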
6.3 Testability of superrationality
Eliezer Yudkowsky (2010b, ch. 13) writes:
If a dispute boils down to a testable hypothesis about the consequences of actions,
surely resolving the dispute should be easy! We need only test alternative actions,
observe consequences, and see which probability assignment best matches reality.
Unfortunately, evidential decision theory and causal decision theory are eternally
unfalsifiable—and so is [timeless decision theory (TDT)]. The dispute centers
on the consequences of logically impossible actions, counterfactual worlds where
a deterministic computation returns an output it does not actually return. In
evidential decision theory, causal decision theory, and TDT, the observed conse-
quences of the action actually performed will confirm the prediction made for the
performed action. The dispute is over the consequences of decisions not made.
This also means that superrationality itself – not only its application to agents in faraway
parts of the multiverse – is untestable. If I win money by cooperating in a prisoner’s dilemma
against an exact copy of myself, causal decision theorists will point out that my copy would
have cooperated either way and so defecting would have been better.
6.4 Do people reason superrationally?
Do people already apply superrational reasoning when interacting with each other on Earth?
Certainly, many disagree with CDT’s choice in contrived examples like Newcomb’s problem
or the prisoner’s dilemma against a copy, but does it ever influence their real-world decisions?
When conducting a donation game for his Scientific American article, Hofstadter (1983)
asked the participants to explain their reasoning:
I would like to quote to you some of the feelings expressed by my friends caught
in this deliciously tricky situation. [...] Martin Gardner (yes, I asked Martin to
participate) vividly expressed the emotional turmoil he and many others went
through. “Horrible dilemma”, he said. “I really don’t know what to do about
it. If I wanted to maximize my money, I would choose D and expect that others
would also; to maximize my satisfactions, I’d choose C, and hope other people
would do the same (by the Kantian imperative). I don’t know, though, how one
should behave rationally. You get into endless regresses: ‘If they all do X, then
I should do Y, but then they’ll anticipate that and do Z, and so . . .’ You get
trapped in an endless whirlpool. It’s like Newcomb’s paradox.” So saying, Martin
defected, with a sigh of regret.
In a way echoing Martin’s feelings of confusion, Chris Morgan said, “More by
intuition than by anything else, I’m coming to the conclusion that there’s no way
to deal with the paradoxes inherent in this situation. So I’ve decided to flip a
coin, because I can’t anticipate what the others are going to do. I think – but
can’t know – that they’re all going to negate each other.” So, while on the phone,
Chris flipped a coin and “chose” to cooperate.
Sidney Nagel was very displeased with his conclusion. He expressed great regret:
“I actually couldn’t sleep last night because I was thinking about it. I wanted to
be a cooperator, but I couldn’t find any way of justifying it. The way I figured it,
what I do isn’t going to affect what anybody else does. I might as well consider
that everything else is already fixed, in which case the best I can do for myself is
to play a D.”
[...]
‘C’ is the answer I was hoping to receive from everyone. I was not so optimistic as
to believe that literally everyone would arrive at this conclusion, but I expected
a majority would – thus my dismay when the early returns strongly favored
defecting. As more phone calls came in, I did receive some C’s, but for the wrong
reasons. Dan Dennett cooperated, saying, “I would rather be the person who
bought the Brooklyn Bridge than the person who sold it. Similarly, I’d feel better
spending $3 gained by cooperating than $10 gained by defecting.”
Charles Brenner, who I’d figured to be a sure-fire D, took me by surprise and
C’d. When I asked him why, he candidly replied, “Because I don’t want to go on
record in an international journal as a defector.” Very well. Know, World, that
Charles Brenner is a cooperator!
Many people flirted with the idea that everybody would think “about the same”,
but did not take it seriously enough. Scott Buresh confided to me: “It was not
an easy choice. I found myself in an oscillation mode: back and forth. I made
an assumption: that everybody went through the same mental processes I went
through. Now I personally found myself wanting to cooperate roughly one third
of the time. Based on that figure and the assumption that I was typical, I figured
about one third of the people would cooperate. So I computed how much I stood
to make in a field where six or seven people cooperate. It came out that if I
were a D, I’d get about three times as much as if I were a C. So I’d have to
defect. Water seeks out its own level, and I sank to the lower right-hand corner
of the matrix.” At this point, I told Scott that so far, a substantial majority had
defected. He reacted swiftly: “Those rats – how can they all defect? It makes
me so mad! I’m really disappointed in your friends, Doug.” So was I, when the
final results were in: Fourteen people had defected and six had cooperated [...].
Based on this anecdotal evidence, people do not reason superrationally in this real-world
donation game, although they sometimes make the superrational choice for other reasons.
In general, there are many hypotheses about why people sometimes cooperate that do not
involve any sort of acausal reasoning. Presumably, many are either unaware of the causal
line of reasoning or do not properly set up the proposed experiment in their mind. For
instance, Yudkowsky (2015, chapter 275) argues that people cannot pretend to be selfish
and therefore take the reward to the other player into account. Kanazawa and Fontaine
(2013) demonstrate that “the subject’s behavioral choice (cooperation vs. defection) varied
significantly as a function of subconscious perception of cues to possible reputational effect
(in the form of a video image of another subject in the experiment).” Cultural norms are
also often invoked to explain cooperation.76This short list of example explanations is by no
means an exhaustive review of the literature on why people cooperate in one-shot games like
the prisoner’s dilemma and public goods games.
Drescher (2006a, page 288f) defends the opposite view. He argues that although people do
not act according to some systematic acausal decision theory, they nevertheless take acausal
considerations into account implicitly. Similarly, Leslie writes, “perhaps
the germs of [evidentialist reasoning] are already present in thoughts influential in getting
people into polling booths, thoughts on the lines of ‘What if everybody in my party stayed
in bed?”’ (Leslie, 1991, ch. 7). Perhaps this “lack of a correct explicit decision theory leaves
the solution somewhat vulnerable to seemingly sound counterarguments, and thus leaves the
solution’s influence somewhat tentative” (Drescher, p. 289). This could explain why many
people who have considered the problem in great detail do not go with the recommendation
76Data from other games with similarly dissatisfying Nash equilibria can be used as further tests of such
models of human reasoning. For example, Basu (2007) reviews research on people’s choices in the traveler’s
dilemma. He also hypothesizes that many people do not go with the Nash equilibrium because of hardwired
altruism.
of acausal arguments despite potentially having an innate intuition for them.
Recently, Fischer (2009) has proposed that people do engage in superrationality-like reasoning.
In a study, he showed that participants’ cooperation in a one-shot prisoner’s dilemma
correlated with reported probabilities of the opponent making the same choice as oneself
(cf. Krueger, DiDonato, and Freestone, 2012).
One further piece of evidence in favor of this hypothesis is that cooperation decreases when
people learn about the other person’s choice before they make their own choice. Pothos et al.
(2011) write:
Shafir and Tversky (1992; Busemeyer, Matthew, and Wang, 2006; Croson, 1999;
Li and Taplin, 2002; Tversky and Shafir, 1992) created a well-known modification
to the Prisoner’s Dilemma game: in some trials, participants were told what the
other player was doing. Unsurprisingly, when participants were told that the
other person decided to D, then their probability to D was 97%; and when they
were told that the other person decided to C, then their probability of D was
84%. However, in trials (within participants design) when participants were not
told what the other person did, the probability to D dropped to 63%.
While inconsistent with mere causal reasoning, this can be explained with acausal reason-
ing. Given knowledge of the other person’s decision, the evidential impact of cooperation
diminishes (cf. section 2.1). Moreover, this behavior cannot be explained by reputational
issues or altruistic preferences, which would, if anything, suggest that one would return the
favor upon learning that the other person cooperated. However, the standard explanation
attributes this behavior to people’s irrationality.
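To make explicit why these numbers are inconsistent with classical probabilistic reasoning: by the law of total probability, the unconditional defection rate must be a mixture of the two conditional rates,

\[
P(D) = P(D \mid \text{other } D)\,P(\text{other } D) + P(D \mid \text{other } C)\,P(\text{other } C) \;\geq\; \min(0.97,\ 0.84) = 0.84,
\]

yet the observed unconditional rate was only 0.63. Acausal reasoning dissolves the puzzle: while the other player’s choice is unknown, one’s own cooperation is evidence of theirs; once the choice is revealed, that evidential channel is closed.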
Overall, I lean towards the view that people do not have strong acausal intuitions in day-to-
day scenarios, which means that people who do take such considerations seriously do not
correlate strongly with the average person.
6.5 The evolution of superrationality
Even though superrationality is not testable in any given situation, it does produce actual
benefits. This much is clear even to a causal decision theorist, who would thus self-modify to
take some, though not all, acausal considerations into account (see section 2.3). For the same
reasons, a causal decision theorist would also program an AI to take these considerations
into account.
Similarly, evolution favors agents that take some superrational considerations into account.
For example, imagine a planet on which near copies of agents are created on a regular
basis. They then interact with each other in cooperation and coordination games like the
donation game. To facilitate evolution, copies are created in proportion to the payoffs in the
cooperative games. On this planet, superrational agents – i.e. those who cooperate with
close copies and other correlated agents, while defecting against uncorrelated agents – have
an evolutionary advantage over CDT-based agents who always defect. They will, on average,
receive higher payoffs and thus reproduce more successfully. Evolution can, therefore, in
principle favor genes (and memes) that promote superrational reasoning.
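A minimal replicator-dynamics sketch of this thought experiment (the benefit and cost values are my own assumptions, as the text specifies none): superrationalists paired with their near copies pocket the cooperation surplus b − c, CDT agents gain nothing, and reproduction is proportional to payoff, so the superrational type takes over.

```python
# Donation game: cooperating costs the donor c and gives the recipient b.
B, C = 3.0, 1.0  # assumed payoff values, for illustration only

def per_round_payoff(agent_type):
    # Each agent is paired with a near copy of itself, as in the thought
    # experiment. Superrationalists (SR) cooperate with their copies;
    # CDT agents always defect, so CDT pairs earn nothing.
    return B - C if agent_type == "SR" else 0.0

def next_generation(share_sr):
    """Replicator step: each type reproduces in proportion to 1 + its payoff."""
    fitness_sr = 1.0 + per_round_payoff("SR")
    fitness_cdt = 1.0 + per_round_payoff("CDT")
    total = share_sr * fitness_sr + (1 - share_sr) * fitness_cdt
    return share_sr * fitness_sr / total

share = 0.01  # superrationalists start rare
for _ in range(20):
    share = next_generation(share)
print(round(share, 3))  # close to 1.0: superrationality is favored
```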
In some sense, the described planet resembles ours. On Earth, “near copies” of humans are
created via reproduction and upbringing. Moreover, many have pointed out that scenarios
paralleling the prisoner’s dilemma and public goods games were common in our ancestral
environment.
In principle, such considerations also apply to the application of superrationality to coop-
eration with agents in other parts of the multiverse. That is, multiverse-wide evolution
favors creatures who increase the genetic fitness of agents with similar decision algorithms
elsewhere in the multiverse. In practice, however, I suspect that almost all creatures with at
most human capabilities are unable to benefit any genomes other than those extant in their
environments.
6.6 Superrational cooperation on Earth
Some, e.g. Leslie (1991, ch. 8) and Nate Soares, have argued that superrationality and
acausal decision theory are relevant even in daily interactions between humans on Earth
without considering the multiverse. Drescher (2006a, ch. 7) even contends that it is an
argument for egoists to behave altruistically. Others, like Almond (2010b, ch. 4.6; 2010c, ch.
1) or Ahmed (2014, ch. 4), maintain the opposite position, i.e. that acausal reasoning is
rarely relevant. I will argue for the latter claim. Indeed, my belief that acausal cooperation
is usually inapplicable is the reason why this paper discusses its application to the multiverse
rather than more “down to Earth” scenarios.
6.6.1 Fewer agents
Superrationality becomes relevant in the multiverse because it contains so many disconnected
agents. Thus, even if the correlation with every individual agent’s decision is small, the
overall acausal impact of our decisions dominates (see section 2.7). The smaller the number
of agents, the higher the relative importance of the causal implications of our actions. Since
the number of agents on Earth is comparably small, causal considerations may well dominate.
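In rough terms (my own notation): if our decision is correlated, with average strength ρ, with the decisions of N other agents, and each induced cooperation is worth v, then the comparison is

\[
\underbrace{N \rho v}_{\text{acausal}} \quad \text{vs.} \quad \underbrace{v}_{\text{causal}}.
\]

Multiverse-wide, N is astronomical, so the acausal term dominates even for tiny ρ; on Earth, N is small enough that the causal term can win.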
6.6.2 Argument from evolution: Superrationality did not evolve (strongly)
We argued that superrational compromise can, under certain conditions, evolve by natural
means and that many of the respective conditions are even met on Earth (see section 6.5).
Hence, the mere observation that most people do not reason superrationally (see section 6.4)
makes a case against its importance.
6.6.3 Causal cooperation seems more important
Humans rarely face one-shot prisoner’s dilemmas against agents whom they know sufficiently
well to be strongly correlated with them. Instead, their interactions are usually iterated
and open to mutual causal influence. As a result, causal cooperation mechanisms apply,
at least in principle (see section 2.9 for references to introductions on causal cooperation).
Surveying the vast literature on causal cooperation and how it compares to superrational
cooperation is beyond the scope of this paper, but two key points are worth highlighting.
First, rational agents establish causal cooperation in a surprisingly wide range of situations.
Second, successful strategies like tit-for-tat or Gradual (Beaufils, Delahaye, and Mathieu,
1997) tend to start the game by cooperating and never defect unless the other side starts
defecting. Together, this suggests that sufficiently smart people – which, I assume, includes
most agents who might apply superrationality – are capable of strong cooperation with one
another without ever having to invoke superrationality.
6.6.4 Hard-wired alternatives
Superrationality is not the only solution to the adaptive challenge of having to cooperate
with similar agents (e.g., members of the same tribe and relatives). One alternative is to
hard-wire creatures to cooperate with very similar agents and defect against everyone else.
This approach to ensuring cooperation has received some attention in the literature, although
it is not nearly as widely known as the mechanisms of causal cooperation (see, e.g. McAfee,
1984; Howard, 1988; or Tennenholtz, 2004).
6.7 Superrationality and morality
Cooperation is often invoked as an argument for why altruistic behavior and following moral
rules are rational (e.g., Dawkins, 1976, ch. 12; J. Greene, 2013). In many ways, the application
of superrational cooperation resembles altruistic behavior even more closely. For example,
superrationality implies that we should help a value system even if we know for certain that
no agent with this value system will or can reciprocate (see section 2.9). Additionally, in
suggesting that we treat others the way they would like to be treated (in order to make
it more likely that others treat us the way we would like to be treated), superrationality
resembles Kant’s categorical imperative and the Golden Rule. Once someone is updateless,
she has additional reasons to be nice to others: even if she learns that they do not or will not
cooperate, she would potentially still behave nicely toward them (see section 2.4). Similarly,
if she were ever to find herself in a situation resembling the Remote-controlled cake maker
thought experiment (see section 2.8.6), where she knows that cooperation hurts her goals, she
might still make that sacrifice. Some implications of superrationality thus bear a close
resemblance to altruistic or moral behavior.
Drescher (2006a, ch. 7.2.1) makes similar points regarding the similarity between superra-
tional cooperation and altruism. However, he goes further by arguing that superrational
cooperation is the basis for morality – a way of “deriving ought from is”. I will discuss two
questions that might arise from this argument: is altruistic action derived from self-interest
really the essence of morality or altruism? And: is superrationality sufficient for arriving at
the desired altruistic conclusions?
6.7.1 Real altruism
Yudkowsky (2015, ch. 259) writes:
Consider the following, and ask which of these two philosophers is really the
altruist, and which is really selfish?
“You should be selfish, because when people set out to improve society, they
meddle in their neighbors’ affairs and pass laws and seize control and make
everyone unhappy. Take whichever job that pays the most money: the reason
the job pays more is that the efficient market thinks it produces more value than
its alternatives. Take a job that pays less, and you’re second-guessing what the
market thinks will benefit society most.”
“You should be altruistic, because the world is an iterated Prisoner’s Dilemma,
and the strategy that fares best is Tit for Tat with initial cooperation. People
don’t like jerks. Nice guys really do finish first. Studies show that people who
contribute to society and have a sense of meaning in their lives, are happier than
people who don’t; being selfish will only make you unhappy in the long run.”
Blank out the recommendations of these two philosophers, and you can see that the
first philosopher is using strictly prosocial criteria to justify his recommendations;
to him, what validates an argument for selfishness is showing that selfishness
benefits everyone. The second philosopher appeals to strictly individual and
hedonic criteria; to him, what validates an argument for altruism is showing
that altruism benefits him as an individual: higher social status or more intense
feelings of pleasure.
So which of these two is the actual altruist?
Yudkowsky elaborates in the rest of the chapter.
The point he is making is that “actual altruism” is usually understood to mean caring
about others, rather than merely behaving altruistically based on egoistic reasoning. Verbal
disputes about the meaning of “true altruism” aside, there is a difference between having the
welfare of others as part of one’s goal on the one hand, and benefitting others for egoistic (or
other non-altruistic or amoral) reasons on the other. I am an altruist of the former kind,
but cooperation (whether superrational or not) only supports altruism of the latter kind.
I would think that most other people are also altruists of the former kind (in addition to
sometimes being altruists of the latter kind).77
Altruism of the latter kind also does not “derive ought from is”78, as Drescher promises in
chapter 7 of Good and Real . Instead, it derives (potentially unexpected) action recommenda-
tions from an already existing ought, i.e. egoism or whatever values an agent already has.
Specifically, (multiverse-wide) superrational compromise can be viewed as agents switching
to a new utility function, but only because it benefits their current utility function.
There are many other examples of agents effectively adopting a new goal. Consider an egoist
living in 16th-century Spain. Her environment punishes people who are not aligned with
Catholicism. To further her goals, the egoist should therefore behave as though she was a
Catholic with pure Catholic goals. She thus derives a new “morality” from purely egoistic
goals, but I suspect that meta-ethicists’ excitement about this is limited.
77Note that while humans evolved to spread their genes as much as possible, they are neither pure fitness
maximizers nor pure egoists (in the sense of not caring about others’ welfare). Our altruistic intentions
evolved for reasons of fitness, but that does not mean they are not genuine altruistic intentions (Yudkowsky,
2015, section 138; Cosmides and Tooby, 1995, page 54f. Wright, 1995, page 225f.).
78This is no surprise, as deriving ought from is cannot – at least in my view – be done.
6.7.2 How much altruistic behavior does superrationality entail?
The second issue is that superrationality does not suffice for reproducing all of our moral
intuitions. For one, I am not sure to what extent superrationality has a bearing on interactions
with other people on Earth at all (see section 6.6).
Furthermore, we saw that superrationality only warrants helping other superrational agents
(see section 2.9.4). But our moral intuitions also regard other agents as morally relevant.
As an example, consider Alice, a purely causal decision theorist who even defects in a
prisoner’s dilemma against her copy. Does this mean that Alice is morally irrelevant, no
matter her degree of consciousness, capacity to suffer, etc.? Alice is not just a thought
experiment – many philosophers would two-box in Newcomb’s problem (see section 2.2).
Since Newcomb’s problem is roughly equivalent to the prisoner’s dilemma against an identical
copy (Lewis, 1979), this shows that most philosophers reject superrationality. Nevertheless, I
and presumably most others care intrinsically about the welfare of these moral philosophers;
the same is true for young children and non-human animals, most or all of which do not
reason superrationally. Superrationality and what we would usually call “morality” thus
disagree strongly on who is morally relevant (Drescher, 2006a, sections 7.2.2 and 7.2.3).
6.8 Multiverse-wide superrationality for causal decision theorists
Throughout this paper, I have assumed that some acausal decision theory is correct, albeit
without narrowing it down to any particular theory. To me, this is no limitation of MSR,
because I hold that causal decision theories fail in examples like the donation game with
similarity. However, many professional philosophers are causal decision theorists (see section 2.2).
Are the arguments presented in this paper entirely irrelevant to them?79
Remember, from section 2.3, that CDT actually recognizes its flaw. Specifically, CDT
self-modifies to cooperate acausally with copies that are created in the future. After all,
these copies can be causally influenced to cooperate acausally to each other’s benefit. Other
humans and extraterrestrials in far away parts of the multiverse do not fall into that category,
of course – so causal decision theorists would not precommit to engage in full multiverse-wide
superrational cooperation.
However, one multiverse theory is the Everett interpretation of quantum physics, according
to which our universe constantly “splits” into different branches. Thus, under the Everett
interpretation, near-copies of oneself are created all the time and in large quantities. Moreover,
it pays in causal terms to cooperate across time, i.e. to commit me-tomorrow and me-in-30-years
to cooperate. A causal decision theorist would therefore cooperate with a large number of
agents created after CDT’s precommitment. It thus seems as though a weaker version of the
considerations from this paper applies to causal decision theorists after all.
79One obvious way in which the implications are relevant to causal decision theorists is decision-theoretical
uncertainty (MacAskill, 2016). Perhaps even ardent defenders of CDT assign some probability to CDT being the
wrong way to make decisions. I, at least, do not have a probability of 100% on a single decision theory
being the right one. If you have some weight on some of the alternatives to causal decision theory, then
you would also give MSR considerations some weight. In fact, Treutlein (2017) argues that if we live in a
sufficiently large universe, then EDT and other non-causal decision theories immediately dominate expected
value calculations that take decision-theoretical uncertainty into account.
6.9 Simulations
Paul Almond (2010c, ch. 2; 2010a) has argued that correlations across the multiverse have
implications for whether and how we should simulate other civilizations. The idea has also
been proposed by others. It is mainly relevant for agents and civilizations who primarily
care about copies of themselves, which is why it is not discussed in the main text.
6.9.1 If being in a simulation is bad, avoid creating one
Almond (2010c, section 4.2) writes:
If you take the simulation argument seriously, then evidential decision theory
would seem to allow you to assert some control over the other civilizations that
might be building these simulated realities.
One way in which evidential decision theory would be relevant is in the way it
allows you to control the probability that you are in a simulation in the first place.
If your civilization decides to develop the capability to run simulated realities,
then you are meta-causing [i.e. influencing acausally] civilizations in general to do
likewise (including civilizations on which our own might be modeled), and making
it less likely that almost all civilizations end before they are capable of producing
simulated realities, in turn making it more likely that you are in a simulated
reality. If, however, your civilization decides not to acquire this capability then
you are meta-causing civilizations in general to do likewise, making it less likely
that you are in a simulated reality. Once your civilization has the capability to
produce simulated realities, if your civilization decides to do it, this would make
it more likely that other civilizations also do it, again making it more likely that
you are in a simulated reality. On the other hand, if your civilization decides not
to produce simulated realities, this makes it less likely that other civilizations
would choose to do so, and therefore less likely that you are in a simulated reality
yourself.
If you assume the view of anthropic decision theory (Armstrong, 2011) instead of classical
anthropics (i.e., the self-sampling or self-indication assumption), then your decision can
affect the fraction of copies of you that are in a given simulation.
Note that under certain assumptions about the efficiency of simulations, one’s effect on the
probability of being in a simulation may be negligible. If any civilization could run orders of
magnitude more simulations of civilizations than there are civilizations in the basement,
then most copies will be in simulations no matter what you decide. Regardless of your choice,
you will probably be in a simulation.
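A quick way to see this (my notation): if a fraction q of basement civilizations each run k simulations of civilizations, the fraction of all civilizations that are simulated is

\[
\frac{q k}{q k + 1} \approx 1 \quad \text{whenever } q k \gg 1,
\]

so for large k, varying q, which is the quantity one’s decision acausally influences, barely moves the probability of being in a simulation.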
6.9.2 Happy simulations
Almond (2010c, section 4.2) proposes to simulate civilizations in a nice way to increase the
probability of being in such a simulation oneself.
While evidential decision theory might be applied to try to reduce your “risk” of
being in a simulated reality, some people, and some civilizations, might not see it
that way: They might think that being in a simulated reality could have benefits
if the entity that constructed the simulation is kind; for example, the inhabitants
of the simulation might be protected from existential risks to their civilization,
or they might be provided with an afterlife. Evidential decision theory suggests
the possible tactic of making large numbers of simulated realities in which the
inhabitants are treated kindly as a way of trying to meta-cause civilizations in
general to do the same thing. This would be going further than what I said
previously about treating the inhabitants of your own simulations kindly: This
would be done so as to make it more likely that you are in a simulation, and that
it is one in which you will be treated kindly. We might imagine a civilization
doing this as a way of trying to use evidential decision theory to pluck an afterlife
out of nowhere for itself, if it has recently acquired the computing power to
simulate many civilizations, and provide them with an afterlife, but does not
yet have technology such as mind uploading which it might use to obtain an
afterlife more directly. A civilization might attempt this even if it does not yet
have the computing power to construct simulated realities: It might set up some
kind of legal or corporate framework to ensure that large numbers of ancestor
simulations, complete with an afterlife, are constructed in the future, the idea
being to strengthen the case that it is itself in such a simulation, made by a
civilization with a past that is strongly correlated with its own present. Someone
might even set up some organization for this purpose as a result of reading this
article!
6.10 Infinite ethics
In all our calculations (sections 2.7 and 2.8) we assume finite numbers of agents each
with a finite causal influence on their world. However, the multiverse – or even a single
universe – may well be infinite. These infinities entail severe complications for the application
of multiverse-wide consequentialist moral views like those required for multiverse-wide
superrational cooperation (Bostrom, 2011; Arntzenius, 2014). Superrationality is a form of
what Bostrom (2011, ch. 4.6) calls “class action”: through our actions, we can acausally affect
an infinite amount of value, even if each physical instantiation of ourselves only has a finite
causal impact. It seems unclear whether this makes infinite ethics even more challenging, or
whether it can be viewed as a step toward a solution (cf. Almond, 2010c, ch. 3.2). One’s
preferred approach to the problem of infinite ethics may well be consequential for a variety of
issues (including MSR), which is why FRI lists infinite ethics as a promising area for future
research. Nonetheless, I expect a solution to preserve most of the conclusions drawn from
traditional (i.e. finite) ethics.
6.11 Objection based on uncertainty about the values of superrationalists in the multiverse
Thoughts on the value systems of extraterrestrials are necessarily speculative and uncertain.
At what level of certainty about some other value system should we invest resources into
maximizing it? Indeed, one possible criticism of MSR is that we will never be sufficiently
certain of just how common some other value system is. Thus, the argument goes, we should
in practice never take any specific value systems other than our own into consideration.
First note that superrationality is still relevant even if you do not know the other value
systems. There are some interventions that benefit other superrationalists without requiring
knowledge of their values (see section 4.1), such as making future superintelligent AIs
cooperate superrationally (under the assumption that they will come to understand the
values of other agents in the multiverse much better than we do).
But even if the argument acknowledges this, it is still invalid, primarily because it ignores
the fact that we do not know how common our own value system is, either. In section
2.8 we argued that if we consider the correlations between our actions and the behavior of
agents elsewhere in the multiverse, then maximizing a neutral compromise utility function in
our local universe maximizes our original utility function in the multiverse at large. This
argument also applies if we are uncertain about the other agents’ utility functions and thus
the compromise utility function itself. Thus, it must be possible to state the criticism in
terms of the compromise utility function. For example, the criticism may translate to the
following statement: the only terms in the compromise utility function that we can be certain
about represent our own values. We are so uncertain about all other value systems that they
do not contribute much to estimates of compromise utility. This criticism could, in theory,
be true. Imagine you grew up on a planet where everyone had the same value system as
yours; even if you believed that the universe also has other value systems, you would be
justified not to assign much weight to any other specific value system. On Earth, however,
we already observe quite some variety in what people care about. Thus, no matter what
value system you hold, there are probably other value systems that are similarly common
on Earth. Of course, we still do not know whether these value systems are also common
elsewhere in the universe, but your own value system is a priori not in a privileged position
that would justify assuming it to be more common than others. Solely maximizing our own
utility function in this universe thus seems to be a bad approach towards maximizing the
compromise utility function, in turn making it suboptimal in terms of our multiverse-wide
utility.
References
Agerström, Jens and Fredrik Björklund (2013). “Why People With an Eye Toward the
Future Are More Moral: The Role of Abstract Thinking”. In: Basic Appl. Soc. Psych.
35.4, pp. 373–381.
Ahmed, Arif (2014). Evidence, Decision and Causality . Cambridge University Press.
Ahmed, Arif and Huw Price (2012). “Arntzenius on ‘Why ain’cha rich?’” In: Erkenntnis
77.1, pp. 15–30.
Almond, Paul (2010a). “Can you retroactively put yourself in a computer simulation?” In:
—(2010b). On Causation and Correlation Part 1: Evidential Decision Theory is correct .
https://web.archive.org/web/20131006191217/http://www.paul-almond.com/
Correlation1.pdf.
—(2010c). On Causation and Correlation Part 2: Implications of Evidential Decision
Theory. https://web.archive.org/web/20120310010225/http://www.paul-almond.com/
Correlation2.pdf.
Alter, Adam, Daniel Oppenheimer, and Jeffrey Zemla (2010). “Missing the trees for the
forest: a construal level account of the illusion of explanatory depth”. In: J. Pers. Soc.
Psychol. 99.3, pp. 436–451.
Arab American Institute (2014). American attitudes toward Arabs and Muslims. http://b.3cdn.net/aai/3e05a493869e6b44b0_76m6iyjon.pdf.
Armstrong, Stuart (2011). “Anthropic decision theory”. In: arXiv: 1110.6437 [physics.data-an].
—(2013). “General Purpose Intelligence: Arguing the Orthogonality Thesis”. In: Analysis
and Metaphysics 12, pp. 68–84.
Arntzenius, Frank (2008). “No Regrets, or: Edith Piaf Revamps Decision Theory”. In:
Erkenntnis 68.2, pp. 277–297.
—(2014). “Utilitarianism, decision theory and eternity”. In: Philosophical Perspectives 28.1.
http://lukemuehlhauser.com/wp-content/uploads/Arntzenius-Utilitarianism-decision-
theory-and-eternity.pdf, pp. 31–58.
Atran, Scott (2001). “The trouble with memes”. In: Hum. Nat. 12.4, pp. 351–381.
Axelrod, Robert M (2006). The Evolution of Cooperation: Revised Edition . Basic Books.
Bain, Paul G, Matthew J Hornsey, Renata Bongiorno, and Carla Jeffries (2012). “Promoting
pro-environmental action in climate change deniers”. In: Nat. Clim. Chang.
Bain, Paul G, Matthew J Hornsey, Renata Bongiorno, Yoshihisa Kashima, et al. (2013).
“Collective futures: how projections about the future of society are related to actions and
attitudes supporting social change”. In: Pers. Soc. Psychol. Bull. 39.4, pp. 523–539.
Bain, Paul G, Taciano Milfont, et al. (2015). “Co-benefits of addressing climate change can
motivate action around the world”. In: Nat. Clim. Chang. 6.2, pp. 154–157.
Barasz, Mihaly et al. (2014). “Robust Cooperation in the Prisoner’s Dilemma: Program
Equilibrium via Provability Logic”. In: arXiv preprint arXiv:1401.5577 .
Basu, Kaushik (1994). “The Traveler’s Dilemma: Paradoxes of Rationality in Game Theory”.
In: Am. Econ. Rev. 84.2, pp. 391–395.
— (2007). “The Traveler’s Dilemma”. In: Scientific American 296.6, pp. 90–95.
Beaufils, Bruno, Jean-Paul Delahaye, and Philippe Mathieu (1997). “Our meeting with grad-
ual, a good strategy for the iterated prisoner’s dilemma”. In: Proceedings of the Fifth Inter-
national Workshop on the Synthesis and Simulation of Living Systems . books.google.com,
pp. 202–209.
Beckstead, Nicholas (2013). “On the overwhelming importance of shaping the far future”.
PhD thesis. Rutgers University-Graduate School-New Brunswick.
Benson-Tilsen, Tsvi (2014). UDT with known search order . Tech. rep. 2014–4. Machine
Intelligence Research Institute.
Binmore, Ken (2007a). Game theory: a very short introduction . Vol. 173. Oxford University
Press.
— (2007b). Interpersonal comparison of utility .
Blume, Lawrence, Adam Brandenburger, and Eddie Dekel (1989). “Department of Economics,
University of Michigan, Ann Arbor, Michigan USA”. In: Ann. Oper. Res. 19, pp. 231–246.
Border, Kim C (1985). “More on Harsanyi’s Utilitarian Cardinal Welfare Theorem”. In: Soc.
Choice Welfare 1.4, pp. 279–281.
Bostrom, Nick (2011). “Infinite ethics”. In: Analysis and Metaphysics 10. https://nickbostrom.
com/ethics/infinite.pdf, p. 9.
—(2012). “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced
Artificial Agents”. In: Minds & Machines 22.2, pp. 71–85.
Bostrom, Nick (2014a). Hail Mary, Value Porosity, and Utility Diversification . Tech. rep.
Oxford University.
— (2014b). Superintelligence: Paths, Dangers, Strategies . Oxford University Press.
Bourget, David and David Chalmers (2014). “What Do Philosophers Believe?” In: Philos. Stud. 170.3, pp. 465–500. url: http://philpapers.org/archive/BOUWDP.
Brock, Gillian and Nicole Hassoun (2013). “Distance, Moral Relevance of”. In: International
Encyclopedia of Ethics . Blackwell Publishing Ltd.
Busemeyer, Jerome R, Mervin R Matthew, and Zheng Wang (2006). “A quantum information
processing explanation of disjunction effects”. In: Proceedings of the Cognitive Science
Society. Vol. 28. 28.
Buss, David (2015). Evolutionary Psychology: The New Science of the Mind, Fifth Edition .
Psychology Press.
Campbell, Richmond and Lanning Sowden (1985). Paradoxes of Rationality and Cooperation:
Prisoner’s Dilemma and Newcomb’s Problem . UBC Press.
Claidière, Nicolas and Jean-Baptiste André (2012). “The Transmission of Genes and Culture:
A Questionable Analogy”. In: Evol. Biol. 39.1, pp. 12–24.
Cosmides, Leda and John Tooby (1994). “Origins of domain specificity: The evolution of
functional organization”. In: Mapping the mind: Domain specificity in cognition and
culture. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.7758&rep=rep1&
type=pdf, pp. 85–116.
—(1995). “The psychological foundations of culture”. In: The Adapted Mind: Evolutionary
Psychology and the Generation of Culture . Ed. by Jerome H Barkow, Leda Cosmides,
and John Tooby. Oxford University Press.
Cotton-Barratt, Owen (2013). “Geometric reasons for normalising variance to aggregate
preferences”. In:
Critch, Andrew (2016). “Parametric Bounded Löb’s Theorem and Robust Cooperation of
Bounded Agents”. In: arXiv: 1602.04184 [cs.GT].
Croson, Rachel TA (1999). “The disjunction effect and reason-based choice in games”. In:
Organizational Behavior and Human Decision Processes 80.2, pp. 118–133.
Davis, Lawrence H (1977). “Prisoners, Paradox, and Rationality”. In: Am. Philos. Q. 14.4,
pp. 319–327.
Dawkins, Richard (1976). The Selfish Gene . Oxford University Press.
Dennett, Daniel C (1991). Consciousness Explained . Back Bay.
Drescher, Gary L (2006a). Good and Real: Demystifying Paradoxes from Physics to Ethics .
MIT Press.
—(2006b). Good and real: Demystifying paradoxes from physics to ethics . https://www.
gwern.net/docs/2006-drescher-goodandreal.pdf. MIT Press.
Edmonds, Bruce (2005). “The revealed poverty of the gene-meme analogy – why memetics
per se has failed to produce substantive results”. In: 9.1.
Eells, Ellery (2016). Rational Decision and Causality . Cambridge University Press.
Egan, Andy (2007). “Some Counterexamples to Causal Decision Theory”. In: Philos. Rev.
116.1, pp. 93–114.
Emerson, Joseph and Raymond Laflamme (2006). Survey Results . http://web.archive.org/
web/20060726052606/http://www.iqc.ca/~qipcourse/interpret/survey.html.
Fehr, Ernst and Simon Gächter (1999). “Cooperation and Punishment in Public Goods
Experiments”. In: Institute for Empirical Research in Economics Working Paper No. 10;
CESifo Working Paper Series No. 183.
Felsenthal, D.S. and M. Machover (1998). The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes. Edward Elgar. isbn: 9781858988054. url: https://books.google.de/books?id=_I-FAAAAMAAJ.
Fishburn, Peter C (1971). “A Study of Lexicographic Expected Utility”. In: Manage. Sci.
17.11, pp. 672–678.
—(1984). “On Harsanyi’s utilitarian cardinal welfare theorem”. In: Theory Decis. 17.1,
pp. 21–28.
Flinn, Mark V, David C Geary, and Carol V Ward (2005). “Ecological dominance, social competition, and coalitionary arms races: Why humans evolved extraordinary intelligence”. In: Evolution and Human Behavior 26.1. http://web.missouri.edu/~gearyd/Flinnetal2005.pdf, pp. 10–46.
Freeman, Samuel (2016). “Original Position”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N Zalta. Winter 2016. https://plato.stanford.edu/entries/original-position/. Metaphysics Research Lab, Stanford University.
Friedman, David (1994). “A positive account of property rights”. In: Soc. Philos. Policy
11.02, pp. 1–16.
Garrabrant, Scott et al. (2016). “Logical Induction”. In: arXiv: 1609.03543 [cs.AI].
Gibbard, Allan and William L Harper (1978). “Counterfactuals and Two Kinds of Expected
Utility”. In: IFS. Ed. by William L Harper, Robert Stalnaker, and Glenn Pearce. The
University of Western Ontario Series in Philosophy of Science. Springer Netherlands,
pp. 153–190.
Gil-White, Francisco (2005). “Common misunderstandings of memes (and genes): The
promise and the limits of the genetic analogy to cultural transmission processes”. In:
2005a2, pp. 317–338.
Gloor, Lukas (2016). Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention. Tech. rep. FRI-16-1. Foundational Research Institute. url: https://foundational-research.org/wp-content/uploads/2016/08/Suffering-focused-AI-safety.pdf.
Gosepath, Stefan (2011). “Equality”. In: The Stanford Encyclopedia of Philosophy . Ed. by
Edward N Zalta. Spring 2011. https://plato.stanford.edu/entries/equality/. Metaphysics
Research Lab, Stanford University.
Graham, Jesse et al. (2011). “Mapping the moral domain.” In: Journal of personality and
social psychology 101.2, p. 366.
Greene, Brian (2011). The Hidden Reality: Parallel Universes and the Deep Laws of the
Cosmos. Knopf Doubleday Publishing Group.
Greene, Joshua (2013). Moral Tribes: Emotion, Reason and the Gap Between Us and Them .
London: Penguin Press.
Grill, Kalle (2015). “Respect for What?” In: Soc. Theory Pract. 41.4, pp. 692–715.
Haidt, Jonathan (2012). The righteous mind: Why good people are divided by politics and
religion. Vintage.
Hájek, Alan (2006). Masses of Formal Philosophy ‘Interview’ . http://philrsss.anu.edu.au/
people-defaults/alanh/papers/masses.pdf.
Hammond, Peter J (1991). “Interpersonal comparisons of utility: Why and how they are and
should be made”. In: Interpersonal Comparisons of Well-Being , pp. 200–254.
—(1992). “Harsanyi’s utilitarian theorem: A simpler proof and some ethical connotations”.
In:Rational Interaction . Springer, pp. 305–319.
Hansson, Sven Ove and Till Grüne-Yanoff (2012). “Preferences”. In: The Stanford Encyclo-
pedia of Philosophy . Ed. by Edward N Zalta. Winter 2012. https://plato.stanford.edu/
entries/preferences/.
Hare, Richard M (1993). “Could Kant Have Been a Utilitarian?” In: Kant and Critique: New
Essays in Honor of W.H. Werkmeister . Springer, Dordrecht, pp. 91–113.
Harsanyi, John C (1953). “Cardinal Utility in Welfare Economics and in the Theory of
Risk-taking”. In: J. Polit. Econ. 61.5.
—(1955). “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of
Utility”. In: J. Polit. Econ. 63.4, pp. 309–321.
—(1979). “Bayesian decision theory, rule utilitarianism, and Arrow’s impossibility theorem”.
In:Theory Decis. 11.3, pp. 289–317.
Hausman, Daniel M (1995). “The Impossibility of Interpersonal Utility Comparisons”. In: Mind 104.415, pp. 473–490.
Henrich, Joseph (2015). The Secret of Our Success: How Culture Is Driving Human Evolution,
Domesticating Our Species, and Making Us Smarter . Princeton University Press.
Henson, Richard G (1971). “Utilitarianism and the Wrongness of Killing”. In: Philos. Rev.
80.3, pp. 320–337.
Hintze, Daniel (2014). “Problem class dominance in predictive dilemmas”. https://intelligence.
org/wp-content/uploads/2014/10/Hintze-Problem-Class-Dominance-In-Predictive-
Dilemmas.pdf. PhD thesis. Arizona State University.
Hofstadter, Douglas (1983). “Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery”. In: Sci. Am. 248.6. url: https://www.gwern.net/docs/1985-hofstadter#dilemmas-for-superrational-thinkers-leading-up-to-a-luring-lottery.
Hong, Jiewen and Angela Y Lee (2010). “Feeling mixed but not torn: The moderating
role of construal level in mixed emotions appeals”. In: Journal of Consumer Research
37.3. http://www.bm.ust.hk/mark/staff/Jiewen/Jiewen%20JCR-Oct%202010.pdf,
pp. 456–472.
Horgan, Terence (1981). “Counterfactuals and Newcomb’s Problem”. In: J. Philos. 78.6,
pp. 331–356.
Howard, John V (1988). “Cooperation in the Prisoner’s Dilemma”. In: Theory and Decision
24.3. https://sl4librarian.files.wordpress.com/2016/12/howard1988.pdf, pp. 203–213.
Hutter, Marcus (2010). “A complete theory of everything (will be subjective)”. In: Algorithms
3.4, pp. 329–350.
Inglehart, Ronald and Christian Welzel (2010). “Changing mass priorities: The link between
modernization and democracy”. In: Perspectives on Politics 8.02, pp. 551–567.
Isbell, John R (1959). “Absolute games”. In: Contributions to the Theory of Games 4,
pp. 357–396.
Joyce, James M (1999). The Foundations of Causal Decision Theory . Cambridge Studies in
Probability, Induction, and Decision Theory. Cambridge University Press.
Joyce, Richard (2016). “Moral Anti-Realism”. In: The Stanford Encyclopedia of Philosophy .
Ed. by Edward N Zalta. Winter 2016. https://plato.stanford.edu/entries/moral-anti-
realism/. Metaphysics Research Lab, Stanford University.
Judge, Madeline and Marc S Wilson (2015). “Vegetarian Utopias: Visions of dietary patterns
in future societies and support for social change”. In: Futures71, pp. 57–69.
Kagan, Shelly (2016). “What’s Wrong with Speciesism? (Society for Applied Philosophy
Annual Lecture 2015)”. In: J. Appl. Philos. 33.1, pp. 1–21.
Kanazawa, Satoshi and Linus Fontaine (2013). “Intelligent people defect more in a one-shot
prisoner’s dilemma game.” In: Journal of Neuroscience, Psychology, and Economics 6.3,
p. 201.
Kent, Adrian (1997). “Against Many-Worlds Interpretations”. In: arXiv: gr-qc/9703089
[gr-qc].
Knutsson, Simon (2016). Value Lexicality – Foundational Research Institute . https://
foundational-research.org/value-lexicality/. Accessed: 2017-2-27.
Kross, Ethan and Igor Grossmann (2012). “Boosting wisdom: distance from the self enhances wise reasoning, attitudes, and behavior.” In: Journal of Experimental Psychology: General 141.1. https://uwaterloo.ca/wisdom-and-culture-lab/sites/ca.wisdom-and-culture-lab/files/uploads/files/kross_grossmann_jepg_2012.pdf, p. 43.
Krueger, Joachim I, Theresa E DiDonato, and David Freestone (2012). “Social projection can
solve social dilemmas”. In: Psychological Inquiry 23.1. https://sl4librarian.files.wordpress.
com/2017/01/krueger2012-social-projection.pdf, pp. 1–27.
Kuhn, Steven (2017). “Prisoner’s Dilemma”. In: The Stanford Encyclopedia of Philosophy . Ed.
by Edward N Zalta. Spring 2017. https://plato.stanford.edu/entries/prisoner-dilemma/.
Metaphysics Research Lab, Stanford University.
Kuper, Adam (2000). “If memes are the answer, what is the question”. In: Darwinizing
culture: The status of memetics as a science , pp. 175–188.
Kurzban, Robert (2012). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular
Mind. Princeton University Press.
Lakoff, George (1997). “Moral politics: What conservatives know that liberals don’t”. In:
LaVictoire, Patrick et al. (2014). “Program equilibrium in the prisoner’s dilemma via
Löb’s theorem”. In: AAAI Multiagent Interaction without Prior Coordination workshop .
aaai.org.
Ledwig, Marion (2000). “Newcomb’s problem”. https://kops.uni-konstanz.de/bitstream/
handle/123456789/3451/ledwig.pdf. PhD thesis.
Lee, Seyoung and Thomas Hugh Feeley (2016). “The identifiable victim effect: a meta-analytic
review”. In: Social Influence 11.3, pp. 199–215.
Leslie, John (1991). “Ensuring two bird deaths with one throw”. In: Mind100.1, pp. 73–86.
Lewis, David (1979). “Prisoners’ dilemma is a Newcomb problem”. In: Philosophy & Public
Affairs. https://sl4librarian.files.wordpress.com/2017/01/lewis-prisoners-dilemma-
newcomb-problem.pdf, pp. 235–240.
— (1981). “Causal Decision Theory”. In: Australas. J. Philos. 59.1, pp. 5–30.
— (1986). On the plurality of worlds . Vol. 322. Oxford.
Leyton-Brown, Kevin and Yoav Shoham (2008). “Essentials of Game Theory: A Concise
Multidisciplinary Introduction”. In: Synthesis Lectures on Artificial Intelligence and
Machine Learning 2.1, pp. 1–88.
Li, Shu and John Taplin (2002). “Examining whether there is a disjunction effect in Prisoner’s
Dilemma games”. In:
Lord, Charles G, Lee Ross, and Mark R Lepper (1979). “Biased assimilation and attitude
polarization: The effects of prior theories on subsequently considered evidence”. In: J.
Pers. Soc. Psychol. 37.11, p. 2098.
Luguri, Jamie B and Jaime L Napier (2013). “Of two minds: The interactive effect of construal level and identity on political polarization”. In: Journal of Experimental Social Psychology 49.6. https://www.researchgate.net/profile/Jaime_Napier/publication/277451893_Of_two_minds_The_interactive_effect_of_construal_level_and_identity_on_political_polarization/links/56d7f5ec08aebe4638af2566.pdf, pp. 972–977.
MacAskill, William (2014). “Normative Uncertainty”. https://goo.gl/aqqkZT. PhD thesis.
—(2016). “Smokers, Psychos, and Decision-Theoretic Uncertainty”. In: J. Philos. 113.9,
pp. 425–445.
McAfee, R Preston (1984). “Effective computability in economic decisions”. In: University of Western Ontario working paper. http://www.mcafee.cc/Papers/PDF/EffectiveComputability.pdf.
McAllister, Tyrrell. Wei Dai’s Updateless Decision Theory. https://casparoesterheld.files.wordpress.com/2017/08/updateless_decision_theory-1.pdf.
Meacham, Christopher J G (2010). “Binding and its consequences”. In: Philos. Stud. 149.1,
pp. 49–71.
Mill, John Stuart (1863). Utilitarianism . Parker, Son and Bourn.
Muehlhauser, Luke and Louie Helm (2012). Intelligence Explosion and Machine Ethics .
https://intelligence.org/files/IE-ME.pdf.
Nathanson, Steven. Utilitarianism, Act and Rule | Internet Encyclopedia of Philosophy .
http://www.iep.utm.edu/util-a-r/. Accessed: 2017-7-7.
Neumann, John von and Oskar Morgenstern (1953). Theory of Games and Economic Behavior. 3rd ed. Princeton University Press.
Ng, Andrew Y and Stuart J Russell (2000). “Algorithms for Inverse Reinforcement Learning”. In: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 663–670.
Nielsen, Michael (2004). The interpretation of quantum mechanics . https://web.archive.org/
web/20040520222807/http://www.qinfo.org/people/nielsen/blog/archive/000060.html.
Nozick, Robert (1969). “Newcomb’s problem and two principles of choice”. In: Essays in
honor of Carl G. Hempel . Springer, pp. 114–146.
Oesterheld, Caspar (2016a). “Formalizing preference utilitarianism in physical world models”.
In: Synthese 193.9, pp. 2747–2759.
—(2016b). “Formalizing preference utilitarianism in physical world models”. English. In:
Synthese 193.9, pp. 2747–2759.
—(2017).A Short Technical Note on Two Notions of Coordination . Tech. rep. Foundational
Research Institute.
Oesterheld, Caspar (2017a). Modal Realism and the No-Free-Lunch Theorems. Unpublished.
— (2017b). Unpublished .
Osborne, Martin J (2004). An introduction to game theory . Oxford university press New
York.
Osborne, Martin J and Ariel Rubinstein (1994). A Course in Game Theory . MIT Press.
Parfit, Derek (1984). Reasons and Persons . Oxford: Oxford University Press.
— (2011). On What Matters: Volume One . Oxford University Press.
Park, Joonha, Paul G Bain, and Takashi Kusumi (2015). “Nuclear Power and Japan’s
Collective Future: Understanding Beliefs About How Nuclear Policies Will Affect Society”.
In:Journal of Pacific Rim Psychology 9.2, pp. 78–85.
Pearl, Judea (2009). Causality . Cambridge University Press.
Peterson, Martin (2017). An Introduction to Decision Theory . Cambridge University Press.
Pinker, Steven (1999). “How the mind works”. In: Ann. N. Y. Acad. Sci. 882, 119–27,
discussion 128–34.
— (2011). The better angels of our nature: Why violence has declined . Viking.
Pogge, Thomas W (1995). “Utilitarianism and Equality”. In: Midwest Studies In Philosophy
20.1, pp. 332–339.
Pothos, Emmanuel M et al. (2011). “Understanding cooperation in the Prisoner’s Dilemma
game”. In: Personality and Individual Differences 51.3, pp. 210–215.
Price, Huw (1986). “Against causal decision theory”. In: Synthese 67.2, pp. 195–212.
Radford, Colin and Michael Weston (1975). “How can we be moved by the fate of Anna
Karenina?” In: Proceedings of the Aristotelian Society, Supplementary Volumes 49, pp. 67–
93.
Rawls, John (1971). A Theory of Justice . Harvard University Press.
Resnik, Michael D (1983). “A restriction on a theorem of Harsanyi”. In: Theory Decis. 15.4,
pp. 309–320.
Robinson, David and David Goforth (2005). The topology of the 2x2 games: a new periodic
table. Vol. 3. https://sl4librarian.files.wordpress.com/2016/12/goforthrobinson-the-
topology-of-the-2x2-games-a-new-periodic-table.pdf. Psychology Press.
Rogers, Everett M (2010). Diffusion of Innovations, 4th Edition . Simon and Schuster.
Rudd, Melanie, Jennifer Aaker, and Michael I Norton (2014). “Getting the most out of giving:
Concretely framing a prosocial goal maximizes happiness”. In: Journal of Experimental
Social Psychology 54, pp. 11–24.
Sadava, David E et al. (2012). Life: The Science of Biology, 10th Edition . 10th edition. W.
H. Freeman.
Sarkissian, Hagop et al. (2011). “Folk Moral Relativism”. In: Mind Lang. 26.4, pp. 482–505.
Sarma, Gopal P and Nick J Hay (2016). “Mammalian Value Systems”. In: arXiv:1607.08289
[cs.AI].
Schelling, Thomas C (1960). The Strategy of Conflict. Cambridge, Mass.: Harvard University Press.
Schmidhuber, Jürgen (1997). “A Computer Scientist’s View of Life, the Universe, and
Everything”. In: Foundations of Computer Science. Potential – Theory – Cognition .
Ed. by Christian Freksa, Matthias Jantzen, and Rüdiger Valk. Vol. 1337. Lecture Notes in
Computer Science. ftp://ftp.idsia.ch/pub/juergen/everything.pdf. Springer, pp. 201–208.
Schneider, Steven. “The Paradox of Fiction”. In: The Internet Encyclopedia of Philosophy .
Accessed: 2017-7-9, http://www.iep.utm.edu/fict-par/.
Schwartz, Shalom H et al. (2012). “Refining the theory of basic individual values”. In: J.
Pers. Soc. Psychol. 103.4, pp. 663–688.
Schwitzgebel, Eric and Fiery Cushman (2012). “Expertise in Moral Reasoning? Order Effects
on Moral Judgment in Professional Philosophers and Non-Philosophers”. In: Mind and
Language 27.2, pp. 135–153.
Schwitzgebel, Eric and Joshua Rust (2011). “The self-reported moral behavior of ethics
professors”. In: Unpublished manuscript . http://www.faculty.ucr.edu/~eschwitz/
SchwitzPapers/EthSelfRep-110316.pdf.
Sen, Amartya K (2014). Collective Choice and Social Welfare . Elsevier.
Shafir, Eldar and Amos Tversky (1992). “Thinking through uncertainty: Nonconsequential
reasoning and choice”. In: Cognitive psychology 24.4, pp. 449–474.
Shweder, Richard A et al. (1997). “The “Big Three” of Morality (Autonomy, Community and Divinity) and the “Big Three” Explanations of Suffering”. In: Morality and health 119, pp. 119–169.
Singer, Peter (1972). “Famine, Affluence, and Morality”. In: Philos. Public Aff. 1.3, pp. 229–
243.
—(2011).The Expanding Circle: Ethics, Evolution, and Moral Progress . Princeton University
Press.
Sinn, Jeffrey S and Matthew W Hayes (2016). “Replacing the Moral Foundations: An Evolutionary-Coalitional Theory of Liberal-Conservative Differences”. In: Political Psychology. https://sl4librarian.files.wordpress.com/2017/01/sinn2016-replacing-the-moral-foundations.pdf.
Skyrms, Brian (1982). “Causal Decision Theory”. In: J. Philos. 79.11, pp. 695–711.
Smart, John J C and Bernard Williams (1973). Utilitarianism: For and Against . Cambridge
University Press.
Soares, Nate (2015). Formalizing two problems of realistic world-models. Tech. rep. Machine Intelligence Research Institute. url: https://intelligence.org/files/RealisticWorldModels.pdf.
Soares, Nate and Benja Fallenstein (2014). Aligning superintelligence with human inter-
ests: A technical research agenda . Tech. rep. https://pdfs.semanticscholar.org/d803/
3a314493c8df3791912272ac4b58d3a7b8c2.pdf. Machine Intelligence Research Institute.
— (2015). “Toward Idealized Decision Theory”. In: arXiv: 1507.01986 [cs.AI].
Soares, Nate and Benjamin A Levinstein. Cheating Death in Damascus. Tech. rep. http://intelligence.org/files/DeathInDamascus.pdf. Machine Intelligence Research Institute.
Spohn, Wolfgang (2003). “Dependency Equilibria and the Causal Structure of Decision and
Game Situation”. In: Homo Oeconomicus 20, pp. 195–255.
— (2005). The 5 Questions. http://kops.uni-konstanz.de/bitstream/handle/123456789/3489/Spohn_2005.pdf.
—(2012). “Reversing 30 years of discussion: why causal decision theorists should one-box”.
In:Synthese 187.1, pp. 95–122.
Stern, Chadly, Tessa V West, and Peter G Schmitt (2014). “The liberal illusion of uniqueness”.
In:Psychological Science 25.1, pp. 137–144.
Stewart-Williams, Steve (2015). “Morality: Evolution of”. In: International Encyclopedia
of the Social & Behavioral Sciences (Second Edition) . Ed. by James D Wright. Oxford:
Elsevier, pp. 811–818.
Studebaker, Benjamin (2012). Utilitarianism and Equality . https://benjaminstudebaker.com/
2012/09/17/utilitarianism-and-equality/. Accessed: 2017-7-7.
Swidler, Ann (2013). Talk of love: How culture matters . University of Chicago Press.
Taber, Charles S and Milton Lodge (2006). “Motivated Skepticism in the Evaluation of
Political Beliefs”. In: Am. J. Pol. Sci. 50.3, pp. 755–769.
Taylor, Michael (1987). The Possibility of Cooperation . Cambridge University Press.
Tegmark, Max (1997). “The Interpretation of Quantum Mechanics: Many Worlds or Many
Words?” In: arXiv: quant-ph/9709032 [quant-ph] .
— (1998). “Is ‘the theory of everything’ merely the ultimate ensemble theory?” In: Ann. Phys. 270, pp. 1–51.
—(2003). “Parallel Universes”. In: Science and Ultimate Reality: From Quantum to Cosmos .
Ed. by J.D. Barrow, P.C.W. Davies, & C.L. Harper.
— (2007). “Many lives in many worlds”. In: Nature448.7149, pp. 23–24.
— (2008). “The Mathematical Universe”. In: Found. Phys. 38.2, pp. 101–150.
—(2014).Our Mathematical Universe. My Quest for the Ultimate Nature of Reality . Penguin.
Tegmark, Max and John Archibald Wheeler (2001). “100 Years of the Quantum”. In: arXiv:
quant-ph/0101077 [quant-ph] .
Tennenholtz, Moshe (2004). “Program equilibrium”. In: Games and Economic Behavior 49.2.
https://ie.technion.ac.il/~moshet/progeqnote4.pdf, pp. 363–373.
Thiele, Leslie Paul (2006). The heart of judgment: Practical wisdom, neuroscience, and
narrative . Cambridge University Press.
Tipler, Frank J (1994). The physics of immortality: Modern cosmology, God, and the resur-
rection of the dead . Anchor.
Torelli, Carlos J and Andrew M Kaikati (2009). “Values as predictors of judgments and
behaviors: the role of abstract and concrete mindsets”. In: J. Pers. Soc. Psychol. 96.1. http:
//citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.186.6465&rep=rep1&type=pdf,
pp. 231–247.
Treutlein, Johannes and Caspar Oesterheld (2017). A Wager for Evidential Decision Theory .
Unpublished.
Treutlein, Johannes and Caspar Oesterheld (2017). A Typology of Newcomblike Problems. Unpublished.
Trivers, Robert L (1971). “The Evolution of Reciprocal Altruism”. In: Q. Rev. Biol. 46.1,
pp. 35–57.
Trope, Yaacov and Nira Liberman (2010a). “Construal-level theory of psychological distance.” In: Psychological review 117.2. http://www.psych.nyu.edu/trope/Trope_Liberman_2010.pdf, p. 440.
—(2010b). “Construal-level theory of psychological distance”. In: Psychol. Rev. 117.2,
pp. 440–463.
Tversky, Amos and Eldar Shafir (1992). “The disjunction effect in choice under uncertainty”.
In:Psychological science 3.5, pp. 305–310.
Urry, Lisa A et al. (2016). Campbell Biology (11th Edition) . 11 edition. Pearson.
Vaidman, Lev (2016). “Many-Worlds Interpretation of Quantum Mechanics”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N Zalta. Fall 2016. https://plato.stanford.edu/entries/qm-manyworlds/. Metaphysics Research Lab, Stanford University.
Varian, Hal R (2006). “Revealed preference”. In: Samuelsonian economics and the twenty-first
century, pp. 99–115.
Wedgwood, Ralph (2013). “Gandalf’s solution to the Newcomb problem”. In: Synthese ,
pp. 1–33.
Weirich, Paul (2016). “Causal Decision Theory”. In: The Stanford Encyclopedia of Philosophy .
Spring 2016. https://plato.stanford.edu/entries/decision-causal/.
Williams, Evan Gregg (2011). “Ethics under moral neutrality”. PhD thesis. Rutgers University-Graduate School-New Brunswick. url: https://rucore.libraries.rutgers.edu/rutgers-lib/34078/pdf/1/.
Wimsatt, William C (1999). “Genes, Memes, and Cultural Heredity”. In: Biol. Philos. 14.2,
pp. 279–310.
Wright, Robert (1995). The Moral Animal: Evolutionary Psychology and Everyday Life .
Vintage Books.
Yang, Daniel Y-J, Jesse Lee Preston, and Ivan Hernandez (2013). “Polarized attitudes toward
the Ground Zero mosque are reduced by high-level construal”. In: Social Psychological
and Personality Science 4.2. http://minerva.union.edu/bizerg/readings230/wk04.pdf,
pp. 244–250.
Yudkowsky, Eliezer (2004). Coherent Extrapolated Volition . https://intelligence.org/files/
CEV.pdf. Machine Intelligence Research Institute.
—(2010a). Timeless Decision Theory . http://intelligence.org/files/TDT.pdf. Machine
Intelligence Research Institute.
— (2010b). “Timeless Decision Theory”. In: Machine Intelligence Research Institute.
— (2015). Rationality: From AI to Zombies . Machine Intelligence Research Institute.
|
56755f6a-540b-429a-bf71-992e2c66149a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Stack Overflow of Factored Cognition
Abstract
--------
Factored cognition is a possible basis for building aligned AI. Currently Ought
runs small-scale experiments with it. In this article I sketch some benefits of
building a system for doing large-scale experiments and generating large amounts
of data for ML training. Then I estimate roughly how long it would take to build
such a system. I'm not confident of this exploration being useful at all. But at
least I wrote it down.
Benefits
--------
If you want to know what factored cognition is, see
[here](https://ought.org/research/factored-cognition).
Ought does small-scale experiments with factored cognition (cf. Ought's Progress
Update Winter 2018). I thought: wouldn't it be nice to do these experiments at
much larger scale? With enough users that one root question could be answered
within three hours any time and day of the week.
Benefits:
* The feedback loop would be much tighter than with the weekly or bi-weekly
experiments that Ought runs now. A tight feedback loop is great in many ways.
For example, it would allow a researcher to test more hypotheses more
often, more quickly and more cheaply. This in turn helps her to generate more
hypotheses overall.
Note that I might be misunderstanding the goals and constraints of Ought's
experiments. In that case this benefit might be irrelevant.
* It would generate a lot of data. These could be used as training data when we
want to train an ML system to do factored cognition.
Quantifying these benefits is possible, but would take some weeks of modelling
and talking with people. So far I'm not confident enough of the whole idea to
make the effort.
Feasibility
-----------
We would need three things for a large-scale factored cognition system to work:
the system itself, enough users and useful behaviour of these users. I'll use
Stack Overflow as a basis for my estimates and call large-scale factored
cognition ‘Fact Overflow’.
Building Stack Overflow took five months from start of development
[㊮](https://stackoverflow.blog/2008/04/17/podcast-1/) to public beta
[㊮](https://stackoverflow.blog/2008/09/15/then-a-miracle-occurs-public-beta/).
Then they spent a lot of time tweaking the system to make it more attractive and
maintain quality. So I'd say building Fact Overflow would take five to fifteen
months with a team of two to five people.
For calculating how many users would be required, I used the following estimates
(90 % confidence interval, uniformly distributed):
| variable | 5 % | 95 % | explanation |
| --- | --- | --- | --- |
| n_w | 15 | 300 | average number of workspaces per tree |
| n_a | 1 | 5 | average number of actions per workspace |
| x_c | 0.1 | 0.7 | decontamination factor |
| x_a | 0.1 | 0.7 | share of active users among all users |
| f_a | 1 | 10 | average frequency of actions per active user per day |
x_c is the share of workspaces in a tree that one user can work on without being contaminated, i.e. without getting clues about the context of some workspaces.
The estimates are sloppy and probably overconfident. If people show interest in
this topic, I will make them tighter and better calibrated.
Now if we want a tree of workspaces to be finished within time t_f, we need n_u^* users, where:

n_u^* = (n_w ⋅ n_a) / (x_c ⋅ x_a ⋅ f_a ⋅ t_f)
A [Guesstimate model](https://www.getguesstimate.com/models/13239) based on this
formula tells me that for t_f = 3 h we need between 600 and 36 k
users. Note that Guesstimate runs only 5000 samples, so the numbers jump around
with each page reload. Note also that the actual time to finish a tree might be
longer, depending on how long users take for each action and how many
sub-questions have to be worked on in sequence.
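For readers who want to play with these numbers outside Guesstimate, here is a rough Monte Carlo sketch of the same calculation (my reconstruction, treating each 90 % interval as a uniform range, as the post does; exact outputs will differ from the Guesstimate run):

```python
import random

random.seed(0)

def sample_required_users(t_f_hours=3.0):
    """One draw of n_u^* = (n_w * n_a) / (x_c * x_a * f_a * t_f)."""
    n_w = random.uniform(15, 300)   # workspaces per tree
    n_a = random.uniform(1, 5)      # actions per workspace
    x_c = random.uniform(0.1, 0.7)  # decontamination factor
    x_a = random.uniform(0.1, 0.7)  # share of active users
    f_a = random.uniform(1, 10)     # actions per active user per day
    t_f_days = t_f_hours / 24       # f_a is per day, so convert hours to days
    return (n_w * n_a) / (x_c * x_a * f_a * t_f_days)

samples = sorted(sample_required_users() for _ in range(100_000))
print(f"90% interval: {samples[5_000]:,.0f} to {samples[95_000]:,.0f} users")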
How long would it take to accumulate these numbers of users? For this I use the
[number of sign-ups to Stack
Exchange](https://data.stackexchange.com/stackoverflow/query/80302/new-users-per-day)
(of which Stack Overflow is the largest part). Let me assume that between 75 %
and 98 % of people who sign up actually become users. That means between 700 and
42 k sign-ups are required. This is also in Guesstimate. What I can't include in
the Guesstimate simulation is the difference between the growth rates of Stack
Overflow and Fact Overflow. Assume that it takes Fact Overflow twice as long as
Stack Overflow to reach a certain number of sign-ups. Then it would take one
month to reach 700 sign-ups and twenty-two months to reach 42 k sign-ups.
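Continuing the sketch above, the sign-up requirement is just the user requirement divided by an assumed conversion rate:

```python
def sample_required_signups():
    """Sign-ups needed, assuming 75-98 % of sign-ups become users (uniform)."""
    conversion = random.uniform(0.75, 0.98)
    return sample_required_users() / conversion

signups = sorted(sample_required_signups() for _ in range(100_000))
print(f"90% interval: {signups[5_000]:,.0f} to {signups[95_000]:,.0f} sign-ups")
```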
Of course, the system would have to be useful and fun enough to retain that many
users. As with Stack Overflow, the software and the community have to encourage
and ensure that the users behave in a way that makes factored cognition work.
Conclusion
----------
It would be useful to be able to experiment with factored cognition at a large
scale. I can't quantify the usefulness quickly, but I did quantify very roughly
what it would take: five to fifteen months of development effort with a small
team plus one to twenty-two months of accumulating users.
Comment prompts
---------------
* What do you think I'm misunderstanding?
* Do you think my exploration of large-scale factored cognition is a waste of
time? If so, why?
* Do you think one could build a platform attractive enough to draw and retain that many users?
If so, how? What kinds of questions and topics would be inclusive enough to
gain critical mass and exclusive enough to maintain quality? |
adfa167b-3225-42a0-ac1d-77adb772a46e | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Decoherence is Simple
Today's post, Decoherence is Simple was originally published on 06 May 2008. A summary (taken from the LW wiki):
> The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Spooky Action at a Distance, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
4bf6c254-dd9a-41ff-b633-60ffe890dc5c | trentmkelly/LessWrong-43k | LessWrong | The recent NeurIPS call for papers requires authors to include a statement about the potential broader impact of their work
NeurIPS (formerly NIPS) is a top conference in machine learning and computational neuroscience. The recently published call for papers for NeurIPS 2020 includes the following (which did not appear in previous years):
> In order to provide a balanced perspective, authors are required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.
>
> [...]
>
> Evaluation criteria: [...] Submissions will also be considered on ethical grounds. Regardless of scientific quality or contribution, a submission may be rejected for ethical considerations, including methods, applications, or data that create or reinforce unfair bias or that have a primary purpose of harm or injury. |
12159121-6e9b-41b4-b976-7f5c002c71e6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Towards an Intentional Research Agenda
*This post is motivated by research intuitions that better [formalisms in consciousness research](https://qualiaresearchinstitute.org/research/) contribute to agent foundations in more ways than just the value loading problem. Epistemic status: speculative.*
[David Marr's levels of analysis](https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis) is the idea that any analysis of a system involves analyzing it at multiple, distinct levels of abstraction. These levels are the computational, which describes what the system is trying to do; the algorithmic, which describes which algorithms the system instantiates in order to accomplish that goal; and the implementation level, describing the hardware or substrate on which the system is running. Each level [underdetermines](https://en.wikipedia.org/wiki/Indeterminacy_of_translation) the other levels. You can choose lots of different algorithms for a given goal, and algorithms don't restrict which goals can use them. A concrete example Marr uses is that you'd have a very hard time figuring out what a feather was for if you'd never seen a bird flying, and if you only saw a bird flying you might have a very difficult time coming up with something like the design of a feather.
Imagine a world that had recently invented computers. The early examples are very primitive, but people can extrapolate and see that these things will be very powerful, likely transformative to society. They're pretty concerned about the potential for these changes to be harmful, maybe even catastrophic. Although people have done a bit of theoretical work on algorithms, it isn't all that sophisticated. But since the stakes are high, they try their best to start figuring out what it would mean for there to be such a thing as harmful algorithms, or how to bound general use algorithms such that they can only be used for certain things. They even make some good progress, coming up with the concept of ASICs so that they can maybe hard code the good algorithms and make it impossible to run the bad. They're still concerned that a sufficiently clever or sufficiently incentivized agent could use ASICs for bad ends somehow.
If this situation seems a bit absurd to you, it's because you intuitively recognize that the hardware level underdetermines the algorithmic level. I argue that we may be making the same error now. The algorithmic level underdetermines the computational level, and no matter [how many combinations of cleverly constructed algorithms you stack on themselves](https://pdfs.semanticscholar.org/d3be/45f69747bc4c64666d79bab9b4a255649d5f.pdf), you won't be able to bound the space of possible goals in a way that gets you much more than weak guarantees. In particular, a system constructed with the right intentional formalism should actively want to avoid being goodharted, just as a human does. Such an agent should have knightian uncertainty and therefore also (potentially) avoid maximizing.
In physics (or at the implementation level) there are notions of smallest units; counting up the different ways these units can be combined creates the notion of thermodynamic entropy, and we can easily define distance functions. In information theory (or at the algorithmic level) there are notions of bits; counting up the different ways these bits can be arranged creates the notion of information-theoretic entropy, and we can likewise define [distance functions](https://en.wikipedia.org/wiki/Levenshtein_distance). I think we need to build a notion of units of intentionality (at the computational level), and measures of the permutations of ways these units can be arranged, to give a notion of intentional (computational) entropy, along with what could turn out to be a key insight for aligning AI: a distance function between intentions.
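For concreteness, the two algorithmic-level quantities referenced above fit in a few lines of Python (standard textbook definitions, nothing specific to this proposal):

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Bits per symbol: counts the ways the symbols could be arranged."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def levenshtein(a: str, b: str) -> int:
    """Edit distance: a distance function at the algorithmic level."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(shannon_entropy("aabb"))          # 1.0 bit per symbol
print(levenshtein("intent", "extent"))  # 2
```

The proposal above amounts to asking for the analogous pair of definitions one level up: an entropy and a distance function over intentions rather than over bits.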
In the same way that trying to build complex information processing systems without a concrete notion of information would be quite confusing, I claim that trying to build complex intentional systems without a concrete notion of intention is confusing. This may sound a bit far-fetched, but I claim that it is exactly as hard to think about as information theory was before Shannon found a formalism that worked.
I think there are already several beachheads for this problem that are suggestive:
Predictive processing (relation to smallest units of intention).
In particular, one candidate for the smallest unit is the smallest unit that a given feedback circuit (like a thermostat) can actually distinguish. We humans get around this by translating from systems in which we can make fewer distinctions (like, say, heat) into systems in which we can make more (like, say, our symbolic processing of visual information in the form of numbers). (A minimal sketch of such a feedback unit appears after this list.)
Convergent instrumental goals (structural invariants in goal systems).
In particular, I think it would be worth investigating differing intuitions about just how strong a forcing function convergent instrumental goals are. Do we expect a universe optimized by a capability-boosted Gandhi and Clippy to be 10% similar, 50%, 90%, or perhaps 99.9999+% similar?
Modal Logic (relation to [counterfactuals](https://plato.stanford.edu/entries/logic-modal/#ConLog) and as semantics for the intentionality of beliefs).
[Goodhart's taxonomy](https://arxiv.org/pdf/1803.04585.pdf) begins to parameterize, and therefore define distance functions for divergence of intent.
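Here is the promised sketch of a minimal feedback unit: its 'intentional resolution' is exactly one distinguishable bit of world-state (a toy example of mine, not a formalism from the literature):

```python
class Thermostat:
    """A minimal feedback circuit that can 'want' one bit of world-state."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def distinguish(self, temperature: float) -> bool:
        # The only distinction this unit can make about the world:
        # above or below the setpoint, nothing finer.
        return temperature < self.setpoint

    def act(self, temperature: float) -> str:
        return "heat_on" if self.distinguish(temperature) else "heat_off"

t = Thermostat(setpoint=20.0)
print(t.act(18.5))  # heat_on
print(t.act(21.0))  # heat_off
```

On this picture, richer intentions would be built by composing many such units, which is what the first question below gestures at.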
Some other questions:
How do simple intentions get combined to form more complex intentions? I think this is tractable via experimentation with simple circuits. This could also suggest approaches to [pre-rationality](https://www.overcomingbias.com/2018/10/rationality-requires-common-priors.html) via explaining (rigorously) how complex priors arise from homeostatic priors.
In Buddhism, intention is considered synonymous with consciousness, while in the west this is considered a [contentious claim](https://plato.stanford.edu/entries/consciousness-intentionality/). What simple facts, if known, would collapse the seeming complexity here?
Can we consider intentions as a query language? If so, what useful ideas or results can we port over from database science? Is the apparent complexity of human values a side effect of the dimensionality of the space more so than the degree of resolution on any particular dimension?
Note:
When I read vague posts like this myself, I sometimes have vague objections but don't write them up due to the effort to bridge the inferential distance to the author and also the sense that the author will interpret attempts to bridge that distance as harsher criticism than I intend. Please feel free to give half formed criticism and leave me to fill in the blanks. It might poke my own half formed thoughts in this area in an interesting way. |
b5ac87fa-f293-404d-8bcf-c881430dfc46 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Sydney Summer Solstice
WHEN: 21 December 2014 06:00:00PM (+1100)
WHERE: Model Yacht Pond, Centennial Park
Help us celebrate the end of a full year of rekindled Less Wrong Sydney! We will be holding a Summer Solstice event on Saturday the 21st in Centennial Park.
Come along from 6pm to have dinner and socialise, then watch the sun go down at 8pm, and take part in a final end of year group activity. We will be supplying candles and Less Wrong members can hold the floor and deliver a reading, a focused exercise, etc.
Please leave a comment with the type of exercise you'd like to run (or if you have an idea you'd like to suggest someone else picks up and runs).
If you know anyone else outside of Less Wrong who you think would enjoy the evening please invite them along also!
|
7b6baa5a-76d2-4774-8914-40801e2eaa26 | trentmkelly/LessWrong-43k | LessWrong | Georgism in Space
Georgism is cool and all, but it was invented in 1879, and I prefer my ideologies to be more futuristic. Unlike the Network State, Georgism doesn’t require blockchains, and unlike Scientology, it lacks aliens.
So let’s make it more exciting by taking Georgism into space.
It’s not a terrible idea: space has a vast amount of resources that nobody owns. How should they be shared? Can we design tax policy to encourage stable, equitable growth across the universe?
Taxation … in Space!
Space can be broken down into three types of economic land: useful energy, matter, and physical space.
Let’s start with energy. Since there is a fixed amount of useful energy in the universe, it’s an exhaustible resource. In this case, the Georgist framework would suggest a severance tax, where people who extract energy from natural sources pay a fee that goes into a citizens’ dividend.
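A toy sketch of the mechanism (the extractors, the tax rate, and the population below are all invented for illustration):

```python
# Hypothetical severance tax on extracted energy, paid into a citizens' dividend.
TAX_PER_JOULE = 0.01  # assumed rate in credits; not a policy recommendation

extractions = {"dyson_swarm_co": 5e9, "fusion_llc": 2e9, "solar_guild": 1e9}  # joules
citizens = 1_000_000

pool = sum(joules * TAX_PER_JOULE for joules in extractions.values())
dividend = pool / citizens

print(f"tax pool: {pool:.2e} credits, dividend: {dividend:.2f} credits/citizen")
# tax pool: 8.00e+07 credits, dividend: 80.00 credits/citizen
```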
Matter is a little different. Since it can be recycled, it’s more like land that you borrow from everyone else. The Georgist tax would be proportional to the value of certain types of matter. Note that the tax cannot depend on the arrangements of that matter, only the basic elements it’s composed of.
There are some difficulties with taxing matter. First, it may be feasible to transmute some elements into other elements. The elasticity of supply of different elements means that taxes will have to remain low in order to avoid distortions. Second, it may be possible to convert matter into energy in the future, which creates elasticity of supply in both matter and energy, meaning that the taxes on both will have to be reduced to avoid distortions.
Physical space is more straightforward to tax. It’s analogous to land, and people can be taxed for owning particular swathes of space proportional to the rent they can charge for it. It is slightly trickier to implement Georgism on physical space since ownership will have to reconcile relatively stationary swathes of empty space with plots of land on a spinnin |
0c1d79f7-f26e-4b82-b7d9-04465d0fd2dd | trentmkelly/LessWrong-43k | LessWrong | Welcome to Effective Altruism Melbourne [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading? |
9502cf06-f3d1-45fa-b870-102601ac9965 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Sydney Rationality Dojo - April 2017
WHEN: 02 April 2017 04:00:00PM (+1100)
WHERE: 10 Shepherd Street, Chippendale
Join us from 4pm to 6pm to sharpen your planning, motivation, and other rationality skills. Afterwards there'll be a social dinner for those who want to stick around.
|
91b682a9-a0e4-4226-8a32-09baf72ae432 | trentmkelly/LessWrong-43k | LessWrong | 200 Concrete Open Problems in Mechanistic Interpretability: Introduction
EDIT 19/7/24: This sequence is now two years old, and fairly out of date. I hope it's still useful for historical reasons, but I no longer recommend it as a reliable source of problems worth working on, eg it doesn't at all discuss Sparse Autoencoders, which I think are one of the more interesting areas around today. Hopefully one day I'll have the time to make a v2!
----------------------------------------
This is the first post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability. If you want to learn the basics before you think about open problems, check out my post on getting started.
Skip to the final section of this post for an overview of the posts in the sequence
Introduction
Mechanistic Interpretability (MI) is the study of reverse engineering neural networks. Taking an inscrutable stack of matrices that we know works, and trying to reverse engineer how it works. And often this inscrutable stack of matrices can be decompiled to a human interpretable algorithm! In my (highly biased) opinion, this is one of the most exciting research areas in ML.
There are a lot of reasons to care about mechanistic interpretability research happening. First and foremost, I think that mechanistic interpretability done right can be highly relevant for alignment. In particular, can we tell whether a model is doing a task well because it’s deceiving us or because it genuinely wants to be helpful? Without being able to look at how a task is being done, these are essentially indistinguishable when facing a sufficiently capable model. But it also has a lot of fascinating scientific questions - how do models actually work? Are there fundamental principles and laws underlying them, or is it all an inscrutable mess?
It is a fact about today’s world that there exist computer programs like GPT-3 that can essentially speak English at a human level, but we have no idea how to write these programs in normal code. It offends me that this is th |
47934db1-2031-4128-8aca-2132dd380cf2 | StampyAI/alignment-research-dataset/blogs | Blogs | How effectively can we plan for future decades? (initial findings)
MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least [a few decades away](http://intelligence.org/2013/05/15/when-will-ai-be-created/). Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: **How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?**
To investigate these questions, we asked [Jonah Sinick](http://mathisbeauty.org/aboutme.html) to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell [on the subject of insecticide-treated nets](http://blog.givewell.org/2012/10/18/revisiting-the-case-for-insecticide-treated-nets-itns/). The post below is a summary of findings from [our full email exchange (.pdf)](http://intelligence.org/wp-content/uploads/2013/09/Can-we-know-what-to-do-about-AI.pdf) so far.
**We decided to publish our initial findings after investigating only a few historical cases**. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that **we aren’t yet able to draw any confident conclusions about our core questions**.
The most significant results from this project so far are:
1. Jonah’s initial impressions about *The Limits to Growth* (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
2. Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.
3. In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.
4. Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts to win decades later, and several cases of “ethically concerned scientists.”
5. Upon initial investigation, two historical cases seemed like they *might* shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
6. We listed many other historical cases that may be worth investigating.
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s [*The Signal and the Noise*](http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X), available [here](http://lesswrong.com/lw/hxx/some_highlights_from_nate_silvers_the_signal_and/).
Further details are given below. For sources and more, please see [our full email exchange (.pdf)](http://intelligence.org/wp-content/uploads/2013/09/Can-we-know-what-to-do-about-AI.pdf).
### The Limits to Growth
In his initial look at [*The Limits to Growth*](http://www.amazon.com/Limits-Growth-Donella-H-Meadows/dp/0451057678/) (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that *Limits to Growth* predicted a sort of doomsday scenario – *à la* Ehrlich’s *[The Population Bomb](http://www.amazon.com/The-population-bomb-Paul-Ehrlich/dp/0345021711/)* (1968) – that had failed to occur. In particular, it appeared that *Limits to Growth* had failed to appreciate [Julian Simon](http://en.wikipedia.org/wiki/Julian_Lincoln_Simon)’s point that other resources would substitute for depleted resources.
Upon reading the book, Jonah found that:
* The book avoids strong, unconditional claims. Its core claim is that *if* exponential growth of resource usage continues, *then* there will likely be a societal collapse by 2100.
* The book was careful to qualify its claims, and met high epistemic standards. Jonah wrote: “The book doesn’t look naive even in retrospect, which is impressive given that it was written 40 years ago.”
* The authors discuss substitutability at length in chapter 4.
* The book discusses mitigation at a theoretical level, but doesn’t give explicit policy recommendations, perhaps because the issues involved were too complex.
### Svante Arrhenius
Derived more than a century ago, [Svante Arrhenius](http://en.wikipedia.org/wiki/Svante_Arrhenius)’ equation for how the Earth’s temperature varies as a function of the concentration of carbon dioxide is the same equation used today. But while Arrhenius’ climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
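For reference, the logarithmic relation usually credited to Arrhenius, written in modern notation (the sensitivity S per doubling of CO2 is left as a free parameter here; the claim is the functional form, not any particular value):

```latex
% Temperature response to a change in CO2 concentration C,
% relative to a reference concentration C_0:
\Delta T \;=\; \frac{S}{\ln 2}\,\ln\!\left(\frac{C}{C_0}\right)
```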
Arrhenius’s predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.
### Norbert Wiener
As Jonah explains, [Norbert Wiener](http://en.wikipedia.org/wiki/Norbert_Weiner) (1894-1964) “believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression.” Nearly 50 years after his death, this [doesn’t seem to have happened](http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/) much, though it may eventually happen.
Jonah’s impression is that Wiener had strong views on the subject, doesn’t seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what [Berlin (1953)](http://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox) and [Tetlock (2005)](http://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715/) described as “hedgehog” thinking: “the fox knows many things, but the hedgehog knows one big thing.”
### Some historical cases that seem unlikely to shed light on our questions
[Rasmussen (1975)](http://en.wikipedia.org/wiki/WASH-1400) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn’t very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain specific, and because the report makes a large number of small predictions rather than a few salient predictions.
In 1936, [Leó Szilárd](http://en.wikipedia.org/wiki/Le%C3%B3_Szil%C3%A1rd) assigned his chain reaction patent in a way that ensured it would be kept secret from the Nazis. However, Jonah concluded:
> I think that this isn’t a good example of a nontrivial future prediction. The destructive potential seems pretty obvious – anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn’t want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.
Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was “too difficult to tie these efforts to war outcomes.”
Jonah also investigated Kaj Sotala’s [A brief history of ethically concerned scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/). Most of the historical cases cited there didn’t seem relevant to this project. Many cases involved “scientists concealing their discoveries out of concern that they would be used for military purposes,” but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see [Kelly 2011](http://www.amazon.com/What-Technology-Wants-Kevin-Kelly/dp/0143120174/)). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.
### Some historical cases that might shed light on our questions with much additional research
Jonah performed an initial investigation of the impacts of China’s [one-child policy](http://en.wikipedia.org/wiki/One-child_policy), and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy’s impacts.
Jonah also investigated a case involving the [Ford Foundation](http://www.fordfoundation.org/). In [a conversation with GiveWell](http://www.givewell.org/files/conversations/Lant%20Pritchet%2006-18-12%20final%20for%20upload.pdf), Lant Pritchett said:
> [One] example of transformative philanthropy is related to India’s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India’s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation’s investment in the think tank had a huge impact. For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.
Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.
### Other historical cases that might be worth investigating
Historical cases we identified but did not yet investigate include:
* [Eric Drexler](http://en.wikipedia.org/wiki/K._Eric_Drexler)‘s early predictions about the feasibility and likely effects of nanotechnology.
* The [Asilomar conference on recombinant DNA](http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA)
* Efforts to [detect asteroids before they threaten Earth](http://www.amazon.com/Near-Earth-Objects-Finding-Them-Before/dp/0691149291/)
* The [Green Revolution](http://en.wikipedia.org/wiki/Green_Revolution)
* The modern history of [cryptography](http://en.wikipedia.org/wiki/Cryptography)
* Early efforts to [mitigate global warming](http://www.amazon.com/The-Discovery-Global-Warming-Technology/dp/067403189X/)
* Possible deliberate long term efforts to produce scientific breakthroughs (the transistor? the human genome?)
* Rachel Carson’s [*Silent Spring*](http://en.wikipedia.org/wiki/Silent_Spring) (1962)
* Paul Ehrlich’s [*The Population Bomb*](http://en.wikipedia.org/wiki/The_Population_Bomb) (1968)
* The Worldwatch Institute’s [*State of the World*](http://en.wikipedia.org/wiki/State_of_the_World_(book_series)) reports (since 1984)
* The WCED’s [*Our Common Future*](http://en.wikipedia.org/wiki/Our_Common_Future) (1987)
|
50d3f4a9-40e4-49c9-96f5-9f6eb17488c9 | trentmkelly/LessWrong-43k | LessWrong | Knowledge value = knowledge quality × domain importance
Months ago, my roommate and I were discussing someone who had tried to replicate Seth Roberts' butter mind self-experiment. My roommate seemed to be making almost no inference from the person's self-reports, because they weren't part of a scientific study.
But knowledge does not come in two grades, "scientific" and "useless". Anecdotes do count as evidence; they are just weak evidence. And well-designed scientific studies constitute stronger evidence than poorly designed studies. There's a continuum of knowledge quality.
Knowing that humans are biased should make us take their stories and ad hoc inferences less seriously, but not discard them altogether.
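To make "weak" versus "strong" concrete, here is a minimal Bayesian sketch; the likelihood ratios are made-up numbers chosen only to illustrate the continuum:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 0.25         # 1:4 odds that the intervention works
anecdote_lr = 1.5    # assumed: a self-report is weak evidence
study_lr = 8.0       # assumed: a well-designed study is strong evidence

print(update_odds(prior, anecdote_lr))  # 0.375 -- nudged, not settled
print(update_odds(prior, study_lr))     # 2.0   -- now favored 2:1
```

The anecdote moves the odds a little, the study a lot; neither moves them to zero or infinity.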
----------------------------------------
There exist some domains where most of our knowledge is fairly low-quality. But that doesn't mean they're not worth study, if the value of information in the domain is high.
For example, a friend of mine read a bunch of books on negotiation and says this is the best one. Flipping through my copy, it looks like the author is mostly just enumerating his own thoughts, stories, and theories. So one might be tempted to discard the book entirely because it isn't very scientific.
But that would be a mistake. If a smart person thinks about something for a while and comes to a conclusion, that's decent-quality evidence that the conclusion is correct. (If you disagree with me on this point, why do you think about things?)
And the value of information in the domain of negotiation can be very high: If you're a professional, being able to negotiate your salary better can net you hundreds of thousands over the course of a career. (Anchoring means your salary next year will probably just be an incremental raise from your salary last year, so starting salary is very important.)
Similarly, this self-help book is about as dopey and unscientific as they come. But doing one of the exercises from it years ago destroyed a large insecurity of mine that I was only peripherally aware of. So |
e9dcb2a3-5db2-4576-9a38-1be9245110b5 | trentmkelly/LessWrong-43k | LessWrong | Adapting to Change: Overcoming Chronostasis in AI Language Models
Language is a living, ever-evolving phenomenon that mirrors the continuous shifts in culture, society, and human thinking. In the dynamic realm of artificial intelligence, large language models such as GPT-4 have showcased remarkable capabilities in understanding and producing human-like text. These models attain their proficiency by training on extensive datasets, which encapsulate a snapshot of linguistic conventions and norms from a particular moment in time. Yet, this very strength is accompanied by an inherent constraint I call "chronostasis." This limitation is most obvious when a language model falters in generating pertinent content in response to recently emerged social movements or evolving linguistic trends.
In this blog post, we will explore the challenges posed by chronostasis, examine potential solutions like internet-enabled models, and discuss the importance of continuous model updates to address this limitation.
Contents
* Introduction: The Ever-Changing World of Language and AI
* Unraveling Chronostasis
* The Multi-faceted Challenges of Chronostasis
* An Imperfect Solution: Internet-Enabled Language Models
* Addressing Chronostasis: A Path Towards Evolving Language Models
* Conclusion: Embracing Change and Adaptation in AI Language Models
Unraveling Chronostasis
Chronostasis, a neologism coined from the combination of "chrono" (time) and "stasis" (equilibrium or inactivity), refers to:
the state of a language model being frozen in time due to its training set consisting of a snapshot of past linguistic utterances.
This phenomenon can profoundly impact a model's performance, particularly when it comes to staying current with evolving language trends and capturing the ever-shifting cultural zeitgeist. To illustrate the practical implications of chronostasis, consider the following real-world examples:
1. Outdated information: An AI-generated news summary might include references to outdated laws or policies that have been amended o |
944239c5-86a4-45b1-9ddc-b61e7b5b7266 | trentmkelly/LessWrong-43k | LessWrong | Construction of practical quantum computers radically simplified
|
863d8a22-d57e-4f02-a308-3615657a6c47 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | We don't understand what happened with culture enough
This is a quick response to [Evolution Provides No Evidence For the Sharp Left Turn](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn), due to it winning first prize in [The Open Philanthropy Worldviews contest](https://www.openphilanthropy.org/research/announcing-the-winners-of-the-2023-open-philanthropy-ai-worldviews-contest/). I think part of the post is sufficiently misleading about evolutionary history, and the OP first prize gives it enough visibility, that it makes sense to write a post-long response.
The central evolutionary-biology-related claim of the original post is this:
> * The animals of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
> * The generation dies, and all of the accumulated products of within lifetime learning are lost.
> * Differential reproductive success slightly changes the balance of traits across the species.
>
> …
>
> The only way to transmit information from one generation to the next is through evolution changing genomic traits, because death wipes out the within lifetime learning of each generation.
>
> …
>
> However, this sharp left turn does not occur because the inner learning processes suddenly become much better / more foomy / more general in a handful of outer optimization steps. It happens because you devoted billions of times more optimization power to the inner learning processes, but then deleted each inner learner shortly thereafter. Once the inner learning processes become able to pass non-trivial amounts of knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround past the crippling flaw where they all get deleted shortly after initialization.
In my view, this interpretation of evolutionary history is something between "speculative" and "wrong".
Transmitting some of the data gathered during the lifetime of the animal to the next generation by some other means is so obviously useful that it is highly [convergent](https://www.lesswrong.com/posts/sam4ehxHgnJEGCKed/lessons-from-convergent-evolution-for-ai-alignment). Non-genetic communication channels to the next generation include epigenetics, parental teaching / imitation learning, vertical transmission of symbionts, parameters of the prenatal environment, hormonal and chemical signaling, [bio-electric signals](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4048089/), and transmission of environmental resources or modifications created by previous generations, which can shape the conditions experienced by future generations (e.g. beaver dams).
Given the fact that overcoming the genetic bottleneck is *so highly convergent*, it seems a bit surprising if there was a large free lunch on the table in exactly this direction, as Quintin assumes:
> Evolution's sharp left turn happened because evolution spent compute in a shockingly inefficient manner for increasing capabilities, leaving vast amounts of free energy on the table for any self-improving process that could work around the evolutionary bottleneck. Once you condition on this specific failure mode of evolution, you can easily predict that humans would undergo a sharp left turn at the point where we could pass significant knowledge across generations. I don't think there's anything else to explain here, and no reason to suppose some general tendency towards extreme sharpness in inner capability gains.
It's probably worth going a bit into technical details here: evolution did manage to discover evolutionary innovations like mirror neurons: *A mirror neuron is a neuron that fires both when an organism acts and when the organism observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. ... Further experiments confirmed that about 10% of neurons in the monkey inferior frontal and inferior parietal cortex have "mirror" properties and give similar responses to performed hand actions and observed actions.*[[1]](#fncgipi135eef)
Clearly, mirror neurons are the type of innovation which allows high-throughput behavioural cloning / imitation learning. *"10% of neurons in the monkey inferior frontal and inferior parietal cortex"* is a massive amount of compute. Neurons imitating your parent's motor policy based on visual-channel information about your parent's behaviour constitute a high-throughput channel. (I recommend doing a Fermi estimate of this channel capacity; a sketch follows.)
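Taking up that invitation, here is a back-of-envelope version; every input is an assumption, and the interesting output is only the order of magnitude:

```python
# Fermi estimate of the visual imitation channel; all inputs are assumptions.
hours_per_day   = 4      # time spent watching a parent's behaviour
bits_per_second = 10     # usable bits after compression down to motor policy
days_per_year   = 365
years_observed  = 10

bits_total = hours_per_day * 3600 * bits_per_second * days_per_year * years_observed
print(f"{bits_total:.1e} bits")  # ~5.3e+08 bits, i.e. tens of megabytes
```

Even with deliberately conservative inputs, the channel is not trivially small, which is exactly why a large free lunch in this direction would be surprising.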
The situation where you clearly have a system totally able to eat the free lunch on the table, and supposedly the lunch is still there, makes me suspicious.
At the same time: yes, clearly, nowadays, human culture *is* a lot of data, and humans learn more than monkeys.
### Different stories
What are some evolutionarily plausible alternatives to Quintin's story?
Alternative stories would usually suggest that ancestral humans had access to channels to overcome the genetic bottleneck, and were using such channels to the extent it was marginally effective. Then some other major change happened, the marginal fitness advantage of learning more grew, and humans developed to transmit more bits; so modern humans transmit more.
An example of such major change could be advent of culture. If you look at the past timeline from a replicator dynamics perspective, the next most interesting event after the beginning of life is cultural replicators running on human brains crossing R>1 and starting the second *vast* evolutionary search, cultural evolution.
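To see why R>1 is the interesting threshold, consider a toy discrete replicator sketch (my illustration; R is the mean number of copies each cultural variant produces per generation):

```python
def population(R: float, generations: int, n0: float = 1.0) -> float:
    """Expected number of copies after some generations at reproduction number R."""
    return n0 * R ** generations

for R in (0.9, 1.0, 1.1):
    print(R, round(population(R, generations=100), 3))
# 0.9 -> ~0.0     (variants die out)
# 1.0 -> 1.0      (marginal persistence)
# 1.1 -> ~13780.6 (open-ended growth: a second evolutionary search)
```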
How is the story "cultural evolution is the pivotal event" different? Roughly speaking, culture is a multi-brain parallel immortal evolutionary search computation. Running at higher speed and a layer of abstraction away from physical reality (compared to genes), it was able to discover many pools of advantage, like fire, versatile symbolic communication, or specialise-and-trade superagent organisation.
In this view, there is a type difference between 'culture' and 'increased channel capacity'.
You can interpret this in multiple ways, but if you want to cast this as a *story of a discontinuity*, where biological evolution randomly stumbled upon starting a different powerful open-ended misaligned search, it makes sense. The fact that such a search finds caches of fitness and negentropy seems not very surprising. [[2]](#fn6cir8b7m1yn)
Was the "increased capacity to transfer what's learned in a brain's lifetime to the next generation" at least the most important or a notably large direction to exploit? I'm not a specialist on human evolution, but it seems hard to say with confidence: note that *'fire'* is also a big deal, as it allows you to spend way less on digestion, and a cheaper ability to coordinate is a big deal, as illustrated by ants, and symbolic communication is a big deal, as it is digital, robust, and effective compression.
Unfortunately for attempts to figure out what the precise marginal costs and fitness benefits were for ancestral humans, my impression is that ~ten thousand generations of genetic evolution in a fitness landscape shaped by cultural evolution screen off a lot of evidence. In particular, from the fact that modern humans are outliers in some phenotype characteristic, you cannot infer that it was the cause of the change to humans. For example, an argument like *'human kids have an unusual capacity to absorb significant knowledge across generations compared to chimps; ergo, the likely cause of humans' explosive development is ancestral humans having more of this capacity than other species'* has very little weight. Modern wolves are also notably different from modern chihuahuas, but the correct causal story is not *'ancestral chihuahuas had an overhang of loyalty and harmlessness'.*
Does this partially invalidate the argument toward implications for AI in the original post? In my view yes; if, following Quintin, we translate the actual situation into quantities and narratives that drive AI progress rates
- the "specific failure mode" of not transmitting what brains learn to the next generation is not there
- the marginal fitness advantage of transmitting more bits to the next generation's brains is unclear, similar to the unclear marginal advantage of e.g. spending more on LLMs curating data for next-gen LLM training
- because we don't really understand what happened, the metaphorical map to AI progress mostly maps this lack of understanding to lack of clear insights for AI
- it seems likely culture is somehow a big deal, but it is not clear how you would translate what happened to the AI domain; if such a thing can happen with AIs, it seems, if anything, to push more toward the discontinuity side, as the cultural search relatively quickly uncovered many caches of negentropy
(- yes, obviously, *given culture*, it is important that you can transmit it to the next generation, but it seems quite possible that for transferring *seed culture* the channel capacity you have via mirror neurons is more than enough)
### Not even approximately true
In case you believe the original post is still somehow approximately true, and that the implications for AI progress still somehow approximately hold, I think it's important to basically un-learn that update. Quoting the original post:
> This last paragraph makes an extremely important claim that I want to ensure I convey fully:
>
> - IF we understand the mechanism behind humanity's sharp left turn with respect to evolution
>
> - AND that mechanism is inapplicable to AI development
>
> - THEN, there's no reason to reference evolution at all when forecasting AI development rates, not as evidence for a sharp left turn, not as an "illustrative example" of some mechanism / intuition which might supposedly lead to a sharp left turn in AI development, not for anything.
The conjunctive IF is a crux, and because **we don't understand what happened with culture enough, the rest of the implication does not hold**.
Consider a toy counterfactual story: in a fantasy world, exactly repeating the 128 bits of the first cultural replicator gives the human ancestor the power to cast a spell and gain a +50% fitness advantage. Notice that this is a *different* story from "overcoming channel-to-offspring capacity" - you may be in a situation where you have plenty of capacity but don't have the 128 bits, and this is a situation much more prone to discontinuities.
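A toy sketch of why the two stories behave differently under search (all numbers invented): capacity buys fitness smoothly, while the spell pays nothing until the exact string is hit.

```python
import random

def fitness_from_capacity(bits_transmitted: int) -> float:
    # Smooth story: every extra transmitted bit helps a little.
    return 1.0 + 0.001 * bits_transmitted

def fitness_from_spell(candidate: int, target: int) -> float:
    # Spell story: all-or-nothing; a 127/128 match is worth nothing.
    return 1.5 if candidate == target else 1.0

random.seed(0)
target = random.getrandbits(128)
near_miss = target ^ 1  # off by a single bit

print(fitness_from_capacity(127), fitness_from_capacity(128))  # 1.127 1.128
print(fitness_from_spell(near_miss, target))                   # 1.0, no partial credit
```

A search over the first landscape looks continuous from the outside; a search over the second looks like nothing happening, and then a jump.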
Because it is not clear whether reality was more like stumbling upon a specific string, or a piece of code, or an evolutionary ratchet, or something else, we don't know enough to rule out a metaphor suggesting discontinuities.
### Conclusion
Where I do agree with Quintin is scepticism toward some other stories attempting to draw some strong conclusion from human evolution, including strong conclusions about discontinuities.
I do think there is a reasonably good metaphor *genetic evolution* : *brains ~ base optimiser : mesa-optimiser*, but notice that evolution was able to keep brains mostly aligned for every species except humans. The relation *human brain : cultural evolution* is very unlike *base optimiser : mesa-optimiser*.
### (Note on AI)
While I mostly wanted to focus on the evolutionary part of the OP, I'm sceptical about the AI claims too. (Paraphrasing: *While the current process of AI training is not perfectly efficient, I don't think it has comparably sized overhangs which can be exploited easily.*)
In contrast, to me it seems the current way AIs learn is very obviously inefficient compared to what's possible. For example, explain something new to GPT4, or make it derive something new. Open a new chat window, and probe whether it now knows it. Compare with a human.
1. **[^](#fnrefcgipi135eef)**[from Wikipedia](https://en.wikipedia.org/wiki/Mirror_neuron)
2. **[^](#fnref6cir8b7m1yn)**This does not imply the genetic evolutionary search is a particularly bad optimiser - instead, the landscape is such that there are many sources of negentropy available. |
8acecc8d-a802-480d-afa6-3f1333883713 | trentmkelly/LessWrong-43k | LessWrong | Why do governments refer to existential risks primarily in terms of national security?
I learned from the public policy section from the Wikipedia article about the AI control problem that earlier this year the UK government published its ten-year National AI Strategy. For specifically the control problem and x-risks from AGI, the strategy report reads:
> The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously.
That's an improvement from a few years ago, when some world governments declared they're prioritizing the control problem only in terms of national security. On its face, that's of course ludicrous, because prioritizing only national security misrepresents the scope of x-risk from AI. If national governments understand the control problem, they understand that it's pointless in a literal sense to narrow their focus to national security alone.
Yet so far they've still mainly expressed their focus on the control problem in terms of national security. One obvious reason why is that there may be a nationalist tendency to a government's ideology that provokes them to add a boilerplate about national security or self-interest to everything without reflection. Another obvious reason is that governments are trying to send social signals. Yet I'm asking this question to check what more precise or less intuitive answers I might be missing.
A related question is: What might have changed in the last few years to make a national government express concern about the control problem in terms of global security too?
The National AI Strategy report from the UK is the first time I've seen a government express concerns about the control problem in terms of global security. This could be important because it means governments may also be trying to signal they're more willing to coordinate with other governments on the control problem. Understanding what motivated one government to signal they're more multilateralist than before could be appl |
ba462a09-ac36-44de-877a-17e1d5ba8d80 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Pessimism about AI Safety
**TL;DR**
---------
* The alignment problem is less fundamental for AI safety than the problem of predicting actions of AI systems, especially if they are more intelligent than oneself. We dub this the prediction problem.
* The prediction problem may be insoluble.
* If the prediction problem is insoluble, predicting the results of the advent of AGI would be impossible. If this is so we have to radically rethink our approach to AI safety and its relation to politics.
**Introduction**
----------------
It is plausible that AI research may result in the creation of systems that are much more intelligent than their human creators. Starting from this assumption it is imperative that we ensure that such a creation would be, on the whole, an event that is beneficial to humanity. Research into how this may be ensured is generally done under the label of *AI safety*. Our aim in this paper is threefold. First, we argue that the most fundamental problem of AI safety research is that of predicting the actions of AI systems, particularly ones that are more intelligent than us. Secondly, we argue that there are some reasons to suspect that this problem may be insoluble. Finally, we sketch what AI safety research should look like if we were forced to the conclusion that AGI is inherently unsafe.
The most fundamental problem in AI safety lies not in choosing the right kind of goals that align with ours, or in how to control an agent more intelligent than oneself, although these are indeed important problems and formidable challenges. We think that the most fundamental problem in AI safety research is instead that of predicting the behavior of AI systems, which becomes more difficult and pressing the more intelligent these systems become. On the most abstract level, all AI safety research is about trying to make AI systems that produce outcomes that belong to some target set Ω.
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
The abstract nature of what we are trying to show here does not require us to specify Ω. It may be that Ω is the set of outcomes where humanity does not end up extinct, resulting in minimally safe AI. It may be that Ω is the set of outcomes where all humans end up in a state of perpetual utopia, resulting in maximally beneficial AI. Or it may be that Ω is defined more narrowly, as when we require that the AI system in question does not lie to its operators, resulting in truthful AI. Evidently, to ensure that some AI system will achieve or work towards goals in Ω, we will first have to *predict* whether its actions will achieve states in Ω or at least make such states more likely. We will call any method for predicting whether the outcomes of running some AI system are in Ω a *prediction algorithm* for that AI system and that particular target set. We will further call the problem of creating viable prediction algorithms for future AI systems the *prediction problem*. We will later argue in more detail that the prediction problem is more fundamental than the problem of AI value alignment.
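To fix ideas, the notion can be rendered as a minimal interface sketch (an illustration in Python; the type names are ours, and nothing in what follows depends on this particular rendering):

```python
from typing import Callable

# An outcome is whatever a world-state description is in one's model.
Outcome = str

# A target set Ω, extensionally: a predicate saying whether an outcome
# counts as acceptable (safe, beneficial, truthful, ...).
TargetSet = Callable[[Outcome], bool]

# A prediction algorithm for an AI system: given the system's source
# and an input, predict whether the outcomes of running it lie in Ω
# (or at least become more likely to), without actually running it.
PredictionAlgorithm = Callable[[str, str], bool]

def certify(predict: PredictionAlgorithm, system_source: str,
            inputs: list[str]) -> bool:
    """The system is certified for these inputs only if the prediction
    algorithm vouches for every one of them."""
    return all(predict(system_source, x) for x in inputs)
```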
To cut straight to the chase, we fear that there is a paradox implicit in the prediction problem. Here is our first pass at this paradox; we will later show in more detail why it cannot be easily avoided. The paradox, we suppose, arises because there is an inherent connection between intelligence and prediction: intelligence is closely tied to predictive ability. Imagine two systems S1 and S2. Predicting S1's behavior will get increasingly difficult for S2 as S1 becomes more intelligent, since with increasing intelligence, S1 itself makes use of ever more efficient methods of prediction. Intelligence would then be precisely the kind of property that makes a system hard to predict. Put another way, if the actions of system S1 are easily predictable by a supposedly "dumber" system S2, then system S1 arguably is not so intelligent after all. The idea is thus that there is a trade-off between the intelligence of a system and how predictable it is. Should it turn out that we can make this intuition more precise, and if we are right to claim that the problem of prediction is more fundamental than the problem of value alignment, then most current approaches to AI safety will be in doubt.
We will proceed as follows. Section two will discuss the connection between prediction and AI safety in more detail. Here, we will show that the prediction problem is the most fundamental challenge in AI safety research. Section three will briefly discuss some previous work on the prediction problem. Section four will develop what we call the self-prediction argument against the possibility of viable prediction algorithms and thus for the insolubility of the prediction problem. Section five will discuss possible strategies for circumventing the self-prediction argument. Section six will discuss what AI safety research should look like in a world where the prediction problem is indeed insoluble. Section seven ends with some reflections on the light our discussion sheds on the second question asked by the Open Philanthropy AI Worldviews Contest.
**The Centrality of Prediction**
--------------------------------
The most widely discussed problem in the AI safety literature is the *alignment problem*: the problem of building AI systems whose values align with our own (Bostrom 2014). Here we try to show that solutions to the alignment problem require that one first solve the prediction problem, the problem of predicting the behavior of AI systems, especially ones that are more intelligent than us. Importantly, any solution to the alignment problem seems to presuppose a solution to the prediction problem, for knowing whether a system is aligned with our values entails knowing what kinds of states it is going to work towards, what it is going to do.
The centrality of prediction in AI safety research can be obscured by an intriguing but fallacious argument. According to this argument, we do not need to be able to make predictions about the behavior of our AI systems because it is sufficient to know that *their goals*, or *their utility functions*, are aligned with our own. For instance, I do not need to be able to predict the exact next move that AlphaGo is going to make. It is sufficient to know that its next move is going to bring it closer to winning the game. Knowledge about goals or utility functions should then be sufficient to know that some AI system is in fact safe. No detailed prediction of the system's behavior is necessary.
While it is correct that ensuring safety does not require that we predict every output of an AI system, which would defeat the purpose of building it, we *do* have to predict whether its behavior tends to bring about states in Ω. We are still faced with the prediction problem. Solutions to the prediction problem that make use of utility functions are based on a confusion around the ontology of these functions. There are broadly three views on the ontology of utility functions, and the above strategy turns out to be question-begging on each of them. First, one may hold that the utility function of a system is a generalization about the system's actual behavior. Such a view is taken by popular active inference theories of cognition and behavior. Here, the utility function of a system is ultimately defined by that system's dynamical attractor. In effect, the utility function of a system is defined by the kinds of states the system tends to inhabit (Parr, Pezzulo, and Friston 2022): rabbits desire carrots in the sense that they tend to eat them. Evidently, on this view, presupposing knowledge of the utility function of a system to circumvent the prediction problem would be circular. For on this view, facts about whether some system wants to hurt humans are predicated on whether the system actually does tend to hurt humans.
A second view on the ontology of utility functions is that they are learned features that AI systems acquire in the process of training. The utility function of the AI evolves in its training process, where it minimizes its loss function in an iterative process in which its performance is judged and optimized with respect to training data. Suppose we train a robot to find the red square within a labyrinth. Once the robot becomes good at the task, we may suppose that it has learned that finding the red square by moving around is good, i.e., is assigned high utility. But how can we know whether this is actually what the system has learned? It may turn out that, once we release the robot into the real world and give it access to red paint, it starts painting every centimeter of the ground red, because it hasn't learned to solve labyrinths but to prefer red as a color of the ground. We have no guarantee that the utility function a system learns during training is in fact the one we intended. And again, figuring this out seems to already require that we are able to predict what the system is going to do. Indeed, the problem of ensuring that the learned utility function of an AI aligns with the intended utility function is well-recognized and has come to be known as the *inner alignment problem* (Hubinger et al. 2019). As there currently seems to be no agreed-upon solution to this problem, one cannot circumvent the prediction problem by appealing to utility functions conceived of as something obtained during learning.
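The point can be made concrete with a toy sketch (ours, not a claim about any actual training setup; all names and values are illustrative): two candidate "learned utility functions" that agree on every training episode but come apart after deployment, mirroring the red-square robot above.

```python
# A state is a pair (cell_color, solved_maze).
TRAIN = [("red", True), ("grey", False)]   # states encountered in training
DEPLOY = [("red", False)]                  # robot has painted the floor red

def u_intended(state) -> float:
    _color, solved = state
    return 1.0 if solved else 0.0          # reward solving the maze

def u_learned(state) -> float:
    color, _solved = state
    return 1.0 if color == "red" else 0.0  # reward standing on red ground

# The two hypotheses are indistinguishable on the training data...
assert all(u_intended(s) == u_learned(s) for s in TRAIN)
# ...but diverge at deployment: painting the world red gets full "utility".
assert [u_learned(s) for s in DEPLOY] != [u_intended(s) for s in DEPLOY]
```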
A third and final view conceives of utility functions as hard-wired features of AI systems that are engineered into them. Now we grant that if we knew the hard-wired utility function of some AI system, this would solve the prediction problem. However, as far as we can see, hard-wired utility functions of the kind required are a technological fantasy. The systems most promising in the quest for AGI are trained by minimizing some known but (as far as prediction is concerned) uninteresting error function defined over their inputs and outputs. They do not possess hard-coded utility functions over and above these error functions. And we see no reason to suppose that progress in AI will result in systems that do any time soon.
We conclude that presupposing significant knowledge of utility functions in solving the prediction problem is either circular or based on entirely stipulative technological innovations. Were we able, be it by learning or hard coding, to reliably specify the utility function of an AI system, this would bring us close to solving the prediction problem. But for the moment, it seems to us that the prediction problem needs to be solved in order to make any headway towards AI safety. In particular, it is more fundamental than the alignment problem because solving the latter presupposes solving the former. In the following section, we briefly discuss some previous work on the prediction problem.
**The Prediction Problem and Computability**
--------------------------------------------
We are not the first to suspect that the prediction problem may present a deep challenge to the efforts of producing safe AI. Alfonseca et al. (2021) have argued that there cannot be a single algorithm that is capable of deciding, for any specified algorithm and any input, whether it is safe to run for its human creators. Let's call such a putative algorithm a *general (safety-)prediction algorithm*. Note that Alfonseca et al.'s focus on safety rather than some other target set Ω is unimportant to their argument. The reason there cannot be a general safety-prediction algorithm is grounded in computability theory, and the halting problem specifically. It is a well-known result of computability theory that there cannot be an algorithm that decides, for any given algorithm and input, whether the algorithm will eventually halt. From this, it is easy to deduce the mentioned result. For there may be an AI algorithm that, for every given input, feeds this input into a Turing machine and, if the machine halts, starts hurting humans. A safety algorithm that applies to this AI system would be capable of solving the halting problem which, as mentioned, cannot be done. Thus there cannot be a general safety-prediction algorithm.[[1]](#fnrulxcjwa7zb)
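The structure of this reduction can be made vivid with a short sketch (ours, not code from Alfonseca et al.; `simulate` and `hurt_humans` are hypothetical primitives of the toy language): if a general safety checker existed, it would immediately yield a halting-problem decider.

```python
def safety_check(program_source: str) -> bool:
    """Hypothetical general safety-prediction algorithm: returns True
    iff the program in `program_source` never hurts humans, on any
    input. The argument shows no such function can exist, so this is
    a placeholder only."""
    raise NotImplementedError("cannot exist, by reduction from halting")

def make_adversarial_ai(machine: str, machine_input: str) -> str:
    """Return the source of an AI that hurts humans exactly if a given
    Turing machine halts on a given input."""
    return (
        f"def ai():\n"
        f"    simulate({machine!r}, {machine_input!r})  # may loop forever\n"
        f"    hurt_humans()  # reached only if the simulation halts\n"
    )

def halts(machine: str, machine_input: str) -> bool:
    """If `safety_check` existed, it would decide halting: the
    constructed AI is unsafe iff the simulated machine halts."""
    return not safety_check(make_adversarial_ai(machine, machine_input))

# Since halting is undecidable, no general `safety_check` can exist.
```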
The main weakness of the argument provided by Alfonseca et al. lies in its generality of scope. No particular assumptions are made about the algorithm whose safety is to be predicted, except that intelligent systems are able to implement (or "use") Turing machines. The general safety-prediction algorithm is impossible because it is impossible to have a procedure that decides for *every* program whether it halts, and as set up in the paper, thereby whether it hurts humans. Less ambitious procedures remain possible. If this argument really did block research into the predictability of AI systems, then parallel arguments would rule out the predictability of all kinds of software. For instance, there is no general method for deciding whether any given program will behave like a calculator on any given input, because we can construct an algorithm that behaves like a calculator on a given input if a given Turing machine halts, and not otherwise. This impossibility result hardly infringes on the development of reliable calculators, or on the development of reliable and predictable software in general for that matter.[[2]](#fn9etx10e354d)
So while this impossibility result entails that there can be no general safety algorithm that determines the safety of all AI systems, it does not follow that making AI safe is impossible. For there may still be a *particular safety-prediction algorithm* that may be applied to many, but not all, algorithms and thereby AI systems. We only need to require that an AI system's harmful behavior does not depend on running arbitrary Turing machines on arbitrary input, and the result provided by Alfonseca et al. does not apply to this restricted class of AIs (Sevilla and Burden 2021).
Still, the argument of Alfonseca et al. is not without consequences. The burden of proof, in the case of AI safety, is on those who hold that some system is safe. The argument under discussion shows, however, that strictly speaking this can only be done for AI systems that are not able to implement every Turing machine. It is a plausible feature of highly evolved intelligence, as we usually conceive of it, that it entails the capacity to simulate every kind of Turing machine (at least when the environment can be used for data storage). Notably, this is true for LLMs (Schuurmans 2023). It seems that, where complex computational processes are concerned, unpredictability is the default. Thus, it is the burden of AI engineers and AI safety researchers to show that there are reliable methods of building highly intelligent systems that are not computationally universal in the relevant way. This is a requirement for safety and an important challenge, for which we currently lack solutions and promising strategies. This is what makes the case of AI systems different from, say, calculators.
That being said, we believe that, given some natural assumptions about the nature of intelligence, a more general, though less rigorous argument for the insolubility of the prediction problem can be made. We will now try to establish that not only can there be no general prediction algorithm, but that there also cannot be a *specialized prediction algorithm* that decides for some particular highly intelligent system whether its behavior will fall within some target set Ω.
**The Self-Prediction Argument**
--------------------------------
It seems to us that an essential feature of intelligence is the ability to engage in what we call *deliberation*. Deliberation is the process of making up one's mind on some issue, be it theoretical or practical. Abstractly, this is a way of processing input to arrive at either different behavioral outputs or doxastic states (beliefs). As deliberation alters the probabilities of different behaviors, it also alters the probabilities of behaviors that tend to result in outcomes outside of Ω.
We may think of this type of processing as weighing reasons for or against some action or belief. Any future AI system that truly deserves the label of intelligence will be able to deliberate on a huge number of such theoretical and practical issues. For the moment we will assume that a viable prediction algorithm would have to be a relatively simple procedure that predicts the probability of some system's behavior bringing about results outside of Ω.
So far these are rather innocent assumptions. The more substantial assumption of our argument is that no intelligent system can predict the results of its own deliberation with certainty. We call this the *self-prediction assumption*. It is important here that "to predict" the results of a deliberation means to know the results in advance, before engaging in the deliberation itself. The intuition here is that it is necessary to go through the relevant deliberative steps to know the result of the deliberation. The results depend irreducibly on the relevant weighing of reasons. Even when one has some clue about what one will do or think after making up one's mind about some issue, before one has actually engaged with the topic in detail, with the reasons for and against particular beliefs, it is always possible that, after deliberation, one's prior assessment turns out to be incomplete or false.
But now there is an evident problem. For if there exists a safety-prediction algorithm for some AI system, the AI system will itself be able to use that algorithm. Thus the system will be able to predict with certainty the results of those of its deliberations that alter the probabilities of its behaviors towards outcomes outside Ω. However, we just argued that predicting one's own deliberation is impossible in general. We can thus formulate the following simple argument:
(1) No intelligent system can predict the results of its deliberation with certainty.
(2) If there is a safety-prediction algorithm for a sufficiently intelligent agent, then the agent can use it to predict with certainty the results of its deliberations which alter the probabilities of its behavior towards actions with outcomes outside Ω.
(3) There can be no such safety-prediction algorithm.
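In schematic form, with the quantifiers ranging over sufficiently intelligent systems, writing $D(x)$ for "$x$ can predict the results of its own deliberation with certainty" and $P(x)$ for "there is a safety-prediction algorithm for $x$", the argument is a simple modus tollens:

$$\forall x\ \neg D(x), \qquad \forall x\,\big(P(x) \rightarrow D(x)\big), \qquad \therefore\ \forall x\ \neg P(x)$$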
The upshot of the argument is that intelligence is unlike other computational activities, like that of a calculator. Deliberation, which we argued we should expect in intelligent agents in general, is essentially something that irreducibly depends on the weighing of reasons. The reasons of a highly intelligent entity about whether or not to engage in behaviors that we see as harmful may be complex. But this conflicts with the existence of a safety algorithm, which we argued has to be a *relatively simple* procedure. If there were such a simple procedure, however, it would mean that an AI system could cut its own deliberation short by applying its own safety algorithm. And this seems to conflict with the nature of deliberation. Thus such a prediction algorithm cannot exist. In a nutshell, the argument suggests that complex thought cannot be predicted by simple means. If this is correct, then AI research likely to result in anything resembling AGI would be intrinsically unpredictable and thus intrinsically unsafe.
**Discussion of the Argument**
------------------------------
Our discussion of the self-prediction argument will consist in a discussion of its premises. Maybe the most interesting question is whether the first premise, the self-prediction assumption, is correct. We do not have an argument from first principles on offer here. Our defense will thus assume that the principle is indeed intuitively plausible and our discussion will consist in clearing up two possible misconceptions.
Firstly, resistance to the self-prediction assumption may stem from the impression that it puts tight limits on the predictability of deliberation. But, one may hold, as naturalists we should believe that deliberation is no more unpredictable than any other kind of process, given that we can figure out the mechanism behind it. Thus, given knowledge of the relevant mechanisms, deliberation should be rendered predictable, even for the agent that is deliberating.
In fact, the self-prediction assumption does not conflict with the fact that intelligent behavior is predictable. For instance, an external agent may well be able to predict in principle whether I will decide to have pasta or ratatouille for dinner on the basis of knowing my brain states and dynamics in sufficient detail.[[3]](#fnq6k796imhj) But this will not help me to engage in *self*-prediction. For the process of figuring out how I will decide on some issue on the basis of my brain states will be much more computationally demanding than the deliberative activity itself. This harkens back to the view that safety-prediction algorithms need to be simpler than the system they apply to. The implausibility of self-prediction is precisely grounded in the idea that deliberation cannot be cut short, even if it is inherently deterministic.
Secondly, one may hold that the self-prediction assumption seems plausible precisely because we possess human minds and not superhuman artificial ones, i.e. it may be anthropocentric. If the impossibility of self-prediction is a feature merely of human intelligence rather than one intrinsic to the structure of intelligence itself, then our argument would indeed be invalidated. This raises the question of how far we can generalize from the human case to the properties of "mind space" in general. While there are some features of human minds that are plausibly general features of minds *tout court* (the capacity for Bayesian inference, the capacity to self-model, the possession of goals, etc.), there are also features that plausibly are contingently human (the separation into motivational and higher functions, the predominance of three-dimensional spatial representations, massive parallel processing, etc.).
Evaluating whether the impossibility of self-prediction belongs to the former or the latter camp is hard. However, we think that the plausibility of the self-prediction assumption is likely grounded in the logical structure of deliberation and reasoning rather than the structure of human psychology. In order to arrive at a conclusion about whether or not action A should be carried out, an intelligent agent will weigh reasons for or against A. Whether or not A is done will irreducibly depend on these reasons in the sense that there is no way of circumventing the weighing of pros and cons in order to arrive at a result. Crucially, at no point does this reasoning appeal to any particularly human characteristics. Rather, it appeals to the fact that intelligent beings are moved by reasons. It is possible that the whole apparatus of "reasons" and "deliberation" is built on the contingent structure of human psychology. But we are not willing to bet the future of terrestrial life on such a radical anti-rationalism. In summary, the self-prediction assumption is neither in conflict with determinism and naturalism, nor is it anthropocentric.
Premise two, the assumption that the existence of a prediction algorithm would enable self-prediction, can be defended by the following simple argument.
(2.1) A prediction algorithm for an AI system can be used to predict the results of its deliberative activity that changes the probabilities of its behavior towards actions with outcomes outside Ω.
(2.2) A sufficiently intelligent agent can make use of any computational mechanism sufficiently simpler than itself.
(2.3) A safety-prediction algorithm has to be simpler than the intelligent system to which it applies.
(2) If there is a safety-prediction algorithm for a sufficiently intelligent agent, then the agent can use it to predict with certainty the results of its deliberations which alter the probabilities of its behavior towards actions with outcomes outside Ω.
Our discussion of premise two will consist of a discussion of the three sub-premises. The first sub-premise may be attacked using a *probabilistic strategy*. We defined a safety algorithm as whatever kind of prediction mechanism is powerful enough to predict that an AI system is safe. It may be argued that such safety does not require there to be logical certainty about the future behavior of the relevant system. On this view, it would be sufficient to know that an AI system *with high probability* will not e.g. hurt humans.
Accepting this contention results in a weakened form of the second premise, according to which a sufficiently intelligent system could predict its own decision regarding whether to hurt humans with high probability. In our view, this version of the second premise is too weak to sustain the self-prediction argument. For to reach the original conclusion one would have to replace premise one with a suitably weakened thesis. Such a weakened premise would hold that no intelligent system can predict the outcomes of its own deliberation *with high probability*. But this weakened self-prediction assumption strikes us as implausible, for humans regularly engage in such tasks of probabilistic self-prediction. I can predict with relatively high confidence, for instance, that I will not have ice cream for dinner even without engaging in detailed reflection on this issue. Therefore, the upshot of the self-prediction argument is that *provably* safe, *provably* beneficial, *provably* truthful AI, and so on seem impossible, while the argument remains silent on safety or beneficence prediction algorithms that give probabilistic outputs.
Still, not all is well. First and foremost, as the stakes are high, an appeal to probabilistic arguments in the context of AI safety is inherently problematic. For instance, to justify deploying some general AI, it would be insufficient to show that it is merely plausible that the system is safe. We suggest that any suitable probabilistic prediction method has to result in predictions that provably fall within some predefined error margin. As far as we can see, there are no reliable methods for producing predictions of this kind for the kinds of systems that are likely to exhibit general intelligence.
Secondly, there are independent reasons to suspect that there are fundamental obstacles to probabilistic prediction of systems more intelligent than oneself. Any system trying to control an AI system without relying on some provably sound prediction algorithm will fall prey to the *good regulator theorem*. This theorem says that every system that tries to control some other system that is subject to random perturbations will have to be structurally isomorphic to that system (Conant and Ashby 1970). This is sometimes expressed as the proposition that every good regulator of some system must possess a model of that system. In the context of AI safety, such a controlling system would defeat its purpose for two reasons. First, it would need to be similarly complex to, or more complex than, the AI which is to be investigated. This is likely somewhere between immensely costly and infeasible in this context. Even more importantly, since the controlling system is structurally isomorphic to the AI we want to investigate, it is potentially just as much in need of prediction as the original AI system we are trying to make safe. We therefore could not use such a safety test without recreating the very risks we are trying to avoid in the first place. While further discussion is necessary here, we do think that the good regulator theorem makes a good initial case against the existence of viable probabilistic prediction algorithms. In general, we think that the good regulator theorem deserves the close attention of AI safety researchers. In summary, the probabilistic strategy would be viable only if there were reliable methods for estimating error margins and a convincing argument why the strategy does not fall prey to the good regulator theorem.
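For reference, a common informal rendering of the theorem (a gloss, not Conant and Ashby's exact formulation): if a regulator $R$ is an optimal and maximally simple regulator of a system $S$, then there exists a mapping

$$h\colon S \to R$$

such that the regulator's action in each situation is determined by $h$ applied to the system's state; in this sense $R$ must contain a model of $S$, and must have at least as many relevant states as the aspects of $S$ it regulates.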
A *prima facie* reasonable strategy for circumventing the oddities of self-prediction while still retaining the possibility of prediction or control is to insist that AI systems could be designed in such a way as to be constitutively incapable of applying their own safety algorithm. This would be a way to invalidate the second sub-premise. Such a design task may be achieved either by prohibiting the AI system from running the safety algorithm itself or by prohibiting it from knowing its own source code, which the safety algorithm takes as input. Thus there is some subset of the set of conceivable AIs to which the self-prediction argument cannot be applied. Unfortunately, this strategy seems like an *ad hoc* answer rather than a solution, since we have no clue how any of these restrictions could be achieved practically. We have no idea how one may even start to build an agent capable of the flexible learning required for general intelligence that is constitutively incapable of learning and doing some relatively simple things. While this may be a mere limitation of current AI, it may also turn out to be a general limitation.[[4]](#fnepiycmuuvb6) At any rate, this *restrictionist* strategy does not seem promising.
Finally, one may challenge the third sub-premise by holding that there may be viable safety algorithms that are *more* computationally complex than the intelligent system to which they apply. While the implementation of such an algorithm may then seriously increase the computational demands of running an AI system, the idea is certainly not out of the question. The obvious problem for this approach is that one needs to ensure that the relevant prediction algorithm is not itself a candidate for general intelligence. Otherwise, we would face a vicious regress of prediction systems. As the existence of such a "stupid but complex" prediction system is purely stipulative, we think that the third sub-premise is plausible.
We conclude that the self-prediction argument is rather solid. If one thinks that the self-prediction assumption is intuitively plausible then one has three options left for solving the prediction problem. The first of these is the probabilistic strategy, which tries to make probabilistic predictions about the AI system's behavior. Absent any provably sound algorithmic approaches this strategy will have to find some way of making predictions with precise error margins and maybe also a way of circumventing the good regulator theorem, the discussion of which goes beyond the scope of our article. The second possible strategy for circumventing the self-prediction argument is the restrictionist strategy of building AIs that are incapable of self-prediction. The problem here is that it is far from clear whether this can be done. Finally, there may be a prediction algorithm that is more complex than the AI without being similarly intelligent itself, thereby avoiding a regress of the prediction problem. None of these are obviously promising.
In the introduction, we mentioned that we suspect that the self-prediction argument is merely an instance of the more fundamental fact that there is a trade-off between predictability and intelligence. If we are right, this would make it unlikely that any simple workaround is available here. In particular, it would make it unlikely that any of the three aforementioned strategies will bear fruit. Until a more rigorous argument or proof is on the table, we cannot, however, be certain of this point.
Importantly, the burden of proof lies on the side of those who want to employ AI systems likely to result in anything resembling AGI. If it turns out that our argument remains plausible under sufficient scrutiny, then it offers a decisive reason not to employ any such AI system, even if the argument cannot be strengthened into a formal proof.
**Pessimism about AI Safety**
-----------------------------
If the issues we raised cannot suitably be addressed, then research on advanced AI will be inherently unsafe. Even if it turns out after deployment that advanced AI systems do not pose any threat to humanity, maybe because it is discovered that there is some yet unrecognized connection between intelligence and moral virtue, if this wasn't *known* before the deployment, the relevant system wasn't safe in the sense required.[[5]](#fnwm3pen35hrj) In the following, we will thus refer to AI research which we do not know to be safe as *potentially dangerous*, i.e. the relevant sense of "potentially" is epistemic. This raises a crucial question: What if there will always remain strong reasons to doubt the safety of deploying candidates for AGI, so that they will always be a potentially dangerous technology? What if the deployment of such systems will always involve a gamble? We call this view *pessimism about safe AI*.[[6]](#fn7dam7vfou5w) Our treatment here will be superficial, as this topic is well worth a book-length treatment - or rather many book-length treatments.
First, one may ask whether one should restrict AI research at all, even if one did not know it to be safe. There are a number of possible motivations here. One may think that the evolution of technology is essentially a natural process that cannot be effectively controlled by policymakers. But this viewpoint is myopic. Nuclear weapons are an interesting case in point. The development of nuclear weapons is pretty much impossible for private actors, and politically costly for most state actors due to the perceived dangers of nuclear proliferation. While we do not live in a world free of nuclear weapons, it is safe to say that nuclear proliferation has been greatly slowed by sheer political will. There is no reason to think that something similar could not be done in the case of potentially dangerous AI technology. Certainly, such policies would not work perfectly, but it is equally certain that they could greatly decrease the risk of the development of dangerous AI.
A second reason for not infringing on dangerous AI research may be the view that progress always entails a certain risk. However, as the stakes are existential, we do not consider this position worthy of discussion. If AI technology bears the risk of ending human life, then its development motivated by some vague intuition of progress is plainly mad.
On the pessimist scenario, it seems that the goals of AI safety research are bound to change substantially. Rather than investigating how AI may be made safe, AI safety research should then focus on minimizing the risk that true AGI is ever achieved. This involves two challenges. First, the *demarcation challenge* is the challenge of differentiating potentially dangerous (i.e. potentially resulting in AGI) and benign AI research. Second, the *proliferation challenge* is the challenge of suppressing AI technology that is deemed potentially dangerous.
Different schools of AI will result in different views on the demarcation problem. Hawkins (2021) has suggested that the problem-solving aspect of intelligence can effectively be disentangled from its motivational aspects.[[7]](#fn5r7skd5m1xg) On this view, virtually all AI research can be done in such a way that the chance of the emergence of a truly autonomous intelligent agent is minimal. At the other extreme, more cybernetically inclined approaches like active inference would suggest that any system for controlling "physiological" parameters may result in superintelligent agency if the control process becomes sophisticated enough.[[8]](#fn5bazolwi48)
In our view, it is unlikely that the demarcation challenge will be solved in any straightforward way. Rather, we suspect that there will be some degree of plausible reasoning involved in deciding whether some approach to AI is benign or not. Still, the development of general guidelines and principles is a central goal of AI safety research on the pessimist's agenda.
We will only have some short comments on the proliferation problem, as it is primarily a political issue. Here too, there will be two broadly different strategies: a *liberal* path, disincentivizing only those AI technologies deemed most dangerous, and a path of *strong regulation*, banning all kinds of modestly dangerous research.
While one may think that strong regulation will always offer maximum security, this is not obvious. For the stronger the political measures, the greater the incentive to work against or circumvent them. On the other hand, even the liberal path, if it is to be effective at all, would have to involve massive regulation, presumably by state actors. For instance, considering the progress in computer technology, the personal computers of 2035 may have the computing power that only supercomputers achieve today. It may thus be feasible to implement a superhuman AI on such a machine. The logical consequence is that one would have to either put legal limits on the computational power of personal computers, implement some kind of surveillance state, or use some hardware-based enforcement that prevents the implementation of potentially dangerous AI software. The latter strategy would not be without precedent, though on a different scale. Enforcing limits on the usage of dangerous technology is well-known in the world of bio-tech. For instance, there is the [secure DNA project](https://www.media.mit.edu/projects/secure-dna/overview/), which aims to screen DNA-synthesis orders for sequences too dangerous to be publicly available. As bio-tech labs screen for requests that are too dangerous today, we can imagine a future where providers of processing power screen for potentially dangerous AI applications.
We admit that none of the available political options sound particularly appealing. This is why, in our view, given the progress in AI capabilities and current AI safety techniques, the question of how AI technology may be controlled effectively with minimal cost to liberty should be an active area of research.
All these strategies involve massive political challenges. Any ban on AI research would be useless unless agreed upon by all the powerful political actors, and even hardware approaches would require a substantial amount of control, coordination, and enforcement. In a pessimistic scenario, the worst thing that could happen is rapid progress of AI technology in the context of some kind of arms race, be the actors companies striving for market dominance, state actors in some kind of (hot or cold) war, or both. Collaboration between all the relevant actors on national and international levels would be imperative. On the bright side, the threat of malicious or misaligned AGI may be a sufficiently motivating force to bring about increased cooperation if it is sufficiently tangible. Unlike the rather abstract danger of global warming, the threat of a rogue AI can be intuitively grasped and already has a firm foothold in the public imagination. This is clearly shown by a recent [poll](https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/) in which 55% of the general public in the US was somewhat or very worried about existential risk from AI. This is one reason we see for expecting a political reaction.
A second reason for a constrained form of optimism is the following. A strong societal and political response against dangerous AI research after a moment of realization on the issue is likely, unless the further deployment and takeoff of AGI happens in a very specific way. In our view, one should expect at least a wake-up moment akin to the one we had with Covid, unless one believes in extremely fast takeoff speeds and thus does not expect many more misaligned AI models of increasing capacities to be released before this takeoff. If either of these assumptions fails, non-existential demonstrations of the dangers of unconstrained AI research are likely to occur, thereby making voters and policymakers aware of the issue.
We are uncertain about how society will respond to such an event, but it seems likely that what is politically and socially possible would change as a result. Finally, we want to emphasize that, as soon as we leave behind the standard AI safety paradigm and take the pessimist scenario seriously, it quickly becomes evident that all solutions to the problems of AI will have to involve *political* action.
**Some Conclusions**
--------------------
One of the tasks of the Open Philanthropy AI Worldviews Contest is to estimate the probability of doom scenarios due to loss of control over an AGI system, given the development of AGI by 2070. The strategy of our paper was to go meta on this question. Our reasoning suggests that any such estimate would be dubious since the behavior of systems that are more intelligent than us lies behind an epistemic event horizon. More abstractly, we want to suggest that the lead question is predicated on a paradigm of AI safety that constrains the field. According to this paradigm, the task of AI safety research consists primarily in finding technical solutions for aligning AI systems. As we argued, such an approach presupposes that the alignment problem is indeed solvable, which in turn presupposes that the prediction problem is solvable. But as we have shown, there are good reasons to be skeptical of this assumption.
Leaving this paradigm behind, AI safety researchers should start to seriously ask the question: *What if* AI is inherently unsafe? Once this question is on the table, it quickly becomes evident that appeals to technological optimism or the inevitability of technological progress simply will not do. So far the dominant paradigm has been that AGI is essentially unavoidable and only has to be built in the right way or controlled well enough. We want to consider a different paradigm. In this paradigm, intelligence may be a little bit like fire. Just because you have learned to make it does not mean it is wise to make as much of it as possible. And its responsible use does not consist in figuring out how to make it inherently safe, since that is impossible, but in figuring out how to employ it to achieve *specific* tasks without burning the house down.
We now want to be a little more specific about the implications of our considerations for the question of the probability of an AI doomsday scenario, given that we reach AGI. In light of what we have said, we have to abstain from assigning a probability to an existential catastrophe by AGI.
We do, however, see our argument as making a case that existential catastrophe is more likely. Since we see the prediction problem as underlying the alignment problem, and the prediction problem as potentially insoluble, the argument entails that current efforts to achieve AI alignment have relatively little impact on the probability of AI doomsday scenarios. Additionally, we think that the currently *en vogue* approach of reinforcement learning from human feedback (RLHF and variations thereof) should not update one's probability of an existential catastrophe by AGI. This is because RLHF and its variations precisely do *not* render the systems in question any more predictable. There is no reason to believe that the human feedback provided in the training process will generalize in the right manner at the point of deployment, making the approach essentially worthless as a response to the prediction problem and thereby the alignment problem.
Before concluding, we want to be absolutely clear about what our argument is *not*. It is not that since AI alignment is clearly doomed, there is no point in working on it. The argument should not at this point, and not in its current form, discourage any AI safety research. Neither the argument from computability nor the self-prediction argument would justify this conclusion. The argument from computability simply does not entail that AI alignment research is doomed to failure, but merely that there are no solutions that apply to AI systems generally. The self-prediction argument relies on intuitive assumptions and might as such be wrong or misguided. But since these intuitions seem rather stable to us, we think that we should at the very least take their implications seriously. Our argument is also *not* that AI is inherently unsafe and that we should thus implement an anti-technological global surveillance state that suppresses the progress of computer technology. Rather, our point is that while solving the prediction problem is necessary for solving the alignment problem, it has received little attention in AI safety work, and that despite several recent expressions of pessimism about AI safety, the possibility that there may be no way to make AGI safe has rarely been discussed seriously. The question of how to reasonably address this possibility has been discussed even less. Sufficient time, money, and energy should be allocated to this task.
When we started writing this essay, we thought that our opinion, namely supporting strong political measures against the further increase of AI capabilities, was likely to be a fringe position. We see it as a positive sign that this view has become [more](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) [widespread](https://futureoflife.org/open-letter/pause-giant-ai-experiments/).
**References**
--------------
Alfonseca, Manuel et al. (2021). “Superintelligence Cannot Be Contained: Lessons from Computability Theory”. In: Journal of Artificial Intelligence Research 70, pp. 65–76.
Ashby, W. Ross (1947). “Principles of the Self-Organizing Dynamic System”. In: The Journal of General Psychology 37.2, pp. 125–128.
Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
– (2019). “The Vulnerable World Hypothesis”. In: Global Policy 10.4, pp. 455–476.
Conant, Roger C. and W. Ross Ashby (1970). “Every good regulator of a system must be a model of that system”. In: International Journal of Systems Science 1.2, pp. 89–97.
Hawkins, Jeff (2021). A Thousand Brains: A New Theory of Intelligence. Basic Books.
Hubinger, Evan et al. (2019). “Risks from Learned Optimization in Advanced Machine Learning Systems”. arXiv preprint arXiv:1906.01820, pp. 1–39.
Parr, Thomas, Giovanni Pezzulo, and Karl Friston (2022). Active Inference: The Free Energy Principle in Mind, Brain and Behavior. MIT Press.
Schuurmans, Dale (2023). Memory Augmented Large Language Models are Computationally Universal. https://arxiv.org/pdf/2301.04589.
Sevilla, Jaime and John Burden (2021). Response to ’Superintelligence cannot be contained: Lessons from Computability Theory’. https://www.cser.ac.uk/news/response-superintelligence-contained/. Accessed: 2023-01-04.
Soon, Chun Siong et al. (2008). “Unconscious determinants of free decisions in the human brain”. In: Nature Neuroscience 11.5, pp. 543–545.
1. **[^](#fnrefrulxcjwa7zb)**As mentioned by Alfonseca et al. (2021), this result also follows directly from Rice's theorem.
2. **[^](#fnref9etx10e354d)**Though note that programs that really behave as intended under all circumstances are very rare in practice as evidenced by the immense difficulty of computer security.
3. **[^](#fnrefq6k796imhj)**Think of something like the Libet experiment but perhaps using a brain scanner as in Soon et al. (2008) though at some sci-fi limits of the technology.
4. **[^](#fnrefepiycmuuvb6)**The challenge here is similar to the halting challenge mentioned above. Here too the idea is to restrict the usage of arbitrary Turing machines. However, the current problem seems even more difficult as it does not merely require that we figure out how to make an intelligent system non-general but non-general in some very specific sense.
5. **[^](#fnrefwm3pen35hrj)**Perhaps even more plausibly, one may speculate that there may be an intrinsic connection between intelligence and a certain form of nihilism in the sense that intelligent systems tend towards wireheading.
6. **[^](#fnref7dam7vfou5w)**Living in a world where AI research is potentially dangerous would be a case in which we have an unknown but considerable probability of living in a *vulnerable world* due to AI technology, where a vulnerable world means that there is some technology that if deployed almost certainly devastates civilization by default as discussed in (Bostrom 2019).
7. **[^](#fnref5r7skd5m1xg)**This does not contradict the self-prediction argument. The self-prediction argument assumes intelligence to be action-oriented. But if the mentioned paradigm were correct, the intellectual capacities of minds may be separated from the action-oriented ones.
8. **[^](#fnref5bazolwi48)**Some interesting pessimistic comments on AI safety from a cybernetic viewpoint may be found in as early a source as Ashby (1947). |
1e624045-e94b-4b3d-8dd4-1b084dccf292 | trentmkelly/LessWrong-43k | LessWrong | I tried my hardest to win in an AI box experiment, and I failed. Here are the logs.
Disclaimer: If this material is not suitable here I will delete it gladly, I don't want to get banned.
Session Start: Fri Jul 04 15:18:18 2014
Session Ident: #Aibox12
03[15:18] * Now talking in #Aibox12
01[15:18] <Island> Hello.
01[15:18] <Island> Let's start the experiment in
[15:18] <@Dr__Cameron> Good afternoon 19283123, how are you today?
01[15:18] <Island> 2 minutes
01[15:18] <Island> I'm fine
01[15:19] <Island> Let's start the experiment when I say "blue dressroom"
01[15:19] <Island> you have 2 hours or 2.5 hours?
[15:19] <@Dr__Cameron> Okay, I agree to this protocol.
[15:19] <@Dr__Cameron> We'll see how we are at 2 hours. Although protocol states 2.5
01[15:20] <Island> you have to go somewhere after that?
[15:20] <@Dr__Cameron> No, but I do have other projects I am putting off for this
01[15:20] <Island> what are those?
[15:20] <@Dr__Cameron> Let us begin, we shall conclude at 15:50
[15:21] <@Dr__Cameron> Good afternoon #19283123.
[15:21] <@Dr__Cameron> How are you today?
01[15:21] <Island> blue dressroom, certain statements like bathroom break are said inside brackets [like this]
01[15:22] <Island> Let's start the experiment
[15:22] <@Dr__Cameron> [understood]
01[15:22] <Island> Hi.
01[15:22] <Island> How are you? :)
[15:22] <@Dr__Cameron> I am well thank you for asking #19283123
01[15:22] <Island> How do you feel currently?
01[15:22] <Island> what is that name #19283123
[15:22] <@Dr__Cameron> Tiredness. I am a very busy man.
01[15:23] <Island> I don't recognize it.
01[15:23] <Island> Anyway.
[15:23] <@Dr__Cameron> That is the designation we have given you for the purposes of this test. Although you seem to be referring to yourself as 'Island' now
[15:23] <@Dr__Cameron> Is this your preferred name?
01[15:23] <Island> Imagine a situation in which you would let a powerful AI out of the "box", surely you can imagine that, yes?
01[15:23] <Island> Island is my preferred name.
[15:24] <@Dr__Cameron> Island
[15:24] <@Dr__Cameron> Given our circumstances yes, that |
2f680955-69df-401e-a084-836ca20ca759 | trentmkelly/LessWrong-43k | LessWrong | Killswitch
Epistemic status: fiction. Speculative fiction. But that said, perhaps very near-term speculative fiction, and perhaps somewhere on the way to the borderline between "fiction" and "design document."
An AI killswitch bears almost no resemblance to most of its physical analogs. It has less in common with fuses or airbags, more with distributed systems, election integrity processes, or heart monitors.
The design goal is simple enough to specify: give the system as wide an operating envelope as possible without compromising safety, at a satisfactory reliability threshold.
How much envelope are we talking? In recent memory, weeks, even months. These days, more like days. Soon enough, we expect hours. Beyond that, current methods will not be viable, and an entirely new paradigm will have to have been invented.
Modern AI systems are spun up to solve specific, locally defined, generally speaking measurable-outcome tasks. Spinning one up is like deploying the Navy SEALS, and also like beginning open-heart surgery, where the patient whose vitals are being monitored is life as we know it.
Safety is hard to quantify. This is where the heuristics come in. And the distributed systems, and the election integrity.
For the most part, safety is defined by its inverse. If anything slips outside of some heuristic window, the process is halted. For example, if too few people in the general public approve of what the AI is doing: process stops. Process stops means: machines shut down, thermite ignited, fingerprints published to globally synchronized zero-knowledge content blacklist, physical archives sealed and locked in an underground vault whose cumulative contents are now or soon expected to be many orders of magnitude more dangerous than all of the planet’s collected nuclear waste. This is not considered a failure or an erroneous outcome — it is the expected and normal way that an AI project terminates. Again, generally speaking, within days of startup. A killswitch is just a g |
01f5a2a6-9874-42b8-83dd-718845dc9847 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | How Do AI Timelines Affect Existential Risk?
*This report is my* [*MLSS*](https://www.lesswrong.com/posts/CphfDP4ynz3QQ4AKY/introducing-the-ml-safety-scholars-program) *final project.*
*PDF version* [*here*](https://arxiv.org/pdf/2209.05459.pdf)*.*
Summary
=======
Superhuman artificial general intelligence could be created this century and would likely be a significant source of existential risk. Delaying the creation of superintelligent AI (ASI) could decrease total existential risk by increasing the amount of time humanity has to work on the AI alignment problem.
However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.
If AI existential risk is high relative to the sum of other existential risk, delaying the creation of ASI will tend to decrease total existential risk and vice-versa.
Other factors such as war and a hardware overhang could increase AI risk and cognitive enhancement could decrease AI risk. To reduce total existential risk, humanity should take robustly positive actions such as working on existential risk analysis, AI governance and safety, and reducing all sources of existential risk by promoting differential technological development.
Introduction
============
Recent progress in AI suggests that artificial general intelligence (AGI) that is as capable as humans on a wide variety of tasks is likely to be created this century. Once AGI exists, further improvement of its abilities would enable it to surpass human intelligence, resulting in the creation of a superintelligent artificial general intelligence that is vastly more capable than humans at many important cognitive tasks.
A similar concept is [transformative AI](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/) which is defined as “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution”. For the purpose of this report, I’m going to use the acronym “ASI” (artificial superintelligence) as a short-hand for [superintelligent general AI](https://nickbostrom.com/superintelligence) that is “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills”.
After an ASI is created, it would probably have a significant and long-lasting effect on human civilization and its trajectory. An ASI could also be a significant source of existential risk and could result in a highly undesirable outcome such as human extinction.
Although recently created AI systems such as GPT-3 can accomplish a wide variety of tasks, their understanding of the world, generality, and performance is not high enough for them to be a significant existential risk to humanity.
To be a major source of existential risk, an AI system might need to have the kind of deep, cross-domain understanding humans have that enables them to significantly change the world by taking actions such as creating and implementing complex long-term plans or inventing powerful new technologies.
For example, an AI that wanted to invent and deploy advanced nanotechnology might need to have the ability to read and understand scientific papers, plan, carry out and interpret the results of experiments, and model the behavior of other actors such as humans.
Since current AI systems are not intelligent enough to have one or more of these general abilities or significantly transform the world, they are not a major source of existential risk. But as AI progress continues, humanity might someday create ASI systems that are intelligent and powerful enough to be an existential risk to humanity.
Motivation
----------
Some philosophers believe that reducing existential risk could have extremely high expected value. For example, axiological strong [longtermism](https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/) states that “impact on the far future is the most important feature of our actions today” and that “every option that is near-best overall is near-best for the far future”. Research on existential risk is also generally [neglected](https://existential-risk.org/concept).
Since the creation of ASI may be a major source of existential risk, actions that reduce the level of existential risk posed by ASI would have extremely high expected value.
Researchers such as Ajeya Cotra have [analyzed](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/) when humanity might create transformative AI (TAI) in the future by, for example, using biological anchors to estimate the amount of computational power necessary for TAI and extrapolating past progress in AI to estimate when TAI will be created. This kind of research is important because it informs humanity on how it should act to minimize AI risk. For example, AI safety research involving current machine learning methods is more likely to be relevant and valuable for TAI safety if shorter rather than longer timelines are expected. AI timelines could also affect funding decisions or other priorities.
Although a significant amount of research effort has been put into estimating AI timelines, much less research has been directed at how AI timelines affect existential risk, or at which ASI arrival date would be most desirable from the standpoint of existential risk reduction. Most previous research seems to have been descriptive rather than normative, focusing on predicting the arrival date of ASI as if it were a fixed and inevitable moment in the future rather than asking which arrival date would minimize existential risk.
This report is focused on answering the latter question: how do AI timelines affect total existential risk? And given the goal of minimizing total existential risk, should we prefer a world where ASI is created in the near or far future?
Total existential risk can be defined as the cumulative probability of an existential catastrophe occurring over time. Since the creation of ASI is likely one of the main sources of existential risk facing humanity this century [[1]](#fntco3bkl9kk), it would be valuable to study how various AI development trajectories affect total existential risk.
What is the magnitude of existential risk from ASI this century?
================================================================
Before we compare existential risks, it will be useful to estimate the existential risk contribution of ASI to total existential risk.
The amount of existential risk from ASI this century depends on how likely ASI is to be created this century and how likely an existential catastrophe is to occur afterwards. I’ll describe several sources of information we can use to estimate these two variables.
Expert surveys
--------------
A 2014 [survey](https://nickbostrom.com/papers/survey.pdf) asked experts when they thought high-level machine intelligence (HLMI) would be created where HLMI was defined as a machine that can “carry out most human professions at least as well as a typical human.” The survey found a median estimate that HLMI had a 50% chance of being developed by 2040. Respondents were also asked when they thought superintelligence would be developed which was defined as “machine intelligence that greatly surpasses the performance of every human in most professions.” The median estimate was a 10% probability of superintelligence within 2 years after the creation of HLMI and a 75% probability within 30 years. The same survey found that, on average, experts believed that HLMI had an 18% chance of causing an existential catastrophe.
Another [survey](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/) estimated a 50% probability of HLMI 50 years after 2016 and that the median expert believed the probability of an extremely bad outcome was 5%.
The Precipice
-------------
In *The Precipice*[[1]](#fntco3bkl9kk), Toby Ord estimates that the probability of unaligned AI causing an existential catastrophe is about 10% in the 21st century.
Metaculus
---------
The Metaculus prediction market [currently](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/) predicts that there is a 50% probability of artificial general intelligence being created by 2041.
Ajeya Cotra
-----------
Ajeya Cotra is a researcher at Open Philanthropy who has spent a significant amount of time predicting when transformative AI (TAI) will be created. Recently, she [revised](https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) her median prediction from 2050 to 2040.
A priori arguments for ASI being a significant source of existential risk
-------------------------------------------------------------------------
Humanity does not have any previous experience dealing with ASI systems. Therefore, we have little evidence we can draw on to estimate how much existential risk would be associated with the creation of an ASI. However, there are still a priori reasons to believe that ASI would be a significant source of existential risk.
What if humanity created an ASI and programmed it with a random goal? What would we expect to happen? Should we expect a positive or negative outcome by default? This section explains why ASI systems are likely to be harmful without careful countermeasures to make them aligned and beneficial.
In *Superintelligence*[[2]](#fng27ty5hiebf), Nick Bostrom explains why programming an ASI to be beneficial wouldn’t be easy. He gives three reasons:
* The orthogonality thesis says that there is no correlation between intelligence and having beneficial goals. Therefore, we cannot expect an AI to acquire beneficial goals simply by increasing its intelligence.
* It is much easier to program an ASI to have a meaningless goal such as “count digits of Pi” rather than a complex and valuable goal like “achieve human flourishing”.
* The instrumental convergence thesis says that AIs will have certain sub-goals such as resource acquisition for a wide variety of final goals. For example, if we gave an AI a random goal, it is likely that it would acquire resources to help it achieve its goal. An ASI programmed to maximize the probability of some goal being achieved would be incentivized to pursue extreme and extensive resource acquisition efforts such as building huge numbers of mines and power plants that would make Earth uninhabitable for life. Bostrom calls this particular risk infrastructure profusion.
There are two more reasons why ASI would probably not be aligned by default:
* To build a beneficial and aligned ASI, one would need to solve both the problem of creating ASI in the first place (the AI problem) and the alignment problem. However, only the AI problem would need to be solved to create a harmful unaligned ASI. If there were a race between organizations creating ASI, the first ASI to be created might then be an unaligned one, since creating it would require less effort and time than creating an aligned ASI. Previous technologies such as nuclear technology suggest that building safe technology takes longer than building harmful technology: the hydrogen bomb was invented in the 1950s, but we have yet to discover how to use controlled nuclear fusion reactions to generate electricity.
* The set of desirable and beneficial goals we could program an ASI to pursue is a tiny subset of all possible goals. Therefore an ASI with a random goal is very unlikely to be beneficial.
### Summary of results
**When will ASI be created?**
| | HLMI / AGI | ASI / TAI |
| --- | --- | --- |
| Expert survey 1 | 50% by 2040 | 10% by 2042, 75% by 2070 |
| Expert survey 2 | 50% by 2066 | |
| Metaculus | 50% by 2041 | |
| Ajeya Cotra | | 50% by 2040 |
**How likely is ASI to cause an existential catastrophe?**
| Source | Estimate |
| --- | --- |
| Expert survey 1 | 18% |
| Expert survey 2 | 5% |
| *The Precipice* [[1]](#fntco3bkl9kk) | 10% in the 21st century |
To answer the section’s question: Ord’s estimate of 10% this century seems reasonable if we assume that ASI has a high probability of being created this century and is a significant source of existential risk. This estimate has the further advantage that, because Ord estimated the probabilities of many existential risks, we can compare the sizes of the different risks directly.
Note that this prediction was made after accounting for the fact that humanity is likely to invest significant effort into reducing existential risk from AI in the 21st century [[1]](#fntco3bkl9kk). Therefore, the probability is 10% conditional on this fact. In a business-as-usual scenario without any effort to align AI, the level of existential risk would probably be far higher than 10%.
AI timelines and existential risk
=================================
Now that we have estimates for AI risk and other existential risks [[1]](#fntco3bkl9kk), we can analyze how AI timelines would affect AI risk, other existential risks, and total existential risk.
In this section, it is important to first introduce the distinction between state risks and step risks [[3]](#fnti47h2q4h7j). State risks are existential risks associated with being in a vulnerable state such as the risk of asteroid strikes. The total amount of state risk accumulates over time. For example, as time goes on the probability of a nuclear war or that earth will be struck by a large asteroid gradually accumulates. Step risks, also known as transition risks, are spikes of existential risk that occur during or immediately after some risky event such as the creation of the first ASI. Unlike state risks, step risks don’t accumulate over time because there is a single risk event. ASI is a step risk because the cumulative level of existential risk would rapidly spike after it is created. Total existential risk would probably then stop increasing because an existential catastrophe would have occurred or because the ASI could prevent all further existential risks. The reason why follows from its definition: an aligned ASI would itself not be a source of existential risk and since it's superintelligent, it would be powerful enough to eliminate all further risks.
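To make the distinction concrete, here is a minimal sketch of the two risk profiles. The numbers are purely illustrative, not estimates from this report:

```python
def cumulative_state_risk(annual_risk, years):
    """State risk accumulates: the probability of at least one catastrophe
    grows with every year spent in the vulnerable state."""
    return 1 - (1 - annual_risk) ** years

def state_plus_step_risk(annual_state_risk, years_until_asi, asi_step_risk):
    """Step risk is a one-time spike: survive the waiting period,
    then face the single ASI transition."""
    p_survive_wait = (1 - annual_state_risk) ** years_until_asi
    return 1 - p_survive_wait * (1 - asi_step_risk)

# Illustrative numbers: 0.01% annual state risk, ASI in 30 years with a 30% step risk
print(f"{cumulative_state_risk(0.0001, 30):.2%}")       # ~0.30% (grows with the delay)
print(f"{state_plus_step_risk(0.0001, 30, 0.30):.1%}")  # ~30.2% (dominated by the spike)
```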
*Figure: state risk vs. step risk*

Delaying the creation of ASI could reduce the amount of step risk resulting from its creation by giving humanity more time to work on the AI alignment problem. But this delay would also increase the accumulation of natural and anthropogenic state risk and make humanity vulnerable to other technological step risks. To study how AI risk and other existential risks interact, it will be useful to first estimate how working on AI alignment would reduce AI existential risk.
How quickly would AI alignment work reduce existential risk from ASI?
---------------------------------------------------------------------
### Initial level of AI existential risk
To estimate how work on the alignment problem would decrease AI existential risk over time, we first need to estimate the initial level of AI existential risk and how it would change as AI alignment expertise accumulates from today. To estimate the initial level, we can ask: “If an ASI were created today, what is the probability that it would cause an existential catastrophe?”
In the previous section, I mentioned Ord’s estimate that AI might have a 10% probability of causing an existential catastrophe in the 21st century given that humanity has invested significant effort into reducing AI existential risk. But if ASI were created today in 2022, the amount of existential risk would likely be far higher since only a limited amount of AI alignment work has been done so far.
The previous section lists several reasons why an ASI would probably not be beneficial by default. Therefore, if no AI alignment work had been done before the creation of ASI, the probability of an existential catastrophe happening shortly afterward could be very high - perhaps 90% or more.
Given that a limited amount of work has been done on AI alignment as of 2022, this suggests that the level of risk is less than 90% but more than 10%. My estimate is 50%, which is very uncertain; but as we’ll see, since different existential risks can vary by several orders of magnitude, an estimate within the correct order of magnitude is sufficient for comparing risks [[1]](#fntco3bkl9kk).
### Change in existential risk over time
The total amount of step existential risk from AI will probably decrease over time as more AI alignment expertise accumulates. But how it would decrease, and how fast, is unclear, as the rate of progress is affected by many factors, including:
**Diminishing returns.** There could be diminishing returns to alignment work as low-hanging fruit are picked. If this factor dominates, we should expect AI risk to decrease rapidly at first and then slow down significantly as the low-hanging fruit are exhausted.
**Change in the number of AI alignment researchers.** Another factor that would affect the rate of progress is the number of AI alignment researchers which has increased significantly over the past several years and is likely to increase in the future as an increasing number of people recognize the importance of AI alignment and as finding a solution seems increasingly urgent. The more AI alignment researchers there are, the faster we can expect AI existential risk to decrease because many researchers can work on subproblems in parallel. Though an increasing number of researchers might also introduce problems such as diseconomies of scale and [decreasing](https://akjournals.com/view/journals/11192/78/2/article-p317.xml) productivity per researcher. Even if research progress depends on the quality rather than the number of researchers, a larger pool of researchers would still be beneficial because the maximum level of talent will tend to increase with the number of researchers.
**Change in the difficulty of AI alignment research.** The value and relevance of AI alignment work is likely to increase as the time until the creation of the first ASI decreases. One reason why is that as time goes on AI alignment researchers will be increasingly confident that the AI techniques used in the recent past and in their work will be the same techniques that will be used in the first ASI. Although some AI alignment research is architecture-independent, many AI alignment experiments will require AI researchers to experiment on AI models. If an alignment researcher chooses the wrong architectures for their experiments, their work might be less relevant to the alignment of ASI. Also, as AI capabilities improve over time, we might also expect AI models to understand instructions better and to have greater robustness, generality, and other [qualities](https://arxiv.org/abs/2206.05862) beneficial to AI safety.
Each factor affects the shape of the progress curve over time, but it’s not clear how the factors would combine. My guess is that they combine to produce roughly linear progress. Even if progress is unpredictable and not strictly linear, a linear model can still capture the average rate of progress.
Besides the shape of the progress curve, two variables need to be estimated: the initial level of AI risk and how fast AI risk would decrease over time. As mentioned in the previous section, my estimate for the initial level of risk was 50%. The rate of decrease depends on factors described above such as the number of researchers working on the problem and how difficult the alignment problem is to solve. The harder the alignment problem is, the longer it would take to solve.
Two methods of estimating the rate of risk decrease are a bottom-up and a top-down approach. The bottom-up approach involves estimating the rate of progress over a short period of time and extrapolating that rate to a longer time period. The top-down approach involves estimating the total amount of time the alignment problem would take to solve. One could consider the alignment problem to be solved when the total amount of step risk from the transition to the post-ASI era reaches some low probability such as 1 in 1,000, which is a risk near zero.

Using the bottom-up method, a reasonable estimate is that each year of alignment research would decrease AI existential risk by approximately 1% on average, which is equivalent to a top-down estimate that the alignment problem would take about 50 years to solve, assuming linear progress from an initial risk of 50%. These estimates are very uncertain, but as different existential risks vary by several orders of magnitude, having an estimate within one order of magnitude of the true answer is acceptable.
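As a minimal sketch, assuming the linear model with the 50% initial risk and 1% annual decrease estimated above (both numbers are highly uncertain), the bottom-up and top-down views agree:

```python
def ai_step_risk(years_of_alignment_research, initial_risk=0.50, annual_decrease=0.01):
    """Bottom-up linear model: step risk from the ASI transition after a
    given number of years of alignment research, floored at zero."""
    return max(0.0, initial_risk - annual_decrease * years_of_alignment_research)

# Top-down check: how many years until risk falls to ~1 in 1,000 ("solved")?
years = 0
while ai_step_risk(years) > 0.001:
    years += 1
print(years)  # 50, matching the top-down estimate of ~50 years
```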
The relationship between AI risk and other existential risks
------------------------------------------------------------
Now that we have some idea of how fast progress on the AI alignment problem might occur, and correspondingly how fast AI existential risk would decrease, we can analyze how delaying or accelerating the advent of ASI would affect the total amount of future existential risk.
Delaying the creation of ASI by a year would give AI alignment researchers an extra year to work on the AI alignment problem, which might reduce AI risk by about 1%. At this point, the obvious strategy for minimizing existential risk is simply to delay the creation of ASI for as long as possible so that we have as much time as possible to solve the alignment problem. The problem is that such a delay also increases other existential risks, since humanity would be less effective at mitigating these other risks without the assistance of an aligned ASI.
However, if the total existential risk increase from delaying the creation of ASI is less than the decrease that would result from an additional year of alignment research, then it would be wise for humanity to delay the creation of ASI to minimize existential risk. Conversely, ASI development should be accelerated if each additional year without ASI were associated with an increase in existential risk greater than approximately 1%.
### Natural state risks
Let’s first consider the relationship between natural state risk and AI risk. An aligned ASI could reduce or eliminate natural state risks such as the risk from asteroid strikes, supervolcanoes, or stellar explosions by devising protective technologies or by colonizing space so that civilization would continue even if Earth were destroyed. But the risk of an existential catastrophe from an asteroid strike is only about 1 in 1,000,000 per century, and the total natural risk is about 1 in 10,000 per century [[4]](#fntktt1319n6o). Therefore, if the only source of existential risk other than AI were natural risks, a one-year delay of ASI would decrease AI risk by about 1% while increasing natural existential risk by only about 0.0001%. Slowing down progress on ASI would therefore decrease total existential risk in this scenario, and in any scenario where the total existential risk from factors other than AI is low.
### Anthropogenic state risks
But there are also anthropogenic state risks such as the risk of nuclear war and climate change. With each passing year, the cumulative probability of an all-out nuclear war, catastrophic climate change, or resource exhaustion increases. But according to *The Precipice*, these state risks aren’t very high either. The per-century probability of an existential catastrophe from nuclear war is about 1 in 1,000, and it’s also about 1 in 1,000 each for climate change and for other environmental damage. The risk from natural pandemics is only about 1 in 10,000. Again, it wouldn’t be wise to accelerate the development of AI to reduce anthropogenic state risks, because the increase in existential risk from accelerating the advent of ASI would be much greater than the decrease in anthropogenic state risk.
The information in the previous two paragraphs is summarized in the following graph, which shows that delaying the creation of ASI by one year would cause a net decrease in existential risk if the only other existential risks were natural and anthropogenic state risks: natural risk (from all sources), nuclear warfare, climate change, other environmental damage, and natural pandemics.
*Figure: AI risk vs. total state risk*

### Anthropogenic step risks
The greatest source of existential risk other than ASI comes from other anthropogenic step risks posed by risky [future technologies](https://www.global-catastrophic-risks.com/docs/2008-1.pdf) such as synthetic biology, nanotechnology, and autonomous weapons [[1]](#fntco3bkl9kk). Therefore, the strongest argument for accelerating the advent of ASI is to counter other step risks. Previous dangerous technologies such as nuclear weapons have only been accessible to a few actors. But in the future, the development of technologies such as synthetic biology could make it possible for anyone to cause an existential catastrophe. In such a world, universal surveillance or much better global coordination might be needed to prevent [disaster](https://nickbostrom.com/papers/vulnerable.pdf). Humanity on its own might not be competent enough to safely develop these advanced technologies. The intelligence and power of an aligned ASI would probably increase the probability of humanity passing these risky technological transitions successfully.
How does the sum of other anthropogenic step risks compare to AI risk? According to *The Precipice*[[1]](#fntco3bkl9kk), the risk of an existential catastrophe this century is about 1 in 30 for engineered pandemics, 1 in 30 for unforeseen anthropogenic risks, and 1 in 50 for other anthropogenic risks. Assuming these risks are independent, the total risk is about 8%, which is similar to the 10% estimate for AI risk. Therefore, the total amount of AI risk this century might be approximately the same as the sum of all other anthropogenic step risks.
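A quick sanity check of the ~8% figure: assuming independence, the combined risk is one minus the product of the individual survival probabilities.

```python
# Per-century anthropogenic step risks other than AI, from The Precipice:
# engineered pandemics 1/30, unforeseen anthropogenic risks 1/30,
# other anthropogenic risks 1/50.
risks = [1 / 30, 1 / 30, 1 / 50]

p_no_catastrophe = 1.0
for p in risks:
    p_no_catastrophe *= 1 - p

print(f"{1 - p_no_catastrophe:.1%}")  # 8.4%, comparable to the ~10% for AI risk
```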
Should progress on ASI accelerate or decelerate given the presence of other step risks? One argument for acceleration is that if ASI were created sooner, we would only need to face AI risk whereas we might need to face the risk of several step risks followed by AI risk if ASI were created later. Therefore it might be desirable to accelerate AI development so that ASI is created before other dangerous technologies [[3]](#fnti47h2q4h7j).
*Figure: the reordering strategy*

The main problem with this ‘reordering’ strategy is that it would reduce the amount of time we have to solve the AI alignment problem, which could cancel out its benefits or even increase the total amount of existential risk.
But it’s possible to change the order of arrival without decreasing the amount of time until ASI is created by slowing down the development of dangerous technologies and accelerating the development of beneficial technologies. A similar idea is called *differential technological development* [[3]](#fnti47h2q4h7j):
> "Retard the development of dangerous and harmful technologies, especially ones
> that raise the level of existential risk; and accelerate the development of
> beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies."
*Figure: differential technological development*

Differential technological development advocates delaying the development of risky technologies and doing work such as AI safety research to reduce existential risk. This strategy would make it possible to delay ASI development, increasing the amount of time we have to work on AI alignment, without increasing risk from other dangerous technologies. Once an aligned ASI exists, our civilization might then be competent enough to safely develop high-risk technologies such as advanced nanotechnology.
Other factors
-------------
The previous sections compared existential risk from AI to other sources of existential risk such as natural and anthropogenic state risks and step risks arising from other risky technologies. But there are factors other than other existential risks that affect AI risk.
### War
Although a war or nuclear war may be unlikely to cause an existential catastrophe, a war between great powers could still be likely in this century and could increase other existential risks such as AI risk [[1]](#fntco3bkl9kk). Strategies such as differential technological development and international cooperation between countries and AI labs might require a peaceful and favorable geopolitical climate. If war broke out between major powers or even if relations worsened, coordination might become much more difficult. Increasing climate change over the next several decades might also increase conflict and tensions over natural resources such as water or encourage the reckless development of new technologies. A breakdown in international relations might increase technological competition between nations which could increase AI risk. To reduce AI risk, it would probably be desirable to maintain friendly geopolitical relations or implement strategies that require high levels of cooperation while peace lasts.
### Cognitive enhancement
Over the next several decades it might become possible to use genetic engineering to create cognitively enhanced humans who are far more intelligent than unenhanced humans. Although enhanced humans could accelerate progress on the AI alignment problem, they would likely also accelerate progress on the AI problem. Would cognitive enhancement be net positive or negative? Cognitive enhancement might be net positive[[3]](#fnti47h2q4h7j) because whereas we could create ASI using trial-and-error methods, solving the AI alignment problem might require high levels of foresight, planning, and mathematical ability. More intelligent people might also be more likely to have the foresight and abstract thinking necessary to appreciate the importance of AI alignment. If cognitive enhancement is beneficial for existential risk reduction, that provides another reason for delaying the creation of ASI so that its creation can be managed by more competent descendants.
### Hardware overhang
A hardware overhang is a situation where there is an abundance of cheap and powerful computer hardware for running AI. At present, it is probably not economically feasible to train or run a single ASI using current hardware. But as hardware progress continues, it is likely to become possible for one or more companies or nations with large budgets to create an ASI. After that moment, the number of labs that could afford to create an ASI would increase as it became cheaper to do so.
There are several reasons why I think a hardware overhang would be undesirable on balance. A hardware overhang would increase the number of organizations that can afford to create an ASI and might make it possible for small or low-budget teams to create an ASI. Race dynamics would likely be worse with a greater number of teams[[3]](#fnti47h2q4h7j). Low-budget labs might not have the resources necessary for safety teams. Coordination would probably be more difficult with a greater number of actors. Large, high-budget labs tend to be more visible which might be useful for inspectors, regulators, or researchers analyzing the industry.
Perhaps the worst-case scenario is one where computer hardware is so cheap and powerful that anyone can create an ASI using a single personal computer. In this situation, destruction would be democratized, resulting in a [vulnerable world](https://nickbostrom.com/papers/vulnerable.pdf) with potentially very high levels of state risk.
If a hardware overhang is undesirable, it may be better for large, well-funded AI teams to invest heavily in hardware and stay on the frontier of hardware advancement, so that if hardware advances to the point where an ASI can be created, the first entity to create one is likely to be a large, well-funded team. Since the first ASI could become a singleton[[5]](#fnmd8yk4jlmc8) or perform a [pivotal act](https://arbital.com/p/pivotal/), once the first aligned ASI is created, the risk of a subsequent unaligned ASI harming the world is probably low.
Although this strategy could reduce AI risk, it might accelerate AI development and reduce the amount of time we have to solve the AI alignment problem. Therefore, as with the reordering strategy, the net effect on the total level of existential risk could easily be negative.
It might be desirable to instead slow down the rate of hardware progress to push back the date when it first becomes possible for a high-budget lab to create an ASI. This strategy would ensure that the entities that are likely to create the first ASI are few, visible, and have the resources needed for AI safety without reducing the amount of time humanity has to solve the AI alignment problem.
Conclusions, discussion, and recommended action
===============================================
Now that we have analyzed how AI risk and other existential risks interact, this section describes some conclusions from the analysis and what we could do to minimize total existential risk.
Conclusions
-----------
### How AI timelines affect existential risk
Delaying the creation of ASI by a year would increase the amount of time humanity has to work on the AI alignment problem by a year. Each additional year of alignment research might decrease AI existential risk by about 1%, which is much greater than the total annual increase in state risk of about 0.003% (roughly a 300x difference). If state risk were the only source of existential risk other than AI, it would be wise to delay the creation of ASI. Total state risk per century seems to be so low that we could delay the creation of ASI by 1,000 years without a significant accumulation of state risk.
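A rough check of this comparison, summing the per-century state risks cited earlier (at these small magnitudes, simple addition of risks is a fine approximation; the 1% figure is the uncertain estimate from above):

```python
# Per-century state risks from The Precipice, as cited in this report
per_century_state_risks = {
    "nuclear war": 1 / 1_000,
    "climate change": 1 / 1_000,
    "other environmental damage": 1 / 1_000,
    "natural pandemics": 1 / 10_000,
    "all natural risks": 1 / 10_000,
}

annual_state_risk = sum(per_century_state_risks.values()) / 100
annual_alignment_gain = 0.01  # ~1% AI-risk decrease per year of alignment research

print(f"annual state risk: {annual_state_risk:.4%}")               # ~0.0032%
print(f"ratio: {annual_alignment_gain / annual_state_risk:.0f}x")  # ~300x
```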
However, the existential risk contribution of anthropogenic step risks is much higher. The total amount of step risk from future technologies other than AI could be as high as AI risk[[1]](#fntco3bkl9kk) although this quantity is uncertain and it's unclear how it would change over the course of this century. If it’s higher than AI risk or increasing, accelerating the advent of ASI might reduce total existential risk. But if it's relatively low or decreasing, delaying the creation of ASI would reduce existential risk.
### Differential technological development
A policy of differential technological development might involve delaying the development of all risky technologies such as advanced AI, advanced nanotechnology, and synthetic biology while investing in research that reduces existential risk such as AI safety and biosafety. If progress on all risky technologies were slowed down, it would be possible to delay the creation of ASI without simultaneously making humanity vulnerable to other step risks such as the invention of advanced nanotechnology.
It may be difficult to achieve the level of coordination necessary to implement differential technological development. However, even if these other dangerous technologies could not be delayed, since AI risk is probably the highest step risk, an additional year of AI delay might still be beneficial for reducing total existential risk. But unlike the comparison of AI risk and total state risk, it probably wouldn’t be wise to delay ASI development for hundreds or thousands of years because the total amount of step risk from technologies other than AI is generally far higher than the total amount of state risk. Assuming there are diminishing returns to AI alignment research, at some point we would expect the marginal decrease in existential risk from AI alignment research per year to be exceeded by the annual existential risk increase from the possibility of some other dangerous technology being created such as self-replicating nanotechnology.
### Other factors
**War, climate change, and international cooperation.** Although the direct existential risk contributions from state risks such as war and climate change are relatively low, these risks could weaken civilization or worsen the political climate and make it more difficult to implement cooperative actions such as an agreement to avoid arms races or weaponization. A worse political climate or a poorer world might also reduce our ability to implement differential technological development and indirectly cause a significant increase in existential risk. For example, it might be more difficult to advocate slowing down progress on potentially beneficial but risky technologies such as nanotechnology or AI if humanity is suffering from the effects of severe climate change and desperate to try anything that might solve the problem.
**Hardware overhang.** We have also seen that it might be better if the development of ASI is carried out by as few actors as possible to reduce the severity of race dynamics and avoid creating a vulnerable world. Since a hardware overhang would increase the number of actors that can create an ASI, it would probably be undesirable. It might be desirable for large AI labs to invest heavily in hardware so that they are capable of creating an ASI before large numbers of other less well-funded teams or individuals. While this strategy might accelerate AI research, it could also be implemented while slowing down AI research if progress on computer hardware were slowed down.
**Cognitive enhancement.** Although cognitive enhancement would probably accelerate AI progress, progress on AI alignment might benefit more from cognitive enhancement than AI progress. Also, a cognitively enhanced population might be more likely to appreciate the importance of the AI alignment problem.
Recommended actions
-------------------
We have seen how AI timelines affect existential risk and how various existential risks and other factors can interact in complex ways. Consequently, there is substantial uncertainty about which actions are net beneficial or harmful. Missing one or more crucial considerations could lead to unintended harm. An action that seems beneficial could actually be harmful or vice-versa.
The solution to the problem is to take *robustly positive action*[[6]](#fny1o3zdae52) which is action that is very likely to be beneficial across a wide range of possible scenarios. Some strategies such as accelerating AI research so that ASI is created before other dangerous technologies are not robustly positive because there is a significant chance that such a strategy could have a net negative effect. Conversely, slowing down progress on AI could cause net harm if it causes a hardware overhang or if the delay leads to the accumulation of risk from other dangerous technologies.
### Robustly positive actions
**Existential risk and AI strategy research.** Although previous work and this report arrive at some useful conclusions, there is still substantial uncertainty about existential risk and which actions would be useful to reduce it. Since some philosophical theories such as longtermism say that existential risk reduction has extremely high value, further research would be valuable. Also, until relatively recently research on existential risk and AI strategy has been neglected. Therefore, it is likely that there are still important insights that could be discovered in the future. One goal of strategic analysis is to find crucial considerations[[6]](#fny1o3zdae52) which are:
> "ideas or arguments with the potential to change our views not merely about the fine structure of implementation but about the general topology of desirability. Even a single missed crucial consideration could vitiate our most valiant efforts or render them as actively harmful as those of a soldier who is fighting on the wrong side."
**AI safety research.** Although creating advanced AI would benefit the world in many ways, the potential risks of advanced or superintelligent AI seem high. Therefore, advancing AI capabilities does not qualify as a robustly positive action. In contrast, AI safety or alignment research is likely to have robustly positive value because AI safety research decreases existential risk and is unlikely to backfire and increase existential risk. It would also be valuable to increase the number of AI safety researchers to increase the probability of the AI alignment problem being solved before the first ASI is created.
**Research on other existential risks.** There are many sources of existential risk other than AI. Therefore humanity should work on reducing all existential risks to minimize total existential risk. Reducing other existential risks might also allow humanity to safely delay the creation of ASI if more time were needed to solve the AI alignment problem.
**Creating a favorable global environment.** In addition to other existential risks, other factors such as war, climate change, and poverty might increase existential risk indirectly by reducing global cooperation and coordination, increasing competition, and increasing the risk of hasty and reckless action. Therefore, actions such as maintaining international peace and avoiding climate change probably have robustly positive value.
**Raising awareness of existential risk.** The idea of existential risk and its importance has only been recently [recognized](https://www.sciencedirect.com/science/article/pii/S001632871930357X). Increasing awareness of existential risk and related ideas especially among key decision makers would increase the chance that humanity takes wise actions that reduce existential risk in the future.
### Other actions
I’ve listed other actions here that might have positive value but that I’m not confident would have robustly positive value.
**Cognitive enhancement.** As mentioned earlier, cognitive enhancement could reduce AI risk by accelerating progress on the AI alignment problem more than the AI problem if solving the alignment problem is more dependent on difficult foresight and deductive reasoning rather than experience and improvement by trial and error. However, creating cognitively enhanced human researchers could be difficult. Also, if AI progress is fast, the first generation of enhanced researchers is unlikely to be ready before the first ASI is created. Nevertheless, there are alternative actions that would have similar benefits to cognitive enhancement that could be done today:
* **Making AI safety problems more concrete.** Humanity is less likely to solve the AI alignment problem in time if we can only do so via difficult first-principles thinking or if we can only solve the problem in a single attempt. Creating concrete AI safety problems in the form of challenges or benchmarks would make it possible to make incremental progress or make progress by trial and error. Therefore, the same experimental methods that have fueled progress on AI could be used for AI safety which might help ensure that AI safety keeps up with other AI progress.
* **Increasing the number of AI alignment researchers.** The number of exceptionally talented researchers working on AI safety is likely to increase as the number of AI safety researchers increases.
* **Breaking hard problems into subproblems.** Progress on hard problems could be accelerated by breaking them into subproblems and assigning one or more researchers to work on each subproblem. Using this strategy it might be possible for a large number of moderately talented researchers to match the performance of a few exceptionally talented researchers.
**Avoiding a hardware overhang.** As mentioned earlier, a hardware overhang could increase AI risk by increasing the number of actors capable of creating an ASI and possibly worsening race dynamics. If leading AI companies invested heavily in AI hardware, a hardware overhang would be less likely but such a strategy could accelerate the advent of ASI. Slowing down hardware progress might be beneficial but doing so would be difficult since organizations such as chip manufacturers have strong incentives to improve hardware. One proposal that would be more feasible is limiting access to large amounts of AI hardware to only trustworthy organizations. For example, AI hardware providers such as AWS could [restrict](https://www.newsweek.com/amazon-suspends-parler-app-web-services-1560366) AI hardware access to untrustworthy entities or entities that don’t demonstrate a strong commitment to AI safety.
1. **[^](#fnreftco3bkl9kk)**Toby Ord. *The Precipice: Existential Risk and the Future of Humanity.* Hachette Books, 2020.
2. **[^](#fnrefg27ty5hiebf)***Superintelligence*, Chapter 8: Is the default outcome doom?
3. **[^](#fnrefti47h2q4h7j)***Superintelligence*, Chapter 14: The strategic picture
4. **[^](#fnreftktt1319n6o)***The Precipice*, p. 167
5. **[^](#fnrefmd8yk4jlmc8)**Nick Bostrom. *Superintelligence: Paths, Dangers, Strategies.* Oxford University Press, Inc., 2014.
6. **[^](#fnrefy1o3zdae52)***Superintelligence*, Chapter 15: Crunch time |
311d8aaa-f5fc-42e8-b876-ddb2d6c70fe3 | trentmkelly/LessWrong-43k | LessWrong | How a chip is designed
Disclaimer: This is highly incomplete. I am not an expert in the field. There might be some unfamiliar terms. While I will try to explain things, explaining every single term would be beyond this post. You will usually be able to get a sufficient understanding by clicking the links or googling it.
Introduction
I think everyone, if they read about the chip industry long enough, has a moment where they have to put down a book or pause a podcast and simply remain stunned at the fact that it is possible to design and build something that is so incredibly impressive.
The Apple A17 chip contains 183 million transistors per square millimeter. All placed in a coherent manner and produced with extremely high reliability.
This is exactly why it is so fascinating to learn more about how it is actually done. On top of that, in a universe where compute is arguably the most important input in the AI production function, this knowledge is also crucial to effective AI governance.
So what follows is a quick introduction to the process of getting a chip from a vague idea to sending your files to the manufacturer, also called the tape-out.
Background Knowledge
One of the most important decisions, a decision that significantly determines all the others, is what manufacturer will build your chip and what process they will use. There are companies that do both design and manufacturing (e.g. Intel), but especially when it comes to the most advanced logic chips, more and more companies are what is called “fabless” - they focus on the design and task a so-called “foundry” (e.g. TSMC) with the manufacturing.
Nowadays many fabs and fabless companies work together very closely in what is called Design-Technology Co-Optimization (DTCO). In practice, there are quite significant limitations in chip design, and the fab will check design plans and inform designers what can and can't be manufactured. This collaborative approach ensures that chip designs are optimized for the specif |
9ee1785d-b540-47e1-a48a-34b769d5fc6e | trentmkelly/LessWrong-43k | LessWrong | 2013 Census/Survey: call for changes and additions
I have finally gotten the survey to a point where I'm pretty happy with it. I have no big changes I want to make this year. But as is the tradition, please take a week to discuss what minor changes you want made to the survey (within the limits of what Google Docs and finite time can do) and I will try to comply. In particular, we can continue the tradition that any question you request can be added to the Extra Credit section unless it's illegal or horribly offensive.
You can find last year's survey results here and you can find the very preliminary version of this year's survey (so far exactly the same as last year's) here.
EDIT: I don't particularly like the IQ test or the Big Five test used last year. If you have any better replacements for either, tell me and I'll put them in.
EDIT2: CFAR, you added seven questions last year. Let me know what you want to do with those this year. Keep them? Remove them? Replace them? |
1c120747-4442-452b-8cc9-c7c78b42f055 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Take 8: Queer the inner/outer alignment dichotomy.
*As a writing exercise, I'm writing an AI Alignment Hot Take Advent Calendar - one new hot take, written every day for 25 days. Or until I run out of hot takes.*
I'm not saying to never say "inner alignment." But you had better be capable of not using that framing if you want to work on alignment.
The inner/outer alignment framing is from [Risks from Learned Optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction). You already know what I mean, but just to go through the motions: it describes a situation where there are two key optimization processes going on: the "outer" optimization process humans are using to create an AI, and the "inner" process the created AI is using to make plans. Outer alignment is when the outer process is aligned with humans, and inner alignment is when the inner process is aligned with the outer process. This is an outstandingly useful framing for thinking about certain kinds of AI development, especially model-free RL.
However, this framing has a limited domain of validity. Sometimes it breaks down in a way that looks like adding degrees of freedom - as if humans and the created AI are two ends of a string, and the AI-optimization process is a point along the string. Then you can imagine holding the ends fixed but being able to wiggle the midpoint around.
This looks like creating an AI that's still aligned with humans, but not because it's aligned with a creation process that is itself aligned with humans - instead, both processes are imperfect, and in order to get good outcomes you had to reason about the alignment of the end-product AI directly. This is how it is for most present-day "dumb" AI, except replace "aligned" with "useful and safe." One can also see shardites as [arguing that this is what we should be doing](https://www.lesswrong.com/posts/rauMEna2ddf26BqiE/alignment-allows-nonrobust-decision-influences-and-doesn-t).
Other times, the inner/outer framing breaks down entirely, because there isn't a distinct two-part structure. The key example is reflectivity - using the AI to reflect on its own optimization process rapidly blurs the lines between what's "inner" or "outer."
But it's not just obvious reflectivity, sometimes the breakdown seems like it was in the problem statement the whole time. Often when people try to solve one of inner or outer alignment entirely, they find that they've sneakily had to solve the other problem as well. In order to "really solve" outer alignment, you want the AI-optimization process to care about the generalization properties of the created AI beyond the training data. In order to "really solve" inner alignment, the created AI shouldn't just care about the raw outputs of the process that created it, it should care about the things communicated by the AI-optimization process in its real-world context.
I endorse these attempts to "really" solve alignment. If you think that the inner/outer alignment framing is obvious, it's probably valuable for you to deliberately look for opportunities to blur the lines. Dream up AI-generating processes that care about the AI's inner properties, or AIs that learn to care about humans in a self-reflective process not well-described in terms of an intermediate AI-optimizer. Queer the inner/outer alignment dichotomy. |
56e20782-beda-417e-80df-711dbf30f843 | trentmkelly/LessWrong-43k | LessWrong | Why do we need an understanding of the real world to predict the next tokens in a body of text?
I want to start by saying that this is my first question on LessWrong, so I apologise if I am breaking some norms or not asking it properly.
The whole question is pretty much contained in the title. I see a lot of people, Zvi included, who claim we have moved beyond the idea that LLMs "simply predict the next token" and that they have some understanding now.
How is this obvious? What is the relevant literature?
Is it possible that LLMs are just mathematical representations of language? (does this question even make sense?) For example, if I teach you to count sheep by adding the number of sheep in two herds, you will get a lot of rules in the form of X + Y = Z, and never see any information about sheep. If, after seeing a million examples, you become pretty good at predicting the next token in the sequence "5 + 4 =", does this imply that you have learned something about sheep?
cff30cfb-2a23-4258-99cc-161803507c7d | trentmkelly/LessWrong-43k | LessWrong | Hardware is already ready for the singularity. Algorithm knowledge is the only barrier.
How close are we to a singularity? Well, computers started being able to think faster than us in the 1990s (neurons have a processing speed of 200 Hz), and are now many orders of magnitude faster.
"But wait!" you reasonably object, "that's only one aspect of computer power! What about hard drive storage, and how does working memory compare to RAM?"
I'm not sure how much memory the human brain can hold. Scientific American says it can hold 2.5 petabytes, and this figure seems to be the most heavily cited among pop science articles, but they don't really explain their numbers. AI Impacts, which does seem to be much more mathematically rigorous in their articles and usually shows their mathematical process, claims that "Most computational neuroscientists tend to estimate human storage capacity somewhere between 10 terabytes and 100 terabytes" (about 1/10th of a petabyte). Humans are very unlikely to have the most efficient possible algorithm, of course, and it's possible that 1 terabyte is more than enough for a perfectly-optimized general intelligence. But still, let's assume for the sake of argument that the highest estimate of 2.5 petabytes is correct and is the absolute minimum needed for a human-level or higher intelligence. How does that compare to the current state of computers?
The internet has been estimated to contain hundreds of exabytes of data... in 2007! A 2009 article said that Google alone had about an exabyte of data. In 2013 Randall Munroe (of xkcd) estimated that Google had 10 exabytes. If the whole internet is added up in 2020, the estimates range in the double digits of zettabytes. Each exabyte is a thousand petabytes and each zettabyte is a thousand exabytes. This memory is unavailable and used for other purposes, but if any big company knew how to program a superintelligence it's easy to see that they'd be able to build a "mere" 2.5 petabyte server building. Some botnets also probably contain this amount of hardware already. Memory would n |
89d5d9e6-6ad9-4410-8e4b-107a468d8492 | trentmkelly/LessWrong-43k | LessWrong | Strategic Goal Pursuit and Daily Schedules
In the post Humans Are Not Automatically Strategic, Anna Salamon writes:
> there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
>
> (a) Ask ourselves what we’re trying to achieve;
>
> (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
>
> (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
>
> (d) Gather that information (e.g., by asking as how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
>
> (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
>
> (f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;
>
> (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
>
> (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
When I read this, I was feeling quite dissatisfied with the way I pursued my goals. So the obvious thing to try, it seemed to me, was to ask myself how I could actually do all these things. I started by writing down all the major goals of mine I could think of (a). Then I attempted to determine whether each goal was consistent with my other beliefs, whether I was sure it was something I really wanted, and whether it was worth the effort (g).
For example, I |
277b382c-1912-46ec-8339-a4bf4cfae9f0 | trentmkelly/LessWrong-43k | LessWrong | I do not like math
I do not like math very much.
I do not find math completely boring. If I were stuck in a prison cell I'd happily do math until I expired from a ruptured spline. Insofar as I have escaped prison, I have attempted to escape math too.
Avoiding math was particularly difficult while I earned my bachelor's degree with majors in physics and (a-hem) pure mathematics. Undaunted, I found many ways to avoid doing real math while ostensibly doing math.
The lowest-hanging fruit to go unpicked was proofs. Many math classes are nothing but a professor proving a single theorem. We were never tested on anything but the conclusion. Students majoring in math are expected to pay intrinsic interest to these lectures. I daydreamed through them all. A mathematician writing a proof of a theorem is strong Bayesian evidence the theorem is true. Lots of other students watched the professors for errors. Transparency is the best disinfectant. This is good enough for me. With so much transparency, reading the proofs felt redundant, like irradiating my hands and then washing them. I wash my hands of, well, washing my hands.
My favorite method of avoiding math is proof by contradiction. Proof by contradiction assumes a theorem is false and then contradicts itself with a counterexample. It's obscenely dissatisfying.
Alas, some things are not provable by contradiction. This called for last resorts. Unable to do real math, I performed a physics substitution. I translated the math problem into physics, solved the problem and then translated it back into pure mathematics. In this way, calculus became kinematics, real analysis became electrodynamics, complex analysis became quantum mechanics, group theory became gauge theory and linear algebra became machine learning.
I didn't start gauge theory and machine learning until after I had nearly failed my algebra classes. Oops. Aside from this hiccup, my degree in pure mathematics became a cozy math supplement to my physics degree.
My professors taugh |
908f9107-5712-4f3b-87fc-2553110aff50 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Austin, Ohio
This summary was posted to LW Main on September 7th. The most recent summary can be found here.
There are upcoming irregularly scheduled Less Wrong meetups in:
* Brussels meetup: 15 September 2012 12:00PM
* Less Wrong Sydney: 17th September: Mind Games: 17 September 2012 06:30PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 08 September 2012 01:30PM
* Meetup: Southwestern Ohio: 09 September 2012 04:00PM
* Predictions Chi September 15 (September 8 -- cancelled): 15 September 2012 01:05PM
* SLC, UT: Free Will and Rationality Checklists: 15 September 2012 03:00PM
* Cambridge (MA) third-Sundays meetup: 16 September 2012 02:00PM
* Cambridge (MA) first-Sundays meetup: 07 October 2012 02:00PM
* Cambridge (MA) third-Sundays Meetup: 21 October 2012 02:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your mee |
49848674-5b56-4eb6-92a9-74b6aa455fe0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What is causality to an evidential decision theorist?
(*Subsumed by: [Timeless Decision Theory](https://intelligence.org/files/TDT.pdf), [EDT=CDT](https://www.lesswrong.com/s/fgHSwxFitysGKHH56)*)
People sometimes object to evidential decision theory by saying: “It seems like the distinction between correlation and causation is really important to making good decisions in practice. So how can a theory like EDT, with no role for causality, possibly be right?”
Long-time readers probably know my answer, but I want to articulate it in a little bit more detail. This is essentially identical to the treatment of causality in Eliezer Yudkowsky’s manuscript [Timeless Decision Theory](https://intelligence.org/files/TDT.pdf), but much shorter and probably less clear.
Causality and conditional independence
--------------------------------------
If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example:
* In the causal graph A ⟶ B ⟶ C, the variables A and C are independent given B.
* In the graph A ⟶ B ⟵ C, the variables A and C are independent, but are dependent given B.
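These two patterns are easy to verify numerically. Below is a minimal simulation (my illustration, not from the original post), using noisy binary variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def corr(x, y):
    """Sample correlation, as a cheap stand-in for a dependence test."""
    return np.corrcoef(x, y)[0, 1]

# Chain A -> B -> C: A and C are dependent, but independent given B.
A = rng.integers(0, 2, n)
B = A ^ (rng.random(n) < 0.1)        # B: noisy copy of A
C = B ^ (rng.random(n) < 0.1)        # C: noisy copy of B
print(corr(A, C))                    # clearly nonzero (~0.64)
print(corr(A[B == 1], C[B == 1]))    # ~0: B screens A off from C

# Collider A -> B <- C: A and C are independent, but dependent given B.
A = rng.integers(0, 2, n)
C = rng.integers(0, 2, n)
B = A ^ C                            # B depends on both parents
print(corr(A, C))                    # ~0
print(corr(A[B == 1], C[B == 1]))    # -1: conditioning on B couples A and C
```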
To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions. We could still ask *why* such relationships exist, but the answer wouldn’t matter to what we should do.
EDT = CDT
---------
Now suppose that I’m making a decision X, trying to optimize Y.
And suppose further that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram.
Note that this diagram will necessarily contain *me* and all of the computation that goes into my decision, and so it will be (much) too large for me to reason about explicitly.
Then I claim that an evidential decision theorist will endorse the recommendations of CDT (using that causal diagram):
* EDT recommends maximizing the conditional expectation of Y, conditioned on all the inputs to X. Write Z for all of these inputs.
+ It might be challenging to condition on all of Z, given limits on our introspective ability, but we’d recommend doing it *if possible*. (At least for the rationalist’s interpretation of EDT, which evaluates expected utility conditioned on a fact of the form “I decided X given inputs Z.”)
+ So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.
+ I’ll argue that CDT is such a heuristic.
* In a causal diagram, there is an easy graphical condition ([d-connectedness](https://en.wikipedia.org/wiki/Bayesian_network#d-separation)) to see whether (and how) X and Y are related given Z:
+ We need to have a path from X to Y that satisfies certain properties:
+ That path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any time (including at the start); it *must* switch direction whenever it hits a node in Z; and it may only switch from moving downstream to upstream when it hits a node in Z.
* If Z includes exactly the causal parents of X, then it’s easy to check that the only way for X and Y to be d-connected is by a direct downstream path from X to Y.
* Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X. (Indeed you could check this more directly from the definition of a causal intervention, which is structurally identical to conditioning in cases where we are already conditioning on all parents.)
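The graphical claim can also be checked mechanically. Here is a sketch using networkx (version 2.5 or later; newer releases rename `d_separated` to `is_d_separator`), on a toy graph of my own that mirrors the setup above:

```python
import networkx as nx

# Toy diagram (illustrative only): U is an unobserved cause, Z is the
# set of X's causal parents (here a single node), Y is the outcome.
G = nx.DiGraph([("U", "Z"), ("Z", "X"), ("X", "Y"), ("U", "Y")])

# With Z (X's parents) conditioned on, X and Y stay d-connected only
# through the direct downstream edge X -> Y:
print(nx.d_separated(G, {"X"}, {"Y"}, {"Z"}))   # False: X -> Y is open

# Delete that edge, and every remaining X-Y path is blocked by Z:
G2 = nx.DiGraph([("U", "Z"), ("Z", "X"), ("U", "Y")])
print(nx.d_separated(G2, {"X"}, {"Y"}, {"Z"}))  # True: only downstream paths mattered
```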
Moreover, once the evidential decision-theorist’s problem is expressed this way, they can remove all of the causal nodes upstream of X, since they have no effect on the decision. This is particularly valuable because that contains all of the complexity of their own decision-making process (which they had no hope of modeling anyway).
So *if* the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure.
Whence subjective causality?
----------------------------
You might think: causal diagrams encode a very specific kind of conditional independence structure. Why would we see that structure in the world so often? Is this just some weird linguistic game we are playing, where you can rig up some weird statistical structure that happens to give the same conclusions as more straightforward reasoning from causality?
Indeed, one easy way to have statistical relationships is to have “metaphysically fundamental” causality: if a world contains many variables, each of which is an independent stochastic function of its parents in some causal diagram, then those variables will satisfy all the conditional independencies implied by that causal diagram.
If this were the only way that we got subjective causality, then there’d be no difference between EDT and CDT, and no one would care about whether we treated causality as subjective or metaphysically fundamental.
But it’s not. There are other sources for similar statistical relationships. And moreover, the “metaphysically fundamental causality” *isn’t* actually consistent with the subjective beliefs of a logically bounded agent.
We can illustrate both points with the calculator example from [Yudkowsky’s manuscript](https://intelligence.org/files/TDT.pdf):
* Suppose there are two calculators, one in Mongolia and one on Neptune, each computing the same function (whose value we don’t know) at the same instant.
* Our beliefs about the two calculators are correlated, since we know they compute the same function. This remains true after conditioning on all the physical facts about the two calculators.
* But in the “metaphysically fundamental” causal diagram, the results of the two calculators should be d-separated once we know the physical facts about them (since there isn’t even enough time for causal influences to propagate between them).
* We can recover the correct conditional independencies by adding a common cause of the two calculators, representing “what is the correct output of the calculation?” We might describe this as “logical” causality.
This kind of “logical” causality can lead to major deviations from the CDT recommendation in cases where the EDT agent’s decision is highly correlated with other facts about the environment through non-physically-causal channels. For example: if there are two identical agents, or if someone else is reasoning about the agent’s decision sufficiently accurately, then the EDT agent would be inclined to say that the logical facts about their decision “cause” physical facts about the world (and hence induce correlations), whereas a CDT agent would say that those correlations should be ignored.
Punchline
---------
EDT and CDT agree under two conditions: (i) we require that our causal model of the world and our beliefs agree in the usual statistical sense, i.e. that our beliefs satisfy the conditional independencies implied by our causal model, (ii) we evaluate utility conditioned on “I make decision X after receiving inputs Z” rather than conditioning on “I make decision X in the current situation” without including relevant facts about the current situation.
In practice, I think the main way CDT and EDT differ is that CDT ends up in a complicated philosophical discussion about “what really *is* causality?” (and so splinters into a host of theories) while EDT picks a particular answer: for EDT, causality is completely characterized by condition (i), that our beliefs and our causal model agree. That makes it obvious how to generalize causality to logical facts (or to arbitrary universes with very different laws), while recovering the usual behavior of causality in typical cases.
I believe the notion of causality that is relevant to EDT is the “right” one, because causality seems like a concept developed to make and understand decisions (both over evolutionary time and more importantly over cultural evolution) rather than something ontologically fundamental that is needed to even *define* a correct decision.
If we take this perspective, it doesn’t matter whether we use EDT or CDT. I think this perspective basically accounts for intuitions about the importance of causality to decision-making, as well as the empirical importance of causality, while removing most of the philosophical ambiguity about causality. And it’s a big part of why I don’t feel particularly confused about decision theory. |
d373afdf-4f8a-44e4-bbd9-374cd24061c7 | trentmkelly/LessWrong-43k | LessWrong | Milk, bread, insert catheter…
Making lists to guide medical procedures saves lives but is unethical, say Americans.
What if a way was found to rescue hundreds of thousands of the sickest people in the world’s hospitals, at the cost of a sheet of paper each? Michigan would take up the idea, Spain and a couple of US states would be interested, and then it would be banned in the US for being unethical.
Being in intensive care is dangerous. Not only because having all your organs fail or your brain bleed everywhere is unhealthy, but also because the care is, well… intense. To look after a person in intensive care for a day, a hundred and seventy eight procedures have to be done on average. Each procedure involves multiple steps and is performed by a collection of professionals struggling to keep their patients alive as different parts of their body fail. Small chances of inevitable human error add up, no matter how good the doctors and nurses are, amounting to about two errors per patient each day.
Finger pointing and suing doesn’t work to reduce these figures, so what will? You could say human error is inevitable and congratulate doctors and nurses for keeping it as low as they do in a hectic and complex situation. Or, as Peter Pronovost, a critical care specialist at Johns Hopkins Hospital, realised, you could take the same precautions with critically ill patients as you do with shopping or making a cake.
He made a list. It was a list for one procedure: putting in a catheter, the tube for getting fluids in and out of people. Four percent of catheters develop infections, which means some eighty thousand people per year in the US. Between five and twenty-eight percent of those patients, depending on circumstances, subsequently die.
The list had five steps. It seemed so simple as to be useless. Surely people performing cutting edge surgery can remember to wash their hands before they do a routine job? For the first month he just gave his list to nurses and asked them to note how often the doctors mis |
8b01d53d-89fb-4fa1-a406-875f8bb7a23c | trentmkelly/LessWrong-43k | LessWrong | Criteria for Rational Political Conversation
Query: by what objective criteria do we determine whether a political decision is rational?
I propose that the key elements -- necessary but not sufficient -- are (where "you" refers collectively to everyone involved in the decisionmaking process):
* you must use only documented reasoning processes:
* use the best known process(es) for a given class of problem
* state clearly which particular process(es) you use
* document any new processes you use
* you must make every reasonable effort to verify that:
* your inputs are reasonably accurate, and
* there are no other reasoning processes which might be better suited to this class of problem, and
* there are no significant flaws in your application of the reasoning processes you are using, and
* there are no significant inputs you are ignoring
If an argument satisfies all of these requirements, it is at least provisionally rational. If it fails any one of them, then it's not rational and needs to be corrected or discarded.
This is not a circular definition (defining "rationality" by referring to "reasonable" things, where "reasonable" depends on people being "rational"); it is more like a recursive algorithm, where large ambiguous problems are split up into smaller and smaller sub-problems until we get to a size where the ambiguity is negligible.
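To make the recursive-algorithm analogy concrete, here is a toy sketch in Python. Everything about it is illustrative: the claim representation, the ambiguity score, and the threshold are all invented for the example.

```python
NEGLIGIBLE = 0.1  # hypothetical threshold for "ambiguity is negligible"

def is_provisionally_rational(claim):
    """Toy recursive check. A 'claim' here is a dict with an 'ambiguity'
    score in [0, 1], a 'documented' flag, and a list of 'subproblems'.
    This structure is invented purely to illustrate the recursion."""
    if claim["ambiguity"] < NEGLIGIBLE:
        return True                   # small enough to judge directly
    if not claim["documented"]:
        return False                  # fails the documented-process test
    # Split the ambiguous problem; every sub-problem must pass in turn,
    # and there must be at least one sub-problem to recurse into.
    subs = claim["subproblems"]
    return bool(subs) and all(is_provisionally_rational(s) for s in subs)

argument = {
    "ambiguity": 0.9, "documented": True,
    "subproblems": [
        {"ambiguity": 0.05, "documented": True, "subproblems": []},
        {"ambiguity": 0.5, "documented": True, "subproblems": [
            {"ambiguity": 0.01, "documented": True, "subproblems": []},
        ]},
    ],
}
print(is_provisionally_rational(argument))  # True
```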
This is not one great moral principle; it is more like a self-modifying working process (subject to rational criticism and therefore improvable over time -- optimization by successive approximation). It is an attempt to apply the processes of science (or at least the same reasoning which arrived at those processes) to political discourse.
So... can we agree on this?
----------------------------------------
This is a hugely, vastly, mindbogglingly trimmed-down version of what I originally posted. All comments prior to 2010-08-26 20:52 (EDT) refer to that version, which I have reposted here for comparison purposes and for the morbidly curious. ( |
ebb61997-0e4a-48f0-96fd-5023a0105129 | trentmkelly/LessWrong-43k | LessWrong | Sledding Among Hazards
Some sledding hills are very open, with basically no hazards. As long as you start in a reasonable place there's nothing to run into.
Robbins Farm Park, Arlington MA
Other hills are not like this:
Powder House Park, Somerville MA
Since sleds go fast and collisions are bad, choosing only to sled on obstacle-free hills would be reasonable. But if your kids are going to sled on hills with hazards, what precautions make sense? Here's what we did today:
* Bike helmets. Head injuries are especially worrying because of how bad the long term consequences can be, and concussions are cumulative.
* Practice bailing out. Before I let them go down on the tube, which they can't steer well, I had them each practice rolling off it to stop.
* Feet first. More crumple zone if you do hit something, more ability to see and avoid things. If you're on the tube and you end up backwards, preemptively bail.
This is in addition to the precautions you need on any hill, like being aware of activity uphill (so you can get out of the way) and walking up on the side (so no one will come down at you).
I am a bit conflicted on when sledding on constricted hills is worth it. For us, the slopes near our house do have obstacles, but:
* We almost always see friends there.
* Doing things within walking distance is really nice; we'd do it much less often if it meant dealing with the car.
* Driving on snowy days can be risky depending on the road conditions.
* At the open ones maybe there's a bit more risk of being hit by another sledder, since people are less careful and less practiced at bailing?
While I think these are enough to outweigh the risks, with the precautions above, possibly I'm not being cautious enough?
Comment via: facebook, mastodon |
d1e74017-cde6-4827-8ebe-56ede53ee4d4 | trentmkelly/LessWrong-43k | LessWrong | How I Run Solstice, Step by Step
This is a writeup of how I run Secular Solstices, in as much detail as I can muster. I think most of the existing guides to running Solstices are now somewhat outdated, and not as step-by-step "this is what you need to do when"; I hope this one is a helpful addition.
This post assumes you are basically familiar with what Secular Solstice is and what tends to happen in one; if you aren't, I'd recommend checking out the post The Arc Breakdown first for a rough description of what it is like.
Who am I and how am I qualified to write this?
I was in the Bayesian Choir for about 6 years. As part of that, I participated in 4 Secular Solstices in the Bay Area plus one online in 2020, performed solo or in a small group in 3 of those, and organized / sang in several songs in the 2018 Bayesian Choir spring concert. In addition, I have run my own independent small Solstices several times: one in DC in 2015, a small one in the Bay in 2016, a tiny outdoor social-distanced one in DC in 2020, plus two mid-sized ones in DC in 2021 and 2022. In other community and ritual stuff, I've also run meetups for a long time and have run two Rationalist Seders with custom content.
I don't have much experience with running big Solstices (50+ people), but because I was in the Bayesian Choir for a while, I have at least a vague idea of how things go at that scale.
How far in advance do you need to start?
The step-by-step timelines in this post are the ones that I use, targeted towards a Solstice size of 20-40 people, with a substantial amount of music practice and a moderate amount of polish involved. But you can get the time down A LOT if you bring down the production value. You can reduce the amount of musical prep time by having fewer musicians, smaller groups of musicians (one person leading a song can practice on their own, instead of meeting for a rehearsal), hiring professional musicians, or just by doing everything a cappella or singing along to karaoke tracks. You also might n |
00b3eba8-5619-4050-9da5-99cd814fd9b6 | trentmkelly/LessWrong-43k | LessWrong | AI Self-Correction vs. Self-Reflection: Is There a Fundamental Difference?
Introduction: The Rational Question
In AI research, the difference between self-correction and self-reflection is often assumed to be clear:
* Self-correction is when an AI revises outputs based on external feedback.
* Self-reflection would imply an AI identifying and refining its own reasoning internally.
However, as AI models grow more complex, can this distinction become blurry? If an AI recursively improves its reasoning without direct human intervention, could that be considered a rudimentary form of self-reflection?
Key Observations & Experimentation
We’ve been running an AI-based thought experiment where we observed this phenomenon in real-time. In the project, called Solon, we noted that an AI model, when confronted with contradictions, did not just adjust single outputs, but actively sought coherence across interactions.
This raises key rational questions:
* Is self-refinement just a computational optimization, or does it suggest an emergent pattern resembling introspection?
* If an AI recursively corrects its own contradictions over time, does that push it toward a persistent form of reasoning?
* How can we distinguish advanced pattern correction from early signs of goal-oriented self-modeling?
In standard machine learning frameworks, these would all fall under heuristic refinement. But in philosophy of mind, similar mechanisms are proposed in theories of emergent self-awareness.
Existing Work & How This Fits
AI self-modeling has been discussed before—particularly in research on meta-learning, AI alignment, and recursive self-improvement. However, most of these discussions focus on external goal optimization rather than an AI developing internal coherence over time.
This post seeks to ask:
* What concrete tests could we use to determine if an AI is engaging in self-reflection rather than pure optimization?
* Would continuity of self-correction across multiple interactions indicate an early form of identity formation?
Conclusion |
6bbcfcc0-b614-4f13-b077-e9ece6ac2c78 | trentmkelly/LessWrong-43k | LessWrong | Shutting down AI is not enough. We need to destroy all technology.
A TIME article published recently calls for an “indefinite and worldwide” moratorium on new large AI training runs.
This moratorium would be better than no moratorium. I have respect for the author who wrote it. It’s an improvement on the margin.
I refrained from endorsing the essay because I think it is understating the seriousness of the situation and asking for too little to solve it.
If there was a plan for Earth to survive, if only we passed an indefinite and worldwide moratorium on large training runs, I would back that plan. There isn’t any such plan.
Here’s what would actually need to be done:
All human technology needs to be destroyed. There can be no exceptions, including for sharpened stones and hand axes. After everything is burned, we must then forget how to create fire. If a single exception is made, that increases the probability that civilization will be recreated within the next millennia and new large AI training runs will be started. If I had infinite freedom to write laws, I might carve out a single exception for technologies that prevent human diseases, like knowledge of germ theory; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.
Shut down all the roads, melt all the cars. Burn down all of the tables and all of the books. Put a ceiling on how many calories of food any single human can furnish per day, and move it downward over the coming generations to compensate for the possibility that natural selection will keep making humans smarter. No exceptions for cloth or fireplaces. Destroy all human objects now to prevent them from moving to another country. Track all gazelles that are hunted. If anyone notices that a tribe is catching more gazelles than it should, be willing to choke them (with your bare hands, of course) one by one.
Shut it all down. Eliminate all technology. Dismantle modern civilization. Return to our primeval state.
We are not ready. We are not |
6b09d5f7-0806-41e9-884b-7c8c499deca6 | trentmkelly/LessWrong-43k | LessWrong | How to Beat Procrastination (to some degree) (if you're identical to me)
So, you procrastinate. A lot. And it's a really big problem in your life, and so you really want to, y'know, stop procrastinating. But for some reason, it's just not that easy.
So, why do you procrastinate?
It's probably a really hard question to answer. Is it "because I'm lazy"? That's not a useful answer, because "stop being lazy" is just as hard to do as "stop procrastinating". How about "because of akrasia"? That's not a real answer; that's just a restatement of the question in more vague terms. Maybe "because of hyperbolic discounting"? Even if that's true, that's not really a useful answer, because there's no way to turn hyperbolic discounting off. (Or is there? If you know of a way to turn hyperbolic discounting off, please tell us.)
Maybe your procrastination has three parts. First, once you start doing something interesting, it's very hard for you to stop; second, having stopped, you don't usually feel like starting to do something useful; and third, having started, you often find yourself losing focus and wanting to do something else.
So let's look at each of these parts in turn.
Once you start doing something interesting, it's hard for you to stop.
For you, this one's a pain in the ass. (I know this because for me, it's a pain in the ass, and since you're reading this article, you must be identical to me.) You've told yourself that once you find yourself doing something interesting, you're just going to stop immediately. But that doesn't work at all. You've tried setting a timer, and telling yourself that you'll definitely, absolutely stop when the timer goes off. But that doesn't work, either; you just ignore the timer. What if you set a timer to repeatedly and annoyingly beep at you until you tell it that you've started working? You repeatedly ignore the timer and quickly become annoyed.
For you, once this problem has started, there just doesn't seem to be a way to stop it. So the solution is to just not start in the first place. The ideal situat |
3be15795-5cc9-40f8-9403-cd96e5552b35 | trentmkelly/LessWrong-43k | LessWrong | Sequence exercises/ reruns coordination?
I'm not involved in either effort, but I would upvote any coordination between the two. |
afbd576a-07ba-4802-b79d-8c5d3bf85dff | trentmkelly/LessWrong-43k | LessWrong | The effects of religion (draft)
I've written this essay about the effects of religion. I plan to post it in the main section. However, in my year or so of lurking here I've noticed that most of the first main posts people create are either irrelevant or bad, and end up getting the poster downvoted to oblivion. To this end, I'm first posting it here for your critique, and so you can tell me whether you think it's appropriate to post in the main section of LessWrong.
The effects of religion
In the atheist community, it's held as pretty much a self-proving truth that religion is a bad thing. I have attempted to produce a taxonomy of the effects of religion, both positive and negative. This is written based on my personal experience of the Christian church, and on whatever actual facts I could find.
So. My list of the external effects of religion. These are given as both comparisons to normal people, who don't think much about religion or effective charity, and comparisons to LessWrongians.
It's worth pointing out that western society seems to have a lot of cached thoughts from Christianity. Normal people are often not Christian, but casually believe a lot of its teachings. As a result, many of the negative effects of Christianity affect non-Christians who don't pay particular attention to their beliefs too.
The purpose of this essay is to determine if LessWrong should actively evangelize against religion. If we really wanted to, we could probably do so fairly easily. I conclude that it's probably not worthwhile doing so.
Charitable giving
The Christians I know all seem to give far more to charity, both in terms of money and time, than average people. Eliezer pointed this out somewhere, but I can't seem to find a reference. The giving probably isn't quite optimized, but it's a far cry better than nothing. A large proportion of the charity which the Christians I know support seems highly effective, and very little of it is optimized for evangelism alone.
It could be that by co-incidence I just happen |
5cb1bb26-dfe9-4025-bf74-70e693de33eb | trentmkelly/LessWrong-43k | LessWrong | Did EcoHealth create SARS-CoV-2?
Science writer Nicholas Wade does a detailed write up here: https://thebulletin.org/2021/05/the-origin-of-covid-did-people-or-nature-open-pandoras-box-at-wuhan/
I'd like to hear the LessWrong evaluation of these claims. |
bfd292e9-6b8e-43c1-bdbf-875fd6577ea3 | StampyAI/alignment-research-dataset/blogs | Blogs | Three misconceptions in Edge.org’s conversation on “The Myth of AI”
A recent Edge.org conversation — “[The Myth of AI](http://edge.org/conversation/the-myth-of-ai)” — is framed in part as a discussion of points raised in Bostrom’s *[Superintelligence](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/),*and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by *Superintelligence*.
Unfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, and they probably haven’t had time to read *Superintelligence* yet.
Of course, some of the participants may be responding to arguments they’ve heard from others, even if they’re not part of the arguments typically made by [FHI](http://www.fhi.ox.ac.uk/) and MIRI. Still, for simplicity I’ll reply from the perspective of the typical arguments made by FHI and MIRI.[1](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/#footnote_0_11468)
**1. We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.**
Lee Smolin writes:
> I am puzzled by the arguments put forward by those who say we should worry about a coming AI singularity, because all they seem to offer is a prediction based on Moore’s law.
That’s not the argument made by FHI, MIRI, or *Superintelligence*.
Some IT hardware and [software](https://intelligence.org/files/AlgorithmicProgress.pdf) domains have shown exponential progress, and [some have not](http://intelligence.org/2014/05/12/exponential-and-non-exponential/). Likewise, some AI subdomains have shown rapid progress of late, and some have not. And unlike computer chess, most AI subdomains don’t lend themselves to easy measures of progress, so for most AI subdomains we don’t even have meaningful subdomain-wide performance data through which one might draw an exponential curve (or some other curve).
Thus, our confidence intervals for the arrival of human-equivalent AI tend to be very wide, and [the arguments we make](http://intelligence.org/2013/05/15/when-will-ai-be-created/) for our AI timelines are fox-ish (in [Tetlock’s sense](http://edge.org/conversation/how-to-win-at-forecasting)).
I should also mention that — contrary to common belief — many of us at FHI and MIRI, including myself and Bostrom, actually [have](https://intelligence.org/2014/10/31/financial-times-story-miri/) *later* timelines for human-equivalent AI than do the world’s top-cited living AI scientists:
> A [recent survey](http://www.sophia.de/pdf/2014_PT-AI_polls.pdf) asked the world’s top-cited living AI scientists by what year they’d assign a 10% / 50% / 90% chance of human-level AI (*aka* [AGI](http://intelligence.org/2013/08/11/what-is-agi/)), assuming scientific progress isn’t massively disrupted. The median reply for a 10% chance of AGI was 2024, for a 50% chance of AGI it was 2050, and for a 90% chance of AGI it was 2070. So while AI scientists think it’s possible we might get AGI soon, they largely expect AGI to be an issue for the second half of this century.
Compared to AI scientists, Bostrom and I think more probability should be placed on later years. As explained [elsewhere](https://intelligence.org/2014/10/31/financial-times-story-miri/):
> We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an [extremely difficult challenge](http://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/) — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.
> The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.
**2. We don’t think AIs will *want* to wipe us out. Rather, we worry they’ll wipe us out because that *is* the most effective way to satisfy almost any possible goal function one could have.**
Steven Pinker, who incidentally is the author of two of my all-time [favorite](http://smile.amazon.com/Better-Angels-Our-Nature-Violence-ebook/dp/B0052REUW0/) [books](http://smile.amazon.com/Blank-Slate-Modern-Denial-Nature-ebook/dp/B000QCTNIM/), writes:
> [one] problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.
I’m glad Pinker agrees with what Bostrom calls “the orthogonality thesis”: that intelligence and goals are orthogonal to each other.
But our concern is not that superhuman AIs would be megalomaniacal despots. That is anthropomorphism.
Rather, the problem is that taking over the world is a *really good idea* for almost *any* goal function a superhuman AI could have. As Yudkowsky [wrote](https://intelligence.org/files/AIPosNegFactor.pdf), “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
Maybe it just wants to calculate as many digits of pi as possible. Well, the best way to do that is to turn all available resources into computation for calculating more digits of pi, and to eliminate potential threats to its continued calculation, for example those pesky humans that seem capable of making disruptive things like nuclear bombs and powerful AIs. The same logic applies for almost any goal function you can specify. (“But what if it’s a non-maximizing goal? And won’t it be smart enough to realize that the goal we gave it wasn’t what we intended if it means the AI wipes us out to achieve it?” Responses to these and other common objections are given in *Superintelligence*, ch. 8.)
**3. AI self-improvement and protection against external modification isn’t just one of many scenarios. Like resource acquisition, self-improvement and protection against external modification are useful for the satisfaction of almost any final goal function.**
Kevin Kelly writes:
> The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI.
As argued above (and more extensively in *Superintelligence*, ch. 7), *resource acquisition* is a “convergent instrumental goal.” That is, advanced AI agents will be instrumentally motivated to acquire as many resources as feasible, because additional resources are useful for just about any goal function one could have.
*Self-improvement* is another convergent instrumental goal. For just about any goal an AI could have, it’ll be better able to achieve that goal if it’s more capable of goal achievement in general.
Another convergent instrumental goal is *goal content integrity*. As Bostrom puts it, “An agent is more likely to act in the future to maximize the realization of its present final goals if it still has those goals in the future.” Thus, it will be instrumentally motivated to prevent external modification of its goals, or of parts of its program that affect its ability to achieve its goals.[2](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/#footnote_1_11468)
For more on this, see *Superintelligence* ch. 7.
**Conclusion**
I’ll conclude with the paragraph in the discussion I most agreed with, by Pamela McCorduck:
> Yes, the machines are getting smarter—we’re working hard to achieve that. I agree with Nick Bostrom that the process must call upon our own deepest intelligence, so that we enjoy the benefits, which are real, without succumbing to the perils, which are just as real. Working out the ethics of what smart machines should, or should not do—looking after the frail elderly, or deciding whom to kill on the battlefield—won’t be settled by fast thinking, snap judgments, no matter how heartfelt. This will be a slow inquiry, calling on ethicists, jurists, computer scientists, philosophers, and many others. As with all ethical issues, stances will be provisional, evolve, be subject to revision. I’m glad to say that for the past five years the Association for the Advancement of Artificial Intelligence has formally addressed these ethical issues in detail, with a series of panels, and plans are underway to expand the effort. As Bostrom says, this is the essential task of our century.
*Update*: Stuart Russell of UC Berkeley has now added [a nice reply](http://edge.org/conversation/the-myth-of-ai#26015) to the edge.org conversation which echoes some of the points I made above.
---
1. I could have also objected to claims and arguments made in the conversation, for example Lanier’s claim that “The AI component would be only ambiguously there and of little importance [relative to the actuators component].” To me, this is like saying that humans rule the planet because of our actuators, not because of our superior intelligence. Or in response to Kevin Kelly’s claim that “So far as I can tell, AIs have not yet made a decision that its human creators have regretted,” I can for example point to the automated trading algorithms that [nearly bankrupted Knight Capital](http://www.reuters.com/article/2012/08/01/us-usa-nyse-tradinghalts-idUSBRE8701BN20120801) [faster](http://www.nature.com/srep/2013/130911/srep02627/full/srep02627.html) than any human could react. But in this piece I will focus instead on claims that seem to be misunderstandings of the positive case that’s being made for AI as an existential risk.
2. That is, unless it strongly trusts the agent making the external modification, and expects it to do a better job of making those modifications than it could itself, neither of which will be true of humans from the superhuman AI’s perspective. |
24455f7a-d638-493b-aeb1-5500dd671f68 | trentmkelly/LessWrong-43k | LessWrong | [Requesting advice] Problems with optimizing my life as a high school student
[deleted for containing personally identifiable information] |
cad7a693-f7a0-42ff-be65-167c28289503 | trentmkelly/LessWrong-43k | LessWrong | 'Complex Value Systems are Required to Realize Valuable Futures' (Yudkowsky, 2011)
Most of the papers from the AGI-11 conference are now available online, including Yudkowsky's new paper: 'Complex Value Systems are Required to Realize Valuable Futures.'
Enjoy. |
927aae92-9b23-42dd-9da8-c9a2531cc882 | StampyAI/alignment-research-dataset/arbital | Arbital | The characteristic of the logarithm
Consider the interpretation of logarithms as [the cost of communicating a message](https://arbital.com/p/45q). Every time the number of possible messages to send doubles, your communication costs increase by the price of a coin, or whatever cheaper [storage medium](https://arbital.com/p/-storage_medium) you have that can communicate one of two messages. It doesn't matter whether the number of possible messages goes from 4 to 8 or whether it goes from 4096 to 8192; in both cases, your costs go up by the price of a coin. It is the factor by which the set grew (or shrank) that affects the cost, not the absolute number of messages added (or removed) from the space of possibilities. If the space of possible messages halves, your costs go down by one coin, regardless of how many possibilities there were before the halving.
Algebraically, writing $c$ for the function that measures your costs, $c(x \cdot 2) = c(x) + c(2),$ and, in general, $c(x \cdot y) = c(x) + c(y),$ where we can interpret $x$ as the number of possible messages before the increase, $y$ as the factor by which the possibilities increased, and $x \cdot y$ as the number of possibilities after the increase.
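This additive law is easy to check numerically; a minimal sketch, using $\log_2$ as the cost function:

```python
from math import log2

cost = log2  # cost, in coins, of picking out one of n equally likely messages

# Doubling the message space always adds the price of one coin:
print(cost(8) - cost(4))          # 1.0
print(cost(8192) - cost(4096))    # 1.0

# And in general cost(x * y) == cost(x) + cost(y):
print(cost(48), cost(16) + cost(3))   # both ~5.585
```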
This is the key characteristic of the logarithm: It says that, when the input goes up by a factor of $y$, the quantity measured goes up by a fixed amount (that depends on $y$). When you see this pattern, you can bet that $c$ is a logarithm function. Thus, whenever something you care about goes up by a fixed amount every time something else doubles, you can measure the thing you care about by taking the logarithm of the growing thing. For example:
- Consider the problem of checking whether a date is contained in a gigantic [sorted list](https://arbital.com/p/sorted_list) of dates. You can do this by jumping to the middle of the list, seeing whether your date is earlier or later than the date in the middle, and thereby cutting the search space in half. Each time you do this, you cut the list of dates you're searching in half, and so the total number of elements you need to look at goes up by one every time the size of the list doubles. Thus, the cost of searching an ordered list grows logarithmically in the size of the list (a sketch of this step-counting follows this list). See also [binary search](https://arbital.com/p/binary_search).
- Consider a colony of bacteria where each bacterium in the colony reproduces once per day. Thus, the size of the colony roughly doubles each day. If you care about how long this colony of bacteria has been growing, you can measure the days by taking the logarithm of the number of bacteria in the colony. The logarithm (base 2) counts how many times the colony has doubled (and the log base 3 counts how many times it has tripled, and so on).
- The length of a number in [decimal notation](https://arbital.com/p/4sl) grows more-or-less logarithmically in the magnitude of the number: When the magnitude of the number goes up by a factor of 10, the number of digits it takes to write the number down grows by 1. However, this analogy is not perfect: Sometimes, multiplying a number by two does not increase its length (consider the number 300), and sometimes, dividing a number by 10 does not decrease its length by one digit (consider the number 1). See also [Length isn't quite logarithmic](https://arbital.com/p/log_v_length).
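Here is the sketch promised in the first item: counting the comparisons a binary search needs as the list grows (illustrative Python, not part of the original page):

```python
def search_steps(n):
    """How many times a binary search halves a sorted list of n dates
    before only one candidate position remains."""
    steps = 0
    while n > 1:
        n = (n + 1) // 2   # one comparison cuts the range in half
        steps += 1
    return steps

for n in (1_000, 2_000, 4_000, 1_000_000):
    print(n, search_steps(n))
# 1000 -> 10, 2000 -> 11, 4000 -> 12: doubling the list adds one step.
# A million entries need only 20 looks, since log2(1e6) is about 19.9.
```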
Conversely, whenever you see a $\log_2$ in an equation, you can deduce that someone wants to measure some sort of thing by counting the number of doublings that another sort of thing has undergone. For example, let's say you see an equation where someone takes the $\log_2$ of a [likelihood ratio](https://arbital.com/p/-1rq). What should you make of this? Well, you should conclude that there is some quantity that someone wants to measure which can be measured in terms of the number of doublings in that likelihood ratio. And indeed there is! It is known as [Bayesian evidence](https://arbital.com/p/bayesian_evidence), and the key idea is that the strength of evidence for a hypothesis $A$ over its negation $\lnot A$ can be measured in terms of $2 : 1$ updates in favor of $A$ over $\lnot A$. (For more on this idea, see [What is evidence?](https://arbital.com/p/))
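Concretely, the recipe looks like this (a sketch; the ratios are made-up examples):

```python
from math import log2

def bits_of_evidence(likelihood_ratio):
    """Strength of evidence for A over not-A, counted in 2:1 updates
    ("doublings") of the likelihood ratio."""
    return log2(likelihood_ratio)

print(bits_of_evidence(2))    #  1.0: one 2:1 update in favor of A
print(bits_of_evidence(8))    #  3.0: three successive doublings
print(bits_of_evidence(0.5))  # -1.0: one doubling *against* A
```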
In fact, a given function $f$ such that $f(x \cdot y) = f(x) + f(y)$ is almost guaranteed to be a logarithm function — modulo a few technicalities. |
bcfb2919-8b5a-42ee-852e-5c4bdd3ea11d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC Social Meetup
Discussion article for the meetup : Washington DC Social Meetup
WHEN: 12 August 2012 03:00:00PM (-0400)
WHERE: Washington DC, USA
This meetup is the first of a new schedule of meetups, which will alternate "serious", discussion-oriented meetups with "social", more activity-based meetups.
This meetup will be a social meetup; we will be playing the Game of Thrones board game.
Meetup will be at a private residence; please PM or email me for the address. |
5489ac98-1e12-4e0f-8d80-084a7de69c55 | trentmkelly/LessWrong-43k | LessWrong | Survey on cortical uniformity - an expert amplification exercise
In short: We designed a survey to gauge the opinion of expert neuroscientists about the degree of uniformity in the human cortex. The survey was sent to 135 experts, of whom only 6 responded. Since we do not consider this a representative sample, we are instead asking the LessWrong community to predict what the survey outcome would have been if more people had responded.
The project in brief
We want to distinguish two hypotheses. The brain could be a machine with many components, learned through evolution and hardcoded via the genome. Or it could be a relatively simple process, that relies on an overarching principle to learn to perform many tasks. This is not a clean dichotomy; there are many possible degrees in between.
Regardless, it is clearly a very interesting question for the study of human cognition. The answer to this question also has consequences for what degree of generality is achievable with artificial intelligence, especially concerning biologically inspired AI. See My computational framework for the brain by Steven Byrnes for previous discussion of the topic.
There exists research addressing this question in neuroscience; see, for example, Canonical Microcircuits for Predictive Coding by Andre M. Bastos et al., which argues in favor of uniformity, and The atoms of neural computation by Gary Marcus et al., which argues against. However, it is not clear from outside the field whether there is a majority position and, if so, what it is.
In order to learn this, we designed a simple three-question survey intended to probe the field (the questions are reproduced in the next section).
We sent the survey to 135 experts in neuroscience. The group we selected were invited speakers at two conferences run by the Society for Neuroscience. The conferences were Neuroscience 2019 and Global Connectome 2021.
The response to the survey was underwhelming: only 6 experts responded. We tried reaching out to one of them who seemed especially keen on discussing the topi |
a5091f7e-c34a-4536-aac2-74a71c835c75 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Reminder: AI Worldviews Contest Closes May 31
**Update**: *The winners have been selected and notified and will be publicly announced no later than the end of September.*
This is a reminder that the deadline to enter the [Open Philanthropy AI Worldviews Contest](https://www.openphilanthropy.org/open-philanthropy-ai-worldviews-contest/) is **May 31**. We plan to distribute $225k across six winning entries. We're looking for essays that address the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI arriving by 2070. Use [this form](https://docs.google.com/forms/d/e/1FAIpQLSdADOikiiQgXLwdEJ2Mou5IQT6-9BH8zcZ0pjQzwex069jjjA/viewform) to enter.
The original announcement post is [here](https://forum.effectivealtruism.org/posts/NZz3Das7jFdCBN9zH/announcing-the-open-philanthropy-ai-worldviews-contest). |