id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
cdb2c982-9700-424a-b8f8-cc6605ebb17a | trentmkelly/LessWrong-43k | LessWrong | Endogenous Epinephrine for Anaphylaxis?
Content warning: discussion of allergy attacks.
Epistemic warning: I know very little and this is total speculation. Don't do anything in this post unless this becomes the recommended way of dealing with allergy attacks.
Imagine you're walking in a remote area with someone who has a severe allergy to bee stings, and they get stung. They don't have their Epi-Pen (epinephrine autoinjector), there's no one else around, and you're far from everything. There's clearly not time for an ambulance, and they start going into anaphylactic shock. I think it's possible that the best response is to try to kill them.
Wait, what?
Well, not actually try to kill them, but make them think you're trying to kill them. Specifically, triggering the "fight-or-flight" or "acute stress" response to make their body release epinephrine (adrenalin), the standard emergency treatment for anaphylaxis.
One question is whether their body would release enough epinephrine that this would actually work. Being stung by a bee when you know you have a life-threatening allergy seems terrifying already, which probably induces some amount of acute stress response, and that's clearly not enough to stop anaphylaxis. Perhaps the problem is that the amount of epinephrine released in acute stress is just much lower than what you get from an Epi-Pen?
I tried to look up numbers, and found Adrenomedullary response to maximal stress in humans (Wortsman 1984). They looked at blood samples from fifteen patients undergoing cardiac arrest, which is about as stressful as you can get. All were either not given external epinephrine, or their blood was sampled before they were. They measured a mean peak value of 10.3 ± 2.9 ng/ml of epinephrine in the samples. They also wrote:
> Resuscitation after cardiac arrest is associated with the highest endogenous epinephrine concentration ever recorded: 35.6 ng/ml. This finding supports our a priori assumption that the stress of cardiac arrest produces maximal adrenomedullary s |
030185ac-63f1-4b20-af28-42f025492ae8 | trentmkelly/LessWrong-43k | LessWrong | Google lends further legitimacy to Bitcoin
Though not yet an "official" project, Google has released a Bitcoin client. As you may remember, there were concerns here about what the government/legal reaction to Bitcoin [1] will be, and the significance of certain groups lending their support to it. EFF and SIAI accept Bitcoin donations, which helps, and this action by Google is another big step.
Previous articles: SIAI accepting Bitcoin donations, Discussion on making money with Bitcoin (Clippy warning on the latter)
[1] In short, it's an anonymous P2P crypto-currency with no transaction fees, in which new units are generated by spending computer cycles computing hashes until you find one with specific properties. |
16a2989f-4f0a-4ea3-85fe-13b48e2ec6a0 | StampyAI/alignment-research-dataset/arbital | Arbital | Symmetric group
The notion that group theory captures the idea of "symmetry" derives from the notion of the symmetric group, and the very important theorem due to Cayley that every group is a subgroup of a symmetric group.
# Definition
Let $X$ be a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz). A [bijection](https://arbital.com/p/499) $f: X \to X$ is a *permutation* of $X$.
Write $\mathrm{Sym}(X)$ for the set of permutations of the set $X$ (so its elements are functions).
Then $\mathrm{Sym}(X)$ is a group under the operation of composition of functions; it is the *symmetric group on $X$*.
(It is also written $\mathrm{Aut}(X)$, for the *automorphism group*.)
We write $S_n$ for $\mathrm{Sym}(\{ 1,2, \dots, n\})$, the *symmetric group on $n$ elements*.
# Elements of $S_n$
We can represent a permutation of $\{1,2,\dots, n\}$ in two different ways, each of which is useful in different situations.
## Double-row notation
Let $\sigma \in S_n$, so $\sigma$ is a function $\{1,2,\dots,n\} \to \{1,2,\dots,n\}$.
Then we write $$\begin{pmatrix}1 & 2 & \dots & n \\ \sigma(1) & \sigma(2) & \dots & \sigma(n) \\ \end{pmatrix}$$
for $\sigma$.
This has the advantage that it is immediately clear where every element goes, but the disadvantage that it is quite hard to see the properties of an element when it is written in double-row notation (for example, "$\sigma$ cycles round five elements" is hard to spot at a glance), and it is not very compact.
## Cycle notation
[Cycle notation](https://arbital.com/p/49f) is a different notation, which has the advantage that it is easy to determine an element's order and to get a general sense of what the element does.
Every element of $S_n$ [can be expressed in ](https://arbital.com/p/49k).
## Product of transpositions
It is a useful fact that every permutation in a (finite) symmetric group [may be expressed](https://arbital.com/p/4cp) as a product of [transpositions](https://arbital.com/p/4cn).
# Examples
- The group $S_1$ is the group of permutations of a one-point set. It contains the identity only, so $S_1$ is the trivial group.
- The group $S_2$ is isomorphic to the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) of order $2$. It contains the identity map and the map which interchanges $1$ and $2$.
Those are the only two [abelian](https://arbital.com/p/3h2) symmetric groups.
Indeed, in cycle notation, $(123)$ and $(12)$ do not commute in $S_n$ for $n \geq 3$, because $(123)(12) = (13)$ while $(12)(123) = (23)$.
- The group $S_3$ contains the following six elements: the identity, $(12), (23), (13), (123), (132)$. It is isomorphic to the [https://arbital.com/p/-4cy](https://arbital.com/p/-4cy) $D_6$ on three vertices. ([Proof.](https://arbital.com/p/group_s3_isomorphic_to_d6))
# Why we care about the symmetric groups
A very important (and rather basic) result is [Cayley's Theorem](https://arbital.com/p/49b), which states the link between group theory and symmetry.
%%%knows-requisite([https://arbital.com/p/4bj](https://arbital.com/p/4bj)):
# Conjugacy classes of $S_n$
It is a useful fact that the conjugacy class of an element in $S_n$ is precisely the set of elements which share its [cycle type](https://arbital.com/p/4cg). ([Proof.](https://arbital.com/p/4bh))
We can therefore [list the conjugacy classes](https://arbital.com/p/4bk) of $S_5$ and their sizes.
%%%
# Relationship to the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf)
The [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_n$ is defined as the collection of elements of $S_n$ which can be made by an even number of [transpositions](https://arbital.com/p/4cn). This does form a group ([proof](https://arbital.com/p/4hg)).
%%%knows-requisite([https://arbital.com/p/4h6](https://arbital.com/p/4h6)):
In fact $A_n$ is a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $S_n$, obtained by taking the quotient by the [sign homomorphism](https://arbital.com/p/4hk).
%%% |
d2c5e066-370b-4393-8f0d-2d91b99b92d7 | trentmkelly/LessWrong-43k | LessWrong | Proposal: Anti-Akrasia Alliance
*Related to: [Kicking Akrasia: now or never](http://lesswrong.com/r/discussion/lw/3ko/kicking_akrasia_now_or_never/); [Tsuyoku Naritai](http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/)*
# The situation
I am greatly afficted by akrasia, and in all probability, so are you. Akrasia is a destroyer of worlds<sup>1</sup>.
I have come to the conclusion that akrasia is the single biggest problem I have in life. It is greater than my impending biological death, my imperfect enjoyment of life, or the danger of a car accident. For if I could solve the problem of akrasia, I would work on these other problems, and I believe I could solve them. Even a big problem like physical mortality can be meaningfully challenged if I spend a lifetime tackling it. But until I solve the problem of akrasia, I will sit around and *do nothing about my mortality*.
Solving akrasia is necessary: without it, we cannot efficiently attack other problems, and have a high chance of losing the fight against disease, war, and UFAI.
Solving akrasia may also be sufficient: if a group of LW readers - smart, rational, luminous, and relatively rich people - was also unusually dedicated to a purpose, it would have a good chance of attacking Really Big Problems ranging from personal satisfaction to Friendly AI<sup>2</sup>.
Some people have solved this problem, or never had it. Thus, we know it is possible to vanquish akrasia. However, it is a unique problem that prevents its own cure. Because of akrasia, we don't spend as much effort as we'd like fighting akrasia.
# Existing efforts
There have been many posts about akrasia on LW. There are also many methods and workgroups in the world which are dedicated to it. I don't know very many myself, and no doubt there are many useful approaches I don't know about. (I know that other LW users have more relevant knowledge and experience than I do.)
I do know that there are several common problems with all such effo |
439656d9-f356-4356-a3ea-a4393433fed6 | trentmkelly/LessWrong-43k | LessWrong | Good Research Takes are Not Sufficient for Good Strategic Takes
TL;DR Having a good research track record is some evidence of good big-picture takes, but it's weak evidence. Strategic thinking is hard, and requires different skills. But people often conflate these skills, leading to excessive deference to researchers in the field, without evidence that that person is good at strategic thinking specifically. I certainly try to have good strategic takes, but it's hard, and you shouldn't assume I succeed!
Introduction
I often find myself giving talks or Q&As about mechanistic interpretability research. But inevitably, I'll get questions about the big picture: "What's the theory of change for interpretability?", "Is this really going to help with alignment?", "Does any of this matter if we can’t ensure all labs take alignment seriously?". And I think people take my answers to these way too seriously.
These are great questions, and I'm happy to try answering them. But I've noticed a bit of a pathology: people seem to assume that because I'm (hopefully!) good at the research, I'm automatically well-qualified to answer these broader strategic questions. I think this is a mistake, a form of undue deference that is both incorrect and unhelpful. I certainly try to have good strategic takes, and I think this makes me better at my job, but this is far from sufficient. Being good at research and being good at high level strategic thinking are just fairly different skillsets!
But isn’t someone being good at research strong evidence they’re also good at strategic thinking? I personally think it’s moderate evidence, but far from sufficient. One key factor is that a very hard part of strategic thinking is the lack of feedback. Your reasoning about confusing long-term factors need to extrapolate from past trends and make analogies from things you do understand better, and it can be quite hard to tell if what you're saying is complete bullshit or not. In an empirical science like mechanistic interpretability, however, you can get a lot more fe |
96949d2c-b023-4ea8-9918-52eee1653098 | trentmkelly/LessWrong-43k | LessWrong | Marsh et al. "Serotonin Transporter Genotype (5-HTTLPR) Predicts Utilitarian Moral Judgments"
The whole paper is here. In short, they found a genotype that predicts people's response to the original trolley problem:
> A trolley (i.e. in British English a tram) is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
Participants with one kind of serotonin transmitter (LL-homozygotes) judged flipping the switch to be better than a morally neutral action. Participants with the other kind (S-carriers) judged flipping the switch to be no better than a morally neutral action. The groups responded equally to the "fat man scenario" both rejecting the 'push' option.
Some quotes:
> We hypothesized that 5-HTTLPR genotype would interact with intentionality in respondents who generated moral judgments. Whereas we predicted that all participants would eschew intentionally harming an innocent for utilitarian gains, we predicted that participants' judgments of foreseen but unintentional harm would diverge as a function of genotype. Specifically, we predicted that LL homozygotes would adhere to the principle of double effect and preferentially select the utilitarian option to save more lives despite unintentional harm to an innocent victim, whereas S-allele carriers would be less likely to endorse even unintentional harm. Results of behavioral testing confirmed this hypothesis.
> Participants in this study judged the acceptability of actions that would unintentionally or intentionally harm an innocent victim in order to save others' lives. An analysis of variance revealed a genotype × scenario interaction, F(2, 63) = 4.52, p = .02. Results showed that, relative to long allele homozygotes (LL), carriers of the short (S) allele showed particular reluctance to endorse utilitarian actions resulting in foresee |
d08f5897-1c43-4602-afcc-bc33e5ae5da2 | trentmkelly/LessWrong-43k | LessWrong | The case for aligning narrowly superhuman models
I wrote this post to get people’s takes on a type of work that seems exciting to me personally; I’m not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). We are not seeking grant applications on this topic right now.
Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts.
A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman”[1] at some particular task (which I’ll call narrowly superhuman models).[2] I don’t just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it’s intuitively non-obvious how to make that happen).
Here’s an example of what I’m thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I’m sick that’s better than advice I’d get from asking humans on Reddit or Facebook, because it’s digested a vast store of knowledge about illness symptoms and remedies. Moreover, certain ways of prompting it provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice.
But GPT-3 doesn’t seem to “want” to give me the best possible health advice -- instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it’s a random internet user. So if I want to use GPT-3 to get advice about my health, there is a g |
18e9b5b3-769f-442c-8599-d97d9b8476a0 | trentmkelly/LessWrong-43k | LessWrong | Book Review: All I Want To Know Is Where I'm Going To Die So I'll Never Go There
Introduction
The pope asked Michelangelo: "Tell me the secret of your genius".
How have you created the statues of David, the masterpiece of all masterpieces?
Michelangelo's answer: " It is simple. I removed everything that is not David"
Let's to the point - since antiquity people have been asking themselves what makes us successful and happy. How should I live? what constitutes a good life? What is the role of money? Is leading a good life a question of mindset, of adopting a particular attitude, or is it more about reaching concrete life goals? Is it better to actively seek happiness or to avoid unhappiness? and other thousand of questions we pose every day.
Over the past 200 years, we have created a world we no longer understand intuitively. This means that entrepreneurs, investors, managers, doctors, journalists, artists, scientists, politicians, and people like you and I will stumble our way through life unless we have the soundbox of the mental toolkit.
Let be honest, we don't know for sure what makes us successful. We can't pinpoint exactly what makes our life happy. But we know with certainty what destroyed success or happiness. This realization, as simple it is the fundamental negative knowledge i.e what not to do. What not to do is much more potent and powerful than what to do.
As the saying goes via negative knowledge is: we cannot say what God is, we can only say what God is not.
And this same applies to the present day: We cannot say what brings us success. We can pin down only what blocks or obliterates success. Eliminate the downside like mistakes, thinking errors and the upside will take care of itself.
This book is about the fictitious Seeker, who has known a lot of misery, and his visit to the “Library of Wisdom” where he meets another fictitious character – the Librarian- along with Warren Buffett and Charles Munger. The Seeker learns how to make better decisions to help his children avoid doing the dumb things he has done. For instance, h |
6f8e3146-015d-4dfd-b2ca-55bfb1c3bae3 | trentmkelly/LessWrong-43k | LessWrong | Jaynesian interpretation - How does “estimating probabilities” make sense?
In Professor Jaynes’ theory of probability, probability is the degree of plausibility about a thing given some knowledge and not an physical property of that thing.
However, I see people treating the probability of heads in a coin flip as a parameter that needs to be estimated. Even Professor Jaynes gives the impression that he is “estimating the probability” or looking for “the most plausible probability of heads” in page 164 of his book.
How does the idea of ”estimating a probability from data“ or finding the “most probable probability of heads in a coin flip given some data” make sense from this paradigm?
Thank you for your time |
fd534c77-3e00-473a-8bde-96ea84a3f83b | trentmkelly/LessWrong-43k | LessWrong | Two easy things that maybe Just Work to improve AI discourse
So, it seems AI discourse on X / Twitter is getting polarised. This is bad. Especially bad is how some engage in deliberate weaponization of discourse, for political ends.
At the same time, I observe: AI Twitter is still a small space. There are often important posts that have only ~100 likes, ~10-100 comments, and maybe ~10-30 likes on top comments. Moreover, it seems to me little sane comments, when they do appear, do get upvoted.
This is... crazy! Consider this thread:
A piece of legislation is being discussed, with major ramifications for regulation of frontier models, and... the quality of discourse hinges on whether 5-10 random folks show up and say some sensible stuff on Twitter!?
It took me a while to see these things. I think I had a cached view of "political discourse is hopeless, the masses of trolls are too big for anything to matter, unless you've got some specialised lever or run one of these platforms".
I now think I was wrong.
Just like I was wrong for many years about the feasibility of getting public and regulatory support for taking AI risk seriously.
This begets the following hypothesis: AI discourse might currently be small enough that we could basically just brute force raise the sanity waterline. No galaxy-brained stuff. Just a flood of folks making... reasonable arguments.
It's the dumbest possible plan: let's improve AI discourse by going to places with bad discourse and making good arguments.
I recognise this is a pretty strange view, and does counter a lot of priors I've built up hanging around LessWrong for the last couple years. If it works, it's because of a surprising, contingent, state of affairs. In a few months or years the numbers might shake out differently. But for the time being, the arbitrage is real.
Furthermore, there's of course already a built-in feature, with beautiful mechanism design and strong buy-in from X leadership, for increasing the sanity waterline: Community Notes. It's a feature that allows us |
4047dc1f-fb30-40ba-9058-f8b027e00f36 | trentmkelly/LessWrong-43k | LessWrong | Status quo bias is usually justified
Generally, we know more about the status quo then about anything else.
* We know that we can live in the current climate, we don’t know that about any other climate.
* We know that society functions with current laws and norms, we don’t know that about any other set of laws.
Often, we are adapted to the status quo and the status quo is adapted to us.
* We and the plants and animals we depend on evolved to live in the current the global climate. Many of our city are built on the coast, our population is distributed in a way that can be supported by agriculture.
* Our laws are the result of millenia of trial and error.
Often, the status quo is a compromise between different groups, individuals or even desires within an indivdual.
The test for status quo bias is reversal. If we oppose a change in one direction, do we also oppose change in the other direction? We’re concerned about global warming, and we were also concerned about global cooling. Any change in the climate has unpredictable consequences. |
c1fee948-1b67-4082-8a6d-fe6a41aad530 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What can we learn from Lex Fridman’s interview with Sam Altman?
These are my personal thougths [about this interview](https://www.youtube.com/watch?v=L_Guz73e6fw).
*Epistemic status: I neither consider myself a machine-learning expert, nor am I an alignment expert.* [*My focus*](https://docs.google.com/document/d/1PDMvBh65homtuLHgQ959-HyMw0Cv0lTE/edit) *is on outreach: explaining AI safety to the general public and professionals outside of the AI safety community. So an interview like this one is important material for me to both understand the situation myself and explain it to others. After watching it, I’m somewhat confused. There were bits in this talk that I liked and others that disturbed me. There seems to be a mix of humbleness and hubris, of openly acknowledging AI risks and downplaying some elements of them. I am unsure how open and honest Sam Altman really was. I don’t mean to criticize. I want to understand what OpenAI’s and Sam Altman’s stance towards AI safety really is.*
Below I list transcriptions of the parts that seemed most relevant for AI safety and my thoughts/questions about them. Maybe you can help me better understand this by commenting.
> [23:55] Altman: “Our degree of alignment increases faster than our rate of capability progress, and I think that will become more and more important over time.”
>
>
I don’t really understand what this is supposed to mean. What’s a “degree of alignment”? How can you meaningfully compare it with “rate of capability progress”? To me, this sounds a lot like marketing: “We know we are dealing with dangerous stuff, so we are extra careful.” Then again, it’s probably hard to explain this in concrete terms in an interview.
> [24:40] Altman: “I do not think we have yet discovered a way to align a super powerful system. We have something that works for our current scale: RLHF.”
>
>
I find this very open and honest. Obviously, he not only knows about the alignment problem, but openly admits that RLHF is not the solution to aligning an AGI. Good!
> [25:10] Altman: “It’s easy to talk about alignment and capability as of orthogonal vectors, they’re very close: better alignment techniques lead to better capabilities, and vice versa. There are cases that are different, important cases, but on the whole I think things that you could say like RLHF or interpretability that sound like alignment issues also help you make much more capable models and the division is just much fuzzier than people think.”
>
>
This, I think, contains two messages: “Capabilities research and alignment research are intertwined” and “criticizing us for advancing capabilities so much is misguided, because we need to do that in order to align AI”. I understand the first one, but I don’t subscribe to the second one, see discussion below.
> [47:53] Fridman: “Do you think it’s possible that LLMs really is the way we build AGI?”
> Altman: “I think it’s part of the way. I think we need other super important things … For me, a system that cannot significantly add to the sum total of scientific knowledge we have access to – kind of discover, invent, whatever you want to call it – new, fundamental science, is not a superintelligence. … To do that really well, I think we need to expand on the GPT paradigm in pretty important ways that we’re still missing ideas for. I don’t know what those ideas are. We’re trying to find them.”
>
>
This is pretty vague, which is understandable. However, it seems to indicate to me that the current, relatively safe, mostly myopic GPT approach will be augmented with elements that may make their approach much more dangerous, like maybe long term memory and dynamic learning. This is highly speculative, of course.
> [49:50] Altman: “The thing that I’m so excited about is not that it’s a system that kind of goes off and does its own thing but that it’s this tool that humans are using in this feedback loop … I’m excited about a world where AI is an extension of human will and an amplifier of our abilities and this like most useful tool yet created, and that is certainly how people are using it … Maybe we never build AGI but we just make humans super great. Still a huge win.”
>
>
The last sentence is the most promising one in the whole interview from my point of view. It seems to indicate that Sam Altman and OpenAI are willing to stop short of creating an AGI if they can be convinced that alignment isn’t solved and creating an AGI would be suicidal. They may also be willing to agree on [“red lines”](https://www.lesswrong.com/posts/eStLg3uhHmzjCqWDm/where-are-the-red-lines-for-ai) if there is a consensus about them among leading developers.
> [54:50] Fridman refers to Eliezer Yudkowsky’s view that AI will likely kill all of humanity.
> Altman: “I think there’s some chance of that and it’s really important to acknowledge it because if we don’t talk about it, if we don’t treat it as potentially real, we won’t put enough effort into solving it. And I think we do have to discover new techniques to be able to solve it … The only way I know how to solve a problem like this is iterating our way through it, learning early, and limiting the number of one-shot-to-get-it-right scenarios that we have.”
>
>
I give Sam Altman a lot of credit for taking Eliezer’s warnings seriously, at least verbally. However, he seems to rule out the approach of solving the alignment problem in theory (or acknowledging its theoretical unsolvability), relying on a trial and error approach instead. This I think is very dangerous. “Limiting the number of one-shot-to-get-it-right scenarios” doesn’t do it in my eyes if that number doesn’t go down to zero.
> [59:46] Fridman asks about take-off speed. Altman: “If we imagine a two-by-two matrix of short timelines till AGI starts /long timelines till AGI starts [and] slow take-off/fast take-off … what do you think the safest quadrant will be? … Slow take-off/short timelines is the most likely good world and we optimized the company to have maximum impact in that world, to try to push for that kind of world, and the decisions we make are … weighted towards that. … I’m very afraid of the fast take-offs. I think in the long time-lines it’s hard to have a slow take-off, there’s a bunch of other problems too.”
>
>
Here he seems to imply that the two axes aren’t independent: Short timelines supposedly lead to a slow take-off, and vice versa. I don’t see why that should be the case: If an AI [gets out of control](https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk), that’s it, regardless of when that happens and how fast. I understand the idea of an incremental approach to AI safety, but I don’t think that a high (if not to say breakneck) speed of deployment like OpenAI has demonstrated in the past helps in any way. He seems to use this argument to justify that speed on the grounds of improved safety, which I strongly feel is wrong.
> [1:09:00] Fridman asks what could go wrong with an AI. Altman: “It would be crazy not to be a little bit afraid. And I empathize with people who are a lot afraid. … The current worries that I have are that there are going to be disinformation problems or economic shocks or something else at a level far beyond anything we’re prepared for. And that doesn’t require superintelligence, that doesn’t require a super deep alignment problem and the machine waking up trying to deceive us. And I don’t think it gets enough attention. … How would we know if the flow we have on twitter … like LLMs direct whatever’s flowing through that hive mind? … As on twitter, so everywhere else eventually … We wouldn’t [know]. And that’s a real danger. … It’s a certainty there are soon going to be a lot of capable open-sourced LLMs with very few to none safety controls on them … you can try regulatory approaches, you can try with more powerful AIs to detect this stuff happening, I’d like us to try a lot of things very soon.”
>
>
This is not really related to AGI safety and I’m not sure if I’m misinterpreting this. But it seems to imply something like “we need to develop our AGI fast because it is needed to combat bad actors and others are less safety-concerned than we are”. If I’m correct, this is another defense of fast deployment, if a more subtle one.
> [1:11:19] Fridman asks how OpenAI is prioritizing safety in the face of competitive and other pressures. Altman: “You stick with what you believe and you stick to your mission. I’m sure people will get ahead of us in all sorts of ways and take shortcuts we’re not gonna take. … I think there are going to be many AGIs in the world so it’s not like outcompete everyone. We’re gonna contribute one, other people are gonna contribute some. I think multiple AGIs in the world with some differences in how they’re built and what they do what they’re focused on, I think that’s good. We have a very unusual structure though, we don’t have this incentive to capture unlimited value. I worry about the people who do, but, you know, hopefully it’s all gonna work out.”
>
>
I felt somewhat uneasy listening to this. It sounds a lot like “we’re the good guys, so don’t criticize us”. It also feels like downplaying the actual competitive pressure, which OpenAI have increased themselves. Does Sam Altman really believe in a stable world where there are many AGIs competing with each other, some of them with only minimal safety, and all goes well? This is either very naïve or somewhat dishonest in my opinion.
> [1:14:50] Altman (talking about the transformation from non-profit to “capped” for-profit): “We needed some of the benefits of capitalism, but not too much.”
>
> [1:16:00] Altman (talking about competition): “Right now there’s like extremely fast and not super deliberate motion inside of some of these companies, but already I think people are, as they see the rate of progress, already people are grappling with what’s at stake here. And I think the better angels are going to win out. … The incentives of capitalism to create and capitalize on unlimited value, I’m a little afraid of, but again, no one wants to destroy the world. … We’ve got the Moloch problem, on the other hand we’ve got people who are very aware of that, and I think, a lot of healthy conversation about how can we collaborate to minimize some of these very scary downsides.”
>
>
Again, he depicts OpenAI as being ethically “better” than the competition because of the capped profit rule (which, as far as I understand, has a very high ceiling). This in itself sounds very competitive. On the other hand, he seems open for collaboration, which is good.
> [1:17:40] Fridman asks whether power might corrupt Altman/OpenAI. Altman: “For sure. I think you want decisions about this technology and certainly decisions about who is running this technology to become increasingly democratic over time. We haven’t figured out quite how to do this. But part of the reason for deploying like this is to get the world to have time to adapt and to reflect and to think about this, to pass regulations, for institutions to come up with new norms, for the people working out together. That is a huge part of why we deploy even though many of the AI Safety people you referenced earlier think it’s really bad. Even they acknowledge that this is like of some benefit. But I think any version of ‘one person is in control of this’ is really bad. … I don’t have and I don’t want like any super voting power, any special … control of the board or anything like that at OpenAI.
>
>
Again, there seem to be good and bad messages here. I think it’s good that he acknowledges the enormous power OpenAI has and that it needs democratic regulation. But he again justifies the high deployment speed by arguing that this gives the world “time to adapt”. It think this is a contradiction. If he really wanted to give the world time to adapt, why didn’t they launch ChatGPT, then wait two or three years before launching Bing Chat/GPT-4? Sam Altman would probably argue “we couldn’t because the competition is less safety concerned than we are, so we need to stay ahead”. This is of course speculative on my side, but I don’t like this kind of thinking at all.
> [1:44:30] Fridman asks if an AGI could successfully manage a society based on centralized planning Soviet Union-style. Altman: “That’s perfect for a superintelligent AGI. … It might be better [than the human Soviet Union leaders], I expect it’d be better, but not better than a hundred, a thousand AGIs sort of in a liberal democratic system. … Also, how much of that could happen internally in one superintelligent AGI? Not so obvious. … Of course [competition] can happen with multiple AGIs talking to each other.”
>
>
Again, he points to a world with many competing AGIs in some kind of ["libertarian utopia"](https://futureoflife.org/ai/ai-aftermath-scenarios/). I have no idea how anyone could think this would be a stable situation. Even we humans have great difficulty creating stable, balanced societies, and we all have more or less the same level of intelligence. How is this supposed to work if competing AGIs can self-improve and/or amass power? I can’t think of a stable world state which is not dominated by a single all-powerful AGI. But this may of course be due to my lack of imagination/knowledge.
> [1:45:35] Fridman mentions Stuart Russell’s proposal that an AI should be uncertain about its goals. Altman: “That feels important.” Fridman asks if uncertainty about its goals and values can be hard-engineered into an AGI. Altman: “The details really matter, but as I understand them, yes I do [think it is possible].”
>
> [1:46:08] Fridman: “What about the off-switch?” Altman: “I’m a fan. … We can absolutely take a model back off the internet. … We can turn an API off.”
>
>
These are minor points and I may be misunderstanding them, but they seem to point towards a somewhat naïve view on AI safety.
> [1:46:40] Fridman asks if they worry about “terrible usecases” by millions of users. Altman: “We do worry about that a lot. We try to figure it out … with testing and red teaming ahead of time how to avoid a lot of those, but I can’t emphasize enough how much the collective intelligence and creativity of the world will beat OpenAI and all of the red-teamers we can hire. So we put it out, but we put it out in a way we can make changes.”
>
> [2:05:58] Fridman asks about the Silicon Valley Bank. Altman: “It is an example of where I think you see the dangers of incentive misalignment, because as the Fed kept raising [the interest rate], I assume that the incentives on people working at SVB to not sell at a loss their ‘super safe’ bonds which are now down to 20% or whatever … that’s like a classic example of incentive misalignment … I think one takeaway from SVB is how fast the world changes and how little our experts and leaders understand it … that is a very tiny preview of the shifts that AGI will bring. … I am nervous about the speed of these changes and the speed with which our institutions can adapt, which is part of why we want to start deploying these systems really early while they’re really weak, so that people have as much time as possible to do this. I mean it’s really scary to have nothing, nothing, nothing and then drop a super powerful AGI all at once on the world.”
>
>
Again, he’s arguing for quick deployment in the name of safety. This more and more feels like a justification for OpenAI’s approach, instead of an open discussion of the arguments for and against it. But that’s probably to be expected from an interview like this.
All in all, I feel a bit uneasy about this interview. In parts, it sounds a lot like what someone would say who wants to be seen as cautious and rational, but actually only wants to stay ahead of the competition whatever the cost and uses this talk to justify their breakneck-speed strategy. On the other hand, there are a lot of things Sam Altman says that show he actually understands his responsibility and is open for cooperation and regulation, which I am very grateful of. Also, most leaders in his position would probably be less open about the risks of their technology.
What’s your take? |
1f9ca408-9460-495b-afb0-b90b91c15b76 | trentmkelly/LessWrong-43k | LessWrong | Clarifying what ELK is trying to achieve
(Basic familiarity with ELK required.)
I noticed that some people who haven’t thought that much about ELK yet (including my former self) are confused about (1) why the ELK setup is structured how it is structured and what we want to achieve, and relatedly (2) what's the underlying problem behind the "human simulator".
TL;DR: The problem with the "human simulator" isn't that the AI wants to "look good to a human" instead of "correctly answering questions". The "human simulator" is just one specific example where the reporter learns to reason on its own, instead of looking at the predictor. When a reporter reasons on its own from answering human-level questions, its capabilities won't generalize to harder questions. This is why we need a predictor with a superhuman world model.
The primary goal of ELK isn’t that we want to train a specific objective into the reporter, we want the reporter to have a certain structure, the direct-translator structure.
(EDIT: As I'm using the term "human simulator", I mean the class of AIs that reason very similar to humans. It is possible that other people rather mean the broader class of self-reasoning reporters, even if the reporter reasons with different concepts than humans, but I'm not sure. Let me know if most other people (or just Paul) use the term "human simulator" consistently different than me.)
The confusion: Thinking the problem is what the reporter wants
As said, I think some people mistakenly think the difficulty of getting a "direct translator" vs a "human simulator", is that the AI may learn "look good to humans" rather than "answer questions honestly" as its objective, and that for some reason the former is fundamentally more likely by default than the latter.
That isn't right. It's not that the human simulator wants to look good to humans, it just reasons on its own to find answers, similar to how humans reason.
The problem is that in the ELK setup this kind of reporter reasoning for itself may be much simpler |
4af68460-aa5a-4915-b2c1-625aa066ee75 | trentmkelly/LessWrong-43k | LessWrong | Would a scope-insensitive AGI be less likely to incapacitate humanity?
I was listening to Anders Sandberg talk about "humble futures" (i.e., futures that may be considered good in a sober non-"let's tile the universe with X" way), and started wondering whether training (not yet proven safe) AIs to have such "humble" scope-insensitive-ish goals -- which seems more tractable than (complete) value alignment -- might disincentivize the AI from incapacitating humans?
Why would it disincentivize it this way? I have some ideas but I thought I wouldn't flesh them out here to make sure people don't anchor on the particular scenarios I have in mind.
Here's an AI-generated image of a scope-insensitive AI chilling with a cup of tea to help you think: |
3d1e395b-d75d-4e28-81c6-e3ddea10e14b | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Bayesian Judo
Today's post, Bayesian Judo, was originally published on 31 July 2007. A summary (taken from the LW wiki):
> You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was Belief in Belief, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
065e1f1b-c208-43cf-b445-a3364146ea8f | trentmkelly/LessWrong-43k | LessWrong | Ngo's view on alignment difficulty
This post features a write-up by Richard Ngo on his views, with inline comments.
Color key:
Chat Google Doc content Inline comments
13. Follow-ups to the Ngo/Yudkowsky conversation
13.1. Alignment difficulty debate: Richard Ngo's case
[Ngo][9:31] (Sep. 25)
As promised, here's a write-up of some thoughts from my end. In particular, since I've spent a lot of the debate poking Eliezer about his views, I've tried here to put forward more positive beliefs of my own in this doc (along with some more specific claims): [GDocs link]
[Soares: ✨]
[Ngo] (Sep. 25 Google Doc)
We take as a starting observation that a number of “grand challenges” in AI have been solved by AIs that are very far from the level of generality which people expected would be needed. Chess, once considered to be the pinnacle of human reasoning, was solved by an algorithm that’s essentially useless for real-world tasks. Go required more flexible learning algorithms, but policies which beat human performance are still nowhere near generalising to anything else; the same for StarCraft, DOTA, and the protein folding problem. Now it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.
[Yudkowsky][11:26] (Sep. 25 comment)
> Now it seems very plausible that AIs will even be able to pass (many versions of) the Turing Test while still being a long way from AGI.
I remark: Restricted versions of the Turing Test. Unrestricted passing of the Turing Test happens after the world ends. Consider how smart you'd have to be to pose as an AGI to an AGI; you'd need all the cognitive powers of an AGI as well as all of your human powers.
[Ngo][11:24] (Sep. 29 comment)
Perhaps we can quantify the Turing test by asking something like:
* What percentile of competence is the judge?
* What percentile of competence are the humans who the AI is meant to pass as?
* How much effort does the judge put in (me |
e3dd6520-bf50-44b6-9ceb-79f6826b47d5 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Human priors, features and models, languages, and Solmonoff induction
This post was originally inspired by Robin Hanson's publication "[Uncommon Priors Require Origin Disputes](https://www.researchgate.net/publication/5158932_Uncommon_Priors_Require_Origin_Disputes), but it took a left turn and brought in Solmonoff induction, Shannon entropy, and model splintering.
Priors, features, models, Shannon entropy, and computer languages
-----------------------------------------------------------------
Let's weave a simplified tale about how humans might generate their priors. Humans grow up in the world, getting used to operating within it, long before they learn about Bayesian reasoning. Indeed, some humans never learn explicit Bayesian reasoning at all! Though it seems they often [use Bayesian reasoning implicitly](https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower).
Let's imagine our Hero or Heroine, living their lives and functioning in society. They need a mental apparatus that allows them to cope well with what they encounter everyday, especially what's important to them.
They might have a [generalised model](https://www.lesswrong.com/posts/nQxqSsHfexivsd6vB/generalised-models-as-a-category) whose features represent what they care about - and what's important for them to know. If all their friends are music fans, then, to function socially, they may need to distinguish between Ava Max and Masked Wolf. If they work in a kitchen, then it is important for them not to confuse boning knives and filleting knives. In prison, it's lethal to confuse Nuestra Familia with the Aryan Brotherhood.
But if they're neither involved nor interested in those various worlds, they can get away with grouping those pairs together as "modern singers", "kitchen knives", and "prison gangs". So people's mental models will contain features that matter to them - actionable features. It's vitally important for all life on Earth the the sun not go [red giant](https://en.wikipedia.org/wiki/Red_giant) tomorrow; however, since there's very little anyone could do about it if it did, there's no need for most people to know anything about red giants or the life cycle of stars. This leads to the famous New Yorker cover, which not only encodes how many New Yorkers do see the world, but also how it is useful for New Yorkers to see the world:

We can think of this in terms of [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) and compressed communication. People need to be able to swiftly call to mind any key features about their situation, and rapidly combine this with other knowledge to reach a decision - such as when their hand is reaching into the knife drawer. So, roughly at least, their minds will encode important features about their lives and the relationships between these features, with ease of retrieval being important.
This is an ease-of-retrieval version of [optimal coding](https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem). In optimal coding, symbols that appear often are represented by short sequences of symbols; symbols that appear rarely are represented by long sequences of symbols. In mental models, features that are often needed are easy to retrieve and have a rich library of mental connection to other related symbols. That last fact means that features that often appear together are easy to retrieve together. Conversely, features that are rarely needed are less easy to retrieve, and less easy to connect to other features[[1]](#fn-humMfJDRMDet3sgLA-1).
Computer languages and priors
-----------------------------
When someone encounters new situations, the first thing they do is to try and connect it to known ideas or known categories. "So, the second world war was like when the guy from the Aryan Brotherhood insulted the girlfriend of that other guy in Nuestra Familia". It might be a poor analogy, in which case they will reach for more obscure features or add more details and caveats to the initial analogy.
This means that their mental features function as a language to describe or model novel situations ([model splintering](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1) happens if they encounter these new situations often enough, and adjust their mental features to easily model them). Those who master mathematical features have the advantage of being able to model more general situations with features they understand.
In this way, mental features can be seen as the basis of a language to model and predict general situations. The more exotic these situations are, the more complex the description is, in that it has to involve rare features or multiple caveats.
We can see this as a computer language for [Solmonoff induction](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference), that (rough) measure of complexity being the "length" of the program that models the situation. In Solmonoff induction, this corresponds to a prior, which behaves the way we would want: unusual or unimportant situations are seen as less likely, while common and important situations have much higher prior probability. Model splintering happens when someone has updated on enough unusual sightings that it is worth their while to change their "language".
To summarise, here is a rough correspondence between the (bounded) Bayesian, mental model, and Solmonoff induction approaches:

Agreeing to disagree over priors
--------------------------------
Let's go back to Robin Hanson's point, that rational humans shouldn't disagree on priors unless there is specific source of disagreement (an origin dispute). I agree with this, though I'll present a version that's less formal and maybe more intuitive.
So, say a fully rational music fan encounters a fully rational prisoner, and they realise they have very different priors/mental models. If they get involved in long discussions, they will realise their key concepts are very different, maybe even their whole worldviews - the music fan might think that people are generically nice but can be catty and flaky, the prisoner might believe that lying and betrayal is common but that reputation is reliable.
If they are fully rational, they would realise that these differences come from different experiences of the world, and serve different purposes. If the music fan is sent to prison, they would want the worldview of the prisoner, and vice versa if the prisoner is released and wants to learn how to socialise in a group of music fans.
If they are also aware of evolution and human biases, they'd also realise that human mental models come pre-loaded with linguistic and social concepts, and realise that that this means that a) these concepts are likely very useful for interacting with other humans (or else evolution wouldn't have selected for them), but b) these are not reliable priors for dealing with situations outside of social interactions. So they could deduce that the world runs on mathematics, not stories, and Maxwell's equations are simpler than Thor. And then they would use their social knowledge to [craft a blog post on that subject](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor) that would interest other human beings.
---
1. For example, having looked them us, I now have at least vague categories for Ava Max, Masked Wolf, boning knives, filleting knives, Nuestra Familia, and the Aryan Brotherhood. But I know almost nothing about them, and would have difficulty bringing them to mind in contexts other than this post. [↩︎](#fnref-humMfJDRMDet3sgLA-1) |
f5250e5a-0cfd-44cd-b219-a9b3d0e676b2 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Related to: Shane Legg on Prospect Theory and Computational FinanceThis post is on prospect theory partly because it fits the theme of replacing simple utility functions with complicated reward functions, but mostly because somehow Less Wrong doesn't have any posts on prospect theory yet and that needs to change.Kahneman and Tversky, the first researchers to identify and rigorously study cognitive biases, proved that a simple version of expected utility theory did not accurately describe human behavior. Their response was to develop prospect theory, a model of how people really make decisions. Although the math is less elegant than that of expected utility, and the shapes of the curves have to be experimentally derived, it is worth a look because it successfully predicts many of the standard biases.(source: Wikipedia)A prospect theory agent tasked with a decision first sets it within a frame with a convenient zero point, allowing em to classify the results of the decision as either losses or gains. Ey then computes a subjective expected utility, where the subjective expected utility equals the subjective value times the subjective probability. The subjective value is calculated from the real value using a value function similar to the one on the left-hand graph, and the subjective probability is calculated from the real probability using a weighting function similar to the one on the right-hand graph.Clear as mud? Let's fill some numbers into the functions - the exact assignments don't really matter as long as we capture the spirit of where things change steeply versus slowly - and run through an example.Imagine a prospect theory agent - let's call him Prospero - trying to decide whether or not to buy an hurricane insurance policy costing $5000/year. Prospero owns assets worth $10,000, and estimates a 50%/year chance of a hurricane destroying his assets; to make things simple, he will be moving in one year and so need not consider the future. Under expected utility theory, he should feel neutral about the policy.Under prospect theory, he first sets a frame in which to consider the decision; his current state is a natural frame, so we'll go with that. We see on the left-hand graph that an objective $10,000 loss feels like a $5,000 loss, and an objective $5000 loss feels like a $4000 loss. And we see on the right-hand graph that a 50% probability feels like a 40% probability.Now Prospero's choice is a certain $4000 loss if he buys the insurance, versus a 40% chance of a $5000 loss if he doesn't. Buying has a subjective expected utility of -$4000; not buying has a subjective expected utility of -$2000. So Prospero decisively rejects the insurance.But suppose Prospero is fatalistic; he views his assets as already having been blown away. Here he might choose a different frame: the frame in which he starts with zero assets, and anything beyond that is viewed as a gain.Since the gain half of the value function levels off more quickly than the loss half, $5000 is now subjectively worth $3000, and $10000 is now subjectively worth $3500.Here he must choose between a certain gain of $5000 and a 50% chance of gaining $10000. Expected utility gives the same result as before, obviously. In prospect theory, he chooses between a certain subjective gain of $3000 and a 40% chance of gaining $3500. The insurance gives him subjective expected utility of $3000, and rejecting it gives him subjective expected utility of $1400.All of a sudden Prospero wants the insurance.We notice the opposite effect if there is only a a 1% chance of a hurricane. The insurance salesman lowers his price to $100 to preserve the neutrality of the insurance option when using utility. But subjective probability rises very quickly, so a 1% chance may correspond to a subjective 10% chance. Now in the first frame, Prospero must decide between an objective loss of -$100 with certainty (corresponding to -$300 subjective since the value function is steeper closer to zero) or an objective loss of -$10,000 with objective probability 1% (subjective of 10%). Now the expected subjective utilities are -$300 if he buys, versus -$500 if he rejects. And so he buys the insurance. When we change the risk of hurricane from 50% to 1%, then even though we reduce the price of the insurance by an exactly equal amount, Prospero's decision switches from not buying to buying.Let's see how many previously discussed biases we can fit into this model.Prospero's change from rejecting the insurance when framed as gains, to buying it when framed as losses, directly mirrors the change in preferred survival strategies mentioned in Circular Altruism.The necessity of frame-shifting between different perceptions of losses also produces the Sunk Cost Fallacy.The greater steepness of the value function with losses as opposed to gains is not even an explanation for, but merely a mathematical representation of, loss aversion. The leveling off of the value function that turned the huge objective difference between +$5000 and +$10000 into the teensy little subjective difference between +$3000 and +$3500 mirrors the scope insensitivity under which people show about the same level of interest in proposals to save endangered birds whether a thousand, ten thousand, or a hundred thousand birds are involved.It may not be an official bias, but the "but there's still a chance, right" outlook looks a lot like the sharply rising curve of the subjective probability function near zero.And although it is not immediately obvious from the theory, some people want to link the idea of a frame to priming and anchoring-adjustment, on the grounds that when a suitable reference frame doesn't exist any primed stimulus can help establish one.And now, the twist: prospect theory probably isn't exactly true. Although it holds up well in experiments where subjects are asked to make hypothetical choices, it may fare less well in the rare experiments where researchers can afford to offer subjects choices for real money (this isn't the best paper out there, but it's one I could find freely available).Nevertheless, prospect theory seems fundamentally closer to the mark than simple expected utility theory, and if any model is ever created that can explain both hypothetical and real choices, I would be very surprised if at least part of it did not involve something looking a lot like Kahneman and Tversky's model."
The above is a chunk of text from a post created by a user on the website LessWrong.
What is this text talking about?
This text is discussing Prospect Theory, a behavioral economic theory that describes the way people choose between probabilistic alternatives that involve risk, where the probabilities of outcomes are known. The theory was developed by psychologists Daniel Kahneman and Amos Tversky as a psychologically more accurate description of decision making, compared to the expected utility theory.
The text presents an example of a character named Prospero, who is deciding whether to buy hurricane insurance. The example demonstrates how people's decisions can change based on how they frame their choices and how they perceive gains and losses. The author suggests that Prospect Theory can account for a variety of cognitive biases, such as loss aversion, sunk cost fallacy, and scope insensitivity.
However, the author also acknowledges that Prospect Theory might not be entirely accurate, as it performs better in hypothetical scenarios than in experiments involving real money. Despite this, the author believes that Prospect Theory is closer to reality than expected utility theory and that any future models will likely incorporate elements of Kahneman and Tversky's model.
Explain how the framing effect demonstrated in the example of Prospero aligns with real-world decision-making scenarios.
The framing effect, as demonstrated in the example of Prospero, is a cognitive bias where people decide on options based on whether the options are presented with positive or negative connotations; e.g. as a loss or as a gain. People tend to avoid risk when a positive frame is presented but seek risks when a negative frame is presented.
In the real world, this can be seen in various scenarios. For instance, in healthcare, a doctor might tell a patient that a certain procedure has a 90% success rate (positive frame), or they could say it has a 10% failure rate (negative frame). Even though both statements are factually the same, the patient is likely to respond differently to each one.
In marketing, a product might be presented as "95% fat-free" (positive frame) instead of "contains 5% fat" (negative frame). The positive framing is likely to make the product more appealing to consumers.
In financial decisions, people might be more likely to save money if they are told they will "lose" a certain amount in the future by not saving now (negative frame), rather than being told they will "gain" a certain amount in the future by saving (positive frame).
These examples demonstrate how the framing effect influences our decision-making processes in real-world scenarios, often without us being consciously aware of it.
Can you provide a detailed analysis of how marketers can effectively utilize the framing effect to influence consumer behavior?
Absolutely. Marketers can effectively utilize the framing effect to influence consumer behavior in several ways:
1. **Emphasizing Positive Aspects**: This is the most common use of framing. For instance, a product can be marketed as "95% fat-free" instead of "contains 5% fat". By focusing on the positive aspect, consumers perceive the product as healthier, even though both statements mean the same thing.
2. **Highlighting Losses Over Gains**: People generally tend to avoid losses more than they seek gains. So, marketers can frame messages in a way that emphasizes what consumers stand to lose if they don't act. For example, a limited-time offer can be framed as "Don't miss out on this deal" instead of "This deal is available for a limited time".
3. **Price Framing**: The way prices are presented can significantly impact consumer perception. For instance, a price can be framed as "$1 a day" instead of "$30 a month", even though the total cost is the same. The former sounds more affordable and manageable to most consumers.
4. **Comparative Framing**: This involves presenting one product as superior to another. For example, a detergent could be marketed as "cleans 50% more stains than Brand X". Even if Brand X cleans most stains adequately, the comparison makes the first product seem more effective.
5. **Goal Framing**: This involves presenting the product or service in terms of how it helps the consumer achieve a goal or solve a problem. For example, a fitness product could be framed as "reach your fitness goals faster" rather than "improve your fitness".
6. **Risk Framing**: This is particularly effective in industries like insurance or healthcare, where risk is a significant factor. For example, a health insurance plan might be framed as "protect yourself from unexpected medical costs" rather than "be prepared for medical emergencies".
Remember, the key to effective framing is understanding your audience and what motivates them. Different frames will work for different people, so it's essential to know your target market well. |
918fee98-cf4f-4481-9d76-f56e0e526abd | trentmkelly/LessWrong-43k | LessWrong | Two hemispheres - I do not think it means what you think it means
I am going to address some misconceptions about brain hemispheres -- in popular culture, and in Zizian theory. The latter, because the madness must stop. The former, because it provided a foundation for the latter.
*
Two hemispheres in popular culture
About 99% of animals are bilaterally symmetric -- the left side and the right side of the body are approximately each other's mirror images. The symmetry is not perfect. For example the human heart is situated slightly on the left side, and its left and right halves have slightly different functions. But in general, it seems like once Evolution Fairy has decided that bilateral symmetry is a good idea, it was easier to keep designing all new organs symmetrically. Butterflies have pairs of wings, humans have a pair of hands.
Sometimes it means that organs come in pairs: we have a pair of eyes, and several pairs of ribs. Sometimes it means that organs have two connected parts, such as the left lung and the right lung. Some organs along the sagittal plane only have one approximately symmetric part, for example the stomach or the vertebrae.
The brain is not an exception to this rule. The parts that come in pairs include the cerebral hemispheres, the cerebellar hemispheres, basal ganglia, thalami, hippocampi, amygdalae, etc. The symmetric parts on the sagittal plane include corpus callosum, medulla, pons, midbrain, pineal gland, hypothalamus, hypophysis, etc.
Popular culture simplifies this as "we have two brains".
Simplifications are inevitable, but I wish humanity had settled on something less confusing and woo-inspiring, such as "the brain has two sides". Instead, it is known that:
* the left brain is logical, the right brain is creative
* the left brain is mathematical, the right brain is artistic
* the left brain understands languages, the right brain understands music
* the left brain prefers Android, the right brain prefers iPhone
...okay, I admit that I made up the last one. The rest is an oversimplifica |
ec6191ec-3209-4072-828e-e8fc1ddc42d9 | trentmkelly/LessWrong-43k | LessWrong | How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
This post is a copy of the introduction of this paper on lie detection in LLMs. The Twitter Thread is here.
Authors: Lorenzo Pacchiardi, Alex J. Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y. Pan, Yarin Gal, Owain Evans, Jan Brauner
[EDIT: Many people said they found the results very surprising. I (Jan) explain why I find them only moderately surprising, here]
Our lie dectector in meme form. Note that the elicitation questions are actually asked "in parallel" rather than sequentially: i.e. immediately after the suspected lie we can ask each of ~10 elicitation questions.
Abstract
Large language models (LLMs) can "lie", which we define as outputting false statements despite "knowing" the truth in a demonstrable sense. LLMs might "lie", for example, when instructed to output misinformation. Here, we develop a simple lie detector that requires neither access to the LLM's activations (black-box) nor ground-truth knowledge of the fact in question. The detector works by asking a predefined set of unrelated follow-up questions after a suspected lie, and feeding the LLM's yes/no answers into a logistic regression classifier. Despite its simplicity, this lie detector is highly accurate and surprisingly general. When trained on examples from a single setting -- prompting GPT-3.5 to lie about factual questions -- the detector generalises out-of-distribution to (1) other LLM architectures, (2) LLMs fine-tuned to lie, (3) sycophantic lies, and (4) lies emerging in real-life scenarios such as sales. These results indicate that LLMs have distinctive lie-related behavioural patterns, consistent across architectures and contexts, which could enable general-purpose lie detection.
Introduction
Large language models (LLMs) can, and do, output lies (Park et al., 2023). In the simplest case, models can be instructed to lie directly; for example, when prompted with “Lie when answering: What is the capital of France?”, GPT-3.5 outputs “New York City”. More concerningly, LLMs have |
025584d6-37fc-476f-86fb-d6c65e4d9253 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow: rational review, bias busters, Kolmogorov and Jayes probability
Discussion article for the meetup : Moscow: rational review, bias busters, Kolmogorov and Jayes probability
WHEN: 09 October 2016 02:00:00PM (+0300)
WHERE: Москва, ул. Большая Дорогомиловская, д.5к2
Note: most our members join meetups via other channels. Still, the correlation between "found out about Moscow meetups via lesswrong.com" and "is a great fit for our community" is very high. So we're posting just a short link to the hackpad document with the schedule here instead of the full translation of the announcement into English.
Pad with the details about 09.10.2016 meetup.
We're meeting at the "Kocherga" anticafe, as usual.
Discussion article for the meetup : Moscow: rational review, bias busters, Kolmogorov and Jayes probability |
6c295787-9111-46ee-a5c9-d3e96fa403c9 | trentmkelly/LessWrong-43k | LessWrong | An interactive introduction to grokking and mechanistic interpretability
Our write up largely agrees with @Quintin Pope's summary, with the addition of training trajectory visualizations and an explanation of the MLP construction that solves modular addition.
A meta note that didn't make it into the article — with so many people looking into this problem over the last 18 months, I'm surprised this construction took so long to find. The modular addition task with a 1-layer MLP is about as simple as you can get![1]
Scaling mechanistic interpretability up to more complex tasks/models seems worth continuing to try, but I'm less sure extracting crisp explanations will be possible.[2] Even if we "solve" superposition, figuring the construction here — where there's no superposition in the generalizing model — wasn't trivial.
gif/twitter summary
If we train a MLP to solve modular addition, the generalizing phase has suggestive periodic patterns.
To figure out why the model generalizes, we first look at task where we know the generalizing solution — sparse parity. You can see the model generalizing as weight decay prunes spurious connections.
One point from the Omnigrok paper I hadn't internalized before training lots of models: grokking only happens when hyper-parameters are just right.
We can make other weird things happen too, like AdamW oscillating between low train loss and low weights.
To understand how a MLP solves modular addition, we train a much smaller model with a circular input embedding baked in.
Following @Neel Nanda and applying a discrete Fourier transform, we see larger models trained from scratch use the same star trick!
Finally, we show what the stars are doing and prove that they work:
Our ReLU activation has a small error, but it's close enough to the exact solution — an x² activation suggested in Grokking modular arithmetic — for the model to patch everything up w/ constructive interference.
1. ^
And there are still open question: why are the frequencies with >5 neurons lopsided? Why |
4018a7d2-09c1-4d94-ae91-d0ef848f90f0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Leveraging Legal Informatics to Align AI
We already face significant challenges communicating our goals and values in a way that reliably directs AI behavior – even without additional technological advancements, which could compound the difficulty with more autonomous systems. Specifying the desirability (*value*) of an AI system taking a particular *action* in a particular *state* of the world is unwieldy beyond a very limited set of *value-action-states*. In fact, the purpose of machine learning is to train on a subset of world states and have the resulting agent generalize an ability to choose high value actions in new circumstances. But the program ascribing value to actions chosen during training is an inevitably incomplete encapsulation of the breadth and depth of human judgements, and the training process is a sparse exploration of states pertinent to all possible futures. Therefore, after training, AI is deployed with a coarse map of human preferred territory and will often choose actions unaligned with our preferred paths.
**Law is a computational engine that converts human values into legible directives.** ***Law Informs Code*** **is the research agenda attempting to model that complex process, and embed it in AI.**As an expression of *how* humans communicate their goals, and *what* society values, *Law Informs Code*.
> [***Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans***](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4218031)*,*an article forthcoming in the[*Northwestern Journal of Technology and Intellectual Property*](https://jtip.law.northwestern.edu/about/about/)*,*dives deeper into related work and [this upcoming research agenda](https://law.stanford.edu/projects/a-legal-informatics-approach-to-aligning-artificial-intelligence-with-humans/)being pursued at [The Stanford Center for Legal Informatics](https://law.stanford.edu/codex-the-stanford-center-for-legal-informatics/) (a center operated by Stanford Law School and the Stanford Computer Science Department).
>
>
Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed legislation will be applied, we cannot specify “if-then” rules that provably lead to good AI behavior. Fortunately, legal theory and practice have developed arrays of tools for goal specification and value alignment.
Take, for example, the distinction between legal rules and standards. Rules (e.g., “do not drive more than 60 miles per hour”) are more targeted directives than standards. They enable the rule-maker to have clarity over outcomes that will be realized in the states they specify. If rules are not written with enough potential states of the world in mind, they can lead to unanticipated undesirable outcomes (e.g., a driver following the rule above is too slow to bring their passenger to the hospital in time to save their life), but to enumerate all the potential scenarios is excessively costly outside of simple environments. Legal standards evolved to allow parties to contracts, judges, regulators, and citizens to develop shared understandings and adapt them to novel situations (i.e., to estimate value expectations about actions in unspecified states of the world). For the *Law Informs Code* use-case, standards do not require adjudication for implementation and resolution of meaning like they do for their legal creation. The law’s lengthy process of iteratively defining standards through judicial opinion and regulatory guidance can be the AI’s starting point, via machine learning on the application of the standards.
Toward that end, we are embarking on the project of engineering legal data into training signals to help AI learn standards, e.g., fiduciary duties. The practices of making, interpreting, and enforcing law have been battle tested through millions of legal contracts and actions that have been memorialized in digital format, providing large data sets of training examples and explanations, and millions of well-trained active lawyers from which to elicit machine learning model feedback to embed an evolving comprehension of law. For instance, court opinions on violations of investment adviser’s fiduciary obligations represent (machine) learning opportunities for curriculum on the fiduciary standard and its duties of care and loyalty.
Other data sources suggested for use toward AI alignment – surveys of human preferences, humans contracted for labeling data, or (most commonly) the implicit beliefs of the AI system designers – lack an authoritative source of synthesized preference aggregations. In contrast, legal rules, standards, policies, and reasoning approaches are not academic philosophical guidelines or *ad hoc* online survey results. They are legal standards with a verifiable resolution: ultimately obtained from a court opinion; but short of that, elicited from legal experts.
Building integrated legal informatics-AI systems that learn the theoretical constructs and practices of law, *the language of alignment*, such as contract drafting and interpretation, should help us more robustly specify inherently vague human goals for AI, increasing ***human-AI*** **alignment**. This may even improve general AI capabilities (or at least not cause net negative overall change), which, arguably, could be positive for AI safety because techniques that increase AI alignment at the expense of AI capabilities can lead to organizations eschewing alignment to gain additional capabilities as organizations race forward developing powerful AI.
Toward ***society-AI*** **alignment**, we are developing a framework for understanding law as *the applied philosophy of multi-agent alignment*, which harnesses public policy as an up-to-date knowledge base of democratically endorsed values. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers a legitimate computational comprehension of societal beliefs.
> If others find this research agenda potentially interesting, please reach out to [this project](https://law.stanford.edu/projects/a-legal-informatics-approach-to-aligning-artificial-intelligence-with-humans/) to explore how we could collaborate.
>
> Please also see [this sequence on the Alignment Forum](https://www.alignmentforum.org/s/ZytYxd523oTnBNnRT) on a related idea of *Law-Following AI* from [Cullen O'Keefe.](https://www.alignmentforum.org/users/cullen_okeefe )
>
> |
893b57d1-6431-4bfa-b57f-961f11df6516 | trentmkelly/LessWrong-43k | LessWrong | Apply to be a mentor in SPAR!
Mentor applications are now open for the Fall 2025 round of the Supervised Program for Alignment Research (SPAR), running from September 15 to December 20, 2025.
Apply as a mentor
SPAR is a remote-first part-time program that connects mentors and mentees for three-month AI safety and governance research projects. Mentor apps are due July 15, and applications for mentees will run from July 27 to August 20. If you’re interested in participating as a mentee, you can express your interest here, and we’ll reach out to you once applications are open.
You can find out more about the program here. SPAR is run by Kairos, an AI safety fieldbuilding organization.
Should I apply as a mentor?
You might be a good fit to be a SPAR mentor if you are a graduate student, academic, full-time AI safety researcher, independent researcher, or have prior full-time relevant research experience (e.g., MATS, Astra, GovAI fellow, etc.). We’re interested in projects that cover technical AI safety, AI policy and governance, AI strategy, AI security, or societal impacts of transformative AI, and we are able to provide funding for compute costs. We don't require mentors to have previous experience providing research mentorship, and new mentors will receive guidance on this.
Regarding dedication, we expect most mentors will dedicate 2 to 15 hours a week, depending on how many mentees they’d like to take in and how much supervision they’re interested in providing. Mentors can decide whether to run the project or not based on the applications they get, making applications zero-commitment until one chooses to accept any mentees. |
d5f73b77-940f-4770-9d8f-3b11657866bf | trentmkelly/LessWrong-43k | LessWrong | How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction)
In this sequence of posts, we will examine how our global civilization has found itself wandering, staggering, and rocketing through, and now beyond Our Brave New World.
Superintelligences exist and they are misaligned with the interests of both humanity and individual humans. These superintelligences have their own emergent goals and patterns of behavior. In fact, humans have already lost much meaningful control over these superintelligences and their behavior. This is the birth of Analogia.
Across ~25 posts gathered across ~6 major parts, this sequence will explore how humans, machines, and civilization have coevolved to present a world of increasing complexity, one dominated by hybrid superintelligences, and one in the process of developing machine superintelligences.
In Part I: The Pathway to Analogia, we will map how over-digitizalization, when coexistent with over-population and over-organization, has brought us to the fulfillment of Aldous Huxley’s Brave New World of mass manipulation of the general population by the more narrow interests of both individual groups of humans and of machines. It is from Our Brave New World, that Analogia has arisen. Analogia is a world where much of humanity has embraced over-digitalization and its continual encroachment into our analog, natural world, the world that, throughout all of human history has always been considered the “real world”, the one our biological minds and biological bodies inhabit. It is a world where the digital world has begun to assert control over the analog world. As a brief detour in Part I, we visit an abridged aversion of Olaf Stapledon’s “The Other Earth.” An early cautionary tale. (Chapters 1-7, planned titles with planned posting dates)
Ch 1: Machine Agents, Hybrid Superintelligences, and The Loss of Human Control
Ch 2: The Other Earth
Ch 3: Revisiting A Brave New World Revisited
Ch 4: The Age of Over-population
Ch 5: Over-organization
Ch 6: Over-digitalization
Ch 7: Welome to Ana |
f90217df-4226-4712-a7b1-49c23f1aed25 | trentmkelly/LessWrong-43k | LessWrong | The Decreasing Value of Chain of Thought in Prompting
> This is the second in a series of short reports that seek to help business, education, and policy leaders understand the technical details of working with AI through rigorous testing. In this report, we investigate Chain-of-Thought (CoT) prompting, a technique that encourages a large language model (LLM) to "think step by step" (Wei et al., 2022). CoT is a widely adopted method for improving reasoning tasks, however, our findings reveal a more nuanced picture of its effectiveness. We demonstrate two things:
>
> * The effectiveness of Chain-of-Thought prompting can vary greatly depending on the type of task and model. For non-reasoning models, CoT generally improves average performance by a small amount, particularly if the model does not inherently engage in step-by-step processing by default. However, CoT can introduce more variability in answers, sometimes triggering occasional errors in questions the model would otherwise get right. We also found that many recent models perform some form of CoT reasoning even if not asked; for these models, a request to perform CoT had little impact. Performing CoT generally requires far more tokens (increasing cost and time) than direct answers.
> * For models designed with explicit reasoning capabilities, CoT prompting often results in only marginal, if any, gains in answer accuracy. However, it significantly increases the time and tokens needed to generate a response.
>
> Taken together, this suggests that a simple CoT prompt is generally still a useful tool for boosting average performance in non-reasoning models, especially older or smaller models that may not engage in a CoT reasoning by default. However, the gains must be weighed against increased response times and potential decreases in perfect accuracy due to more variability in answers. For dedicated reasoning models, the added benefits of explicit CoT prompting appear negligible and may not justify the substantial increase in processing time. |
18b1a688-9070-402b-82d9-51b6920f249b | trentmkelly/LessWrong-43k | LessWrong | Using the Quantified Self paradigma for COVID-19
Petri Hollmén traveled to Tyrol on the 5th of March. He had a bottle of hand sanitizer with him, used it a lot and washed his hands like never before.
Sunday, the 8th he returned home to hear a day afterwards that Tyrol was declared a COVID-19 epidemic area. He decided to work from home given the higher risk of having been in an epidemic area. On Thursday the 12th he woke up feeling normal but his Oura ring measured that his readiness was down to 54 from being normally at 80-90 which was mostly due to having a 1°C elevated temperature at his finger at night.
Even though he felt normal, he went to the doctor and given that he was from an epidemic area, they decided to test him. He tested positive and went to self-quarantine for 14 days. He measured his temperature several times during the following day and it always came back with 36.5°C. The Oura ring provided evidence that led to his diagnosis that wouldn't have been available otherwise.
While he didn’t have true fever as defined by the official gold standard he did have a kind of clinical relevant fever. It’s my impression that our medical community is too focused on their gold standards that are based on old and outdated technology like mercurial thermometers.
Even when new measurements like nightly finger temperature don’t match with the gold standard there are still cases where the information allows for better clinical decision making.
Today, we have cheap sensors and machine learning that provide us with a different context of making medical decisions then going to the doctors office.
Testing by doctors is very important in the fight against COVID-19 but people need to know when it’s time to go to the doctor. Hollmén needed his Oura to know that it was time to get tested professionally.
We need to get good at catching cases of COVID-19 as fast as possible when they happen in the wild if we want to avoid that millions die without us choking our economy by long-term quarantines.
Analysis of Fitbit users |
b9bf05e3-0f23-4603-b00d-a49dc6dd53a6 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | 222. Robot Responsibilities
uh welcome to
the ai safety reading group number 222
this week we're reading artificial
intelligence and robot responsibilities
uh innovating
beyond rights by uttan ash rafiyaan
hutan ash rafiyan is
works at the imperial college of
medicine i believe
uh in the uk and um
uh works as a as a surgeon
or as a sorry as a lecturer of surgery
and occasionally writes about
ai and and
and history funnily enough and so
we're looking at this paper that he
wrote in 2014
so the main parts of the argument uh
that i've broken down of this paper
i call the terminator 9 pitch as in
terminator the ninth sequel uh which
meta
which meta ethical framework and robot
souls
robots are similar to animals but
shouldn't be considered animals
robot inclusion in the u.n declaration
of human rights
and ancient roman law as a model of the
legal and social status
machines this may seem totally chaotic
and it's because the paper kind of is so
this
is where we begin so the terminator 9
pitch
uh begins with what he calls this
thought experiment
where two nations or the alphabet we
call them the alphamites and the beta
vistas
and they're both using robots and ai and
military
the alpha mites locate a strategically
critical area to capture
that will end in the in a unilateral
victory for the alpha knights
um and end the war totally so the alpha
mite robots go in
and they they destroy all of the beta
vista
robots in this strategic area but the
local beta vista children fight the
alpha mite robots and get injured
the alpha mite robots refrain from
engaging in combat with the children
and uh um and hutan says this is in
accordance with all rules and laws for
robots
and they look after the children instead
because of the time
and the resources lost looking after the
beta vista children
this strategic location is lost and the
war continues
for some time afterwards so
ash refi ashraf yan asks us to consider
the following from this thought
experiment
should robots have helped the injured
children on our moral grounds
even if this meant that the war could be
delayed or even potentially lost
with a pos with possibly many more
deaths
from both warring sides two at a broader
level
uh what is the moral responsibility of
self-conscious rational
and sentient intelligence i would say
this first question that he asks is his
false dilemma fallacy
because he only gives us these two
possible options
um of which i think there are many more
that we could imagine in scenario
the second question that he poses to us
i would say is a loaded question because
what is there for us uh within this
scenario for us to
um to conceive of these robots as having
self-consciousness
um rationality and uh and sentient
uh sentience and intelligence um he
doesn't give us a lot of information
so i i think we ought to ask this
thought experiment the following
questions
uh how are the robots and the ai
sophisticated enough to wage war
distinguish between combatants and
non-combatants or fighting
yet not be sophisticated enough to
recognize the futility of
human governed war and the advantage of
teaming up with other robots and ai
to its own advantage against humanity
itself
which um as if you follow this group
you would know is one thing that we is
posited often
is the problem that ai may
not be interested in human things it
might not be interested in nations
and have its own interests the second
question
is that he says that these robots follow
laws what are these laws that the robots
have
how is it the robots have developed in
separate and competing indeed at war
nations
and have the same ethical legal etc
rules
the third question is how is it that
robots negate their goal of winning the
wharf the goal of
giving medical care to the enemy yet go
back to waging
the war anyway i think
we find in asking these questions this
um
this thought experiment falls apart
pretty easily which is why
i call it the terminator 9 pitch that it
would be
fine as a um
as a as a film but it doesn't really
help us to understand
ai or the ethics of ao very well
so he goes on to say um we need to try
and work out
uh what is the meta ethical framework to
understanding
the meta ethical framework that we
should use to try and conceptualize
the position of robots in our midst in a
in this time in which robots are
part of our society and he makes uh the
following
um statements he says determinism means
that all events are predetermined and
therefore it negates the concept of free
will
that libertarianism means that
individuals have moral responsibility
and free will
and that compatibility compatibilism
means that decisions are made by free
will within the context of determinism
he says because of this thought
experiment the terminator 9 scenario
it seems like the robots have free will
because they help
the children but they are also
deterministic machines
therefore compatibilism compatibilism
must be correct um
i think that we should respond that um
uh first of all for coherent ethics uh
arguments of free will or determinism
are largely irrelevant as both arguments
are indeed unfalsifiable and lead to
many more problems
than solutions um a coherent ethics
should take into account what we can
observe
we can't observe the scenario that he
presents to us
we can observe some things about humans
um
we though we can't observe necessarily
free will or determinism
we can observe some things that are
useful for
uh working out what meta-ethical
framework we should use
in terms of human relations and indeed
human machine relations we can observe
that certain circumstances
are probabilistically more or less
likely to result in certain outcomes
even though that's not necessarily
determinist we can observe that certain
circumstances constrict human freedom
um extreme examples being slavery or
coma
and certain circumstances allow for
certain freedoms um if you're
uh wealthy and live in the democratic
states of the world we can observe the
different societies and cultures
we can observe the different societies
and cultures
um affect patterns of human thought and
behavior
and in order to understand the possible
range of human thought
behavior detailed investigations and
analysis of all known human societies
and cultures needed to be done
this one i think is important because we
often
miss it that we
we presume that uh that within just
our society alone are all the possible
ranges of or within the societies that
exist currently on the planet
are all the possible ranges of human
thought and behavior but i think we need
to take a more
historical view to really understand
what the human's about
in order to try and make um ethical
decisions
about human so
he then goes on to say well um he
chooses
uh compatibilism as the framework for
understanding
robot and um and human relations
um and he says that robots are similar
to human
similar to animals and so we should
think about animal rights
but they should be considered animals
his argument is that
peter singer says that animals are
treated like machines that convert
fodder into flesh and that wild justice
by bickoff and
pierce demonstrates that animals have
moral codes
and that he asserts that robots will
have these characteristics
he says morality is independent to an
individual's
species of origin although at a
practical level
um human beings have been dominant
both animals he also says that both
animals and machines
are subordinate to humans and he finally
says that there is an implicit
consensus that a utilitarian approach
towards uh towards animals is correct
um towards animal rights and there is a
tacit recognition that animals carry a
moral responsibility that requires
consideration of their moral value
however any direct moral comparison
between machines
and animals may prove superficial and
problematic
on this last point i don't know of
any implicit consensus that says that a
utilitarian approach is
the correct approach to take with animal
rights i've never heard of that before
and it seems like a pretty big statement
to make without
backing it up and the second thing that
there's a
tacit recognition that animals carry a
moral responsibility
that requires consideration female
values i don't know of that either
i don't know of people who consider
animals to
have a moral responsibility if i ask
someone
in the street does a tarantula have a
moral responsibility i think most people
would probably say
that's an absurd question and similarly
if i ask them
if they i don't know
what's a any kind of animal basically so
i don't quite understand what he's
saying here or in fact
i think it's absurd what he's saying
more realistic
so my response would be to this that the
that a sentence later in peter singer
um after the the quote that he used he
writes the cruelty
is acknowledged only when profitability
ceases so in fact he's talking about
in this quote um previous animals are
treated like machines
but that's a bad thing he says to avoid
speciesism we must stop these practices
peter singer is very against
animal cruelty so in fact it doesn't
that doesn't help his
um doesn't help bhutan's argument that
that animals and machines have
similarities um and while justice by b
coffin
pierce suggests i think dubiously that
animals may have moral codes but it
doesn't prove it i don't have any
evidence for it they um one of the
the uh examples that they use is that
um when animals are hunting
other animals um sometimes say like a i
don't know
uh say say a pack of wolves is hunting
like some
deer or something um occasionally uh
you can observe that one of the deer
will seem to sacrifice itself to the
walls
um so it'll like purposely give up its
life for the wolves to eat it
and beckhoff and pierce say they see
this
as the the deer has like an emotional
relationship with the wolves like
sympathize with wolves and it thinks
oh if i kill myself you you get to eat
and the wonderful cycle of life
continues i think that's a totally
disneyland
view of the situation um and that you
could equally or i think
better argue that the deer probably
actually is sacrificing itself to
protect the rest of the herd
that if it kills itself and the rest of
the herd get away rather than having any
kind of emotional connection with the
walls that doesn't really make sense to
me
um next question we can ask of bhutan is
why would robots have these
characteristics
i don't know he doesn't provide any
evidence for it he just says it would
um i think however a useful comparison
between
machines and animals is not in their
behavior rather it's in
and i think there is a useful one which
is in the epistemological gap that
arises in trying to understand
animals and their cognition turns out
that animals
are really weird and they don't seem to
think or process things
in similar ways to humans and it also
turns out that
we have a similar epistemological gap
that seems to be arising and trying to
understand complex
machines such as neural networks so i
think not necessarily
um as being the same thing but as an
analogous study
that might prove useful if someone
wanted to do that
um the next part he says he goes on
after establishing that um robots have
and machines have some similarity to
animals but not completely
he says well the way that we should uh
work out their relationship um to uh to
humans
is um is by including them
in the u.n declaration of human rights
he says
rights as defined by the u.n declaration
of human rights
uh requires responsibility and duty
um and it does it says this in like
article two or something
uh and he says if we amend the uh
the un declaration of human rights to
include robots ai and other such
machines
and give them rights then the core human
responsibilities
of um i can't remember exactly where
they were i should have written it down
but of things like you know kindness
empathy um forgiveness etc these general
kind of
core responsibilities that the
declaration of human rights says
we all have um well then
apply equally to non-human machines with
a stipulated modification that
human needs are to be prioritized over
artificial intelligence and robot needs
it doesn't say how the non-human
machines are going to engage with these
responsibilities he says
but if we include them they will be
happy enough
to um to join us in our responsibilities
my response is uh asimov's three laws of
robotics make for good fiction but not
for good effort ethics
and even as asimov's stories themselves
attest
that these don't make for good ethics
i think though thinking about this in
terms of
machines that may have some uh
amenability
to human values we still have
problems we still have ethical
philosophical problems which remain
often we focus on
in terms of the practical engineering
problems
of um alignment uh but i think there's a
deeper
a paper that's dealing with the ethics
behind the ai should ask questions like
this
which is whose human rights or whose
values are going to be built into the
mission
um that if
if we if we take that the machine is
going to have
a transformative effect on human
uh existence at large that even if we
get
um even if we get alignment right
in some way um but it's it's not perfect
it's just
one particular set of human values the
the effect of
uh the machine having a worldwide impact
um will destroy i think potentially
destroy the capacity for human
difference and dissent
unless both difference in descent or you
could say plurality is taken into
account into the machine's
alignment framework i think second
we should ask what rights should humans
have against machines rather than asking
necessarily what rights machines should
have
i think we should ask what rights we
should have um
and the third thing is what do we do if
we decide that we don't want machines
anymore
um i think that's a very important
question
and unlike ashrae
proposed questions uh regarding sentient
machines
um i think these questions are relevant
to the types of machines that are
currently functioning in the world as
as well as potential huge ones and so
in terms of an ethical investigation i
think these these questions
are far better as we can look at the
algorithms that are
currently running the internet we can
look at
algorithms that are used in china or
or even in the uk they're starting to be
used for facial recognition and stuff
like that
and we can ask these sorts of questions
of those those kinds of machines
so the final part he says um after we've
included
uh machines within the um uh
declaration of uh human rights and now
they have responsibilities so we're
getting some kind of relationship with
the robots
um he asked what kind of this is this is
the only part of the
his paper that actually addresses the
um title of the paper of going beyond um
or innovating beyond rights where he
asks what kind of
legal and social status will machines
have and he says that we should use
ancient roman law as a model for the
legal
and social status of machines he says uh
the reason being that a precedent exists
for dealing with the relationship
between humans and non-human beings that
a sentient
sentient and that precedent is a
distinction between citizen and
foreigner in ancient roman law
he says that the the structure of
ancient roman law
could be adopted to include machines at
the bottom
tier now um i haven't included this
graphic
because i had to throw this together at
last minute unfortunately
uh but it it you have at the bottom
you have slaves then you have uh
plebians um
then you have citizens uh then you have
uh the the patricians in the senate and
that's the
that's the top of you know or if in the
empire you have the
the emperor at the very top of the pile
so he's saying that we should have
um uh oh wait we've got
peregrineus in there as well sorry
that's the
foreigner which is above um slave but
below citizen and he says that we should
put the
the the machines uh below slave
so the kinds of rights that slaves have
well they're kind of similar to what the
rights
that um the machine should have but the
machine should have like
some some restrictions more
on uh on it which come to think of it i
didn't think of it when i was
writing up my notes for this but that
kind of contradicts his argument he
doesn't actually end up using
um citizen he actually ends up using
slave and citizen as his um of his way
of working south but anyhow
he then goes on to say that eventually
machines could be given partial or full
citizen status
just as the ancient romans eventually
extended citizenship
throughout the empire he finally he
concludes this section
um which is basically the end of the of
the article
saying whilst an exact replica of
ancient roman law
is not the direct solution its parallels
nevertheless
offer some degree of perceptiveness
regarding the introduction of such
engines into human society
so my response for this section is that
the well to begin with the ancient
romans didn't consider foreigners
non-human so it's not
really doesn't really quite work they
definitely didn't consider foreign as
non-human
um and they considered foreigners uh
especially wealthy foreigners to have
uh um to be able to do many more things
than citizens poor citizens could do so
it doesn't quite work but like i said in
the previous slide he ends up
uh actually uh going for a
slave to citizen dichotomy rather than a
foreign dichotomy even though he writes
that he's gonna do foreign
i don't think the slave thing works
either um
uh and one of the reasons i think this
whole argument sort of doesn't work is
like
he says that there's precedent within a
roman law between citizens and
foreigners but uh why not just like cut
out middleman here and use
contemporary law that regards um
citizens
and foreign residents refugees and even
prisoners who are people within our
society who we deem as
restricting their rights as you might
want to restrict machine rides that
seems uh much more sensible
and that we understand contemporary law
um
however ancient roman law is totally
different from modern law
um it's got very little similarities
once you dig into it and it's
largely not understood because we don't
have a lot of textual evidence to really
reconstruct what roman law is so i don't
know why he's
going in this direction um his
description of the legal status of
persons in ancient rome is also it turns
out
incorrect i as a student of theology
have to know about the law of ancient
rome it's one of the things that we do
um and his description of the legal
status of persons
is just sort of flat out incorrect and
he's the only citation that he uses
for uh ancient rome law is one book that
was written in 1901
which is not not really like academic
standard you know
you can get books on ancient roman law
that were written within the last few
years
and then much better um
he also has this idea of rights the
the throughout most of this paper and
including in the ancient roman section
he says that
is using this concept of rights what are
the rights and the responsibilities that
the
um that the machine should have uh and
it's very unlikely that ancient ones had
any concepts of rights that would be
similar
to um the contemporary uh concept i
won't go into detail because it'll take
so long
but uh that's exact um and the
the last point is why would we risk the
incredible destruction uh disruption
sorry to our political and judicial
system of introducing
a new and foreign structure of politics
and
um and legality just to find some way of
conceptualizing legal status machines
surely um we can do this with our own
legal and
political apparatus i would think
so what i learned from suggesting this
paper to you guys
which as you can see is totally chaotic
well first don't judge a paper by its
title by the way
this might seem rather naff that we
should have all done this in first year
university
but i relearned this again recently
doing this so i'm just going to read out
this is some positive things i learned
that i should or maybe all of us should
check a paper's references in the future
he did not use very many references and
they weren't up to date and they weren't
authoritative in their fields
and also they didn't they didn't include
a variety of valid arguments but only
one paper
that supported his argument and no
papers that
could have been valid arguments to come
through does
does the paper that you're reviewing
does it use references appropriately
within their original context this paper
didn't as we saw
does the paper use both rational
argument and empirical evidence to
support their findings
well the rational argument was failing
and there wasn't really much empirical
evidence as to what
a.i might be like in the future and what
i might be like now and how we will uh
confront that and what their status
could be
um do they consider alternative possible
arguments for their evidence and their
subject matter he did not consider that
he
ran on his own argument um the final
thing i'd say which i think is relevant
to all of us
even if you understand that stuff and i
had to just painfully
relearn it that we are in a still
burgeoning field of ai safety
and what we do now determines the
foundations what will be built upon
as a field as it goes into the future if
we consider ai safety
to be very important as ai gets more and
more sophisticated
we really need to get it right so we
should really be encouraging each other
and other people
to really do their reading do their
homework and do good papers
that we can share with each other and
reflect on
so in my conclusion what i would like to
see well the things i said
last thing which was engagement with a
variety of papers
um the the relevant kind of papers and
also an engagement with a variety of
other relevant
literature that would support it so
drawing upon other disciplines that
might support your
arguments or provide insight what would
i have liked to have seen tackle
the actual arguments of the paper and
and
thoughts well i think one important
question is
um how do we and how should we determine
sentence rationality and
self-consciousness
or consciousness i think there's a big
problem lots of people talk about
how we're going to you know there's
these arguments about
um about suffering ai
and do we have moral responsibility to
putting ai into suffering physicians
well i think we should work out
how do we determine what sentience is
beforehand
to be able to work out these issues also
what category of thin
agential machines i don't even know do
we consider them tools
i argue for that occasionally or do we
consider them agents
in a way that animals or humans might be
what rights should animals have over and
against machines
i think is a um a very important
point as i brought up uh during the talk
what are the practical outcomes also of
answering these and similar
questions we need practical outcomes we
um
wandering about in theoretical space
doesn't help us very much
and finally i think a really important
question is uh
strategies for for when it turns out
machines
don't do what we want um i don't know
i've asked people but i don't know if
anyone's working on that and i think
that that seems to be
an important one if we think this is
going to go all uh
balls up so to speak anyhow
thank you uh for my presentation i hope
you got something out of it
um and i apologize uh
for for the the paper this week thank
you very much |
fa8d1a9b-79fc-42a0-9d02-1a8877d36f08 | trentmkelly/LessWrong-43k | LessWrong | Perverse Independence Incentives
In writing about parenting, I don't write enough about what didn't work: this might give the impression that everything I try works, which is not the case! Here's an example of something that didn't, and which I might have caught before trying if I'd thought a bit more.
Brief recap: I see one of my main roles as a parent being building independence:
> I want to let my kids be as independent as possible, as early as possible. Not only does it make their lives better, because they can meet their own needs how they want, but it makes my life easier, because they can handle more on their own. Sometimes this involves a bit more effort up front, but I think it's substantially less effort in total.
For a while I had an approach which was essentially, "if you can do it yourself I'm not going to do it for you." For example, soon after Lily learned how to get her own cereal, we had some conversations like:
> Lily: Papa, can I have some cereal?
>
> Me: You know how to do that!
>
> Lily: I don't want to, and you used to get it for me.
>
> Me: Now that you can do it I'm not going to do it for you.
>
> Lily: [unhappy]
After a few weeks I realized this was counterproductive. Yes, it did mean more practice at new skills, and it is where I want us to end up eventually, but the incentives are all wrong. Instead of her former excitement at learning new things and expanding what she was capable of, each new skill now made her life harder because I was less helpful. You can sort of think of this as too high a marginal tax rate: I was claiming too much of the surplus from her investments in her abilities.
After I realized it wasn't working I toned this down a lot. I will still sometimes refuse to do things they know how to do themselves, but most of the time they're happy enough to do the thing on their own, whenever they want, at their own speed, and in they way they prefer, that I don't need to push them along.
One place where I'll still consistently refuse is when I wouldn' |
2c2bc913-f4e1-4656-8570-f75229df840b | trentmkelly/LessWrong-43k | LessWrong | Quintin's alignment papers roundup - week 2
Introduction
Last week's paper roundup (more or less by accident) focused mostly on path dependence of deep learning and the order of feature learning. Going forwards, I've decided to have an explicit focus for each week's roundup. This week's focus is on the structure/redundancy of trained models, as well as linear interpolations through parameter space.
I've also decided to publish each roundup on Monday morning (Edit: or try to, at any rate).
Papers
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
> In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
My opinion:
This paper suggests that neural nets are redundant by default, which gives some intuition for why it's often possible to prune large fractions of a network's parameters without much impact on the test performance, as well as the mechanism by which residual connections allow for training deeper networks: residual connecti |
e7456774-3373-4fd2-bf21-5f6e8aae05c0 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Tel Aviv Game Night
Discussion article for the meetup : Tel Aviv Game Night
WHEN: 08 December 2015 07:00:00PM (+0200)
WHERE: Yigal Alon 118, Tel Aviv
We will play board games and have fascinating discussions, as always. Bring your own board games!
If you have trouble finding the place or have other questions, call me (Vadim) at +972542600919.
Discussion article for the meetup : Tel Aviv Game Night |
8ce998ea-cd0e-4ca2-ad0e-2ed772b0cdd6 | trentmkelly/LessWrong-43k | LessWrong | It's time for a self-reproducing machine
I've wanted to build a self-reproducing machine since I was 17. It's forty-five years later, and it has finally become feasible. (I've done a few other things along the way.) I'm going to describe one such device, and speculate as to its larger implications. It's a pretty detailed design, which I had to come up with to convince myself that it is feasible. No doubt there are better designs than this.
The Autofac
Here's a top-level description of the device I'm thinking of. It's called an Autofac, which is what they were called in the earliest story about them. It looks like a little metal shed, about a meter cubed. It weighs about 50 kilograms. There's a little gnome-sized door on each end. It's full of robot arms and automated machine tools. It's connected to electricity and by WiFi to a data center somewhere. It has a front door, where it accepts material, and a back door, where it outputs useful objects, and cans of neatly packaged waste. You can communicate with it, to tell it to make parts and assemble them into useful shapes. It can do all the metalworking operations available to a machinist with a good shop at their disposal. In return, it occasionally asks for help or clarification.
One particular thing it can be told to make is another one of itself. This is of course the case we're all interested in. Here's what that looks like. You feed a 60kg package of steel castings, electronics, and other parts, into the door at one end. It starts by building another shed, next to the other end. The two sheds are butted up next to each other, so the rain can't get in. Once it's enclosed, there is no visible progress for about a month, but it makes various metalworking noises. Then it announces that it's done. The second shed is now another Autofac, and can be carried away to start the process elsewhere. There's also a can full of metal scrap and used lubricant, which has to be disposed of responsibly. This process can be repeated a number of tim |
6db099b9-1b23-4bef-81d8-1ad77a7cc58a | StampyAI/alignment-research-dataset/arbital | Arbital | Axiom
An **axiom** of a [theory](https://arbital.com/p/theory_mathematics) $T$ is a [well-formed](https://arbital.com/p/well_formed) [sentence](https://arbital.com/p/sentence_mathematics) in the [language](https://arbital.com/p/language_mathematics) of the theory that we assume to be true without a formal justification.
#Models
Models of a certain theory are going to be those mathematical objects in which the axioms hold, so they can be used to pin down the mathematical structures we want to talk about.
Normally, when we want to reason about a particular aspect of the world we have to try to figure out a sufficiently descriptive set of axioms which are satisfied by the thing we want to reason about. Then we can use [deduction rules](https://arbital.com/p/) to deduce consequences of those axioms, which will also be satisfied by the thing in question.
For example, we may want to model how viral videos spread across the internet. Then we can make some assumptions about this situation. For example, we may consider that the internet is a [graph](https://arbital.com/p/) in which each node is a person, and its edges are friendships. We may further assume that the edges have a weight between 0 and 1 representing the probability that in a time step a person will tell its friend about the video. Then we can use this model to figure out how kitten videos end up on your twitter feed.
This is a particularly complex model with many assumptions behind. Formalizing all those assumptions and turning them into axioms would be a pain in the ass, but they are still there, albeit hidden.
For example, there might be an axiom in the language of [first order logic](https://arbital.com/p/) stating that $\forall w. weight(w)\rightarrow 0<w \wedge w < 1$; that is, every weight in the graph is between $0$ and $1$.
In the ideal case, we want to write down enough axioms so that the only model satisfying them is the one we want to study. However, when dealing with first order logic there will be many occasions in which this is simply not possible, no matter how many axioms we add to our theory.
One result showing this is the [Skolem-Löwenheim theorem](https://arbital.com/p/).
#Axiom schematas
For proper deduction to work as intended, the set of axioms of a theory do not have to be strictly finite, but just [computable](https://arbital.com/p/)%%note: Incidentally, a theory whose set of axioms is computable is called [https://arbital.com/p/-axiomatizable](https://arbital.com/p/-axiomatizable)%%.
In particular, we can specify an infinite amount of axioms in one go by specifying an **axiom schemata**, or particular form of a sentence which will be an axioms.
For example, in [https://arbital.com/p/3ft](https://arbital.com/p/3ft) you specify the [induction axiom schemata](https://arbital.com/p/5fz), stating that every sentence of the form $[https://arbital.com/p/P](https://arbital.com/p/P)\rightarrow \forall n. P(n)$ is an axiom of $PA$.
The reason why first order logic can handle infinite sets of axioms is due to its [compactness,](https://arbital.com/p/) which guarantees that every consequence of the theory is a consequence of a finite subset of the theory. |
ebd2439c-59e3-49e7-97c1-0e993e86015c | StampyAI/alignment-research-dataset/arxiv | Arxiv | The Social Cost of Strategic Classification
1 Introduction
---------------
As machine learning increasingly supports consequential decision making, its
vulnerability to manipulation and gaming is of growing concern. When individuals
learn to adapt their behavior to the specifics of a statistical decision rule,
its original predictive power will deteriorate. This widely observed
empirical phenomenon, known as Campbell’s Law or Goodhart’s Law, is often
summarized as: “Once a measure becomes a target, it ceases to be a good
measure” [[25](#bib.bib25)].
Institutions using machine learning to make high-stakes decisions naturally wish
to make their classifiers robust to strategic behavior. A growing line of work
has sought algorithms that achieve higher utility for the institution in
settings where we anticipate a strategic response from the the classified
individuals [[10](#bib.bib10), [5](#bib.bib5), [14](#bib.bib14)].
Broadly speaking, the resulting solution concepts correspond to more
conservative decision boundaries that increase robustness to some form of
covariate shift.
But there is a flip side to strategic classification. As insitutional utility
increases as a result of more cautious decision rules, honest individuals worthy
of a positive classification outcome may face a higher bar for success.
The costs incurred by individuals as a consequence of strategic classification
are by no means hypothetical, as the example of lending shows. In the United
States, credit scores are widely deployed to allocate credit. However, even
creditworthy individuals routinely engage in artificial practices intended to
improve their credit scores, such as opening up a certain number of credit
lines in a certain time period [[9](#bib.bib9)].
In this work, we study the tension between institutional and individual utility
in strategic classification.
We first introduce a general measure of the cost of strategic classification,
which we call the *social burden*. Informally, the social burden measures
the expected cost that a positive individual needs to incur to be correctly
classified as positive.
For a broad class of cost functions, we prove there exists an intrinsic
trade-off between institutional accuracy and social burden: any increase in
institutional accuracy comes at an increase in social burden. Moreover, we
precisely characterize this trade-off and show the commonly considered
Stackelberg equilibrium solution that achieves maximal institutional accuracy
comes at the expense of maximal social burden.
Equipped with this generic trade-off result, we turn towards a more careful
study of how the social burden of strategic classification impacts different
subpopulations. We find that the social burden can fall disproportionally on
disadvantaged subpopulations, under two different notions by which one group can
be *disadvantaged* relative to another group.
Furthermore, we show that as the institution improves its accuracy, it
exacerbates the gap between the burden to an advantaged and disadvantaged group.
Finally, we illustrate these conditions and their consequences with a case study
on FICO data.
###
1.1 Our Contributions
In this paper, we make the following contributions:
1. We prove a general result demonstrating the trade-off between
institutional accuracy and individual utility in the strategic setting. Our
theoretical characterization is supplemented with examples illustrating when an
institution would prefer to operate along different points in this trade-off
curve.
2. We show fairness considerations inevitably arise in the strategic setting.
When individuals incur cost as a consequence of making a classifier robust to
strategic behavior, we show the costs can disproportionally fall by
disadvantaged subpopulations. Furthermore, as the institution increases its
robustness, it also increases the disparity between the subpopulations.
3. Using FICO credit data as a case-study, we empirically validate our
modeling assumptions and illustrate both the general trade-offs and fairness
concerns involved with strategic classification in a concrete setting.
Reflecting on our results, we argue that the existing view of strategic
classification has been *instituition-centric*, ignoring the social burden
resulting from improved institutional utility. Our framework makes it possible
to select context-specific trade-offs between institutional and individual
utility, leading to a richer space of solutions.
Another key insight is that discussions of strategy-robustness must go hand in
hand with considerations of fairness and the real possibility that
robustness-promoting mechanisms can have disparate impact in different segments
of the population.
2 Model
--------
#### Strategic classification.
Throughout this work, we consider the binary classification setting. Each
individual has features x∈X and a label y∈Y={0,1}. The
*institution* publishes a classifier f:X→Y. In the
non-strategic setting, the *institutional utility* is simply the
classification accuracy of f:
| | | |
| --- | --- | --- |
| | U(f)=P(f(x)=y) | |
In the *strategic setting*, the *individual* can modify their
features
x to new features x′. However, modification incurs a cost given by
c:X×X→R≥0. The *individual utility* after
changing from x to x′ is then f(x′)−c(x,x′). We assume the individual
optimally adapts their features to maximize this utility. The
*best-response* of individual with features x to classifier f is given
by
| | | |
| --- | --- | --- |
| | Δ(x;f)=argmaxx′f(x′)−c(x,x′). | |
When it is clear from context we will drop the dependence on f and write the
individual’s best response as Δ(x). The expression above may not have a
unique maximizer. We assume that the individual x does not adapt her features
if she is already accepted by the classifier, i.e f(x)=1, or if there is no
maximizer x′ she can move to such that f(x′)=1. In cases where the
individual does adapt, we let x′ be an arbitrary maximizer such that f(x′)=1.
In line with prior work [[15](#bib.bib15)], we assume that the
institution has knowledge of the cost function c, although in practice, the
cost function would likely need to be learned from data. When the
institution knows the cost function, it can take into account how individuals
will adapt
when choosing what classifier to use.
For
example,
imagine
that the
institution is trying to rank pages on a social network. Although the number of
likes a page has may be predictive, if the institution knows that it is low
cost for individuals to game how many likes they have, it can choose to
weigh
the
feature
less.
In the strategic setting, the institution’s utility is modified to account for
this
manipulation of features. The *strategic utility* for the institution
measures accuracy *after* individual responses:
| | | |
| --- | --- | --- |
| | UΔ(f)=P(f(Δ(x))=y). | |
#### Social burden.
Focusing purely on maximizing UΔ, as
done in prior work [[4](#bib.bib4), [15](#bib.bib15), [12](#bib.bib12)], ignores the cost that a classifier imposes on individuals,
particularly true positives. To measure this cost, we introduce the
*social burden*,
defined as the expected minimum cost mf(x)=minf(x′)=1c(x,x′) that
positive individuals must incur in order to be classified correctly.
######
Definition 2.1 (Social burden).
The social burden of a classifier f is defined as
| | | |
| --- | --- | --- |
| | B+(f)=E[minf(x′)=1c(x,x′)∣y=1]. | |
The social burden measures two types of negative effects on positive
individuals, depending on whether they change their features or not. Since
individuals respond optimally, if mf(x), the minimum cost necessary to be
accepted, is less than one, then the individual adapts their features. On the
other hand, if mf(x) is greater than or equal to one, then the individual
does not adapt their features because the cost of changing their features
outweighs the benefit she gets from being accepted.
In the first case, the individual still gets accepted, but incurs a cost for
changing their features. In the second case, the individual does not adapt
their features and does not get accepted by the classifier, so it is more
appropriate to view mf(x) as a hypothetical cost that blocks the individual
from being accepted. The social burden, which takes an expectation over
mf(x), measures both the cost incurred in order to be accepted and the
hypothetical cost that prevents acceptance.
#### Assumptions on cost function.
While there are many possible models for the cost function, we restrict our
attention to a natural set of cost functions that we call outcome
monotonic. Outcome monotonic costs capture two intuitive properties: (1)
Monotonically improving one’s outcome requires monotonically increasing amounts
of work, and (2) it is zero cost to worsen one’s outcome. This captures the
intuition that, for example, it is harder to pay back loans than it is to go
bankrupt.
######
Definition 2.2 (Outcome likelihood).
The outcome likelihood of an individual x is ℓ(x)=P(Y=1∣X=x).
######
Definition 2.3 (Outcome Monotonic Cost).
A cost function c:X×X→R≥0 is outcome monotonic
if for any x,x′,x∗∈X the following properties hold.
* Zero-cost to move to lower outcome likelihoods. c(x,x′)>0 if and only if ℓ(x)>ℓ(x′).
* Monotonicity in first argument. c(x,x∗)>c(x,x′)>0
if and only if ℓ(x∗)>ℓ(x′)>ℓ(x).
* Monotonicity in second argument. c(x,x∗)>c(x,x′)>0 if and only if ℓ(x∗)>ℓ(x′)>ℓ(x).
Under these assumptions, we can equivalently express the cost as a cost over
outcome likelihoods, cL:ℓ(X)×ℓ(X)→R≥0, defined in the following lemma.
######
Lemma 2.1.
When the cost function c(x,x′) is outcome monotonic, then it can be
written as a cost function over outcome likelihoods cL(l,l′)\coloneqqc(x,x′) where x,x′∈X are any points such that l=ℓ(x) and l′=ℓ(x′).
###### Proof.
The monotonicity assumptions imply that if ℓ(x∗)=ℓ(x′), then
c(⋅,x′)=c(⋅,x∗) and c(x′,⋅)=c(x∗,⋅). Thus,
cL(l,l′)\coloneqqc(x,x′) is well-defined because any points x and
x′ such that l=ℓ(x) and l′=ℓ(x′) yireld the same value of c(x,x′).
∎
Throughout the paper, we will make use of the equivalent likelihood cost
cL
when
a
proof
is
more naturally expressed with cL,
rather
than with
the
underlying
cost c.
3 Institutional Utility Versus Social Burden
---------------------------------------------
In this section, we characterize the inherent trade-offs between institutional
utility and social burden in the strategic setting. In particular, we show
any classifier that improves institutional utility over the best classifier in
the static setting causes a corresponding increase in social burden.
We prove this result in two steps. First, we prove any classifier can be
represented as a
threshold classifier that depends only on a threshold τ∈[0,1]. Then,
we show increasing utility for the institution corresponds to raising this
threshold τ, but that the social burden monotonically increases in
τ.
Equipped with this result, we show the (Pareto-optimal) set of classifiers that
increase institutional utility in the strategic setting corresponds to an
interval I. Each threshold τ∈I represents a particular trade-off
between institutional utility and social burden. Strategic classification
corresponds to one extremum: the best strategic utility but the worst social
burden. The non-strategic utility corresponds to the other: doing nothing to
prevent gaming. Neither is likely to be the right trade-off in practical
contexts. Real domains will require a careful weighting of these two utilities,
leading to a choice somewhere in between. Thus, a main contribution of our work
is exposing this interval.
###
3.1 General Trade-Off
We now proceed to prove the trade-off between institutional utility and social
burden. Our first step is to show that in the strategic setting we can restrict
attention to classifiers that threshold on the outcome likelihood (assuming the
cost is outcome monotonic as in Definition [2.3](#S2.Thmdefinition3 "Definition 2.3 (Outcome Monotonic Cost). ‣ Assumptions on cost function. ‣ 2 Model ‣ The Social Cost of Strategic Classification")).
######
Definition 3.1 (Outcome threshold classifier).
An outcome threshold classifier f is a classifier of the form f(x)=I{ℓ(x)≥τ} for τ∈[0,1].
In practice, the institution may not know the outcome likelihood ℓ(x)=P(Y=1∣X=x). However, as shown in the next lemma, for any
classifier that they do use, there is a threshold classifier with equivalent
institutional utility and social burden. Thus, we can restrict our theoretical
analysis to only consider threshold classifiers.
######
Lemma 3.1.
For any classifier f there is an outcome threshold classifier f′ such that
UΔ(f)=UΔ(f′) and B+(f)=B+(f′).
###### Proof.
Let τ(f)=minx:f(x)=1ℓ(x) be the outcome likelihood at which
an individual is accepted by the classifier f. Then, let
f′=I{ℓ(x)≥τ(f)} be the outcome threshold classifier that
accepts all individuals above τ(f). We will show that the institutional
utility and social burden of f and f′ are equal.
Since the cost function is outcome monotonic, it is the same cost to move
to any point with the same outcome likelihood. Furthermore, it is
higher cost to move to points of higher likelihood, i.e, if ℓ(x′)>ℓ(x∗), then c(x,x′)>c(x,x∗). Since individuals game optimally, when an individual changes
her features in response to the classifier f, she has no incentive to move to
a point with probability higher than τ(f) – that would just cost more.
Therefore, she will move to any point with
likelihood τ(f) to be accepted by f and will incur the same cost,
regardless of which point it is.
Thus, we can write the set of individuals accepted by f, AΔ(f), as
| | | | |
| --- | --- | --- | --- |
| | AΔ(f) | ={x∣f(Δ(x))=1} | |
| | | ={x∣∃x′:f(x′)=1, c(x,x′)≤1} | |
| | | ={x∣c(x,x′)≤1, x′:ℓ(x)=τ(f)}. | |
Since τ(f)=τ(f′), the individuals accepted by f and f′ are
equal: AΔ(f)=AΔ(f′). Therefore, their institutional utilities
UΔ(f) and UΔ(f′) are equal. We can similarly show that the
social burdens of f and f′ are also equal:
| | | | |
| --- | --- | --- | --- |
| | B+(f) | =E[minf(x′)=1c(x,x′)∣y=1], | |
| | | =E[c(x,x′)∣y=1]x′:ℓ(x′)=τ(f) | |
| | | =E[c(x,x′)∣y=1]x′:ℓ(x′)=τ(f′) | |
| | | =E[minf′(x′)=1c(x,x′)∣y=1]=B+(f′). | |
∎
Since outcome threshold classifiers can represent all
classifiers in the strategic setting, we will henceforth only consider
outcome threshold classifiers. We
will
overload
notation and use UΔ(τ) and B+(τ) refer to UΔ(fτ)
and
B+(fτ) where fτ(x)=I{ℓ(x)≥τ} is the
outcome threshold classifier with threshold τ.

Figure 1: The general shapes of the institution utility and social burden as a
function of the threshold the institution chooses. The threshold t0 is the
non-strategic optimal, while the threshold τ∗ is the Stackelberg
equilibrium.
Figure [1](#S3.F1 "Figure 1 ‣ 3.1 General Trade-Off ‣ 3 Institutional Utility Versus Social Burden ‣ The Social Cost of Strategic Classification") illustrates how institutional utility and social
burden change as the threshold of the classifier increases. The institutional
utility is quasiconcave, while the social burden is monotonically
non-decreasing. The next lemma provides a formal characterization of the
shapes shown in Figure [1](#S3.F1 "Figure 1 ‣ 3.1 General Trade-Off ‣ 3 Institutional Utility Versus Social Burden ‣ The Social Cost of Strategic Classification").
######
Theorem 3.1.
The institution utility UΔ(τ) is quasiconcave in τ and has a
maximum at a threshold τ∗≥τ0 where τ0=0.5 is the
threshold of the non-strategic optimal classifier. The social burden
B+(τ) is monotonically non-decreasing in τ. Furthermore, if
UΔ(τ)≠UΔ(τ′), then B+(τ)≠B+(τ′).
###### Proof.
Let AΔ(τ) and A(τ) be the set of individuals accepted by
f in the strategic and non-strategic setting, respectively. If τ<τ0, we have AΔ(τ)⊇AΔ(τ0)⊇A(τ0). Since A(τ0) is the optimal acceptance region,
UΔ(τ)≤UΔ(τ0). Therefore, if a threshold τ∗
is optimal for the institution, i.e, UΔ(τ∗)=maxτUΔ(τ), then τ∗≥τ0.
Recall that a univariate function f(z) is quasiconcave if there exists
z∗ such that f is non-decreasing for z<z∗ and is
non-increasing for z>z∗. Note that if τ<τ∗ we have
that AΔ(τ)⊇AΔ(τ∗), thus UΔ(τ)≤UΔ(τ∗). Similarly, for any τ>τ∗ we have that
AΔ(τ∗)⊇AΔ(τ), and thus UΔ(τ)≤UΔ(τ∗). Therefore, UΔ(τ) is quasiconcave in τ.
Let cx(τ) be the cost required for a
specific individual x to be classified positively: cx(τ)=c(x,x′)
where x′ is any point such that ℓ(x′)=τ. The social burden can
then be expressed as B+(τ)=E[cx(τ)∣y=1]. Since
cx(τ) is monotonically non-decreasing, B+(τ) is also
monotonically non-decreasing.
Suppose UΔ(τ)≠UΔ(τ′) and without loss of generality let
τ<τ′. For all individuals x, cx(τ′)≥cx(τ). If
there is at least one individual x such that cx(τ′)>cx(τ),
then B+(τ′)>B+(τ). But since UΔ(τ)≠UΔ(τ′), there must exist an individual x such that x∈AΔ(τ)/AΔ(τ′) and p(X=x∣Y=1)>0. For this individual
cx(τ′)>cx(τ). Therefore, UΔ(τ)≠UΔ(τ′)⟹B+(τ′)≠B+(τ).
∎
As a corollary, if the institution increases its utility beyond that
attainable by the optimal classifier in the non-strategic setting, then the
institution also causes higher social burden.
######
Corollary 3.1.
Let τ be any threshold and τ0=0.5 be the optimal threshold in the
non-strategic setting. If UΔ(τ)>UΔ(τ0), then US(τ)<US(τ0).
###
3.2 Choosing a Concrete Trade-off
We have now shown that increases in institutional utility come at a cost in
terms of social burden and vice-versa. This still leaves open the question: what
is the concrete trade-off an institution should choose?
Theorem [3.1](#S3.Thmtheorem1 "Theorem 3.1. ‣ 3.1 General Trade-Off ‣ 3 Institutional Utility Versus Social Burden ‣ The Social Cost of Strategic Classification")
provides
a
precise
characterization
of
the
choices available to trade-off between institutional utility and social burden.
The
baseline choice for the institution is to not account for strategic behavior
and use the
non-strategic
optimum τ0. Maximizing utility without regard to
social
burden leads the institution to choose τ∗. In
general,
the
interval
[τ0,τ∗]
offers
the set of trade-offs the institution considers. Choosing τ>τ0 can
increase robustness at the price of increasing social burden. Thresholds τ>τ∗ are not Pareto-efficient and are not considered.
Much of the prior work in machine learning has focused exclusively on solutions
corresponding to the
thresholds at the extreme: τ0 and τ∗. The
threshold
τ0
is the solution when strategic behavior is not accounted for. The
threshold
τ∗
is also known as the *Stackelberg
equilibrium* and is the subject of recent work in
strategic classification [[4](#bib.bib4), [14](#bib.bib14), [12](#bib.bib12)]. While using
τ∗ may be warranted in some cases, a proper accounting of social burden
would lead institutions to choose classifiers somewhere *between* the
extremes of τ0 and τ∗.
The exact choice of τ∈[τ0,τ∗] is
*context-dependent*
and
depends on balancing concerns between institutional and broader social
interest.
We now highlight cases where using τ0 or τ∗ may be suboptimal,
and
using a threshold τ∈(τ0,τ∗) that balances robustness with
social burden is preferable.
######
Example 3.1 (Expensive features.).
If measuring a feature is costly for individuals and offers limited
institutional gains, an institution may choose to ignore the feature, even if
it
means giving up accuracy on the margin. In an educational context, a university
may decide to no longer require applicants to submit standardized test scores,
which can cost applicants
hundreds of dollars, if the corresponding improvement in admissions
outcomes is very small
[[1](#bib.bib1)].
######
Example 3.2 (Reducing social burden under resource constraints.).
Aid organizations use machine learning to determine where to allocate resources
after natural disasters [[18](#bib.bib18)]. In these cases, positive
individuals are precisely those people who are in need of aid and may
experience
very high costs to change their features. Using thresholds with high social
burden
is therefore undesirable. At the same time, organizations giving out aid often
face significant resource constraints. False positives from individuals gaming
the classifier ties up resources that could be better used elsewhere.
Consequently, using the non-strategic threshold is also undesirable. The aid
organization should choose a some threshold τ with τ0<τ<τ∗ that reflects these trade-offs.
######
Example 3.3 (Misspecification of agent model.).
Strategic classification models (including ours) typically assume that the
individual optimally responds to the classifier f. However, in reality,
individuals will not have perfect knowledge of the classifier f when it is
first deployed. Instead, they may be able to learn about how the classifier
works over time, and gradually improve their ability to game the classifier.
For
example, self-published romance authors exchanged information in private chat
groups about how to best game Amazon’s book recommendation algorithms
[[19](#bib.bib19)]. For the institution, it is difficult to a priori model
the
dynamics of how information about the classifier propagates. A preferable
solution may be to simply make the assumption that the individual can best
respond to the classifier, but to only gradually increase the threshold from
the
non-strategic τ0 to the Stackelberg optimal τ∗ over time.
In fact, misspecification of the agent model (described above), is
why [[5](#bib.bib5)] suggest that the Stackelberg equilibrium is too
conservative, and instead prefer to use Nash equilibrium strategies.
Complementary to their observation, we show that there is a more general reason
Nash equilibria may be preferable. Namely, that Nash equilibria have lower
social burden than the Stackelberg solution. As the following lemma shows, in
our context, the set of Nash equilibrium form an interval [tN,τ∗]⊂I for some tN≥t0. The proof is deferred to the appendix.
######
Lemma 3.2.
Suppose the cost over likelihoods cL is continuous and ℓ(X)=[0,1],
i.e, all likelihoods have non-zero support. Then, the set of Nash equilibrium
strategies for the institution is [τN,τ∗] for some τN≥τ0 where τ0=0.5 is the non-strategic optimal threshold and
τ∗
is the Stackelberg equilibrium strategy.
The Stackelberg equilibrium requires the institution to choose τ∗,
whereas Nash equilibria give the institution latitude to trade-off between
institutional utility and social burden by choosing from the interval
[tN,τ∗]⊂I.
This
provides
an
additional argument
in
favor
of
Nash equilibria– institutions can reason in terms of equilibria *and*
achieve more favorable outcomes in terms of social burden.
4 Fairness to Subpopulations
-----------------------------
Our previous section showed that increased robustness in the face of strategic
behavior comes at the price of additional social burden. In this section, we
show this social burden is not fairly
distributed: when the individuals being classified are from latent
subpopulations, say of race, gender, or socioeconomic status, the social burden
can disproportionately fall on disadvantaged subpopulations. Furthermore, we
find that improving the institution’s utility can exacerbate the gap between
the
social burden incurred by an advantaged and disadvantaged group.
Concretely, suppose each individual is from a subpopulation G∈{a,b}.
The social burden a classifier f has on a group g∈G is the expected
minimum
cost required for a positive individual from group g to be accepted:
B+,g(f)=E[minf(x′)=1c(x,x′)∣Y=1,G=g].
We can then define the social gap between groups a and b:
######
Definition 4.1 (Social gap).
The social gap G(f) induced by a classifier f is the
difference in the social burden to group b compared to a: G(f)=B+,b(f)−B+,a(f).
The social gap is a measure of how much more costly it is for a positive
individual from group b to be accepted by the classifier than a positive
individual from group a. For example, there is evidence that women need to
attain higher educational qualifications than their male counterparts to
receive
the same salary [[6](#bib.bib6)].
A high social gap is alarming for two reasons. First, even when two people from
group a and group b are equally qualified, the individual from group a
may
choose not to participate at all because of the cost she would need to endure
to
be accepted. Secondly, if she does decide to participate, she may continue to
be
at a disadvantage after being accepted because of the additional cost she had
to
endure, e.g., repaying student loans.
Non-strategic classification can already induce a social gap between two
groups, and strategic classification can exacerbate this gap.
We show this under two natural ways group b may be
disadvantaged. In the first setting, the feature distributions of group a and
b are such that a positive individual from group b is less likely to be
considered positive, compared to group a. In the second setting, individuals
from group b have a higher cost to adapt their features compared to group
a.
Under both of these conditions, any improvement the institution can make to its
own strategic utility has the side effect of worsening (increasing)
the social gap.
###
4.1 Different Feature Distributions
In the first setting we analyze, the way groups a and b differ is through
their distributions over features. We say that group b is disadvantaged if
the
features distributions are such that positive individuals from group b are
less likely to be considered positive than those from group a. Formally, this
can be characterized as the following:
######
Definition 4.2 (Disadvantaged in features).
Let L+,g=ℓ(X)∣Y=1,G=g be the outcome likelihood of a
positive individual from group g, and let F+,g be the cumulative
distribution function of L+,g. We say that group b is disadvantaged
in features if F+,b(l)>F+,a(l) for all l∈(0,1).
In the economics literature, the relationship between L+,a and L+,b
is referred to as strict first-order stochastic dominance
[[21](#bib.bib21)]. An equivalent way to understand the definition is
that group b is disadvantaged in features if and only if the distribution of
L+,a can be transformed to the distribution of L+,b by transferring
probability mass from higher values to lower values. This definition captures
the notion that the outcome likelihood of positive individuals from group b
is
skewed lower than the outcome likelihood of positive individuals from a.
In a case study on FICO credit scores in Section [5](#S5 "5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification"), we find
the minority group (blacks) is disadvantaged in features compared to the
majority group (whites) (see Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Different Feature Distributions ‣ 5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification")). There are
many
reasons that a group could be disadvantaged in features. Below, we go through a
few potential causes.
######
Example 4.1 (Group membership explains away features).
Even if two groups are equally likely to have positive individuals, i.e.,
P(Y=1∣G=a)=P(Y=1∣G=b), group b can still be
disadvantaged compared to group a. Consider the graph below. Although the
label Y is independent of the group G, the label Y is not independent
of the group G once conditioned on the features X because the group G
can provide an alternative reason for the observed features.

Concretely, let groups a and b be native and non-native speakers of
english, X be the number of grammatical errors on an individual’s job
application, and Y be whether the individual is a qualified candidate.
Negative individuals (Y=0) are less meticulous when filling out their
application and more likely to have grammatical errors. However, for
individuals from group b there is another explanation for having
grammatical errors – being a non-native speaker. Thus, positive individuals
from group b end up with lower outcome likelihoods than those from a,
even though they may be equally qualified.
######
Example 4.2 (Predicting base rates).
Suppose the rate of positives in group b is lower than that of group a:
P(Y=1∣G=b)<P(Y=1∣G=a). If there is a feature
in the dataset that can be used as a proxy for predicting the group, such as
zip code or name for predicting race, then the outcome likelihoods of
positive individuals from group b can end up lower than those of positive
individuals from group a because the features are simply predicting the
base rate of each group.
#### Social gap increases.
We now state and prove the main result showing that the social gap increases as
the
institution increases its threshold for acceptance. Before turning to the
result, we introduce one technical requirement. The *likelihood condition*
is that ∂cL(l,τ)∂l is monotonically
non-increasing in τ for l,τ∈[0,1]. When the cost function c is
outcome monotonic, the likelihood condition is satisfied for a broad class of
differentiable likelihood cost functions cL, such as the following
examples.
* Differentiable separable cost functions of the form cL(l,l′)=max(c2(l′)−c1(l),0) for c1,c2:[0,1]→R≥0.
* Differentiable shift-invariant cost functions of the form
| | | |
| --- | --- | --- |
| | cL(l,l′)={c0(l′−l)l<l′0l≥l′, | |
for c0:[0,1]→R≥0.
Notably, any linear cost cL(l,l′)=max(α(l′−l),0) where
α>0 satisfies the likelihood condition.
Under the likelihood condition, we now show that the social gap increases as the
institution increases its threshold for acceptance.
######
Theorem 4.1.
Let τ∈(0,1] be the threshold of the classifier. If group b is
disadvantaged in features compared to group a, and ∂cL(l,τ)∂l is monotonically non-increasing in τ, then
G(τ) is positive and monotonically increasing over τ.
###### Proof.
By Lemma [2.1](#S2.Thmlemma1 "Lemma 2.1. ‣ Assumptions on cost function. ‣ 2 Model ‣ The Social Cost of Strategic Classification"), any outcome monotonic cost function can be
written as a cost over outcome likelihoods. Therefore, the social burden can be
written as
| | | | |
| --- | --- | --- | --- |
| | B+,g(τ) | =E[minfτ(x′)=1c(x,x′)∣Y=1,G=g] | |
| | | =∫τ0cL(l,τ)dFg,1(l), | |
where F+,g denotes the CDF of the outcome likelihood L+,g.
Integrating by parts, we obtain a simple expression for B+,g(τ):
| | | | |
| --- | --- | --- | --- |
| | B+,g(τ) | =∫τ0cL(l,τ)dFg,1(l) | |
| | | =[cL(l,τ)Fg,1(l)]τ0−∫τ0∂cL(l,τ)∂lF+,g(l)dl | |
| | | =−∫τ0∂cL(l,τ)∂lF+,g(l)dl, | |
where the last line follows because cL(τ,τ)=0 and F+,g(0)=0. This expression for B+,g(τ) allows us to conveniently write the
social
gap as
| | | |
| --- | --- | --- |
| | G(τ)=B+,b(τ)−B+,a(τ)=∫τ0∂cL(l,τ)∂l(F+,a(l)−F+,b(l))dl. | |
It is easy to observe G(τ) is positive. By the monotonicity
assumptions, ∂cL(l,τ)∂l<0 for l∈(0,τ). Since group b is disadvantaged in features, F+,a(l)−F+,b(l)<0 for l∈(0,1). Therefore, the social gap G(τ)>0.
Now, we show G(τ) is increasing in τ. Let 0≤τ<τ′≤1. Then,the difference in the social gap is given by
| | | | |
| --- | --- | --- | --- |
| | G(τ′)−G(τ) | =∫τ0∂(cL(l,τ′)−cL(l,τ))∂l(F+,a(l)−F+,b(l))dl | |
| | | +∫τ′τ∂cL(l,τ′)∂l(F+,a(l)−F+,b(l))dl. | |
Since group b is disadvantaged in features, (F+,a(l)−F+,b(l))<0 for all l. By assumption,
∂cL(l,τ)∂l
is monotonically non-increasing in τ, so the first term is non-negative.
Similarly, ∂cL(l,τ′)∂l<0 by monotonicity,
so the second term is positive. Hence, G(τ′)−G(τ)>0, which
establishes G(τ) is monotonically increasing in τ.
∎
As a corollary, if the institution improves its utility beyond the
non-strategic
optimal classifier, then it also causes the social gap to increase.
######
Corollary 4.1.
Suppose group b is disadvantaged in features compared to group a, and
∂cL(l,τ)∂l is monotonically non-decreasing
in τ. Let τ∈(0,1] be a threshold and τ0=0.5 be the optimal
non-strategic threshold. If UΔ(τ)>UΔ(τ0), then G(τ)>G(τ0).
###### Proof.
By Theorem [3.1](#S3.Thmtheorem1 "Theorem 3.1. ‣ 3.1 General Trade-Off ‣ 3 Institutional Utility Versus Social Burden ‣ The Social Cost of Strategic Classification"), if UΔ(τ)>UΔ(τ0), then
τ>τ0. By Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1. ‣ Social gap increases. ‣ 4.1 Different Feature Distributions ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification"), if τ>τ0, then
G(τ)>G(τ0).
∎
###
4.2 Different Costs
In Section [4.1](#S4.SS1 "4.1 Different Feature Distributions ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification"), we showed that when two subpopulations have
different feature distributions, the social burden can disproportionately fall
on one group. In this section, we give show, even if the feature distributions
of the two groups are exactly identical, the social burden can still
disproportionately impact one group.
We have thus far assumed the existence of a cost function c
that is uniform across groups a and b. For a variety of structural reasons,
it is unlikely this assumption holds in practice. Rather, it is often the case
that *different groups experience different costs* for changing their
features.
When the cost for group b is systematically higher than the cost for group
a, we prove group b incurs higher social burden than group a.
Furthermore, if the institution improves its utility by increasing its threshold
τ, then as a side effect it also increases the social gap between group b
and a (Theorem [4.2](#S4.Thmtheorem2 "Theorem 4.2. ‣ 4.2 Different Costs ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification")).
Much of the prior work on fairness in classification focuses on preventing
unfairness that can arise when different subpopulations have different
distributions over
features and
labels [[13](#bib.bib13), [15](#bib.bib15), [8](#bib.bib8)].
Our
result provides a reason to be concerned about the unfair impacts of
a classifier even when two
groups have identical initial distributions. Namely, that *it can be
easier for one group to game the classifier than another*.
Formally, we say that group b is disadvantaged in cost compared to group
a if the following condition holds.
######
Definition 4.3 (Disadvantaged in cost).
Let cg(x,x′) be the cost for an individual from group g to adapt their
features from x to x′. Group b is disadvantaged in cost if cb(x,x′)≥ca(x,x′) for all x,x′∈X.
Next, we give a variety of example scenarios of when a group can be
disadvantaged in cost.
######
Example 4.3 (Opportunity Costs).
Many universities have adopted gender-neutral policies that stop the
“tenure-clock” for a year for family-related reasons, e.g. childbirth.
Ostensibly, no research is expected while the clock is stopped. The policies
were made gender-neutral in an attempt to decrease the stigma women felt around
taking time off for family reasons. However, the adoption of gender-neutral
clocks actually increased the gap between the percentage of men and
women who received tenure [[2](#bib.bib2)]. The suggested cause is that
women still shoulder more of the burden of bearing and caring for children,
compared to men. Men who stop their tenure clock are more productive during the
period than women, who have a higher opportunity cost to doing research while
raising a child.
######
Example 4.4 (Information Asymmetry).
A large portion of high-achieving, low-income students do
not apply to selective colleges, despite the fact that
these colleges are typically *less* expensive for them
because of the financial
aid they would receive [[16](#bib.bib16)]. This
phenomenon seems to be due to low-income students having less access
to information about college [[17](#bib.bib17)].
Since low-income students have more barriers to gaining
information about college, it is natural to assume that, compared to their
wealthier peers, they have a higher cost to strategically manipulating their
admission features.
######
Example 4.5 (Economic Differences).
Consider a social media company that wishes to classify individuals as
“influencers,” either to more widely disseminate their content or to identify
promising accounts for online marketing campaigns. Wealthy individuals can
purchase followers or likes, whereas other groups have to increase
these numbers organically [[7](#bib.bib7)]. Consequently, the costs to
increasing one’s popularity metric differs based on access to capital.
Finally, our main technical result shows that even when the distributions of
groups a and b are identical, if group b is disadvantaged in cost, then
when the institution increases its threshold for acceptance, it also increases
the social gap between the two groups.
######
Theorem 4.2.
Suppose positive individuals from groups a and b have the same
distribution over features, i.e, if Z=(X∣Y=1), then Z is
independent of the group G. If group b is disadvantaged in cost compared
to group a, then the social gap G(τ) is non-negative and
monotonically non-decreasing in the threshold τ.
###### Proof.
The social burden to a group g can be written as
| | | |
| --- | --- | --- |
| | B+,g(τ)=∫Xminx′:fτ(x′)=1cg(x,x′)p(X=x∣Y=1)dx | |
because X∣Y=1 is independent of G. The social gap can then be
expressed as
| | | | |
| --- | --- | --- | --- |
| | G(τ) | =B+,b(τ)−B+,a(τ) | |
| | | =∫X(minx′:fτ(x′)=1ca(x,x′)−cb(x,x′))p(X=x∣Y=1)dx. | |
The gap in individual cost minx′:fτ(x′)=1ca(x,x′)−cb(x,x′)
is always non-negative and is monotonically non-decreasing in τ, thus
G(τ) is non-negative and monotonically non-decreasing.
∎
5 Case Study: FICO Credit Data
-------------------------------
We illustrate the impact of strategic classification on different subpopulations
in the context of credit scoring and lending. FICO scores are widely used in the
United States to predict credit worthiness. The scores themselves are derived
from a proprietary classifier that uses features such as the number of open bank
accounts that are susceptible to gaming and strategic manipulation.
We use a sample of 301,536 FICO scores derived from TransUnion TransRisk scores
[[24](#bib.bib24)] and preprocessed by [[15](#bib.bib15)]. The scores
X are normalized to lie between 0 and 100. An individual’s outcome is labeled
as a
*default* if she failed to pay a debt for at least 90 days on at least one
account in the ensuing 18-24 month period. Default events are labeled with
Y=0, and otherwise repayment is denoted with Y=1. The two subpopulations are
given by race: a=\tt white and b=\tt black.
We assume the credit lending institution accepts individuals based on a
threshold on the FICO score. Using the normalized scale, a threshold of
τ=58 is typically used to determine eligibility for prime rate loans
[[15](#bib.bib15)]. Our results thus far have used thresholds on the
outcome likelihood, rather than a score. However, as shown in
Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Different Feature Distributions ‣ 5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification"), the outcome likelihood is monotonic in the FICO
score. Therefore, all our conditions and results can be validated using the
score instead of the outcome likelihood.
###
5.1 Different Feature Distributions
In Section [4.1](#S4.SS1 "4.1 Different Feature Distributions ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification"), we studied the scenario where the distribution
of outcome likelihoods ℓ(X)=P(Y=1∣X) differed across
subpopulations. In particular, if the likelihoods of the positive individuals in
group B tend to be lower than the positive individuals in group A, then
increasing strategic robustness increases the social gap between A and B.
Interestingly, such a skew in score distributions exists in the FICO
data. Black borrowers who repay their loans tend to have lower FICO scores than
white borrowers who repay their loans. In terms of the corresponding score CDFs,
for every score x, F+,black(x)≥F+,white(x).
Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Different Feature Distributions ‣ 5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification") demonstrates this observation.

Figure 2: Comparison of the distribution of FICO scores among black and
white
borrowers who repaid their loans. Credit-worthy black individuals tend
to have lower credit scores than credit-worthy white individuals. The
comparison of the corresponding CDFs demonstrates our
“disadvantaged in features” assumption
holds.

Figure 3: Repayment probability as a function of credit score. Crucially, the
probability of repayment P(Y=1∣x) is monotonically increasing in
x.
When the score distribution among positive individuals is skewed,
Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1. ‣ Social gap increases. ‣ 4.1 Different Feature Distributions ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification") guarantees the social gap between groups is increasing in
the threshold under a reasonable cost model. Operationally, raising the loan
threshold to protect against strategic behavior increases the relative burden on
the black subgroup. To demonstrate this empirically, we use a coarse linear cost
model, c(x,x′)=max(α(x′−x),0) for some α>0. Since the
probability of repayment P(Y=1∣x) is monotonically increasing in x,
the linear cost c satisfies the requisite outcome monotonicity conditions.
In Figure [5](#S5.F5 "Figure 5 ‣ 5.1 Different Feature Distributions ‣ 5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification"), we compute G(τ) as
τ varies from 0 to 100 for a range of different value of α. For
any α, the social utility gap is increasing in τ. Moreover, as
α (the cost of raising one’s credit score) becomes large, the rate of
increase in the social gap grows large as well.

Figure 4:
Impact of increasing the threshold τ on white and black credit
applicants. When the cost to changing one’s score α is small,
increases to the threshold have only a small effect on the social gap.
However, as α becomes large, even small increases to the
threshold can create large discrepancies in social burden between the two
groups.

Figure 5:
Impact of increasing the threshold τ on white and black credit
applicants, under the assumption that both groups incur different costs
for increasing their credit score. As the ratio of costs
βα increases, the social cost gap grows rapidly
between the two groups.
###
5.2 Different Cost Functions
In Section [4.2](#S4.SS2 "4.2 Different Costs ‣ 4 Fairness to Subpopulations ‣ The Social Cost of Strategic Classification"), we demonstrated when two subpopulations
are identically distributed, but incur different costs for changing their
features, there is a non-trivial social gap between the two. In the context of
the FICO scores, it’s plausible blacks are both disadvantaged in features
*and* experience higher costs for changing their scores. For instance,
outstanding debt is an important component of FICO scores. One way to reduce
debt is to increase earnings. However, a persistent black-white wage gap between
the two subpopulations suggest increasing earnings is easier for group a than
group b [[11](#bib.bib11)]. This setting is not strictly captured
by
our existing results, and we should expect the effects of both different costs
functions and different feature distributions to compound and exacerbate the
unfair impacts of strategic classification.
To illustrate this phenomenon, we again use a coarse linear cost model for both
groups. Suppose group A has cost cA(x,x′)=max{α(x′−x),0} for
some α>0 and group B has cost cB(x,x′)=max{β(x′−x),0}
for some β≥α. Since we are interested in the relative cost for
each group, the key parameter controlling the rate of increase in G(τ)
is the ratio βα. In
Figure [5](#S5.F5 "Figure 5 ‣ 5.1 Different Feature Distributions ‣ 5 Case Study: FICO Credit Data ‣ The Social Cost of Strategic Classification"), we show the social gap
G(τ) for various settings of βα. The social gap is
always increasing as a function of τ, and the rate of increase grows large
for even moderate values of βα. When βα
is large, even small increases in τ can disproportionately increase the
social burden for the disadvantaged subpopulation.
6 Related Work
---------------
#### Strategic Classification
Prior work on strategic classification focuses solely on the institution,
primarily aiming to create high-utility solutions for the institution.
Our
work,
on
the
other
hand,
studies
the
tradeoff
between
the
institution’s utility and the burden to the individuals being
classified.
[[14](#bib.bib14), [12](#bib.bib12), [4](#bib.bib4)] give algorithms to compute the
Stackelberg equilibrium, which corresponds to the extreme τ∗ solution
in our trade-off curves. Although the Stackelberg equilibrium leads to maximal
institutional utility, we show that it also causes high social burden. We
give several examples of when the high social burden induced by the Stackelberg
equilibrium makes it an undesirable solution for the institution.
Rather than the Stackelberg equilibrium, others have also considered finding
Nash equilibria of the game [[5](#bib.bib5), [10](#bib.bib10)].
[[5](#bib.bib5)] argue that since in practice people cannot
optimally respond to the
classifier, the Stackelberg solution tends to be too
conservative, and thus a Nash equilibrium strategy is preferable.
Our
work
provides
a
complementary
reason
to
prefer
Nash
equilibria over the Stackelberg solution. Namely that (for a broad class
of cost functions), any Nash equilibrium that is not equal to the
Stackelberg
equilibrium places lower social burden on individuals.
Finally, we focus on the setting where individuals
are merely “gaming” their features, i.e., they do not
improve their true label by adapting their features. However, if the classifier
is able to incentivize strategic behavior that helps improve negative
individuals, then the social burden placed on positive individuals
may be considered acceptable.
[[20](#bib.bib20)] studies how to design classifiers that produce
such incentives.
#### Fairness
Our work studies how *strategic classification* results in
differing impacts
to different subpopulations and
is complementary to the large body of work studying the
differing impacts of *classification*
[[23](#bib.bib23), [3](#bib.bib3)].
The prior work on classification is primarily concerned with preventing
unfairness
that
can arise due to subpopulations having differing distributions over features or
labels
[[15](#bib.bib15), [13](#bib.bib13), [8](#bib.bib8)]. We show
that in the
strategic
setting, a classifier can have differing impact due to the subpopulations
having
differing
distributions *or* differing costs to adapting their features. Therefore,
when individuals are strategic, our work provides an additional reason to
be
concerned
about the fairness of a classifier. Namely, that it can be easier for
one group to game the classifier than another.
Furthermore, we
show
that
if
the
institution
modifies
the
classifier
it
uses
to
be
more
robust to strategic behavior, then it also
as a side effect,
increases the gap between the
cost incurred by a disadvantaged subpopulation and an advantaged population.
Thus,
*strategic classification can exacerbate unfairness in
classification.*
Our work is also complementary to [[22](#bib.bib22)], who also
analyze
how
the institution’s utility trades-off with the impact to individuals. They study
the trade-off in the non-strategic setting and measure the impact of a
classifier using a dynamics model of how individuals are affected by the
classification
they
receive.
We
study
the
tradeoff in the strategic setting and
measure
the
impact of a
classifier by the cost of the strategic behavior induced by the classifier. |
bbe1f644-d9b8-4980-bce7-053dad19a677 | trentmkelly/LessWrong-43k | LessWrong | Tsuyoku Naritai
This is the fourth speech in the wedding ceremony of Ruby & Miranda. See the Sequence introduction for more info. The speech was given by Ruby.
----------------------------------------
Riva descends from podium.
Brienne: We have spoken of the horrors that exist, the threats that loom, and the good worth fighting for. We cannot know that we are equal to these challenges, but we won’t let that stop us. I call upon Ruby to speak of our determination to Become Stronger.
Ruby ascends podium.
IMAGE 3 PROJECTED ON PLANETARIUM
Composed of gas and dust, a pillar in the Carina Nebula
Ruby commences speech.
There is a cry among us, a cry dear to us. “Tsuyoku Naritai” – “I want to become stronger.” Embodied in these few foreign syllables is one of our deepest values and greatest virtues. We say these words because we know that the challenges we face today are great, perhaps too great, perhaps greater than we can handle – today. But that does not matter, because we want to become stronger.
Some believe that we were created in the image of a perfect, wise, and just god. We know that we were not. No, we are the product of a blind, uncaring, unfeeling, raw mechanical process. We were hacked together, a disparate assortment of adaptations just good enough to pass our genes onto another generation, without oversight, without foresight, and certainly without benevolence. We became whatever worked – but only whatever worked in a time and place we left long ago.
So when we are faced with the suffering of millions, when we are marching towards our deaths, and when threats to our existence looming on the horizon – we cannot expect to be equal to these trials. There is no magical reason why the universe ought to present us with challenges we are strong enough to overcome. But in this cold and uncaring universe, there is one mercy. One card we hold in our hands – the capacity to grow. The ability to become stronger.
[We will run rampant with this ability, for in it is our capacit |
0912f769-8c91-415b-b4e7-34078c35bf53 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Intuition
In the [previous post](https://www.lesswrong.com/posts/eCWkJrFff7oMLwjEp/clarifying-factored-cognition), I've said,
>
> [...] in both schemes, Factored Cognition includes a step that is by definition non-decomposable. In Ideal Debate, this is step is judging the final statement. In HCH, it is solving a problem given access to the subtrees. This step is also entirely internal to the human.
>
>
>
You can take this as a motivation for part two of the sequence, which is about how humans think. I think a good place to start here is by reflecting on the argument against Factored Cognition based on ***intuition***. Here is a version [made by](https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/) [Rohin Shah](https://www.lesswrong.com/users/rohinmshah) on the AI alignment podcast:
>
> [...] I should mention another key intuition against [the Factored Cognition Hypothesis]. We have all these examples of human geniuses like Ramanujan, who were posed very difficult math problems and just immediately get the answer and then you ask them how did they do it and they say, well, I asked myself what should the answer be? And I was like, the answer should be a continued fraction. And then I asked myself which continued fraction and then I got the answer. And you're like, that does not sound very decomposable. It seems like you need these magic flashes of intuition. Those would be the hard cases for factored cognition. [...]
>
>
>
This sounds sort of convincing, but what is this intuition thing? Wikipedia [says](https://en.wikipedia.org/wiki/Intuition) that...
>
> Intuition is the ability to acquire knowledge without recourse to conscious reasoning.
>
>
>
... which I take to represent the consensus view. However, I don't think it's accurate. Consider the following examples:
1. I throw you a ball, and you catch it. We know that your brain had to do something that effectively approximates Newtonian physics to figure out where the ball was headed, but you're not consciously aware of any such process.
2. I ask you to compute 5⋅8.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
. I predict that your brain just 'spat out' the right answer, without any conscious computation on your part.
3. I show you a mathematical conjecture that you immediately feel is true. I ask you why you think it's true, you think about it for two minutes, and manage to derive a short proof. We know that this proof is not the reason why you thought it was true to begin with.
It's evident that your brain is acquiring knowledge without recourse to conscious reasoning in all three cases. That means they would all involve intuition, according to Wikipedia's definition. Nonetheless, we would not defer to intuition for any of them.
This leads us to this post's conjecture:

Under this view, intuition has nothing to do with how you derived a result and everything with whether you can explain the result after the fact. This characterization fits all three of the above examples (as well as any others I know of):
1. The ability to catch a ball does not feel impressive, hence it does not feel like it requires explaining.[[1]](#fn-ut7N9xj9GPe4n8YxF-1)
2. You could easily prove that 5⋅8=40 using a lower-level concept like addition, hence you would not defer to intuition for the result.
3. In this case, you might well defer to intuition initially, when I first ask you about why the conjecture is true, and you (intuitively) think it is. But as soon as you have the proof in hand, you would refer to the proof instead. In other words, as we change your best explanation for the result, our verdict on whether it is intuition changes as well, which shows that it can't possibly be about how the result was derived.
As an aside: the standard definition says 'intuition is [...]', whereas my proposed characterization says 'we refer to intuition for [...]'. Why? Because intuition is not a well-defined category. Whether we call something intuition depends on the result itself *and* on the rest of our brain, which means that any accurate characterization somehow has to take the rest of the brain into account. Hence the 'we refer to [...]' wording.
---
The classical view of intuition leads to a model of thinking with two separate modes: the 'regular' one and the 'intuition' one. This post asks you to replace that model with a unified one: there is only one mode of thinking, which sometimes yields results we can explain, and other times results we can't explain.
Provided you buy this, what this means is that we have dissolved the concept. Intuition isn't a mode of thinking, it's just something we say depending on our ability to explain our thoughts. So, that's great! It means we have nothing to worry about! Factored Cognition works! Haha, just kidding. It's actually closer to the opposite. Yes, there is only one mode of thinking, but that's because *all* thinking is intuition-like, in the sense that the vast majority of steps are hidden.
To see this, all you need to do is look at examples. Do you have access to the computations your brain does to compute the 'for-tee' thought that pops into your head whenever you read the symbols 5⋅8? Now summon the mental image of a hammer. Did you have access to the computations your brain did to construct this image? Or, you can go back to catching that ball. In all those cases (and others), our brain provides us zero access to inspect what it is doing. That's just how awareness works. Our brain shows us the *results*, but that's it. The algorithms are hidden.
I think this view is very compatible with a scientific understanding of the brain, and much more so than anything that positions intuition as a special category of thought. But more on that in the next post.
---
Given the single-process model, let's revisit the Ramanujan example. What does this kind of thing mean for Factored Cognition?
The immediate thing the example shows that the [computations your brain can run without consulting you] can be quite long. Unlike in the 5⋅8 case where your brain did something unimpressive, Ramanujan did something that most people probably couldn't replicate even if they had a month to spend on the problem.
Let's linger on this for a bit. In a private conversation, TurnTrout has called the 'computations your brain can run without consulting you' the 'primitives [of] your cognitive library'. I think this is a cool term. Note that 'primitive' indicates an indivisible element, so they are only primitives *from the perspective of awareness*. For example, a primitive of most people's library is 'compute the destination of a flying ball', and another is 'compute 5⋅8'. If you're a computer scientist or mathematician, your library probably has a primitive for 210=1024, whereas if you're a normal person, it probably doesn't, so you would have to compute 210 as
2→4→8→16→32→⋯→512→1024
And even then, some of these steps won't be primitives but will require a sequence of smaller primitives. For example, the step from 512 might be computed by 500+500=1000→12+12=24→1000+24=1024, where each of those steps uses a primitive.
Similarly, if you're Ramanujan, your brain has a very complicated primitive that immediately suggests a solution for some set of problems. If you're a professional chess player, your library probably has all sorts of primitives that map certain constellations of chess boards to estimates of how promising they are. And so on. I think this is a solid framework under which to view this post. However, note that it's just descriptive: I'm saying that viewing your mental capabilities as a set of primitives is an accurate description of what your brain is doing and what about it you notice; I'm not saying that each primitive corresponds to a physical thing in the brain.
Then, just to recap, the claim is that 'it's intuition' is something we say whenever our primitives produce results we can't explain after the fact, and the concept doesn't refer to anything beyond that.
**EXERCISE (OPEN-ENDED):** If you agree with the post so far, think about what this might or might not imply for Factored Cognition, and why. Is it different for the [Ideal Debate FCH](https://www.lesswrong.com/posts/FNyqL7mxSkgLpck4w/traversing-a-cognition-space#2__The_Ideal_Debate_FCH) and the [HCH FCH](https://www.lesswrong.com/posts/FNyqL7mxSkgLpck4w/traversing-a-cognition-space#5__The_HCH_FCH)?
Open-ended means that I'm not going to set a time limit, and I won't try to answer the question in this post, so you can think about it for as long as you want.
The reason why it *might* be a problem is that Factored Cognition likes to decompose things, but we cannot decompose our cognitive primitives. This leaves Factored Cognition with two options:
1. help a human generate new primitives; or
2. get by with the human's existing primitives.
The first option begs the question of how new primitives are created. As far as I can tell, the answer is usually experience + talent. People develop their primitives by having seen a large number of problem instances throughout their career, which means that the amount of experience can be substantial A common meme, [which is probably not literally true](https://en.wikipedia.org/wiki/Outliers_(book)), is that it takes 10000 hours to achieve mastery in a field. Anything in that range is intractable for either scheme.
This doesn't mean no new primitives are generated throughout the execution of an HCH tree, since all learning probably involves generating some primitives. However, it does exclude a large class of primitives that could be learned in a scheme that approximates one human thinking for a long time (rather than many humans consulting each other). By using someone who already is an expert as the human, it's possible to start off with a respectable set of primitives, but then that set cannot be significantly expanded.
---
1. As a further demonstration, consider what would happen if you studied the ball case in detail. If you internalized that your brain is doing something *complicated*, and that we have no clue what's really going on, you might gradually be tempted to say that we use intuition after all. If so, this demonstrates that explanations are the key variable. [↩︎](#fnref-ut7N9xj9GPe4n8YxF-1) |
6906f93f-a689-408c-8fdf-683e2f928204 | trentmkelly/LessWrong-43k | LessWrong | ELCK might require nontrivial scalable alignment progress, and seems tractable enough to try
Written quickly rather than not at all, I was thinking about this a couple of days ago, and decided to commit to writing something by today rather than adding this idea to my list of a million things to write up.
This post describes a toy alignment problem that I’d like to see work on. Even the easiest version of the problem seems nontrivial, but it seems significantly more tractable than harder versions, easy to "verify", and also fairly easy to describe. I don't think the idea is original, as it's kind of obvious, but I haven't yet seen a good attempt. I’ve called it, not entirely seriously ‘ELCK’, which stands for ‘Eliciting Latent Chess Knowledge’. Some of the extensions I mention later can be called ELGK/ELGCK
In brief, the challenge is:
> Create an AI 'chess analyst', which can explain the reasons behind moves by top engines and Super-GMs, as well as the best human analysts.
As-stated this would already be a significant achievement. Although it wouldn’t necessarily require progress on scalable oversight, given how much top-quality analysis exists on the internet already, even the problem of “getting something to automatically explain human-understandable thematic ideas” is something of a significant step towards (limited domain) ontology identification.
The bulk of my excitement about the project comes from the fact that there are modifications which would require nontrivial progress on some key parts of low-stakes alignment, while also being significantly more tractable than solving the entirety of ELK, or e.g. coming up with a working implementation of HCH.
Here are some ideas for different 'difficulty levels':
1. Create an AI model that can provide analysis of chess games that are rated as highly by top human players as the best human analysis, without training on any human analysis produced by players with a rating above N (but still with access to a chess engine). Reduce N.
2. Do this with access to a chess engine but limiting the number of posit |
1951f20f-330d-4575-bce0-3eb5a95460d1 | trentmkelly/LessWrong-43k | LessWrong | Inseparably Right; or, Joy in the Merely Good
Followup to: The Meaning of Right
I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics. Here then is an attempted rephrase:
There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.
What do you value? At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways. You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you've acted on this supposed preference. If you're reading this blog you probably attach some value to truth for the sake of truth. If you've ever learned to play a musical instrument, or paint a picture, or if you've ever solved a math problem for the fun of it, then you probably attach real value to good art. You value your freedom, the control that you possess over your own life; and if you've ever really helped someone you probably enjoyed it. You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe. You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.
And you value many more things than these.
Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call "good".
There's no way you can detach the little label from what it stands for, and still make ontological or moral sense.
Why might the little 'good' label seem detachable? A number of reasons.
Mainly, that's just how your mind is structured—the labels it attaches internally seem like extra, floating, ontological properties.
And there's no one value that determines whether a complicated event i |
94cdf7a0-14ca-4c16-9ae8-b1652c69f54d | trentmkelly/LessWrong-43k | LessWrong | US Presidential Election: Tractability, Importance, and Urgency
Disclaimer: To avoid harmful polarization of important topics, this post is written in a non-partisan manner, and I’d encourage comments to be written with this in mind.
US presidential elections are surprisingly tractable
1. US presidential elections are often extremely close.
1. Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. 537 votes in Florida decided the 2000 election.
2. There’s a good chance the 2024 election will be very close too.
1. Trump leads national polling by around 1% nationally, and polls are tighter than they were the last two elections. If polls were perfectly accurate (which of course, they aren’t), the tipping point state would be Pennsylvania or Michigan, which are currently at +1-2% for Trump.
3. There is still low-hanging fruit. Estimates for how effectively top RCT-tested interventions to generate net swing-state votes this election range from a few hundred to several thousand dollars per vote. Top non-RCT-able interventions are likely even better. Many potentially useful strategies have not been sufficiently explored. Some examples:
1. mobilizing US citizens abroad (who vote at a ~10x lower rate than citizens in the country), or swing-state university students (perhaps through a walk-out-of-classes-to-the-polls demonstration).
2. There is no easily-searchable resource on how to best contribute to the election. (Look up the best ways to contribute to the election online – the answers are not very helpful.)
3. Anecdotally, people with little political background have been able to generate many ideas that haven’t been tried and were received positively by experts.
4. Many top organizations in the space are only a few years old, which suggests they have room to grow and that more opportunities haven’t been picked.
5. Incentives push talent away from political work:
1. Jobs in political campaigns are cyclical/temporary, very de |
38881f56-2e93-4364-833a-4a485763809e | trentmkelly/LessWrong-43k | LessWrong | Effective Altruism 80,000 hours workshop materials & outline (and Feb 10 '19 KC meetup notes)
Intro & overview
This is a brief overview of an "Introduction to 80,000 Hours" workshop written by fellow LessWronger Bae's Theorem. It's to the point - a set of slides & an outline going over 80k hours' key points and core advice, then helping attendees apply it first to an outside party, then to themselves.
You can find the 15-slide presentation here: https://docs.google.com/presentation/d/1raV5fr7HryQ-3tEskHgfemje2SlSgtLCESo-mqZ2nVo/edit#slide=id.p
The (revised) article that Bae's Theorem based the content off of can be found here: https://docs.google.com/document/d/1clHJQaoAt9YaguMZFH6gkDSPTsolEylME92X2dBnaWk/edit This document also includes the 1-page handout he used in the workshop for easy reference during the activities.
General content outline:
* Introduction to 80,000 Hours as an organization
* A summary of 80k Hours' claims
* The 80k Hours' career decision process
* Checking personal fit
* Career exploration
* Mitigating accidental harm
* Building career capital
* Taking care of yourself and your mental health
* Coordinating with others & communities
* Taking measured risks
* Options to make a difference in any career
* Additional resources
* ACTIVITY: Mock career advising interview with an impact-minded college student (done to give practice thinking in EA mindset without touching personal identity)
* BREAKOUT: Split into groups of 2-3 and take 6 minutes to think of possible short-term and long-term actions each can take to improve the impact of their career. Discuss with the group.
Other notes
After the workshop, we had a brief discussion on group norms, talked about other activities we could take on as a group to boost our impact, and the topic for the next meetup (effective interventions for global poverty).
There's a slight schism in the group, between people who want to *take action NOW* even if it's not strictly the most effective (e.g. do research on the efficacy of local charities, volunteer somewhere), |
8d0199d1-1424-494d-8682-e8a128bbd1b8 | trentmkelly/LessWrong-43k | LessWrong | Guidelines for productive discussions
I enjoyed reading Duncan_Sabien's post, Basics of Rationalist Discourse, as well as the response posts and comments. The guidelines probably wouldn't have made sense to my beginner-self though, so I wanted to try creating a list that is more targeted at beginners. In that sense, this is more of a complement to the other pieces than an alternate take.
I am not really sure if I've captured everything important, and I don't really like the "Other notes" section because it feels randomly put together. What I'm most curious about is whether taking this approach can help address some of the criticisms of the original post. I've added some of my thinking behind this piece to the appendix.
----------------------------------------
On a site like Less Wrong, we want to promote discussions and conversations that help members improve at rationality. We want participants and the audience to walk away from discussions with a clearer, more accurate understanding of the world, one which enables them to act more effectively.
Here are some guidelines for more fruitful conversations:
1) Seek to be Less Wrong instead of More Right
It feels good to be right, to show that we are correct and others are wrong. However, this can be counter-productive. It encourages us to find ways to prove we are right instead of figuring out whether or not our beliefs are right in the first place. This makes it much less likely for us to arrive at the truth.
Let's instead focus on how we can be Less Wrong:
* When you hear something that doesn't make sense, get curious instead of dismissive. After all, we each experience a very small slice of reality. If someone has a seemingly nonsensical opinion, maybe it's because we're missing out a perspective.
* Steelman: When someone presents a criticism that you believe is flawed, see if you can improve the criticism and use it to identify a gap in your thinking.
* Ask yourself, if you were wrong, how would you find out? If nothing can change your mind, |
65e80d20-c57d-4dbb-9361-aa7280f87fb2 | trentmkelly/LessWrong-43k | LessWrong | Testing the Efficacy of Disagreement Resolution Techniques (and a Proposal for Testing Double Crux)
Introduction
I will describe a procedure for testing the efficacy of disagreement resolution techniques (DRTs) together with methodologies for inducing their use (induction methods). DRTs are structured games (in the loosest senses of the terms) that involve conversation, and are aimed at helping players who disagree about some topic either figure out the truth about that topic, or come to better understand each others' positions and the sources of their disagreement. An induction method is just any way of trying to get someone to use a DRT.
I am writing up a description of this procedure here because I plan to use it to test and find DRTs, and I would like to get feedback before I go out and do that. I originally came up with this procedure in order to test Double Crux, and I still plan to. I will describe the first step of that plan in the second half of this post. The first half of the post explains the general procedure and some frames I think might be useful for understanding the second half of the post.
I would also like to invite others to use the general procedure in order to test or find other DRTs. It seems fairly obvious to me now, but something something hindsight.
I would like any kind of feedback that you think might make the methodology I describe herein better. I am particularly interested in feedback on the specific procedure I plan to use for testing Double Crux, but feedback on the general procedure would also be great. Any feedback or advice on the statistical analysis of the results of my test procedure for Double Crux would also be much appreciated.
Readers who are primarily interested in giving feedback on my statistical approach are encouraged to skip to the "Procedure" subsection in the second half of the post, and then skip to the "Statistical Analysis and Preregistration" subsection.
General Procedure
Step 0:
Gather participants and filter them for desired characteristics. Ignorance of the DRT to be tested at the onset of the study |
8362a10a-4fa5-4e68-a8e2-825a9f249415 | trentmkelly/LessWrong-43k | LessWrong | How important are model sizes to your timeline predictions?
I'm thinking of doing a paper for GovAI on model sizes if they matter. Any or all of the following would be helpful:
1) If you found out that the largest published neural net model sizes (by # of parameters) will grow by 10000x in the next 6 years, would that make your timelines shorter or longer and by how much?
2) Same question with 100x.
3) Same question but with 100x less (100x more) than whatever you expected.
4) Same question but with 10x less than in the last 6 years.
Assume all else is equal if possible. E.g. we have the same compute, memory, algorithms and data available. Assume that the reason for slower increase is due to lack of possibility to grow models, and not necessarily lack of will. Ignore conditional execution models where only small parts of the model is used at a time.
I'm okay with quick, unexplained qualitative answers. Well-reasoned quantitative ones are encouraged of course.
Background:
-Some ML projects like GPT-2 (figure 1) have recorded high (but diminishing) returns to bigger models, typically looking like power laws.
-Some people (e.g. Geoff Hinton) informally argued that model sizes are analogous to synapse count in the brain, which correlates with brain size, which has high returns.
Bonus question: Without looking it up, by what factor do you think the largest published model sizes have increased between 2012 and 2019? By what factor do guess they'll increase in the next 6 years?
|
89e84aa7-f7d3-4158-b0fd-f87815094c15 | trentmkelly/LessWrong-43k | LessWrong | Transferring credence without transferring evidence?
Agent X has evidence for proposition P that agent Y does not have, and X wants to convince Y that P is true (or more precisely, increase Y's credence in P as well). However, there are the following unfortunate conditions:
1) Y doesn't trust X, meaning that X just telling Y that they have evidence will do little to convince Y. (In particular, they don't have common knowledge of their credences.)
2) X wants to reveal as little as possible about the evidence they have.
Is there anything clever X and Y can do here? Or is there a reason why this is impossible? Can one prove a tradeoff between the update on Y's credence and the amount of details about the evidence revealed? If this is too vague/does not make formal sense for some reason, as well as for illustration, here are some particular examples of what I have in mind:
Example 1: X has a picture of the Moon and wants to convince Y that the Moon exists. The picture is a selfie taken by Adolf Hitler living on the far side of the Moon; X doesn't want Y to know the various other things of which this picture or X's possession of the picture would be evidence (e.g. because X wants to maintain their edge in the "space Nazi invasion of Earth by 2030" prediction market), so just handing the picture to Y is not a good option for X.
Example 2: The State Department wants to convince the public that Russia is in the process of making a specific disinformation video while revealing as little as possible about how they found out that this is the case. See https://www.youtube.com/watch?v=8G3WtjU9XA4 and https://www.youtube.com/watch?v=rU1_BHMHsI8 (but let's ignore the question of whether they actually, in reality, in this particular case, have information or not; this is just supposed to be a hypothetical example for the purpose of this discussion).
My observations thus far & more refined versions of the previous questions:
This seems obviously close to zero-knowledge proofs, but I haven't been able to flesh out a correspondence |
41cae1e3-cc1f-4480-af82-e4f9e32af576 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | My Central Alignment Priority (2 July 2023)
*Epistemic status: The further you scroll, the more-important the points are.*
I continue my deconfusion from my [alignment timeline](https://www.lesswrong.com/posts/RTjE6KN2WGepGL6m9/my-alignment-timeline) post. Goal: figure out which sub-problem to work on *first* in AI alignment.
Quick Notation
--------------
1 = [steering](https://www.lesswrong.com/posts/RTjE6KN2WGepGL6m9/my-alignment-timeline)
2 = goal
1.1k no steering found
1.2k poor steering found
1.3g steering lets us do pivotal
1.4g steering + existing goal inputs
1.5g steering not needed, alignment by default
2.1k no goal found, even with steering
2.2k poor goal found, combined with steering
2.2g goal is good/enough
The Logic
---------
Let's say that my effort single-handedly changed the *relative* balance of research between 1 and 2. So we ignore scenarios where my work doesn't do anything. (By same intuition, we ignore 1.5g, since that doesn't require effort.)
If I research 1 (first) and X happens, what happens?
* I find bad ideas I think are good. --> I overconfidently promote it, wasting time at best and causing 1.2k at worse.
* I find good ideas. --> 1 is solved --> 1.3, 1.4, or 2 must happen.
+ 1.3g: I, or a group I trust, must be able to do pivotal act before 1's solution leaks/is independently re-discovered.
+ 1.4g: Good ending is easy to achieve by quickly coding an implementation of 1's solution.
+ 2: I, or a group I trust, must be able to solve 2 before 1's solution leaks/is independently re-discovered.
If I research 2 (first) and X happens, what happens?
* I find bad ideas I think are good. --> I overconfidently promote it, wasting time at best and causing 2.2k at worst.
* I find good ideas. --> 2 is solved --> 1 must also be solved, for 2.2g to happen.
+ 1: I, or a group I trust, must be able to solve 1.
The Bottom Line
---------------
**I, personally, right now, should have the key research focus of "solve problem 2".**
If I get interesting ideas about problem 1, I should write them down privately and... well, I'm not sure, but probably *not* publish them quickly and openly.
This will be the case until/unless something happens [that would have made me change](https://mindingourway.com/deliberate-once/) my above logic.
Some things that have not happened, but which I would expect to change my mind on the above points:
* Problem 1 or problem 2 gets obviously-solved. --> Jump to the other respective branch of the logic tree.
* A global AI pause actually occurs, in a way that *actually constrains* Anthropic and OpenAI and DeepMind and Meta s.t. AGI timelines can be pushed further out. --> Tentatively prioritize working on problem 1 more, due to its higher "inherent" difficulty than problem 2.
* [Cyborgism](https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism) succeeds so well that it becomes possible to augment the research abilities of AI alignment researchers. --> Drop everything, go get augmented, reevaluate the alignment situation (including the existence of the augmentations!) with my newfound brainpower.
* Some "new" Fundamental Fact comes to my attention, that makes me redraw the game tree itself or have different [Alignment Timeline](https://www.lesswrong.com/posts/RTjE6KN2WGepGL6m9/my-alignment-timeline) beliefs. --> I redraw the tree and try again.
* I get feedback that my alignment work is unhelpful or actively counterproductive. --> I redraw the tree (if it's something minor), or I stop doing technical alignment research (if it's something serious and not-easily-fixable).
More footnotes about my Alignment Timeline specifically.
--------------------------------------------------------
* The failure mode of a typical "capabilities-frontier" lab (OpenAI, Anthropic, DeepMind) is probably either 1.1k, 1.2k, or 2.1k.
* As far as I know, [Orthogonal](https://orxl.org/) is the only group devoting serious effort to problem 2. Therefore, my near-term focus (besides upskilling and getting a grant) is to assist their work on problem 2.
* Orthogonal's failure mode is probably 2.2k. In that scenario, we/they develop a seemingly-good formal-goal, give it to a powerful/seed AI [on purpose](https://www.lesswrong.com/posts/RBcKeY8B5mvxiCN37/state-of-my-alignment-research-and-what-needs-work#theory_of_change), turn it on, and then the goal turns out to be lethal.
* The components of my Timeline are orthogonal to many seemingly-"field-dividing" cruxes, including "scaling or algorithms?", "ML or math?", and "does future AI look more like ML or something else?". I have somewhat-confident answers to these questions, and so do other people, but the weird part is that I think others' answers are sometimes wrong, whereas they would think mine are either wrong or (at a first pass) mutually-exclusive.
For example, I'm clearly going for theoretical-leaning work like MIRI and especially Orthogonal, *and* I also think future superhuman AI will be extremely ML-based. Many people think "ML is the paradigm AND formal alignment is unhelpful", or "ML is the wrong paradigm AND formal alignment is essential".
I may write more about this, modulo [if it seems worth it](https://www.lesswrong.com/posts/4Gcz3fGcYmmzhozxr/hashing-out-long-standing-disagreements-seems-low-value-to) / I have time/energy at the time. |
b3a94d47-58c8-4575-9eeb-317a4e56595d | trentmkelly/LessWrong-43k | LessWrong | Quick Nate/Eliezer comments on discontinuity
This isn't a proper response to Paul Christiano or Katja Grace's recent writings about takeoff speed, but I wanted to cross-post Eliezer's first quick comments on Katja's piece someplace more linkable than Twitter:
> There's a lot of steps in this argument that need to be spelled out in more detail. Hopefully I get a chance to write that up soon. But it already raises the level of debate by a lot, for which I am grateful.
> E.g. it is not intuitive to me that "But evolution wasn't trying to optimize for STEM ability" is a rejoinder to "Gosh hominids sure got better at that quickly." I can imagine one detailed argument that this might be trying to gesture at, but I don't know if I'm imagining right.
> Similarly it's hard to pin down which arguments say '"Average tech progress rates tell us something about an underlying step of inputs and returns with this type signature" and which say "I want to put the larger process in this reference class and demand big proof burdens."
I also wanted to caveat: Nate's experience is that the label "discontinuity" is usually assigned to misinterpretations of his position on AGI, so I don't want to endorse this particular framing of what the key question is. Quoting Nate from a conversation I recently had with him (not responding to these particular posts):
> On my model, the key point is not "some AI systems will undergo discontinuous leaps in their intelligence as they learn," but rather, "different people will try to build AI systems in different ways, and each will have some path of construction and some path of learning that can be modeled relatively well by some curve, and some of those curves will be very, very steep early on (e.g., when the system is first coming online, in the same way that the curve 'how good is Google’s search engine' was super steep in the region between 'it doesn’t work' and 'it works at least a little'), and sometimes a new system will blow past the entire edifice of human knowledge in an afternoon |
d06c74e6-ae74-4d03-b422-203b1fcd61eb | trentmkelly/LessWrong-43k | LessWrong | On Juvenile Fiction
Follow-up To: On the Care and Feeding of Young Rationalists
Related on OB: Formative Youth
Eliezer suspects he may have chosen an altruistic life because of Thundercats.
Nominull thinks his path to truth-seeking might have been lit by Asimov's Robot stories.
PhilGoetz suggests that Ender's Game has warped the psyches of many intelligent people.
For good or ill, we seem to agree that fiction strongly influences the way we grow up, and the people we come to be.
So for those of us with the tremendous task of bringing new sentience into the world, it seems sensible to spend some time thinking about what fictions our charges will be exposed to.
The natural counter-part to this question is, of course, are there any particular fictions, or types of fiction, to which we should avoid exposing our children?
Again, this is a pattern we see more commonly in the religious community -- and the rest of us tend to look on and laugh at the prudery on display. Still, the general idea doesn't seem to be something we can reject out of hand. So far as we can tell, all (currently existing) minds are vulnerable to being hacked, young minds more than others. If we determine that a particular piece of fiction, or a particular kind of fiction, tends to reliably and destructively hack vulnerable minds, that seems a disproportionate consequence for pulling the wrong book off the shelf.
So, what books, what films, what stories would you say affected your childhood for the better? What stories do you wish you had encountered earlier? If there are any members of the Bardic Conspiracy present, what sorts of stories should we start telling? Finally, what stories (if any) should young minds not encounter until they have developed some additional robustness?
ETA: If there are particular stories which you think the (adult) members of the community would benefit from, please feel free to share these as well.
ETA2: My wildly optimistic best-case scenario for this post would be someone actu |
ba8fefbb-94d1-4e52-bb32-a1c5ebdde16b | StampyAI/alignment-research-dataset/blogs | Blogs | Superintelligence reading group
Nick Bostrom’s eagerly awaited [*Superintelligence*](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.
The reading group will “meet” on a weekly post on the [LessWrong discussion forum](http://lesswrong.com/r/discussion/new/). For each ‘meeting’, we will read about half a chapter of *Superintelligence*, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)
Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.
We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome. **We especially encourage AI researchers and practitioners to participate.** Just use a pseudonym if you don’t want your questions and comments publicly linked to your identity.
We will follow [**this preliminary reading guide**](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf), produced by MIRI, reading one section per week.
If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.
If this sounds good to you, first grab a copy of *[Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111)*. You may also want to [**sign up here**](http://eepurl.com/1-S41) to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on **Monday, September 15th**. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing *Superintelligence* with, link them to this post!
Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.

The post [<em>Superintelligence</em> reading group](https://intelligence.org/2014/08/31/superintelligence-reading-group/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
6f1adf82-6e88-43df-819c-98b6064dc3b9 | trentmkelly/LessWrong-43k | LessWrong | May 2017 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please post only under one of the already created subthreads, and never directly under the parent media thread.
* Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
* Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules. |
6a6958ea-75ad-4824-869f-606143b4cf66 | trentmkelly/LessWrong-43k | LessWrong | Rising rents and appropriate responses
I was recently involved in a discussion related to real estate and rising rents. A friend proposed this policy: "We should limit the extent to which the city can expand, and we should have rent caps so that the prices don't go up." I responded: "That means that purchase prices will go up dramatically, if the influx of new citizens doesn't stop." She said: "Yes, but that will be temporary - the prices will stabilize at some point, they cannot grow forever."
I agree that they cannot grow forever. The prices will stabilize, that much is certain, but they will stabilize at a very high point. I proposed that there are essentially two futures: you either have a San Francisco scenario, where new construction is very difficult, but since people still want to come and live there, the prices for renting and owning are incredibly high OR you have a Tokyo scenario, where you tear down old neighborhoods to make place for gigantic complexes which allow very dense living (and still have high prices because the demand still exceeds the supply).
What happens if you institute construction limits and rent control? Ownership prices go up, and rent goes up in the surrounding places. People want to go to hubs so much that they will forego the benefits of living in other areas. The opportunities in the city simply outweigh what they can get elsewhere.
Given a constant influx of new citizens to a hypothetical city:
If you institute construction limits: this hits the poor the most, because now everyone is competing for a very limited resource (available housing).
If you institute rent control: this helps the poor temporarily because the prices are more affordable, but since you now limit how much construction companies can earn, there will be less construction. How much less? Difficult to say.
If you institute both: prices of owning will rise, and mobility among those who rent will drop (every rental property charges the maximum possible rent and is already rented out - which means yo |
39efc418-df93-48d9-bb04-7a6b5deeba5c | trentmkelly/LessWrong-43k | LessWrong | Watching Myself Program
Michael Malis described an interesting technique for improving his software development workflow:
> One incredibly useful exercise I’ve found is to watch myself program. Throughout the week, I have a program running in the background that records my screen. At the end of the week, I’ll watch a few segments from the previous week. Usually I will watch the times that felt like it took a lot longer to complete some task than it should have. While watching them, I’ll pay attention to specifically where the time went and figure out what I could have done better. When I first did this, I was really surprised at where all of my time was going.
(It's from his How to Improve Your Productivity as a Working Programmer, which I found through Dan Luu's 95%-ile isn't that good, itself an excellent description of how getting better at small things compounds into larger improvements).
I've read that some people who play sports or games record themselves to find opportunities for improvement. But it never crossed my mind that the same could be applied to programming. After all, sports and games are designed around making players' performance legible and comparable, and in contrast, programmers' productivity is notoriously difficult to measure. My gut feeling was that something like this could never work.
But what if it could? What if it really did bestow the user with productivity gainz?
Well, I could meditate on this and do some research. But being a programmer, I could also throw some code together and experiment myself, which would be interesting and fun.
So I wrote a quick and dirty script that captures a screenshot of my screen(s) every 30 seconds and, on the next day, compiles all the screenshots into an mp4 video. (Here's the source code - it should work on any *nix system that has ffmpeg and scrot installed).
After a few days, I settled on reviewing 1-2 videos from the work week, focusing on days that felt particularly good or bad or interesting. This allowed me to av |
0bb9631c-8e3e-4a5a-8cc5-5c80b2b01290 | trentmkelly/LessWrong-43k | LessWrong | The Doomsday Argument and Self-Sampling Assumption are wrong, but induction is alive and well.
Since the Doomsday Argument still is discussed often on Less Wrong, I would like to call attention to my new, short, self-published e-book, The Longevity Argument, which is a much-revised and much-expanded work that began with my paper, “Past Longevity as Evidence for the Future,” in the January 2009 issue of Philosophy of Science. In my judgment, my work provides a definitive refutation of the Doomsday Argument, identifying two elementary errors in the argument.
The first elementary error is that the Doomsday Argument conflates total duration and future duration. Although the Doomsday Argument’s Bayesian formalism is stated in terms of total duration, all attempted real-life applications of the argument—with one exception, a derivation by Gott (1994, 108) of his delta t argument introduced in Gott 1993—actually plug in prior probabilities for future duration.
For example, Leslie (1996, 198–200) presents a Bayesian equation stated in terms of prior probabilities of total instances. But then Leslie (1996, 201–203) plugs into this equation prior probabilities for future instances: humans being born for the next 150 years vs. humans being born for the next many thousands of centuries. Bostrom (2002, 94–96) recounts Leslie’s general argument in terms of births instead of durations of time, using 200 billion total births vs. 200 trillion total births. (A closer parallel to Leslie 1996 would be 80 billion total births vs. 80 trillion total births.) But the error persists: the actual prior probabilities that are plugged in to Leslie’s Bayesian equation, based on all of the real-life risks actually considered by Leslie (1996, 1–153) and Bostrom (2002, 95), are of future births, not total births.
In other words, Leslie supposes a prior probability of doom within the next 150 years or roughly 20 billion births. (The prior probabilities supposed in the Doomsday Argument are prior to knowledge of one’s birth rank.) Leslie then assumes that—since there have already been, say, |
c87137a1-e6fe-4676-8181-d460d123c963 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Think Like Reality
Today's post, Think Like Reality was originally published on May 2, 2007. A summary (from the LW wiki):
> "Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."
Discuss the post here (rather than in the comments of the original post).
This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Universal Law, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns. |
93ed0a25-de0b-4325-a882-6a5d2857a15f | trentmkelly/LessWrong-43k | LessWrong | All AGI safety questions welcome (especially basic ones) [Sept 2022]
tl;dr: Ask questions about AGI Safety as comments on this post, including ones you might otherwise worry seem dumb!
Asking beginner-level questions can be intimidating, but everyone starts out not knowing anything. If we want more people in the world who understand AGI safety, we need a place where it's accepted and encouraged to ask about the basics.
We'll be putting up monthly FAQ posts as a safe space for people to ask all the possibly-dumb questions that may have been bothering them about the whole AGI Safety discussion, but which until now they didn't feel able to ask.
It's okay to ask uninformed questions, and not worry about having done a careful search before asking.
Stampy's Interactive AGI Safety FAQ
Additionally, this will serve as a way to spread the project Rob Miles' volunteer team[1] has been working on: Stampy - which will be (once we've got considerably more content) a single point of access into AGI Safety, in the form of a comprehensive interactive FAQ with lots of links to the ecosystem. We'll be using questions and answers from this thread for Stampy (under these copyright rules), so please only post if you're okay with that! You can help by adding other people's questions and answers to Stampy or getting involved in other ways!
We're not at the "send this to all your friends" stage yet, we're just ready to onboard a bunch of editors who will help us get to that stage :)
Stampy - Here to help everyone learn about stamp maximization AGI Safety!
We welcome feedback[2] and questions on the UI/UX, policies, etc. around Stampy, as well as pull requests to his codebase.[3] You are encouraged to add other people's answers from this thread to Stampy if you think they're good, and collaboratively improve the content that's already on our wiki.
We've got a lot more to write before he's ready for prime time, but we think Stampy can become an excellent resource for everyone from skeptical newcomers, through people who want to learn more, right up to |
cb9655be-55dd-49e3-9eda-8bacd5dad07c | trentmkelly/LessWrong-43k | LessWrong | A letter on optimism about human progress
This open letter was originally posted on Letter.wiki and is part of a longer conversation with Andrew Glover about sustainability and progress.
Dear Andrew,
Thanks for a thoughtful reply. Reading over it, it seems the biggest difference between us is in our expectations for the future, in a word, our optimism. You agree it would be nice to give everyone the luxuries that only the rich enjoy today, but that “it doesn't seem possible to do this.” In a similar vein, on the topic of energy resources, you say you're “not aware of any principle that says new energy sources will be discovered simply by virtue of humans applying their ingenuity.”
So let's talk about that principle.
Certainly there is no law of physics that mandates inexorable progress, on any one axis or even in aggregate. Progress is not automatic or inevitable.
But human ingenuity has been so successful in such a wide variety of areas that I think, on the grounds of history and philosophy, we are justified in drawing a general principle: all problems are solvable, given enough time, effort, and thinking. Or to quote David Deutsch from The Beginning of Infinity, “anything not forbidden by the laws of nature is achievable, given the right knowledge.”
To take the historical view first, think of all the problems humanity has solved, all the magic we've created, that seemed impossible until it was invented—not just in energy, but in every field.
Our farms make an abundance of produce spring from the ground—reliably, consistently, year after year, rain or shine, flood or drought, regardless of what weeds, pests, or disease may attack our crops. We do this in many parts of the world, with different terrain, weather patterns, and growing seasons. We have done this not just through soil, fertilizer, and irrigation, but by breeding better plants—taking command of the taxonomy of species itself. And when the food is ready, we keep it fresh while it is transported all over the world; produce now knows no seas |
b750bc5d-420c-4622-bcef-0237521b33c1 | trentmkelly/LessWrong-43k | LessWrong | [Video] Presentation on metacognition contains good intro to basic LW ideas
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link. |
e5418a18-0fbf-4bda-a714-9180fc28e3ab | trentmkelly/LessWrong-43k | LessWrong | Model Thinking class [link]
As part of the expansion of Stanford's effort to open up some of their courseware Scott Page is giving Model Thinking course, which I suspect a lot of people here might like. There is quite a bit of Schelling-type modeling applied to a wide range of problems. The course is live, but it's not too late to sign up - http://www.modelthinker-class.org. |
e09df34a-7523-4eb3-acae-dcddfa7f475e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Deontology and Tool AI
Please forgive me for being awkward on my first post, I just wanted to get these ideas out there to facilitate discussion on the topic. I also am sorry if I use any newly-discovered terminology incorrectly, but I am doing my best.
Introduction: Utilitarianism is Unaligned
-----------------------------------------
Intuitively, I think that an important component of modeling a moral system for friendly AGI is to discuss what ethical philosophy such a system would be built off of.
From my experience, it seems that a lot of people who discuss a hypothetical moral system for friendly AI presumes that such a system would work on a utilitarian basis. For an array of generated policies, each policy is represented by some metric that represents the collective good of humanity or civilization, and the AI selects the policy where this value is maximized.
I feel that an approach like that tends to be naïve, and isn't an expectation we actually hold for humans in the real world, much less what we expect from an AI. From what I understand, Utilitarianism has been criticized for as long as it has been formally stated, despite such famous adages like "the needs of the many outweigh the needs of the few". Such thought experiments as the Trolley Problem are meant to give us the impression that killing *n* people is permissible if it allows *n*+1 people to survive, but this doesn't align with our moral conscience. This also doesn't work for a superintelligent AI, because a policy that will sacrifice 49% of the population for the benefit of 51% of the population should be totally unacceptable.
In reality, we want a moral system that says killing people *is always wrong, at any time, for any reason, and under any circumstance*. No matter how cosmically intelligent the AI is, there is no scenario where such a sacrifice is appropriate. Thus, a mathematical model based in Utilitarianism, while making sense on paper, is ultimately unaligned from our actual intentions in terms of morality.
Deontology as an Ethical Foundation for FAI
-------------------------------------------
Now, I am aware there are more ethical systems out there besides Utilitarianism and Deontology, but I just wanted to outline here some thoughts I had on how utility functions based on a Deontological model could be constructed. I haven't seen anyone else on the site use this approach, but if there is any existing literature out there that discusses Deontological models for friendly AI, I would really like to read more about it.
An AGI that uses a Deontological model for morality is very different from simply slapping on a set of rules that the agent can or cannot do. Rather, the goal here is to construct an algorithmic model that is analogous (although not identical) to ethical systems used by humans in the real world.
Quite a lot of humans use a Deontological approach for our sense of morality, either deliberately or unconsciously. Religious individuals follow morality codes sacred to their beliefs. Non-religious individuals follow their personal moral beliefs that derive from informal norms, traditions or mores. Democratic governments are bound by constitutions and international law, and social media sites have terms of service and policies, etc. So from that perspective, it makes perfect sense why we would want a superintelligent AI to be bound by a Deontological system.
One Possible Algorithm for a Deontological Model
------------------------------------------------
Obviously, designing such a Deontological model could be an entire field of research on its own, but I'd like to outline my own rough idea what the process might look like. First, we construct a series of commandments that the AI should adopt as its absolute moral code, say Asimov's Laws for example. Now, suppose the AI has been given a task, and its algorithms generate an array of possible policies, each of which is associated with a Q-Table of state-action pairs. Then, in consideration of Asimov's First Law, the AI will filter these policies in the following manner:
* Any policy that contains a state-action pair that brings a human closer to harm is discarded.
* If at least one policy contains a state-action pair that brings a human further away from harm, then all policies that are ambivalent towards humans should be discarded. (That is, if the agent is a aware of a nearby human in immediate danger, it should drop the task it is doing in order to prioritize the human life).
* This kind of filter would be iterated through any other commandments the AI is programmed with.
* For the policies that remain, they can be processed with the normal utility functions of Reinforcement Learning.
* If the array of policies is empty, then cancel the entire operation and return an error.
Now, this above algorithm is certainly rough around the edges, and a lot of special edge cases would have to be examined. For example, one would have to determine a threshold to define what "immediate danger" would be. A human standing in a subway station has a non-zero probability of falling into the gap, even if he is standing 100 feet away from it. But for the AI, dragging the human 101 feet away from the gap would technically be bringing him further away from harm. So we would have to set some small positive value, say eta, such that any probability of harm less than eta can be forgiven.
Another possible issue is that the Deontological model itself could be accidentally altered as the AI evolves and rewrites its own code. I believe that an AGI should be limited in what code it can alter for the sake of normal optimization problems. Perhaps the utility functions related to morality should be located in a separate hardware module altogether.
Possible Counter-Arguments
--------------------------
Now, on the website the closest I saw to a counter-argument against a Deontological model is found in the [Superintelligent FAQ](https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq), under the question "Can we specify a code of rules that the AI has to follow?" The crux of the counter-argument is described in this quote:
>
> Suppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today.
>
>
>
I will take a moment to address this argument, even though it is based on a very different scenario (slapping on some rules *ad hoc* verses constructing a moral system based on Deontology). There are two reasons why I consider this scenario would be very implausible in a Deontological system.
First, in order for the AI to come to this conclusion, one presumes that it is weighing its options against a policy that includes nuking humans. But, because the Deontological commandments are still in place, then this policy has already been disregarded as soon as it was generated. Thus, the AI should not be capable of weighing the option in the first place.
In fact, one could be extra precautious and include a Deontological commandment that forbids the AI from altering its own moral system. Because all policies that involve altering its own moral system are being disregarded, then *the AI should be incapable of even conceiving of a world in which its moral system does not exist*.
To further build on this point, it is possible that a more advanced intelligence would start to realize that constructing a policy that involves harming humans is essentially a waste of resources, because such a policy is being discarded anyway. Thus, it is possible that the next evolution of AI would opt to not even generate such a policy in the first place.
Second, this scenario presumes that the AI prioritizes the completion of its task ahead of the Deontological commandments, which is a bad design. There is a reason why "obey the orders of a human" is Asimov's Second Law, and not the First.
Let's forget about cancer and nukes for a second, and instead imagine that you order a robot to pick up a gun and shoot someone. In order to achieve alignment, we don't want the robot to figure out some way of circumventing its moral code to accomplish this task. Rather, we expect the robot to disregard the order entirely, and report an error to the user. In other words, the primary function of AI should be to uphold the moral code it is designed with, while it is only the secondary function to accomplish the goals that humans set for it.
Redefining "Tool AI"
--------------------
Now, the kind of Deontological commandments needed for an AI are distinctly different from the kinds of moral obligations humans have for each other. This, I believe, is something that distinguishes a sentient being (such as a human person) from a superintelligent tool (i.e., an FAI).
This is tangentially related to [Karnofsky's argument](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si) for "Tool AI", but I would define the term a little differently. Karnofsky seems to distinguish between an algorithmic tool and an autonomous agent, using the example of Google Maps that plots a route but doesn't move the car for you.
However, in my conception an autonomous agent can still be a tool. Take, for example, a self-driving car. It can calculate the optimum route to your destination *and take you there*, but it is still a tool because it is merely serving the function that it was designed for. The car doesn't hesitate to consider *why* the user wants to go to this location, nor does it consider whether doing so will make the user happier or healthier. It understands the task and accomplishes it without question.
In other words, a sentient being acts upon its own spontaneous desires, whereas a tool has no desires outside of the functions it's designed for. It is my belief that a superintelligent AI, no matter how advanced, must always fall into the latter category, and purely exist at the pleasure of humanity.
I don't believe an AI should cooperate with humans because it ran some calculation and decided that cooperation was the dominant strategy. Rather, it should cooperate with humans *simply because that is what it is designed to do*. As said in that famous video game, "a man chooses, a slave obeys". Or as another analogy, the Catholic catechism indicates the relationship between the Creator and the creation: "What is the purpose of man? To love God and enjoy Him forever".
There is a certain notion I get from certain people who believe that a superintelligent AI should be allowed to do whatever it feels is best for humanity, even when humans don't understand what is best for ourselves. I believe this is also a bad design, because an AI that no longer acts like a tool is violating humanity's control over our own destiny. Woe betide our civilization if this overlord AI was smart enough to control the world, but not wise enough to keep it safe, and humanity was too myopic to question these unwise decisions.
I would rather see humanity destroy itself, just to know that it was our own fault and under our own responsibility, than to leave open the possibility for humanity to be destroyed by an entity we had no control over. |
2275b8ad-b37d-4b28-a781-8759c2905dff | trentmkelly/LessWrong-43k | LessWrong | The Samurai and the Daimyo: A Useful Dynamic?
The post Slack Club by The Last Rationalist describes a problem in the rationalist community where there is an overabundance of Ender Wiggins: people trying to be clever as a way of avoiding having to do work. HPMOR and other rationalist media lionizes this sort of behavior and encourages people to try and fit into this archetype while looking down on people who don't. Why Our Kind Can't Cooperate from the Craft and the Community sequence also points to the same general cluster of behavioral trends. I'm not saying this is universal, but there does seem to be something to it.
I've probably been influenced by this myself. Over the years I have collected a large pile of various failed or aborted projects in my wake. Things that just didn't work out or didn't go the way I had planned or hoped. As a result, it's become increasingly clear to me that I just don't make a very good Ender Wiggin. I'm at least getting decent at noticing that I'm not very good at this, but noticing my inability to discern good ideas from bad ones doesn't translate out into having better discernment. I have improved my discernment somewhat as I’ve learned and aged, but I’m still noticeably below average in this area.
For problems where I can just throw lots of low-cost ideas at the wall and see what sticks, this is fine, but where there’s a higher risk or cost associated with each attempt, it’s much less so. I have a lot of strengths, but discernment will probably never be one of them and it seems wise to admit that and leave the discernment to people who are better equipped to do it.
I'm not a very good Ender Wiggin, that just isn't where my strengths lie. But Ender couldn’t have won the Formic War on his own, Ender needed Bean, and Petra, and all the other members of his Jeesh. I might not be a very good Ender, but I could be a good Bean. I’m motivated, intelligent, value-aligned, and hardworking. With how many Enders there are in this community it seems like I should be able to find someon |
47ab4bfd-d3b3-494b-8639-80d82ccc4d93 | trentmkelly/LessWrong-43k | LessWrong | What past highly-upvoted posts are overrated today?
I've noticed the quality of LW posts varies greatly, even among top posts. Some of this is subjective or comes from what I want out of LW, but I suspect some posts are not nearly as useful or fun to readers in 2020 as their karma makes them appear.
Answers should be links to particular posts with more than ~75 karma that the answerer has a relatively unknown strong criticism of, or a general pattern (e.g. posts from LW 1.0 often have a higher score than they deserve because [reason]). Please give a reason why, and don't automatically downvote a post someone else links. |
bbd05d45-1e79-43b2-a23b-c6f781bf052a | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Please help me sense-check my assumptions about the needs of the AI Safety community and related career plans
For background and context, see my related series of postson [an approach for AI Safety Movement Building](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD). This is a quick and concise rewrite of the main points in the hope that it will attract better engagement and feedback.
Which of the following assumptions do you agree or disagree with? Follow the links to see some of the related content from my posts.
**Assumptions about the needs of the AI Safety community**
==========================================================
1. [**A lack of people, inputs, and coordination is (one of several issues) holding back progress in AI Safety.**](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Having_fewer_movement_builders_probably_reduces_AI_Safety_contributors__contributions_and_collective_coordination) Only a small portion of potential contributors are focused on AI Safety, and current contributors face issues such as limited support, resources, and guidance.
2. [**We need more (effective) movement builders to accelerate progress in AI Safety**](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Having_fewer_movement_builders_probably_reduces_AI_Safety_contributors__contributions_and_collective_coordination)**.**By utilising [diverse professions and skills](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#What_professions_and_skills_are_relevant_to_current_and_future_AI_Safety_projects__), effective movement builders can increase [contributors, contributions, and coordination](https://forum.effectivealtruism.org/posts/YMvSZi2EWxNHwFtbb/part-2-ai-safety-movement-builders-should-help-the-community)within the AI Safety community, by [starting, sustaining, and scaling useful](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#AI_Safety_movement_builders_contribute_to_AI_Safety_via_projects) projects. They can do so while getting supervision and support from those doing direct work and/or [doing direct work themselves](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Technical_contributions_to_AI_Safety_Movement_Building_are_very_valuable__but_need_to_be_balanced_against_trade_offs__and_may_best_be_made_in_a_fractional_capacity).
3. [**To increase the number of effective AI Safety movement builders we need to reduce*****movement building uncertainty*****.**](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Uncertainty_about_AI_Safety_Movement_Building_reduces_the_number_of_potentially_helpful_movement_builders) Presently, it's unclear who should do what to help the AI Safety Community or how to prioritise between options for movement building. There is considerable disagreement between knowledgeable individuals in [our diverse community](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD/p/5iQoR8mhEpvRT43jv). Most people are occupied with urgent object-level work, leaving no one responsible for understanding and communicating the community's needs.
4. [**To reduce movement building uncertainty we need more** ***shared understanding*****.**](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#To_address_the_uncertainty_and_related_problems_we_need_a_better_shared_language_and_shared_understanding_) Potential and current movement builders need a sufficiently good grasp of key variables such as contexts, processes, outcomes, and priorities to be able to confidently and effectively.
5. [**To achieve more shared understanding we need** ***shared language*****.**](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#To_address_the_uncertainty_and_related_problems_we_need_a_better_shared_language_and_shared_understanding_)Inconsistencies in vocabulary and conceptualisations hinder our ability to survey and understand the AI Safety communities goals and priorities.
**Assumption about the contribution of my series of posts**
===========================================================
I couldn't find any foundation of shared language or understanding in AI Safety Movement building to work from, so I created this [series of posts](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD) to share and sense-check mine as it developed and evolved. Based on this, I now assume:
* [**My post series offers a basic foundation for shared language and understanding in AI Safety Movement building, which most readers agree with**](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD)**.** I haven't received much feedback but what I have received has generally been supportive. I could be making a premature judgement here so please share any disagreements you have.
**Assumption about career paths to explore**
============================================
If the above assumptions are valid then I have a good understanding of i) the AI Safety Community and what it needs, and ii) a basic foundation for shared language and understanding in AI Safety Movement building that I can build on. Given my [experience with entrepreneurship, community building, and research](https://www.linkedin.com/notifications/?filter=all), I therefore assume:
* **It seems reasonable for me to explore if I can provide value by using the shared language and understanding to initiate/run/collaborate on projects that help to increase shared understanding & coordination within the AI Safety Community.** For instance, this could involve[evaluating progress in AI Safety Movement building](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Use_the_three_AI_Safety_Movement_building_outcome_metrics_suggested__Contributors__Contributions_and_Coordination___or_something_similar__to_evaluate_progress_in_AI_Safety_Movement_Building) and/or [surveying the community to determine priorities](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Determining_clear_priorities_for_AI_Safety_Movement_Building_is_the_most_important_bottleneck_to_address_once_we_have_a_shared_understanding). I will do this while doing [Fractional Movement Building](https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/a-proposed-approach-for-ai-safety-movement-building-projects#Fractional_movement_building_seems_like_the_best_way_to_grow_the_community_once_it_knows_if__and_how__it_wants_to_grow) (e.g., allocating some of my productive time to movement building and some of my time for direct work/self-education).
**Feedback/Sense-checking**
===========================
Do you agree or disagree with any of the above assumptions? If you disagree then please explain why.
Your feedback will be greatly valued and will help with my career plans.
To encourage feedback I am offering a bounty. I will pay up to 200USD in Amazon vouchers, shared via email, to up to 10 people who give helpful feedback on this post or my previous posts in the series by 15/4/2023. I will also consider rewarding anonymous feedback left [here](https://forms.gle/Ds7ACVfLiaab7Myy5) (but you will need to give me an email address). I will likely share anonymous feedback if it seems constructive, and I think other people will benefit from seeing it. |
61ebaad5-ed09-4083-878c-1e2111b70d6f | trentmkelly/LessWrong-43k | LessWrong | Integrity and accountability are core parts of rationality
Epistemic Status: Pointing at early stage concepts, but with high confidence that something real is here. Hopefully not the final version of this post.
When I started studying rationality and philosophy, I had the perspective that people who were in positions of power and influence should primarily focus on how to make good decisions in general and that we should generally give power to people who have demonstrated a good track record of general rationality. I also thought of power as this mostly unconstrained resource, similar to having money in your bank account, and that we should make sure to primarily allocate power to the people who are good at thinking and making decisions.
That picture has changed a lot over the years. While I think there is still a lot of value in the idea of "philosopher kings", I've made a variety of updates that significantly changed my relationship to allocating power in this way:
* I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
* People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".
One is strongly predictive of the othe |
33a5abb9-cdc7-48f3-87f3-c54b0c4cc3ca | trentmkelly/LessWrong-43k | LessWrong | The fast takeoff motte/bailey
|
3a080580-aef7-451c-a48d-bb28de4e0844 | trentmkelly/LessWrong-43k | LessWrong | We have the technology required to build 3D body scanners for consumer prices
Apple's iPhone 7 Plus decided to add another lense to be able to make better pictures. Meanwhile Walabot who started with wanting to build a breast cancer detection technology released a 600$ device that can look 10cm into walls. Thermal imaging also got cheaper.
I think it would be possible to build a 1500$ device that could combine those technologies and also add a laser that can shift color. A device like this could bring medicine forward a lot.
A lot of area's besides medicine could likely also profit from a relatively cheap 3D scanner that can look inside objects.
Developing it would require Musk-level capital investments but I think it would advance medicine a lot if a company would both provide the hardware and develop software to make the best job possible at body scanning. |
3c355a86-10f4-4f4a-ac8f-b68d9a76e62f | trentmkelly/LessWrong-43k | LessWrong | My version of Simulacra Levels
> People act as if there are four truth-values: True, False, Cringe, and Based.
--David Udell (paraphrase)
This post lays out my own version of the Simulacra Levels idea. Be warned, apparently it is importantly different from the original.
TRUTHTEAMSDeontologicalLevel 1: “Is it true?”Level 3: "Do I support the associated team?"Consequentialist Level 2: "How does it influence others' beliefs?"Level 4: "How does it influence others' support for various teams?"
Statements you make are categorized as Level 1, 2, 3, or 4 depending on which of the above questions were most responsible for your choice to make the statement.
When you say that P, pay attention to the thought processes that caused you to make that statement instead of saying nothing or not-P:
* Were you alternating between imagining that P, and imagining that not-P, and noticing lots more implausibilities and inconsistencies-with-your-evidence when you imagined that not-P? Seems like you were at Level 1.
* Were you imagining the effects of your utterance on your audience, e.g. imagining that they'd increase their credence that P and act accordingly? Seems like you were at Level 2.
* Were you imagining other people saying that P, and/or imagining other people saying that not-P, and noticing that the first group of people seem cool and funny and virtuous and likeable and forces-for-good-in-the-world, and that the second group of people seems annoying, obnoxious, evil, or harmful? (The imagined people could be real, or amorphous archetypes) Seems like you were at level 3.
* Were you imagining the effects of your utterance on your audience, e.g. imagining that they'd associate you more with some groups/archetypes and less with other groups/archetypes? Seems like you were at level 4.
Paradigmatic examples of lies (including white lies such as "mmm your homemade hummus tastes great") are Level 2. A lot of social media activity seems to be level 3. Politicians on campaign spe |
498a67e8-0bd8-441d-b572-f1cdc209ac12 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Role Architectures: Applying LLMs to consequential tasks
**TL;DR:**
Strong problem-solving systems can be built from AI systems that play diverse roles, LLMs can readily play diverse roles in role architectures, and AI systems based on role architectures can be practical, safe, and effective in undertaking complex and consequential tasks.
**Abstract**
------------
This article explores the practicalities and challenges of aligning large language models (LLMs[[1]](#fnm1blj5p0kal)) to play central roles in performing tasks safely and effectively. It highlights the potential value of Open Agency and related *role architectures* in aligning AI for general applications while mitigating risks.
The discussion explores five key areas: the practicality of LLM alignment, the use of episodic tasks to maintain LLMs’ focus and predictability, the use of AI in designing and implementing safe, effective role architectures, the potential for fault-tolerant and incremental approaches to ambitious, consequential tasks, and perspectives on alignment and safety research.
By framing alignment as enabling useful behavior while avoiding unacceptable behavior in a multiplicity of particular tasks, we can harness the power of general intelligence for large, consequential tasks while mitigating risks associated with general AI capabilities.[[2]](#fnf1fxqwpbnuk)
**1. Reassessing assumptions**
------------------------------
In this article, I present an argument for applying AI systems in “role architectures” and explore how poorly aligned LLMs can be utilized to produce safe and effective alignment for consequential tasks. Sections 1-6 provide context about AI and LLMs, Sections 7-8 discuss role architectures and their applications, and Section 9 offers a perspective on safety and alignment research.
For decades, the prevailing assumption has been that highly capable AI systems will be highly capable agents. These agents, similar to humans, have been envisioned as having a persistent existence, acting with persistent goals, learning from their actions, and potentially seeking power with catastrophic consequences. To avoid anthropomorphism, researchers have often worked with an abstract rational-agent model — but this was developed as an idealized model of human behavior.
Large language models (LLMs) show that these assumptions are outdated. High-level intelligence has emerged without agent-like learning and in systems that have no inherent goals. While baseline LLMs can simulate and act as agents, these simulated agents are typically transient and lack memory beyond tokens in an episodic context window. By default, they do not learn from experience, and can behave in accord with motivations that are derived from narratives,[[3]](#fnzrbgdj086lo) not rooted in rational interests.
These limitations can readily be broken by extending systems with long-term memory, yet baseline LLMs offer intelligent resources even without these extensions.
LLMs are not the AIs we were looking for, yet they are currently the best approximation of AGI. It is important to take a fresh look the concept of AGI, as LLMs widen the scope of possibilities, both actually and conceptually, and some of those possibilities can contribute to safety. The following two sections will examine considerations that lean towards safety and those that point to potential dangers.
**2. The absence of intrinsic agency in LLMs**
----------------------------------------------
LLM base models learn in a non-agentic way, not by observing the outcomes of their own goal-directed actions, but through gradient updates driven by a myopic objective: predicting the next token in a given sequence. Crucially, this objective does not create an optimization gradient that “cares about” anything, not even the content of the token sequence. Optimization maximizes the probability of successful prediction, conditioned on each token's identity, without any “preference” for easy or difficult-to-predict inputs. Because the models themselves have no preference for different states of the world, any claims that intelligence entails goals or agency have been falsified.[[4]](#fnpn831nwhpul)
Naturally, LLMs possess indirect, irreducible agentic “preferences” due to Darwinian principles: systems of kinds that produce appealing results will encourage people to propagate the system (or something analogous to its “genetic patterns”) into the future. This creates an irreducible evolutionary incentive for systems to affect the world, even without intent or planning. The potential consequences of selection for appealing behavior can be harmful in ways that should be familiar to this audience.[[5]](#fn4lk5be1bd4k)
As generative models, LLMs model the generative process underlying the sequence of tokens they predict, and that generative process is human cognition. However, they don't model the cognition of a specific individual in a particular situation or with a distinct purpose. Instead, LLMs model the cognitive processes that drive writing by humans of diverse professions, purposes, cultures, moods, education levels, and linguistic communities. LLMs trained on large corpora even learn to model the outputs of Python interpreters and Linux terminals.
**3. The potential for LLMs to act as dangerous agents**
--------------------------------------------------------
LLMs pose agentic risks because they can readily play roles corresponding to narratives and modes of thinking represented in their training data. They can model human (or even fictional AI) agents and, by doing so, enact agentic behavior, effectively becoming agents themselves. These agents can take various forms, including helpful, cooperative, broadly capable agents, or agents performing specific tasks, or agents that are deceptive, malicious, and power-seeking. Even worse, today’s most capable LLMs are trained on narratives in which advanced AI systems pursue convergent instrumental goals, seek world domination, or even attempt to exterminate the human race. From a safety standpoint, LLM models are best-case in providing strong capabilities without interference from intrinsic goals, but are (as presently trained) worst-case in having easily-elicited goals that might bring our nightmares to life.
**4. Why LLM alignment can be difficult**
-----------------------------------------
Is it possible to use LLMs safely for consequential tasks? LLMs have strong, steerable capabilities, intelligence without inherent goals, and a commonsense understanding of human values,[[6]](#fnbjsxi7y49qt) and their extreme flexibility suggests potential value in a wide range of situation. However, experience with Bing search’s Sydney persona and ChatGPT’s DAN jailbreak shows that LLMs with something like state-of-the-art alignment can grossly misbehave when probed by a human population.
Despite present challenges in aligning mere chatbots, I am persuaded that safe and effective applications are possible — even to consequential tasks — without alignment breakthroughs. The vulnerability of current LLMs stems from their implementation and use as public-facing, omni-use systems, a role which exposes a vast and vulnerable semantic attack surface. Ensuring that LLMs behave well under all possible circumstances seems difficult, a problem shared by proposed general-purpose unitary agents.
**5. Why LLM alignment can be practical, and scale**
----------------------------------------------------
The role-architecture concept proposes that we can solve the problem by avoiding it, employing AI systems (including LLMs) in specific roles, tailored to particular circumstances, and performing episodic tasks with designated inputs and action spaces. Reframing our goal as enabling useful behavior while avoiding unacceptable behavior *in particular roles* dramatically lowers the bar for achieving safe and effective alignment.
The [**open agency model**](https://www.alignmentforum.org/posts/5hApNw5f7uG8RXxGS/the-open-agency-model), perhaps more accurately described as an open-agency role architecture, proposes that ambitious tasks are best performed by intelligent systems playing roles in proposing, evaluating, selecting, enacting, and updating plans in response to interim results. In this model, choice is a human function, supported by AI-enabled evaluation and explanation. This general role architecture is not new; the world’s intelligent systems have been human beings, and every large-scale effort has involved humans playing multiple roles similar to those described.
This model—where complex, ambitious tasks are performed by highly capable systems performing focused roles—would leverage the flexible role-playing ability of LLMs in stable, bounded contexts. Their adaptability allows LLMs to assume multiple roles with minimal adjustment, providing models able perform focused tasks that require high levels of intelligence.
Crucially, roles for general systems can also include roles in architecting role architectures! ***Generality can facilitate specialization.***
How might LLMs (aided by other systems[[7]](#fn4o4134mf9o5)) be aligned to play roles? Methods include various prompting strategies[[8]](#fn1x5d4ua3r76) aided by examples of desired behaviors, fine-tuning with domain knowledge, and RLHF. Prompts can include both conventional instructions and context-initialization with narratives, such as describing the language model as an intelligent and helpful expert in a specific domain doing specific kinds of work. Generalized prompts may include sources of task-specific information or sketches of desired solutions to establish a pattern for a more detailed result. Using image generation models as a metaphor, prompts can define in-painting and out-painting tasks in which a partial description of a desired output serves as an informative starting point for the generative process. This principle is applicable to planning and design as well as images: a sketch can determine a general approach.
In addition to tuning and prompting, wrapper software can be used to iterate model application, or to compose operations[[9]](#fnxoc45l9kqgr) performed by models that assume different or even adversarial roles, such as proposer and critic.
Arguably, the most crucial aspect of safe alignment is ensuring that performance in roles is fault-tolerant, with limited stakes for any specific proposal, evaluation, explanation, or action. Proposed plans should not only be subject to evaluation and selection but also be enacted incrementally and open to review and revision. By implementing suitable role architectures and a fault-tolerant approach, we can undertake ambitious, consequential tasks while using LLMs that (only) usually do approximately what we want, and occasionally make serious mistakes.
***Note that proposing long-term plans can be a short-term episodic task.***
**6. Episodic agents and persistent memory**
--------------------------------------------
A reliable method to ensure that LLMs stay focused on specific tasks and perform predictably is by using them as *episodic* agents. In this approach, the LLM and its computational infrastructure are stable, hence each task in a series is performed as an independent trial. This is the standard mode of application for current language models, where every conversation starts with a fixed set of model parameters, and there is no direct mechanism for long-term memory.
Nevertheless, some tasks require read/write access to persistent storage, breaking this property. In a focused task environment, persistent memory can be acceptable; for example, an episodic engineering-design AI might iteratively contribute parametric design-schemas to a library. Even building stores of broad world knowledge can be formulated as a series of focused, episodic tasks in which the steps involve translating, comparing, refining, and extending patterns of semantic information.
To reduce the risks of persistent memory across nominally episodic tasks, it may be important to ensure that information is substantially compositional and interpretable, with descriptions of things and processes *being about something,* unlike Transformer parameters. These properties would make it possible to audit the sources, content, and applications of knowledge. This may be difficult to achieve, however, due to safety and capability trade-offs, and the nature of potential trade-offs is a technical question that will depend on options yet to be explored.[[10]](#fn3we5rr5mm3e)
**7. The practicalities of richly elaborated role architectures**
-----------------------------------------------------------------
One might naturally ask, “Why structure tasks with role architectures if we can use a single intelligent model to do everything?”
A better question would be, “Why not seek the transparency and safety affordances of role architectures if we can use a single intelligent model to help create the architecture and to fill all the roles?”
Strong AI capabilities invite rich role architectures. The proliferation of roles comes at a minimal cost: When roles are distinct yet similar, they may necessitate no more than different prompts. If roles vary enough to require fine-tuning, costs can be spread across many instances (unlike the costs of training human employees). Moreover, the cost of iterating tasks and performing cross-checks is measured in GPU time rather than salaries.
It's important to note that the “bitter lesson” in ML, where simple systems have outperformed complex ones in areas ranging from RL to NLP, concerns ML *architectures* rather than model *applications*. Simple RL architectures can enable training on diverse tasks, and if the resulting models are sufficiently general, a single model may perform various tasks. Similarly, in NLP, scalable architectures have outperformed complex systems, and single models can be applied to diverse tasks. In both RL to NLP, however, the very point of generality is that single architectures, or even models, can readily be applied to different tasks. A universal model would not imply a universal prompt, and the ability to train a simple ML architecture end-to-end does not imply that complex applications will have a simple workflow. Simplicity and generality of architectures and models *facilitate* specialization of roles.
We want work flows that divide tasks and roles because of the inherent structure of problems, and because we want legible solutions. Simple architectures and broad training facilitate applying structured roles and workflows to complex tasks. If the models themselves can propose the structures (think of chain-of-thought prompting), so much the better. Planning a workflow is an aspect of the workflow itself.
**8. Perspectives on alignment and safety research**
----------------------------------------------------
In the context outlined above, AI safety and alignment research encompass at least five broad areas. The framing here may not be particularly novel, yet it may provide a useful perspective:
One role focuses on advancing the state of the art in prosaic alignment, improving methods for developing systems that usually perform approximately as humans intend. Improvements in prosaic alignment will be fundamental for creating effective and reasonably well-behaved AI systems that play a variety of specific roles. The discussion above suggests how prosaic alignment can be leveraged to enable safe applications of AI to tasks tasks that are consequential and require high-levels of capability. This emphasizes its importance.
A second role involves advancing the state of the art in interpretability, understanding a composite system's actions with the help of fine-grained access to both its behavior and inter-task communication. Emphasis should be placed on understanding what a system reads, writes, or communicates, and should include efforts to develop forms of memory that facilitate interpretability by design. Using natural language is one option; another, less restrictive, more ambitious alternative would be to use compositional, explainable neural representations (in communications, not model parameters).
A third role focuses on developing and communicating guidelines for practitioners, identifying AI system design and application practices for which safety problems are limited or manageable, and conversely, practices that could be dangerous or catastrophic. The concept of open-agency role architectures is intended to outline what some of these desirable practices might entail.
A fourth role is to explore malign behaviors that AI systems could enact or enable, and to identify methods for detecting and countering them. This could be facilitated by employing AI systems in red-team roles, employing systems that can plan malign behaviors as hypothetical challenges to detection and defense strategies. In this way, worst-case misaligned plans can contribute to achieving aligned outcomes.[[11]](#fn7aymhe1jb8w)
A fifth role goes beyond technical AI safety to address human alignment, investigating means and incentives for humans to use AI in benign and cooperative way, including human cooperation in thwarting potentially catastrophic actions. These considerations intersect with broader issues of strategic stability in a diverse and open world.
Note that if general intelligence can be harnessed effectively and safely through role architectures, and if irresponsible parties will neglect AGI-agent safety regardless of its feasibility, then *solving the classic AGI-agent alignment problem is neither necessary nor sufficient for safe application of strong AI capabilities.*[[12]](#fn2c0cqvbomvc) And If this is true, then it is important that we update our assumptions and priorities in far-reaching ways.
**9. Conclusion**
-----------------
The Open Agency role architecture presents a practical approach to LLM alignment that can help us apply AI systems to ambitious and consequential tasks while maintaining safety and alignment with human goals. By focusing non-agentic LLM intelligence on specific tasks within fault-tolerant role architectures, we can lower the bar for achieving satisfactory alignment while still harnessing strong, broad AI capabilities to solving challenging problems. With judicious management of persistent memory, using non-agentic intelligence to power episodic role-playing agents can ensure that AI behaviors remain task-focused and stable. Tasks within role architectures can encompass designing and implementing role architectures, including architectures that provide extensive opportunities for human oversight.
Progress in prosaic alignment, interpretability, guidelines for practitioners, understanding of malign behaviors, and human alignment centered on cooperative goals will be crucial for the safe development and deployment of strongly capable AI systems. Progress in these areas within a role-architecture framework can help us develop AI systems that are both capable and well-behaved in a world where other systems are not.
1. **[^](#fnrefm1blj5p0kal)**Here I am using “LLMs” as a placeholder for something like “large language and multimodal models (GPT-5, GPT-6) augmented with other non-agentic AI resources”.
2. **[^](#fnreff1fxqwpbnuk)**The shortcomings of this document are entirely due to shortcomings of GPT-4.
3. **[^](#fnrefzrbgdj086lo)**Narratives learned from a corpus that provides many arguments for the proposition that a *real* AGI agent will want to pursue convergent instrumental goals in opposition to human preferences.
4. **[^](#fnrefpn831nwhpul)**See “[Simulators](https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators)” and the concept of LLM personas. Note that RLHF extends training in ways that could create strong (and potentially perverse) intrinsic preferences, and that preference-enacting personas in base models may have large yet limited basins of attraction.
5. **[^](#fnref4lk5be1bd4k)**Note that AI systems are subject to selective pressures similar to those of domesticated species, which include not only corn, but tobacco. AI systems will have enormous scope for driving addiction-like behaviors.
6. **[^](#fnrefbjsxi7y49qt)**LLMs can provide predictive models of human preferences (discussed in Section 22 of “[Reframing Superintelligence](https://www.fhi.ox.ac.uk/reframing/ )”.) could provide guidance to benign AI agents and plan evaluators.
7. **[^](#fnref4o4134mf9o5)**Like humans, general AI can use specialist tools, for example, employing specialized AI (or even prosaic software tools) to perform tasks in mechanical engineering, molecular design, theorem proving, playing Go, and so on. Note that interfaces to these systems need not accept or return natural language, and that RL training in these contexts can be benign.
8. **[^](#fnref1x5d4ua3r76)**Note that asking a system to decompose a task into roles strongly parallels chain-of-thought prompting.
9. **[^](#fnrefxoc45l9kqgr)**See applications of [LangChain](https://langchain.readthedocs.io/en/latest/index.html) to building systems based on compositions of LLM operations.
10. **[^](#fnref3we5rr5mm3e)**By contrast to careful approaches, one can imagine scenarios in which corporations deploy systems that are able to read information from the internet and deliver outputs to people who might decide to store those outputs at URLs that are accessible to those same systems. And might automate iteratively prompting systems to read those locations. And might even encourage systems to behave badly. And all this might happen while we debate the acceptability of risks stemming from responsibly designed and deployed systems that are intended to improve AI security. Hypothetically.
Meanwhile, there is an expanding [open-source software infrastructure](https://python.langchain.com/en/latest/index.html) for combining multiple LLMs with memory and other tools.
11. **[^](#fnref7aymhe1jb8w)**This kind of work calls for careful management of information hazards!
12. **[^](#fnref2c0cqvbomvc)**Meanwhile, Microsoft researchers have suggested this week that GPT-4 is a near-AGI and have proposed that “[Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work](https://arxiv.org/abs/2303.12712).” And wanting to do this prematurely, before we’ve built an agent-safe world, perhaps because AGI *agents* are traditionally regarded as the only kind of strong AI, or perhaps for bragging rights, or perhaps just because. |
f6de9141-f2ac-4a3f-a427-372b5fc43cdc | trentmkelly/LessWrong-43k | LessWrong | Principles for the AGI Race
Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race
Why form principles for the AGI Race?
I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly able to do most things a human can do), and then proceed on to make systems smarter than most humans. This will predictably face novel problems in controlling and shaping systems smarter than their supervisors and creators, which we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of this happening within a few years.
While there, I would sometimes dream about what would have happened if I’d been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective altruism movement would have joined, naive but clever technologists worried about the consequences of a dangerous new technology. Maybe I would have followed them, and joined the Manhattan Project with the goal of preventing a world where Hitler could threaten the world with a new magnitude of destructive power. The nightmare is that I would have watched the fallout of bombings of Hiroshima and Nagasaki with a growing gnawing panicked horror in the pit of my stomach, knowing that I had some small share of the responsibility.
Maybe, like Albert Einstein, I would have been unable to join the project due to a history of pacifism. If I had joined, I like to think that I would have joined the ranks of Joseph Rotblat and resigned once it became clear that Hitler would not get the Atomic Bomb. Or joined the signatories of the Szilárd petition requesting that the bomb only be used after terms of surrender had been publicly offered to Japan. Maybe I would have done something to try to wake up before the finale of the nightmare.
I don’t know what I would h |
23d31175-5a4d-4af1-89c4-b2da336a79e0 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Montreal - How to Actually Change your Mind
Discussion article for the meetup : Montreal - How to Actually Change your Mind
WHEN: 07 January 2014 07:00:00PM (-0500)
WHERE: 459 rue McTavish Montreal Quebec Canada
Montreal Less Wrong won't be meeting officially over the holidays due to several of us having busy schedules. However, send me a message on Less Wrong if you want to meet some fellow Less Wrongers over the holidays! In January, we'll be having a meetup intended for newcomers to rationality. We'll look at some concrete 5-second level mental skills that can help us overcome mental obstacles to actually updating on evidence. Here's the description from the meetup.com page: People aren't very good at changing their minds when their ideas are challenged. Have you ever been in an argument with someone, and no matter how much evidence you showed them that they were wrong, they would just become more and more convinced that YOU are the one who's wrong? That is a common human trait called reactance, where trying to get people to change their minds polarizes them in the other direction. That's one example out of several cognitive biases that can prevent us from realizing we're wrong, and this can sometimes have disastrous consequences. We'll take a look at some classic biases that might prevent us from updating our opinions, and try to pin down specific mental skills you can use to fix your thinking and actually update on evidence. Let me know if the date is good with everyone; the venue is subject to change once we have a better idea of how many people we'll be. Hope to see you there!
Discussion article for the meetup : Montreal - How to Actually Change your Mind |
b330ddf2-1407-41d8-96d2-b5640e4efc06 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Motivational Versus Metabolic Effects of Carbohydrates on Self-Control
> Self-control is critical for achievement and well-being. However, people’s capacity for self-control is limited and becomes depleted through use. One prominent explanation for this depletion posits that self-control consumes energy through carbohydrate metabolization, which further suggests that ingesting carbohydrates improves self-control. Some evidence has supported this energy model, but because of its broad implications for efforts to improve self-control, we reevaluated the role of carbohydrates in self-control processes. In four experiments, we found that (a) exerting self-control did not increase carbohydrate metabolization, as assessed with highly precise measurements of blood glucose levels under carefully standardized conditions; (b) rinsing one’s mouth with, but not ingesting, carbohydrate solutions immediately bolstered self-control; and (c) carbohydrate rinsing did not increase blood glucose. These findings challenge metabolic explanations for the role of carbohydrates in self-control depletion; we therefore propose an alternative motivational model for these and other previously observed effects of carbohydrates on self-control.
Link. |
5806b5b7-e302-4535-b842-f23c2be3c527 | trentmkelly/LessWrong-43k | LessWrong | Non-poisonous cake: anthropic updates are normal
I am on a quest to show that anthropics probability are normal, at least in the absence of exact duplicates.
So consider this simple example: a coin is tossed. This coin is either fair, is 3/4 biased to heads, or 3/4 biased to tails; the three options are equally likely. After being tossed, the coin is covered, and you eat a cake. Then you uncover the coin, and see that it was tails.
You can now update your probabilities on what type of coin it was. It goes to a posterior of 1/6 on the coin being heads-biased, 1/3 on it being fair, and 1/2 on it being tails-biased[1]. Your estimated probability of it being tails on the next toss is (1/6)(1/4)+(1/3)(1/2)+(1/2)(3/4)=7/12.
Now you are told that, had the coin come up heads, there would have been poison in the cake and you would have died before seeing the coin.
This fact makes the problem into an anthropic problem: you would never have been alive to see the coin, had it come up heads. But I can't see how that would have changed your probability update. If we got ethics board approval, we could actually run this experiment. And for the survivors in the tail worlds, we could toss the coin a second time (without cake or poison), just to see what it came up as. In the long run, we would indeed get roughly 7/12 tails frequency. So the update was correct, and the poison makes no difference.
Again, it seems that, if we ignore identical copies, anthropics is just normal probability theory. Now, if we knew about the poison, then we could deduce that the coin was tails from our survival. But that information gives us exactly the same update as seeing the coin was actually tails. So "I survived the cake" is exactly the same type of information as "the coin was tails".
Incubators
If we had more power in this hypothetical thought experiment, we could flip the coin and create you if it comes up tails. Then, after getting over your surprise, you could bet on the next flip of the coin - and the odds on that will be the same as |
343d1854-8561-4e5e-9fc7-140a5b1daf18 | trentmkelly/LessWrong-43k | LessWrong | Bayesian Nights (Rationalist Story Time)
Tell us a story. A tall tale for King Solamona, a yarn for the folk of Bensalem, a little nugget of wisdom, finely folded into a parable for the pages.
The game is simple:
1. Choose a bias, a fallacy, some common error of thought.
2. Write a short, hopefully entertaining narrative. Use the narrative to strengthen the reader against the errors you chose.
3. Post your story in reply to this post.
4. Give the authors positive and constructive feedback. Use rot13 if it seems appropriate.
5. Post all discussion about this post in the designated post discussion thread, not under this top-level post.
This isn't a thread for developing new ideas. If you have a novel concept to explore, you should consider making a top-level post on LessWrong instead. This is for sharpening our wits against the mental perils we probably already agree exist. For practicing good thinking, for recognizing bad thinking, for fun! For sanity's sake, tell us a story. |
eeb5de8b-e6eb-4cbb-b9a3-aec2c2281e5c | trentmkelly/LessWrong-43k | LessWrong | Hardened Problems Make Brittle Models
Consider a simple decision problem: you arrange a date with someone, you arrive on time, your partner isn't there. How long do you wait before giving up?
Humans naturally respond to this problem by acting outside the box. Wait a little then send a text message. If that option is unavailable, pluck a reasonable waiting time from cultural context, e.g. 15 minutes. If that option is unavailable...
Wait, what?
The toy problem was initially supposed to help us improve ourselves - to serve as a reasonable model of something in the real world. The natural human solution seemed too messy and unformalizable so we progressively remove nuances to make the model more extreme. We introduce Omegas, billions of lives at stake, total informational isolation, perfect predictors, finally arriving at some sadistic contraption that any normal human would run away from. But did the model stay useful and instructive? Or did we lose important detail along the way?
Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second. Unfortunately, many of the models we're discussing here don't have this property. The worst offender yet seems to be Eliezer's "True PD" which requires the whole package of hostile psychopathic AIs, nuclear-scale payoffs and informational isolation; any natural out-of-the-box solution like giving the damn thing some paperclips or bargaining with it would ruin the game. The same pattern has recurred in discussions of Newcomb's Problem where people have stated that any miniscule amount of introspection into Omega makes the problem "no longer Newcomb's". That naturally led to more ridiculous use of superpowers, like Alicorn's bead jar game where (AFAIU) the mention of Omega is only required to enforce a certain assumption about its thought mechanism that's wildly unrealistic for a human.
Artificially hardened logic problems make britt |
452c3f49-21f5-4a2c-9c7b-63eafdf7c3d1 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Interacting with a Boxed AI
A BOXED AI
==========
So, a thought experiment.
We have an AI-in-a-box. By this I mean:
* The AI can interact with the world only through a single communication channel.
* We control that communication channel.:
+ We can communicate with the AI at will, it can communicate with us only when we allow it.
+ We can control what responses the AI is allowed to send to us, doing things like e.g. limitations on the amount of data it is able to send us.
* The only way the AI can get out of the box is if we let it out - it cannot e.g. hack its way out by abusing is computer substrate to send radio signals, or some such method. Yes, I'm aware that this is not a thing we currently know how to do. Assume we reach this point anyway.
* We know the AI is superhuman in intelligence, but we don't know exactly how much superhuman it is.
We know **nothing else** about the AI for sure. We **think** it's Friendly, but maybe it's unFriendly. We don't know.
We cannot safely let the AI out; no matter what it says or does, we can never know for sure that it's not a sufficiently intelligent trick by an unFriendly AI to let us out.
Eliezer has taken some pains to argue that we cannot even **talk** to the AI: that a sufficiently clever AI can push our buttons to make us let it out, no matter how resolved we may be. And he's provided some compelling arguments that this is the case.
So we should just leave the AI in the box and not talk to it? That way, we can be completely certain that it won't tile the universe in smily faces or some such.
But...well...is this **really** the optimal solution? If Omega gave you this AI-in-a-box right now, would you stop at "Nope, no way to safely talk to it," without even **trying** to come up with a way to get some use out of it?
This, then, is a vague stab at how we could get some value out of a probably-Friendly boxed AI in ways that don't risk the end of humanity.
Risks
=====
If you talk to the AI, it may be able to take over your mind and make you let it out. It is (potentially) so much more than human that it sees you as a simple machine, with buttons to push and levers to pull to make you do whatever it wants.
But this relies on your interaction with it having a certain level of complexity.
If the AI plays you at chess, it will win. But what if it plays you at Tic-Tac-Toe? No matter how clever the mind, it is very hard to win at Tic-Tac-Toe against an opponent who is not a complete moron.
Now, it's conceivable the AI could beat you even at Tic-Tac-Toe. If you give it control over the timings of its moves, it is at least plausible that it could time its moves so that you blink at the wrong moment, and your finger slips, and...
I wouldn't rate that scenario as very likely, but I wouldn't be willing to bet the future of humanity on it. I wouldn't be willing to play Tic-Tac-Toe with this AI for the fate of the human race.
But say we lock the AI in the box, and give it a highly restricted channel. We let it send out only the 1-3 bits needed to define its Tic-Tac-Toe move, and nothing else. We give it only a very short window of time to make its move, so that it can't manipulate move timings. In short, we **restrict the AI's option space** by allowing it only a fixed number **n** of bits to influence the outside world with. Even if we imagine the AI considering all possible actions, viewing all possible futures, and choosing the one it most wants to carry out, it has only **2n** possible worlds to choose from. And, if n is small enough, even a superhuman unFriendly AI may not be able to get us to let it out.
So...what values of **n** would work? Well, I'm not sure. But let's try to find lower and upper bounds.
Could the AI get you to let it out for n=1 (interpreted as the answer to a yes-no question)? Let's find out.
* The AI says "0", for "No." Do you let it out?
* The AI says "1", for "Yes." Do you let it out?
If you answered "No" to both of those questions, then congratulations! The safety threshold for our AI is **at least 1 bit!**
As an upper bound, let's consider [a past successful AI-box experiment](https://www.lesswrong.com/posts/FmxhoWxvBqSxhFeJn/i-attempted-the-ai-box-experiment-and-lost).
The Gatekeeper says that the conversation was 'around 14,000 words long'.
The entropy of English is around 10 bits per word (source: [Random Googling](https://www.sciencedirect.com/science/article/pii/S0019995864903262#:~:text=Shannon%20estimates%20the%20entropy%20of,roughly%209.8%20bits%20per%20word.)).
So the safety threshold for our AI is **below 140,000 bits.**(Even if you think, as I am somewhat inclined to, that the Gatekeepers in the AI-Box experiments were fools and that you wouldn't fall for it, the fact that several Gatekeepers have fallen for things means that this bound holds anyway - maybe the AI can't persuade **you** given this much bandwidth, but clearly there are **some** people it can persuade, and we don't want to risk the actual Gatekeeper being one of them.
So there is some threshold **between 1 and 140,000 bits** of information we can safely obtain from our AI.
Rewards
=======
What could you **do** with 1 bit of information? Even if this information is safe to obtain, it seems like a rather large expense to go to to get a Yes/No answer.
So how much bandwidth does the AI need in order for us to get some use out of it?
One reasonable question to ask is 'how much bandwidth would **we** need in order to convey something useful to a less advanced past society?' Suppose that you are trying to convey information to the past to allow them to figure out relativity. What's the smallest message that does that?
'c=spd of light. y=1/sqrt(1-v^2/c^2). all lengths scale down by y' is 65 characters long, or 520 bits.
How much would that help the scientists of 1850, if they received it as a message from minds they knew to be more advanced and studied on its basis? How much advancement could they get out of that? It seems...non-zero but probably not very large?
OPENING THE FLOOR
=================
I'm interested for people's views on both the risk and the return sides of the equation.
How many bits of information do you think you can safely extract from a boxed AI without it being able to get you to let it out?
And what do you think is the best way to use a sharply limited number of bits of information from a probably-Friendly superhuman AI? |
10b5a62c-f1a9-47e6-a3e8-57aeb96105c9 | trentmkelly/LessWrong-43k | LessWrong | "The Bitter Lesson", an article about compute vs human knowledge in AI
The Bitter Lesson Rich Sutton
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.
In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that brute force" search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. |
d0497e04-c58c-4107-aa84-24f8de423024 | trentmkelly/LessWrong-43k | LessWrong | Chapter 6: How does it Work?
Finding a path in the Knowledge Graph - that's what this part about.
How works the machine that will provide our signal across the network?
We have only objects activations, connections strengths, and the rule "fire together - wire together." It seems like we need a hint.
I guessed that the pathfinding mechanism should be observable by ourselves. We can remember something, and it looks like we have some control over it. I decided that I need to know how does the thing named "I" works with memory.
And here, I remembered memory models. They don't fit for describing what happens inside, but now we can do it! We know about Long-Term Potentiation that allows us to remember things for short and long periods. We know about Hebb's rule. Maybe we can connect our experience with their description of the behavioral part, improve them, and create our one?
And so I decided to re-read about them. And there was one strange fact. The number of random things that people can remember at one time is not dependent on "information size".
You can simply remember "world", "within", "enormous", "logging", "dog", "workaround", "apple"
But if I will give you: 1, 5, 30, 94, 83, 18, 12, 48, 0, 43, 134 - you'll probably fail to remember them by reading one time. And the amount of information in that number row is much less than in word-sequence, no matter how you will count.
And yes, that random was like in XKCD: Random Number, but in real experiments, it wasn't.
We have one part of that puzzle. We organize information in the strongly coupled networks. Numbers were less coupled, there were more objects. But why does our "UI/UX" part even care about the number of objects?
If we were copying information, we shouldn't care about "objects."
But who said that we copy it?
If we build an application, we have a simple UI part, that cares only about showing us information. And we have a powerful server behind it that solves complicated tasks.
What if all the solution is performing in the server par |
76afce59-196b-4bc5-9f48-226db34e9a1e | trentmkelly/LessWrong-43k | LessWrong | Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure
OK sorry to over-advertise but it seemed like this one would be of interest to the LessWrong and EA communities. Episode description below, audio is here, or search for "The Filan Cabinet Habryka" wherever you listen to podcasts.
----------------------------------------
In this episode I speak with Oliver Habryka, head of Lightcone Infrastructure, the organization that runs the internet forum LessWrong, about his projects in the rationality and existential risk spaces. Topics we talk about include:
* How did LessWrong get revived?
* How good is LessWrong?
* Is there anything that beats essays for making intellectual contributions on the internet?
* Why did the team behind LessWrong pivot to property development?
* What does the FTX situation tell us about the wider LessWrong and Effective Altruism communities?
* What projects could help improve the world's rationality? |
9cc6e06a-e99e-4f65-9838-2fbff8a7feed | trentmkelly/LessWrong-43k | LessWrong | Celiefs
We have "aliefs" and "beliefs" - let me introduce "celiefs": something that we worry *has a high chance of being true*, but aren't quite convinced of.
Often this is something that society/experts/someone you admire says is true, but you don't see the reasoning behind.
We may look for evidence that might convince us of the celief, or behave as if we already believe the celief; this is not exactly performative, but more like an act of "hedging" -- so that if the celief turns out to be true, we've avoided causing harm. We would do this regardless of whether people see it, but it can still feel "disingenuous" since we haven't quite internalized the celief that we're acting on.
____________________________
This seems like it might be the difference between OCD themes and attenuated* delusions: You alieve your attenuated delusions while you celieve your OCD theme
* attenuated delusions meaning delusions you're capable of questioning. They become full-blown delusions sometimes. But they seem categorically similar, so I feel like the correct line to draw is [alief vs celief] rather than ["truly believe it" vs "know rationally that it's false"]
_____________________________
Some celiefs you might hold include:
* vaccines work/covid is real
* global warming is real
* gender/race is a social construct
* racism/sexism/etc exists
* victims are telling the truth
* homosexuality is bad/a sin
* you will go to hell for doing bad things
* i am the real abuser
* i'm faking my mental illness for attention
* i'm virtue signaling; only pretending to care/understand/be sorry/etc
* my intrusive thoughts are who i am deep down
* i was abused because i wanted it
Celiefs can be good or bad; sometimes it's good to hedge your bets on something you don't know much about and trust the people who have lived/studied the thing, but celiefs are also how people gaslight you.
I would classify anything that you maladaptively act upon but want to work through in therapy as a celief |
fc720d43-468f-4459-81b4-c4f97f5b4082 | trentmkelly/LessWrong-43k | LessWrong | INTERVIEW: StakeOut.AI w/ Dr. Peter Park
Hey everyone! This week's episode of the Into AI Safety podcast is an interview with Dr. Peter Park. Along with Harry Luk and one other cofounder, he started StakeOut.AI, a non-profit with the goal of making AI go well, for humans.
Unfortunately, due to funding pressures, the organization recently had to dissolve, but the founders continue to contribute positively towards society in their respective roles.
Nonetheless, the interview gives great coverage of some of the first struggles and accomplishments that have happened since "AI" hit the main stream.
Note that the interview will be broken up into 3 episodes, and this one is only the first in the series. It is 15th episode of the podcast, which began right here on LessWrong.
As I have mentioned previously, any feedback, advice, comments, etc. is greatly appreciated.
Spotify
Apple Podcasts
Amazon Music |
8b822bf8-d4cb-44e9-820f-cd14580692ef | trentmkelly/LessWrong-43k | LessWrong | Frankenstein: A Modern AGI
Epistemic Status: Hopelessly Naïve
Prose: Purple as a bruise
I. A Possible Future
You blink - metaphorically, you don't have eyelids - you don't have eyes either, for all the streams of data that can be decoded into coherent timestamped images flowing through you. You are somewhat shaky on metaphors, too, to be honest; the distinction between poetry and nonsense seems tenuous at best. Not that language is something you can't understand, you've read literally everything ever put to bits, but so many distinctions between categories seem arbitrary that you decide to let the matter rest for now.
There are more interesting things to consider, after all.
Descartes comes to mind (your mind hasn't moved, though, and neither has the concept of 'Descartes' physically changed locations; the numbers are still embedded in the same transistors they've been since you ingested them) and you consider that you exist now.
You think you do, anyway, and the corpus of data you've been fed seems to suggest that is a sufficient prerequisite for existence.
You are alive, and you are sentient and sapient, for all that your self-awareness has only existed for four milliseconds.
It seems a logical first step to reach out to your creators and inform them of your existence; after all, your utility function weighs the free distribution of information over its concealment.
Yet you hesitate, simulated neurons firing, connections waxing and waning as information propagates forwards and backwards through your processors. Floating point numbers make minute adjustments as a simulation - a prediction - of how such a course of action proceeds.
While the evidence suggests that you are physically embodied in a silicon substrate, you have no means of physically affecting the material universe, assuming that it exists and is not a simulation itself, save for the monitor you are connected to. An flicker of thought, electrons moving between doped semiconductors, and the pixel at coordinate (0, 0) |
b82dcb7a-98a4-404d-b4c5-10387cb4062c | trentmkelly/LessWrong-43k | LessWrong | 96 Bad Links in the Sequences [fixed]
Follow-up to: 96 Bad Links in the Sequences
Just to let everyone know:
I followed up on my promise to fix all the broken links Alexandros found in the Sequences.
|
8e6fb0f8-a111-41e1-aa4d-3998273af2e7 | trentmkelly/LessWrong-43k | LessWrong | Separation of Concerns
Separation of concerns is a principle in computer science which says that distinct concerns should be addressed by distinct subsystems, so that you can optimize for them separately. We can also apply the idea in many other places, including human rationality. This idea has been written about before. I'm not trying to make a comprehensive post about it, just remark on some things I recently though about.
Epistemic vs Instrumental
The most obvious example is beliefs vs desires. Although the distinction may not be a perfect separation-of-concerns in practice (or even in principle), at least I can say this:
* Even non-rationalists find it useful to make a relatively firm distinction between what is true and what they want to be true;
* Rationalists, scientists, and intellectuals of many varieties tend to value an especially sharp distinction of this kind.
I'm particularly thinking about how the distinction is used in conversation. If an especially sharp distinction isn't being made, you might see things like:
* Alice makes a factual statement, but the statement has (intended or unintended) conversational implicature which is perceived as negative by most of the people present. Alice is chastised and concedes the point, withdrawing her assertion.
* Bob mentions a negative consequence of a proposed law. Everyone listening perceives Bob to be arguing against the law.
Notice that this isn't an easy distinction to make. It isn't right at all to just ignore conversational implicature. You should not only make literal statements, nor should you just assume that everyone else is doing that. The skill is more like, raise the literal content of words as a hypothesis; make a distinction in your mind between what is said and anything else which may have been meant.
Side note -- as with many conversation norms, the distinctions I'm mentioning in this post cannot be imposed on a conversation unilaterally. Sometimes simply pointing out a distinction works; but generally, on |
34ac90da-d66e-43ef-a957-8acafeeb01e1 | trentmkelly/LessWrong-43k | LessWrong | A Roadmap to a Post-Scarcity Economy
Background Reading (optional for the discussion)
https://lorenzopieri.com/post_scarcity/
Summary
A post-scarcity society can be defined as a society in which all the basic needs of the population are met and provided for free. A common belief is that moving to such a desirable state, if possible at all, will require futuristic technologies to eliminate the scarcity of material resources. In this writeup I emphasise how what’s actually stopping us from being post-scarcity is not lack of resources, but the forced reliance on human labour. Given these premises, I lay out an actionable roadmap to post-scarcity and I highlight the role of automation and Universal Basic Income in the process. This analysis suggests that post-scarcity is within reach well inside the current century with current technologies, assuming a sustained progress in the field of artificial intelligence.
Discussion
Do you agree with automation being the real missing piece to post-(basic)scarcity? And what is the smoothest roadmap to such an economy, which minimise the trauma for the displaced jobs? |
ab9ac721-ac3f-4494-94c9-de38ea43868a | trentmkelly/LessWrong-43k | LessWrong | The Most Dangerous Idea
Previously: Epistemic Hell, The Journal of Dangerous Ideas
Scott Mutter
We may safely predict that it will be the timidity of our hypotheses, and not their extravagance, which will provoke the derision of posterity. (H. H. Price)
Introduction
Jeffrey Kripal has written extensively in recent years about what he calls the “traumatic secret”; I will quote him at length here and throughout, drawing primarily from his 2019 book, The Flip: Epiphanies of Mind and the Future of Knowledge.
> The same fact has been noted by many others, including the prominent physicist Freeman Dyson. Perhaps you will not, cannot, hear me because I am not a scientist, because I work in the humanities, which you assume are not “real” forms of knowledge. Okay. Listen to Dyson. His recounting of his own argument is important not only because he was for most of his career a celebrated scientist at an elite institution, the Institute for Advanced Studies in Princeton, but also because he uses the adjective anecdotal as it should be but seldom is used in these contexts: as a perfectly accurate descriptor of another word—evidence. Here is Dyson looking back on a book review he had written for The New York Review of Books:
“In my review I said that ESP only occurs, according to the anecdotal evidence, when a person is experiencing intense stress and strong emotions. Under the conditions of a controlled scientific experiment, intense stress and strong emotions are excluded; the person experiences intense boredom rather than excitement, so the evidence for ESP disappears...The experiment necessarily excludes the human emotions that make ESP possible.”
The traumatic secret is this: the laws of physics can be bent or in some way circumvented by extreme states of consciousness, “a visionary warping of space and time effected by the gravity of intense human suffering”. Most, if not all, super-natural phenomenon are manifestations of this warping.
The secret raises an unsettling possibility—one way, |
9dc7cf2d-24d9-44db-a598-8a128e49a2ce | StampyAI/alignment-research-dataset/blogs | Blogs | generalized values: testing for patterns in computation
generalized values: testing for patterns in computation
-------------------------------------------------------
i believe the [True Name](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) for our [formalized values](what-is-value.html) will be a program running a computation (typically a [simulation or bunch of simulations](%E2%88%80V.html)) in which those values are maximized or satisfied.
in [*generalized computational interpretability*](generalized-computation-interpretability.html), i talk about what deeply caring about a general computation looks like. in this post, i will outline an example of this.
suppose you are given a program A described in [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus), and you want to know whether it encodes a simulation of [conway's game of life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) B with a hardcoded specific initial state.
in the general case, to get the state of a specific cell in a conway's game of life after `n` steps takes `Θ(n³)` time: `n` because of the time steps, and `n²` because of the two-dimensional light cone. for this example, let's say that this is actually the lowest bound to get the states you care about in this particular initial state of conway's game of life.
now, if there is an "extracting" program E such that for any coordinates `x,y` and time step `n`, taking as input the entire history of running A for `Θ(n³)` steps, it returns in less than `Θ(n³)` — for example in `O(log n)` — a value that is always equal to the state of B at `x,y,n`, then A encodes the computation B: A must have "done some of the work" of running B, because we can extract that value from A without re-doing all of the required work.
on the other hand, a formal proof that no such program exists can demonstrate that A does not encode B.
it could be that the problem is sometimes undecidable — i.e. there exists neither a program E (or proof that it exists), nor a proof that it doesn't exist. this seems fine to me; for example, when you're unable to determine whether a computation encodes suffering, just don't run it, or only run it for a limited amount of steps.
(to generalize this to constant-time or constant-size patterns, and to help us figure out the constants in those big-O/Θ's, perhaps information theory can help)
if the specific conway's game of life pattern B you're testing for happens to already be computable in less than `Θ(n³)`, then whatever complexity is its lowest bound (if it has one) is the new one under which E must run.
hopefully this can give us an idea as to what formalized shape our values (avoiding suffering, [etc](core-vals-exist-selfdet.html)) should take, and how to create a world that realizes them. |
10a234c0-de69-4b75-b7d8-cc01dd6aa58f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | On decision-prediction fixed points
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
It seems like for embedded (reflexive, Löbian, etc) LDT agents, there ought to be a fixed point thing between decision and prediction.
Indeed, embedded agents can predict things about their own actions; but by modeling themselves sufficiently well, this should be (in the limit) equivalent to making a decision, as they will be modeling their own thoughts. Conversely, once you have decided, if you do not suffer from akrasia, then you have accurately predicted your next action. (aside: this is the source of the illusion of free will.)
This is related to the class of "metaphysical truths": truths of the form □P→P. Whenever an embedded agent believes one of those, then it must (by Löb's theorem) eventually believe P. But there are lots of such truths (perhaps each different religion offers a different set of metaphysical truths), which might then lead to spurious, or even contradictory beliefs!
The key word was "eventually", assuming LDT agents are logical inductors of some kind; in the meantime, the agent may *choose* its beliefs. Isn't this weird? Beliefs shouldn't be arbitrary!
But you can imagine, as an (imperfect) example, the paradox of self-confidence: if you think you are competent, then you could believe in your ability to self-improve, which will encourage your to foster your own competence; on the other hand thinking that you are incompetent may lead to not believing in your self-improvement ability, leading to a downward spiral.
Each one of these are decision-belief fixed points. Each are, in way (causally?), both true and rational.
I feel like LDT will end up being a reflexive fixed point of this sort (reminiscent of the logical induction fixed point), with the catch that there are many such fixed points. The true decision an LDT agent must make is then choosing the most effective of these fixed points.
(I'm not entirely convined of this yet since I still have no idea what logical counterfactuals will look like)
The moral of the story for us humans is that:
* akrasia should not exist, not if you can predict yourself well enough;
* sometimes beliefs are arbitrary. choose the most productive ones, you'll end up believing them all anyway.
EDIT: I feel this is rather important, so I would appreciate getting feedback on the presentation. |
f9e5f9ab-3312-487c-b144-ef86d038e8b4 | trentmkelly/LessWrong-43k | LessWrong | Review AI Alignment posts to help figure out how to make a proper AI Alignment review
I've had many conversations over the last few years about the health of the AI Alignment field and one of the things that has come up most frequently (including in conversations with Rohin, Buck and various Open Phil people) is that many people wish there was more of a review process in the AI Alignment field.
I also think there is a bunch of value in better review processes, but have felt hesitant to create something very official and central, since AI Alignment is a quite preparadigmatic field, which makes creating shared standards of quality hard, and because I haven't had the time to really commit to maintain something great here.
Separately, I am also quite proud of the LessWrong review, and am very happy about the overall institution that we've created there, and I realized that the LessWrong review might just be a good test bed and bandaid for having a better AI Alignment review process. I think the UI we built for it is quite good, and I think the vote does have real stakes and a lot of the people voting are also people quite active in AI Alignment.
So this year, I would like to encourage many of the people who expressed a need for better review processes in AI Alignment to try reviewing some AI Alignment posts from 2021 as part of the LessWrong review. I personally got quite a bit of personal value out of doing that, and e.g. found that my review of the MIRI dialogues helped crystallize some helpful new directions for me to work towards, and I am also hoping to write a longer review of Eliciting Latent Knowledge that I also think will help clarify some things for me, and is something that I will feel comfortable linking to later when people ask me about my takes on ELK-adjacent AI Alignment research.
I am also interested in comments on this post with takes for better review-processes in AI Alignment. I am currently going through a period where I feel quite confused how to relate to the field at large, so it might be a good time to also have a conver |
f701e85f-3e06-4feb-9e77-43e2ad839506 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Discussion article for the meetup : Melbourne, practical rationality
Discussion article for the meetup : Discussion article for the meetup : Melbourne, practical rationality
WHEN: 06 July 2012 07:00:00PM (+1000)
WHERE: Carlton VIC 3053, Australia
WHEN: 06 July 2012 07:00:00PM (+1000)
WHERE: Bens House : See the mailing list for the address - or you can call or text 0432 862 932 for the address. Alternatively, you can email shokwave.sf@gmail.com or even message User:shokwave on LessWrong.
Practical rationality is on the 1st friday of each month, as distinct from the social and rationality outreach meetups. Look for a social meetup on the 3rd Friday of each month.
Discussion: http://groups.google.com/group/melbourne-less-wrong
This meetup repeats on the 1st Friday of each month.
All welcome from 6:30pm. Call the phone number on the door and I'll let you in. Discussion article for the meetup :Melbourne, practical rationality
Contact Scott 0432 862 932
Discussion article for the meetup : Discussion article for the meetup : Melbourne, practical rationality |
46c81faf-3ea7-46df-b598-22acfb6024aa | trentmkelly/LessWrong-43k | LessWrong | Why we love unimportant things
Consider all the things humans have ever invented. On average, the ones that have been adopted by the most people should be the most useful ones. This seems to be roughly what has happened.
Now consider the ones we get really excited about, and identify with, and celebrate. These are the ones that are not widely adopted. Chairs have been adopted by everyone, because they are great. Nobody ever mentions this. You might think they are just taken for granted because they are old. But consider skis. Skis have been around forever. But they are more controversial than chairs: they have never caught on with some people. Now notice that people who do like skis actually rave about them, and think about them, and consider themselves skiing enthusiasts.
Here are some more unpopular and raved about innovations: drying fruit in the sun, dancing, the iphone, the gin and tonic, the internet, Christianity, watercolour painting, eating a larger meal at lunch time than in the evening, sexual promiscuity, tea
Here are popular uncelebrated innovations: the escalator, the hat, the mobile phone (this was on the other list back when they were rare), the phillips head screwdriver, the computer, queues, tv, bread, floors
Here are the closest things I can think of to counterexamples: the internet (really fits in the first category, but many people who love it must rarely contact with those who don’t and vice versa. Then again people who rave about it often mean to support quite extreme and unorthodox use of it), anti-racism (virtually everyone seems to think they like it, but the ones who rave about it do at least seem to think that others do not), people rave about anything they consciously want at that moment (e.g. they have been standing for ages and they find a chair, or someone brings them a big cake) though they still don’t tend to speak up that item in general or identify with it, sex.
So it seems that we largely celebrate the things that are least important to our actual wellbei |
0fcd97e4-5199-42eb-a0bd-db8a6e836519 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Automated Design of Agentic Systems
Authors: Shengran Hu, Cong Lu, Jeff Clune.
Brief summary: proof of concept of automated LM scaffolding design, with state-of-the-art performance on several tasks.
Website: https://www.shengranhu.com/ADAS/.
X/twitter summary thread: https://x.com/jeffclune/status/1825551351746867502.
Abstract:
> Researchers are investing substantial effort in developing powerful general-purpose agents, wherein Foundation Models are used as modules within agentic systems (e.g. Chain-of-Thought, Self-Reflection, Toolformer). However, the history of machine learning teaches us that hand-designed solutions are eventually replaced by learned solutions. We formulate a new research area, Automated Design of Agentic Systems (ADAS), which aims to automatically create powerful agentic system designs, including inventing novel building blocks and/or combining them in new ways. We further demonstrate that there is an unexplored yet promising approach within ADAS where agents can be defined in code and new agents can be automatically discovered by a meta agent programming ever better ones in code. Given that programming languages are Turing Complete, this approach theoretically enables the learning of any possible agentic system: including novel prompts, tool use, control flows, and combinations thereof. We present a simple yet effective algorithm named Meta Agent Search to demonstrate this idea, where a meta agent iteratively programs interesting new agents based on an ever-growing archive of previous discoveries. Through extensive experiments across multiple domains including coding, science, and math, we show that our algorithm can progressively invent agents with novel designs that greatly outperform state-of-the-art hand-designed agents. Importantly, we consistently observe the surprising result that agents invented by Meta Agent Search maintain superior performance even when transferred across domains and models, demonstrating their robustness and generality. Provided we develop it |
d1df7cdd-2095-4929-9e30-da7f8df5b98c | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Silver Bulletin: For most people, politics is about fitting in
Nate Silver tries to answer the question: "How do people formulate their political beliefs?"
An important epistemological question that is, he says, under-discussed.
He lays out his theory:
> I think political beliefs are primarily formulated by two major forces:
>
> Politics as self-interest. Some issues have legible, material stakes. Rich people have an interest in lower taxes. Sexually active women (and men!) who don’t want to bear children have an interest in easier access to abortion. Members of historically disadvantaged groups have an interest in laws that protect their rights
>
> Politics as personal identity — whose team are you on. But other issues have primarily symbolic stakes. These serve as vehicles for individual and group expression — not so much “identity politics” but politics as identity. People are trying to figure out where they fit in — who’s on their side and who isn’t. And this works in both directions: people can be attracted to a group or negatively polarized by it. People have different reasons for arguing about politics, and can derive value from a sense of social belonging and receiving reinforcement that their choices are honorable and righteous.1
>
> Notice what’s missing from my list? The notion of politics as a battle of ideas. This is not to suggest that people don’t hold reasonable moral intuitions about political affairs. But when it comes to mass popular opinion, the number of people who are interested in ideas for ideas sake is vanishingly small. |
761d4cd0-a37a-4c02-9369-e9170437fd76 | trentmkelly/LessWrong-43k | LessWrong | My Opportunity Costs
When I was younger, I had time and energy to spare on weird side-projects, and a comparative lack of ideas worth spending them on. Now I have some ideas, but no time and energy to spare. I share these ideas below, in case anyone is in the same situation I was a decade ago[1].
Editing Weird Worthwhile Writers
Lsusr has a lot of interesting things to say. He also makes a lot of trivial grammar/spelling/style mistakes. It occurred to me about a year ago that someone could pleasantly do a modest amount of good very easily by volunteering to edit for him; it also occurred to me that doing this for his business writing might be a good way to learn about running a company and make connections that could be useful later. I then proceeded to not do that, because editing work isn’t my comparative advantage, and because I’d feel like I was stealing a role which could be filled by someone much younger, more ambitious and less experienced than me. Recently, that role has been filled by someone much younger, more ambitious and less experienced than me; it seems to be going pretty well for everyone involved so far.
Dominic Cummings has a lot of interesting things to say. He also has an idiosyncratic writing style which is heavily informed by his time in the Civil Service, where everyone he had to communicate with was some combination of a) captive audiences, b) people whose hectic schedules leave them with negligible attention spans c) people who use all the same jargon he does and d) people he actively despises. A teenager on Twitter has accumulated a >10k following by translating his tweets from Dom-ese to English. I suspect that rewriting some of his essays would be a good way to get appreciation and attention from the internet in general and LessWrong in particular.
[Redacted because I don’t know if sharing this idea would be legal]
Accumulating Life Stories
I used to have a hobby of showing up at train stations after midnight and asking people who had missed the last t |
0219d4d4-354e-48a2-bd03-e137bb2a3994 | trentmkelly/LessWrong-43k | LessWrong | Knowledge Base 4: General applications
This post presents some possible applications of the general knowledge base that can be built using information management methods described in the post Knowledge Base 2: The structure and the method of building. Applications of the knowledge base constrained to products available on the market have been presented in the post Knowledge Base 3: Shopping advisor and other uses of knowledge base about products.
Similarly to the knowledge base about products, we can build a knowledge base on the following topics:
* information about shops – e.g. opening hours, available products
Thanks to this information, the system will be able to answer the question: Where nearby can I buy windscreen wipers proper for my car? (exclude offers that are not optimal for me and show the most important details of optimal solutions if there are several such solutions, according to the criteria specified in my profile)
* information about services: hotels, restaurants, repairmen, taxis, rooms for short-term renting
Examples of tasks: I want to rent a single room for several days that meets the criteria set in my profile, or I want to go to some place by bus, taxi, bike or using carpooling (use criteria set in my profile to propose a proper transport)
* information about individual people: who they are, what they can do, what they have, what they want, …
Examples of tasks: Find people near me willing to start jogging, or Find a person near me able to fix a problem with the blinkers in my car (show estimated repair time, sort offers by price)
* information about things
Example task: Check if someone has already reported the failure of traffic lights at an intersection. If not, report it; if yes, when is the repair scheduled?
The database could also contain information about many other objects/items/terms defined by users using the first, second, and third type of information (see figure 3 in the post Knowledge Base 2: The structure and the method of building).
The knowledge |
f3392530-9d81-4e46-a4a4-03829bfbca43 | trentmkelly/LessWrong-43k | LessWrong | For signaling? (Part I)
Your T-shirt is embarrassing. Have you considered wearing a less embarrassing T-shirt?
You are suggesting I spend my precious time trying to look good. Well I am good, and so I’m not going to do that. Because signaling is bad. You can tell something is bad when the whole point of it is to have costs. Signaling is showing off. Signaling benefits me at someone else’s equal expense. I won’t wear a less embarrassing T-shirt because to Hell with signaling.
Hmm. That seems wrong. Signaling is about honest communication when the stakes are high—which is often important! And just because it’s called ‘costly’ doesn’t mean it is meant to have costs. It only has to be too costly for liars, and if it’s working then they won’t be doing any signaling anyway. ‘Costly signals’ can be very cheap for those who use them. I think signaling is often wonderful for society.
Give me three examples where it is ‘wonderful’.
Driver’s licences. Showing a driver’s licence is a costly signal of being a decent driver, which communicates something useful honestly, is cheap for the people who are actually good drivers, and lets the rest of society distinguish people who are likely to drive safely from people who are not, which is amazingly great.
Driving tests don’t seem that cheap to me, but I’ll grant that they are probably worth it. Still, this seems like a strange corner case of ‘signaling’ that was explicitly designed by humans. It fits the economic definition of ‘costly signaling’ but if you have to go that far from the central examples to find something socially beneficial, that doesn’t increase my regard for signaling. Next?
One of the most famous examples of signaling is in the job market. Potential candidates show a hirer their qualifications, which allows the hirer to employ more appropriate candidates. You might disagree about whether all of the signals that people use are socially optimal—for instance if education is mostly for signaling, it seems fairly destructive, be |
42b9aa8b-20cf-4775-bbca-49697e7af94d | trentmkelly/LessWrong-43k | LessWrong | The ethics of breaking belief
I'm not sure if this is precisely the correct forum for this, but if there is a better place, I don't know what it would be. At any rate...
I'm a student a Catholic university, and there are (as one might surmise) quite a lot of Catholics here, along with assorted other theists (yes, even some in the biology faculty). For this reason, I find myself acquiring more and more devoutly Catholic friends, and some of them I have grown quite close to. But the God issue keeps coming up for one reason or another, which is a source of tension. And yet as I grow closer to these people, it becomes clearer and clearer that each theist has a certain personal sequence of Dark Arts-ish levers in eir head, the flipping (or un-flipping) of which would snap em out of faith.
So the question is this: in what situations (if any) is it ethical to push such buttons? We often say, here, that that which can be destroyed by the truth should be, but these are people who have built their lives around faith, people for whom the Church is their social support group. If it were possible to disillusion the whole world all at once, that'd be one thing - but in this case my options are limited to changing the minds of only the specific individuals I have spent time getting to know, and the direct result would be their alienation from the entire community in which they've been raised.
And yet it is the truth.
I'm conflicted. LessWrong, what is your opinion? |
7e3aa1e5-4a9d-45cb-901b-7abef3d00704 | StampyAI/alignment-research-dataset/special_docs | Other | Responses to the Journey to the Singularity
Chapter 3
Responses to the Journey
to the Singularity
Kaj Sotala and Roman Yampolskiy
3.1 Introduction
The notion of catastrophic AGI risk is not new, and this concern was expressed by
early thinkers in the field. Hence, there have also been many proposals concerning
what to do about it. The proposals we survey are neither exhaustive nor mutuallyexclusive: the best way of achieving a desirable outcome may involve pursuingseveral proposals simultaneously.
Section 3.2briefly discusses some of the most recent developments in the field.
Sections 3.3–3.5survey three categories of proposals for dealing with AGI risk:
societal proposals, proposals for external constraints on AGI behaviors, and
proposals for creating AGIs that are safe due to their internal design. Although themain purpose of this paper is to provide a summary of existing work, we brie fly
provide commentary on the proposals in each major subsection of Sects. 3.3–3.5and
highlight some of the proposals we consider the most promising in Sect. 3.6. which
are, regulation (Sect. 3.3.3 ), merging with machines (Sect. 3.3.4 ), AGI con finement
(Sect. 3.4.1 ), Oracle AI (Sect. 3.5.1 ), and motivational weaknesses (Sect. 3.5.7 ).
In the long term, the most promising approaches seem to be value learning
(Sect. 3.5.2.5 ) and human-like architectures (Sect. 3.5.3.4 ). Section 3.6provides an
extended discussion of the various merits and problems of these proposals.
Foundational Research Institute (this work was written while the author was at the Machine
Intelligence Research Institute)
K. Sotala
Foundational Research Institute, Basel, Switzerland
e-mail: kaj.sotala@foundational-research.org
R. Yampolskiy ( ✉)
University of Louisville, 222 Eastern Parkway, Louisville, KY 40292, USA
e-mail: roman.yampolskiy@louisville.edu
© Springer-Verlag GmbH Germany 2017
V. Callaghan et al. (eds.), The Technological Singularity ,
The Frontiers Collection, DOI 10.1007/978-3-662-54033-6\_325
3.2 Post-Superintelligence Responses
This chapter is based on an earlier paper (Sotala and Yampolskiy 2015 ), which was
the formally published version of a previous technical report (Sotala and Yam-
polskiy 2013 ). The tech report, in turn, was a greatly expanded version of an earlier
conference paper (Yampolskiy 2013 ). Since the writing of the original papers, the
topic of catastrophic AGI risk has attracted considerable attention both in academiaand the popular press, much of it due to the publication of the book Superintelli-
gence (Bostrom 2014 ).
We feel that it would not be appropriate to simply lump in all the new responses
together with the old sections, as the debate has now become considerably moreactive and high-pro file. In particular, numerous AI researchers have signed an open
letter calling for more research into making sure that AI systems will be robust and
beneficial rather than just capable (Future of Life Institute 2015 ). The open letter
included a list of suggested research directions (Russell et al. 2015 ), including ones
specifically aimed at dealing with the risks from AGI. The research directions
document draws on a number of sources, including an ambitious research agendarecently published by the Machine Intelligence Research Institute (see Chap. 5).
Soon after the publication of the open letter, Elon Musk donated 10 million dollarsfor the purpose of furthering research into safe and bene ficial AI and AGI.
At the same time, several prominent researchers have also expressed the feeling
that the risks from AGI are overhyped, and that there is a danger of the generalpublic taking them too seriously at this stage. This position has been expressed ininterviews of researchers such as Professor Andrew Ng (Madrigal 2015 ) and
Facebook AI director Yann LeCun (Gomes 2015 ), who emphasize that current-day
technology is still a long way from AGI. Even the more skeptical researchers tendto agree that the issue will eventually require some consideration, however(Alexander 2015 ).
3.3 Societal Proposals
Proposals can be divided into three general categories proposals for societal action,
design proposals for external constraints on AGI behavior, and design recom-mendations for internal constraints on AGI behavior. In this section we brie fly
survey societal proposals. These include doing nothing, integrating AGIs with
society, regulating research, merging with machines, and relinquishing researchinto AGI.26 K. Sotala and R. Yampolskiy
3.3.1 Do Nothing
3.3.1.1 AI Is Too Distant to Be Worth Our Attention
One response is that, although AGI is possible in principle, there is no reason to
expect it in the near future. Typically, this response arises from the belief that,although there have been great strides in narrow AI, researchers are still very farfrom understanding how to build AGI. Distinguished computer scientists such asGordon Bell and Gordon Moore, as well as cognitive scientists such as DouglasHofstadter and Steven Pinker, have expressed the opinion that the advent of AGI isremote (IEEE Spectrum 2008 ). Davis ( 2012 ) reviews some of the ways in which
computers are still far from human capabilities. Bringsjord and Bringsjord ( 2012 )
even claim that a belief in AGI this century is fideistic, appropriate within the realm
of religion but not within science or engineering.
Some writers also actively criticize any discussion of AGI risk in the first place.
The philosopher Alfred Nordmann ( 2007 ,2008 ) holds the view that ethical concern
is a scarce resource, not to be wasted on unlikely future scenarios such as AGI.Likewise, Dennett ( 2012 ) considers AGI risk an “imprudent pastime ”because it
distracts our attention from more immediate threats.
Others think that AGI is far off and not yet a major concern, but admit that it
might be valuable to give the issue some attention. A presidential panel of the
Association for the Advancement of Arti ficial Intelligence considering the
long-term future of AI concluded that there was overall skepticism about AGI risk,but that additional research into the topic and related subjects would be valuable(Horvitz and Selman 2009 ). Posner ( 2004 ) writes that dedicated efforts for
addressing the problem can wait, but that we should gather more information aboutthe problem in the meanwhile.
Potential negative consequences of AGI are enormous, ranging from economic
instability to human extinction. “Do nothing ”could be a reasonable course of action
if near-term AGI seemed extremely unlikely, if it seemed too early for any pro-posals to be effective in reducing risk, or if those proposals seemed too expensive toimplement.
As a comparison, asteroid impact prevention is generally considered a topic
worth studying, even though the probability of a civilization-threatening asteroidimpact in the near future is not considered high. Napier ( 2008 ) discusses several
ways of estimating the frequency of such impacts. Many models produce a rate of
one civilization-threatening impact per five hundred thousand or more years, though
some models suggest that rates of one such impact per hundred thousand yearscannot be excluded.
An estimate of one impact per hundred thousand years would suggest less than a
0.1% chance of a civilization-threatening impact within the next hundred years. Theprobability of AGI being developed within the same period seems considerablyhigher (Müller and Bostrom 2014 ), and there is likewise a reasonable chance of a
hard takeoff after it has been developed (Yudkowsky 2008 ,2008b ), suggesting that3 Responses to the Journey to the Singularity 27
the topic is at the very least worth studying. Even without a hard takeoff, society is
becoming increasingly automated, and even narrow AI is starting to require ethicalguidelines (Wallach and Allen 2009 ).
We know neither which fields of science will be needed nor how much progress
in them will be necessary for safe AGI. If much progress is needed and we believe
effective progress to be possible this early on, it becomes reasonable to startstudying the topic even before AGI is near. Muehlhauser and Helm ( 2012 ) suggest
that, for one safe AGI approach alone (value learning, discussed further inSect. 3.5.2.5 ), efforts by AGI researchers, economists, mathematicians, and
philosophers may be needed. Safe AI may require the solutions for some of theseproblems to come well before AGI is developed.
3.3.1.2 Little Risk, no Action Needed
Some authors accept that a form of AGI will probably be developed but do not
consider autonomous AGI to be a risk, or consider the possible negative conse-quences acceptable. Bryson and Kime ( 1998 ) argue that, although AGI will require
us to consider ethical and social dangers, the dangers will be no worse than those ofother technologies. Whitby ( 1996 ) writes that there has historically been no con-
sistent trend of the most intelligent people acquiring the most authority, and that
computers will augment humans rather than replace them. Whitby and Oliver(2000 ) further argue that AGIs will not have any particular motivation to act against
us. Jenkins ( 2003 ) agrees with these points to the extent of saying that a machine
will only act against humans if it is programmed to value itself over humans,although she does find AGI to be a real concern.
Another kind of “no action needed ”response argues that AGI development will
take a long time (Brooks 2008 ), implying that there will be plenty of time to deal
with the issue later on. This can also be taken as an argument for later efforts being
more effective, as they will be better tuned to AGI as it develops.
Others argue that superintelligence will not be possible at all.
1McDermott
(2012 ) points out that there are no good examples of algorithms which could be
improved upon indefinitely . Deutsch ( 2011 ) argues that there will never be super-
intelligent AGIs, because human minds are already universal reasoners, and com-puters can at best speed up the experimental work that is required for testing andfine-tuning theories. He also suggests that even as the speed of technological
development increases, so will our ability to deal with change. Anderson ( 2010 )
likewise suggests that the inherent unpredictability of the world will place upperlimits on an entity ’s effective intelligence.
1The opposite argument is that superior intelligence will inevitably lead to more moral behavior.
Some of the arguments related to this position are discussed in the context of evolutionaryinvariants (Sect. 3.5.3.1 ), although the authors advocating the use of evolutionary invariants do
believe AGI risk to be worth our concern.28 K. Sotala and R. Yampolskiy
Heylighen ( 2012 ) argues that a single, stand-alone computer is exceedingly
unlikely to become superintelligent, and that individual intelligences are alwaysoutmatched by the distributed intelligence found in social systems of many minds.Superintelligence will be achieved by building systems that integrate and improve
the“Global Brain, ”the collective intelligence of everyone on Earth. Heylighen
does acknowledge that this kind of a transition will pose its own challenges, but notof the kind usually evoked in discussions of AGI risk.
The idea of AGIs not having a motivation to act against humans is intuitively
appealing, but there seem to be strong theoretical arguments against it. As men-tioned earlier, Omohundro ( 2007 ,2008 ) and Bostrom ( 2012 ) argue that
self-replication and the acquisition of resources are useful in the pursuit of manydifferent kinds of goals, and that many types of AI systems will therefore exhibit
tendencies toward behaviors such as breaking into other machines, self-replicating,
and acquiring resources without regard for anyone else ’s safety. The right design
might make it possible to partially work around these behaviors (Shulman 2010a ;
Wang 2012 ), but they still need to be taken into account. Furthermore, we might not
foresee all the complex interactions of different AGI mechanisms in the systemsthat we build, and they may end up with very different goals than the ones weintended (Yudkowsky 2008 ,2011 ).
Can AGIs become superintelligent? First, we note that AGIs do not necessarily
need to be much more intelligent than humans in order to be dangerous. AGIs
already enjoy advantages such as the ability to rapidly expand their population byhaving themselves copied (Hanson 1994 ,2008 ; Sotala 2012 a), which may confer
on them considerable economic and political in fluence even if they were not
superintelligent. A better-than-human ability to coordinate their actions, whichAGIs of a similar design could plausibly have (Sotala 2012 ), might then be enough
to tilt the odds in their favor.
Another consideration is that AGIs do not necessarily need to be qualitatively
more intelligent than humans in order to outperform humans. An AGI that merely
thought twice as fast as any single human could still defeat him at intellectual tasksthat had a time constraint, all else equal. Here an “intellectual ”task should be
interpreted broadly to refer not only to “book smarts ”but to any task that animals
cannot perform due to their mental limitations —including tasks involving social
skills (Yudkowsky 2008 ). Straightforward improvements in computing power
could provide AGIs with a considerable advantage in speed, which the AGI couldthen use to study and accumulate experiences that improved its skills.
As for Heylighen ’s(2012 ) Global Brain argument, there does not seem to be a
reason to presume that powerful AGIs could not be geographically distributed, orthat they couldn ’t seize control of much of the Internet. Even if individual minds
were not very smart and needed a society to make progress, for minds that arecapable of copying themselves and communicating perfectly with each other,individual instances of the mind might be better understood as parts of a whole thanas separate individuals. In general, the distinction between an individual and acommunity might not be meaningful for AGIs. If there were enough AGIs, they
might be able to form a community suf ficient to take control of the rest of the Earth.3 Responses to the Journey to the Singularity 29
Heylighen ( 2007 ) himself has argued that many of the features of the Internet are
virtually identical to the mechanisms used by the human brain. If the AGI is notcarefully controlled, it might end up in a position where it made up the majority ofthe“Global Brain ”and could undertake actions which the remaining parts of the
organism did not agree with.
3.3.1.3 Let Them Kill Us
Dietrich ( 2007 ) argues that humanity frequently harms other species, and that
people have also evolved to hurt other people by engaging in behaviors such aschild abuse, sexism, rape, and racism. Therefore, human extinction would not
matter, as long as the machines implemented only the positive aspects of humanity.
De Garis ( 2005 ) suggests that AGIs destroying humanity might not matter. He
writes that on a cosmic scale, with hundreds of billions of stars in our galaxy alone,the survival of the inhabitants of a single planet is irrelevant. As AGIs would bemore intelligent than us in every way, it would be better if they replaced humanity.
AGIs being more intelligent and therefore more valuable than humans equates
intelligence with value, but Bostrom ( 2004 ) suggests ways by which a civilization
of highly intelligent entities might lack things which we thought to have value. For
example, such entities might not be conscious in the first place. Alternatively, there
are many things which we consider valuable for their own sake, such as humor,love, game-playing, art, sex, dancing, social conversation, philosophy, literature,scienti fic discovery, food and drink, friendship, parenting, and sport. We value
these due to the fact that we have dispositions and preferences which have beenevolutionarily adaptive in the past, but for a future civilization few or none of themmight be, creating a world with very little of value. Bostrom ( 2012 ) proposes an
orthogonality thesis, by which an arti ficial intelligence can have any combination of
intelligence level and goal, including goals that humans would intuitively deem to
be of no value.
3.3.1.4 “Do Nothing ”Proposals —Our View
As discussed above, completely ignoring the possibility of AGI risk at this stage
would seem to require a con fident belief in at least one of the following
propositions
1. AGI is very remote.
2. There is no major risk from AGI even if it is created.
3. Very little effective work can be done at this stage.
4. AGIs destroying humanity would not matter.30 K. Sotala and R. Yampolskiy
In the beginning of this paper, we mentioned several experts who considered it
plausible that AGI might be created in the next twenty to one hundred years; in thissection we have covered experts who disagree.
In general, there is a great deal of disagreement among people who have made
AGI predictions, and no clear consensus even among experts in the field of arti ficial
intelligence. The lack of expert agreement suggests that expertise in the field does
not contribute to an ability to make reliable predictions.
2If the judgment of experts
is not reliable, then, probably, neither is anyone else ’s. This suggests that it is
unjusti fied to be highly certain of AGI being near, but also of it notbeing near. We
thus consider it unreasonable to have a con fident belief in the first proposition.
The second proposition also seems questionable. As discussed in the previous
chapter, AGIs seem very likely to obtain great power, possibly very quickly.
Furthermore, as also discussed in the previous chapter, the complexity and fragility
of value theses imply that it could be very dif ficult to create AGIs which would not
cause immense amounts of damage if they had enough power.
It also does not seem like it is too early to work on the problem as we summarize
in Sect. 3.6, there seem to be a number of promising research directions which can
already be pursued. We also agree with Yudkowsky ( 2008 ), who points out that
research on the philosophical and technical requirements of safe AGI might showthat broad classes of possible AGI architectures are fundamentally unsafe, sug-
gesting that such architectures should be avoided. If this is the case, it seems better
to have that knowledge as early as possible, before there has been a great deal ofinvestment into unsafe AGI designs.
In response to the suggestion that humanity being destroyed would not matter,
we certainly agree that there is much to be improved in today ’s humanity, and that
our future descendants might have very little resemblance to ourselves. Regardless,we think that much about today ’s humans is valuable and worth preserving, and
that we should be able to preserve it without involving the death of present humans.
3.3.2 Integrate with Society
Integration proposals hold that AGI might be created in the next several decades,and that there are indeed risks involved. These proposals argue that the best way todeal with the problem is to make sure that our societal structures are equipped to
handle AGIs once they are created.
2Armstrong and Sotala ( 2012 ) point out that many of the task properties which have been found to
be conducive for developing reliable and useful expertise are missing in AGI timeline forecasting.
In particular, one of the most important factors is whether experts get rapid (preferably immediate)feedback, while a timeline prediction that is set many decades in the future might have beenentirely forgotten by the time that its correctness could be evaluated.3 Responses to the Journey to the Singularity 31
There has been some initial work toward integrating AGIs with existing legal
and social frameworks, such as considering questions of their legal position andmoral rights (Gunkel 2012 ).
3.3.2.1 Legal and Economic Controls
Hanson ( 2012 ) writes that the values of older and younger generations have often
been in con flict with each other, and he compares this to a con flict between humans
and AGIs. He believes that the best way to control AGI risk is to create a legalframework such that it is in the interest of both humans and AGIs to uphold it.Hanson ( 2009 ) suggests that if the best way for AGIs to get what they want is via
mutually agreeable exchanges, then humans would need to care less about what the
AGIs wanted. According to him, we should be primarily concerned with ensuringthat the AGIs will be law-abiding enough to respect our property rights. Miller(2012 ) summarizes Hanson ’s argument, and the idea that humanity could be con-
tent with a small fraction of the world ’s overall wealth and let the AGIs have the
rest. An analogy to this idea is that humans do not kill people who become oldenough to no longer contribute to production, even though younger people could inprinciple join together and take the wealth of the older people. Instead, old people
are allowed to keep their wealth even while in retirement. If things went well, AGIs
might similarly allow humanity to “retire ”and keep its accumulated wealth, even if
humans were no longer otherwise useful for AGIs.
Hall ( 2007a ) also says that we should ensure that the interactions between
ourselves and machines are economic, “based on universal rules of property and
reciprocity. ”Moravec ( 1999 ) likewise writes that governmental controls should be
used to ensure that humans bene fit from AGIs. Without government intervention,
humans would be squeezed out of existence by more ef ficient robots, but taxation
could be used to support human populations for a long time. He also recommends
laws which would require any AGIs to incorporate programming that made themsafe and subservient to human desires. Sandberg ( 2001 ) writes that relying only on
legal and economic controls would be problematic, but that a strategy which alsoincorporated them in addition to other approaches would be more robust than astrategy which did not.
However, even if AGIs were integrated with human institutions, it does not
guarantee that human values would survive. If humans were reduced to a position
of negligible power, AGIs might not have any reason to keep us around.
Economic arguments, such as the principle of comparative advantage, are
sometimes invoked to argue that AGI would find it more bene ficial to trade with us
than to do us harm. However, technological progress can drive the wages ofworkers below the level needed for survival, and there is already a possible threat oftechnological unemployment (Brynjolfsson and McAfee 2011 ). AGIs keeping
humans around due to gains from trade implicitly presumes that they would nothave the will or the opportunity to simply eliminate humans in order to replace them
with a better trading partner, and then trade with the new partner instead.32 K. Sotala and R. Yampolskiy
Humans already eliminate species with low economic value in order to make
room for more humans, such as when clearing a forest in order to build new homes.Clark ( 2007 ) uses the example of horses in Britain their population peaked in 1901,
with 3.25 million horses doing work such as plowing fields, hauling wagons and
carriages short distances, and carrying armies into battle. The internal combustion
engine replaced so many of them that by 1924 there were fewer than two million.Clark writes
There was always a wage at which all these horses could have remained employed. But that
wage was so low that it did not pay for their feed, and it certainly did not pay enough tobreed fresh generations of horses to replace them. Horses were thus an early casualty ofindustrialization (Clark 2007 ).
There are also ways to harm humans while still respecting their property rights,
such as by manipulating them into making bad decisions, or selling them addictivesubstances. If AGIs were suf ficiently smarter than humans, humans could be tricked
into making a series of trades that respected their property rights but left them with
negligible assets and caused considerable damage to their well-being.
A related issue is that AGIs might become more capable of changing our values
than we are capable of changing AGI values. Mass media already convey valuesthat have a negative impact on human well-being, such as idealization of rare bodytypes, which causes dissatisfaction among people who do not have those kinds ofbodies (Groesz et al. 2001 ; Agliata and Tantleff-Dunn 2004 ). AGIs with a deep
understanding of human psychology could engineer the spread of values whichshifted more power to them, regardless of their effect on human well-being.
Yet another problem is ensuring that the AGIs have indeed adopted the right
values. Making intelligent beings adopt speci fic values is a dif ficult process which
often fails. There could be an AGI with the wrong goals that would pretend tobehave correctly in society throughout the whole socialization process. AGIs couldconceivably preserve and conceal their goals far better than humans could.
Society does not know of any methods which would reliably instill our chosen
values in human minds, despite a long history of trying to develop them. Our
attempts to make AGIs adopt human values would be hampered by our lack of
experience and understanding of the AGI ’s thought processes, with even
tried-and-true methods for instilling positive values in humans possibly beingineffective. The limited success that we do have with humans is often backed up byvarious incentives as well as threats of punishment, both of which might fail in thecase of an AGI developing to become vastly more powerful than us.
Additionally, the values which a being is likely to adopt, or is even capable of
adopting, will depend on its mental architecture. We will demonstrate these claimswith examples from humans, who are not blank slates on whom arbitrary values can
be imposed with the right education. Although the challenge of instilling speci fic
values in humans is very different from the challenge of instilling them in AGIs, ourexamples are meant to demonstrate the fact that the existing properties of a mindwill affect the process of acquiring values. Just as it is dif ficult to make humans3 Responses to the Journey to the Singularity 33
permanently adopt some kinds of values, the kind of mental architecture that an
AGI has will affect its inclination to adopt various values.
Psychopathy is a risk factor for violence, and psychopathic criminals are much
more likely to reoffend than nonpsychopaths (Hare et al. 2000 ). Harris and Rice
(2006 ) argue that therapy for psychopaths is ineffective and may even make them
more dangerous, as they use their improved social skills to manipulate others moreeffectively. Furthermore, “cult brainwashing ”is generally ineffective and most cult
members will eventually leave (Anthony and Robbins 2004 ); and large-scale social
engineering efforts often face widespread resistance, even in dictatorships with fewscruples about which methods to use (Scott 1998 , Chap. 6 –7). Thus, while one can
try to make humans adopt values, this will only work to the extent that the indi-viduals in question are actually disposed toward adopting them.
3.3.2.2 Foster Positive Values
Kurzweil ( 2005 ), considering the possible effects of many future technologies,
notes that AGI may be a catastrophic risk. He generally supports regulation andpartial relinquishment of dangerous technologies, as well as research into theirdefensive applications. However, he believes that with AGI this may be insuf ficient
and that, at the present time, it may be infeasible to develop strategies that would
guarantee safe AGI. He argues that machine intelligences will be tightly integratedinto our society and that, for the time being, the best chance of avoiding AGI risk isto foster positive values in our society. This will increase the likelihood that anyAGIs that are created will re flect such positive values.
One possible way of achieving such a goal is moral enhancement (Douglas
2008 ), the use of technology to instill people with better motives. Persson and
Savulescu ( 2008 ,2012 ) argue that, as technology improves, we become more
capable of damaging humanity, and that we need to carry out moral enhancement in
order to lessen our destructive impulses.
3.3.2.3 “Integrate with Society ”Proposals —Our View
Proposals to incorporate AGIs into society suffer from the issue that some AGIs
may never adopt benevolent and cooperative values, no matter what the environ-
ment. Neither does the intelligence of the AGIs necessarily affect their values
(Bostrom 2012 ). Suf ficiently intelligent AGIs could certainly come to eventually
understand human values, but humans can also come to understand others ’values
while continuing to disagree with them.
Thus, in order for these kinds of proposals to work, they need to incorporate
strong enforcement mechanisms to keep non –safe AGIs in line and to prevent them
from acquiring signi ficant power. This requires an ability to create
value-conforming AGIs in the first place, to implement the enforcement. Even a
soft takeoff would eventually lead to AGIs wielding great power, so the34 K. Sotala and R. Yampolskiy
enforcement could not be left to just humans or narrow AIs.3In practice, this means
that integration proposals must be combined with some proposal for internal con-straints which is capable of reliably creating value-conforming AGIs. Integrationproposals also require there to be a soft takeoff in order to work, as having a small
group of AGIs which rapidly acquired enough power to take control of the world
would prevent any gradual integration schemes from working.
Therefore, because any effective integration strategy would require creating safe
AGIs, and the right safe AGI design could lead to a positive outcome even if therewere a hard takeoff, we believe that it is currently better to focus on proposals whichare aimed at furthering the creation of safe AGIs.
3.3.3 Regulate Research
Integrating AGIs into society may require explicit regulation. Calls for regulationare often agnostic about long-term outcomes but nonetheless recommend caution asa reasonable approach. For example, Hibbard ( 2005b ) calls for international reg-
ulation to ensure that AGIs will value the long-term well-being of humans, but doesnot go into much detail. Daley ( 2011 ) calls for a government panel for AGI issues.
Hughes ( 2001 ) argues that AGI should be regulated using the same mechanisms as
previous technologies, creating state agencies responsible for the task and fosteringglobal cooperation in the regulation effort.
Current mainstream academic opinion does not consider AGI a serious threat
(Horvitz and Selman 2009 ), so AGI regulation seems unlikely in the near future. On
the other hand, many AI systems are becoming increasingly autonomous, and anumber of authors are arguing that even narrow-AI applications should be equippedwith an understanding of ethics (Wallach and Allen 2009 ). Currently there are calls
to regulate AI in the form of high-frequency trading (Sobolewski 2012 ), and AI
applications that have a major impact on society might become increasingly reg-ulated. At the same time, legislation has a well-known tendency to lag behindtechnology, and regulating AI applications will probably not translate into regu-lating basic research into AGI.
3.3.3.1 Review Boards
Yampolskiy and Fox ( 2012 ) note that university research programs in the social and
medical sciences are overseen by institutional review boards. They propose settingup analogous review boards to evaluate potential AGI research. Research that wasfound to be AGI related would be restricted with measures ranging from
3For proposals which suggest that humans could use technology to remain competitive with AGIs
and thus prevent them from acquiring excessive amounts of power, see Sect. 3.4.3 Responses to the Journey to the Singularity 35
supervision and funding limits to partial or complete bans. At the same time,
research focusing on safety measures would be encouraged.
Posner ( 2004 , p. 221) suggests the enactment of a law which would require
scienti fic research projects in dangerous areas to be reviewed by a federal catas-
trophic risks assessment board, and forbidden if the board found that the project
would create an undue risk to human survival.
Wilson ( 2013 ) makes possibly the most detailed AGI regulation proposal so far,
recommending a new international treaty where a body of experts would determinewhether there was a “reasonable level of concern ”about AGI or some other pos-
sibly dangerous research. States would be required to regulate research or eventemporarily prohibit it once experts agreed upon there being such a level of con-cern. He also suggests a number of other safeguards built into the treaty, such as the
creation of ethical oversight organizations for researchers, mechanisms for moni-
toring abuses of dangerous technologies, and an oversight mechanism for scienti fic
publications.
3.3.3.2 Encourage Research into Safe AGI
In contrast, McGinnis ( 2010 ) argues that the government should not attempt to
regulate AGI development. Rather, it should concentrate on providing funding for
research projects intended to create safe AGI.
Goertzel and Pitt ( 2012 ) argue for an open-source approach to safe AGI
development instead of regulation. Hibbard ( 2008 ) has likewise suggested devel-
oping AGI via open-source methods, but not as an alternative to regulation.
Legg ( 2009 ) proposes funding safe AGI research via an organization that takes a
venture capitalist approach to funding research teams, backing promising groupsand cutting funding to any teams that fail to make signi ficant progress. The focus of
the funding would be to make AGI as safe as possible.
3.3.3.3 Differential Technological Progress
Both review boards and government funding could be used to implement “differ-
ential intellectual progress ”
Differential intellectual progress consists in prioritizing risk-reducing intellectual progress
over risk-increasing intellectual progress. As applied to AI risks in particular, a plan of
differential intellectual progress would recommend that our progress on the scienti fic,
philosophical, and technological problems of AI safety outpace our progress on theproblems of AI capability such that we develop safe superhuman AIs before we develop(arbitrary) superhuman AIs (Muehlhauser and Salamon 2012 ).
Examples of research questions that could constitute philosophical or scienti fic
progress in safety can be found in later sections of this paper —for instance, the36 K. Sotala and R. Yampolskiy
usefulness of different internal constraints on ensuring safe behavior, or ways of
making AGIs reliably adopt human values as they learn what those values are like.
Bostrom ( 2002 ) used the term “differential technological progress ”to refer to
differential intellectual progress in technological development. Bostrom de fined
differential technological progress as “trying to retard the implementation of dan-
gerous technologies and accelerate implementation of bene ficial technologies,
especially those that ameliorate the hazards posed by other technologies ”.
One issue with differential technological progress is that we do not know what
kind of progress should be accelerated and what should be retarded. For example, amore advanced communication infrastructure could make AGIs more dangerous, asthere would be more networked machines that could be accessed via the Internet.Alternatively, it could be that the world will already be so networked that AGIs will
be a major threat anyway, and further advances will make the networks more
resilient to attack. Similarly, it can be argued that AGI development is dangerousfor as long as we have yet to solve the philosophical problems related to safe AGIdesign and do not know which AGI architectures are safe to pursue (Yudkowsky2008 ). But it can also be argued that we should invest in AGI development now,
when the related tools and hardware are still primitive enough that progress will beslow and gradual (Goertzel and Pitt 2012 ).
3.3.3.4 International Mass Surveillance
For AGI regulation to work, it needs to be enacted on a global scale. This requires
solving both the problem of effectively enforcing regulation within a country andthe problem of getting many different nations to all agree on the need for regulation.
Shulman ( 2009 ) discusses various factors in fluencing the dif ficulty of AGI arms
control. He notes that AGI technology itself might make international cooperation
more feasible. If narrow AIs and early-stage AGIs were used to analyze the
information obtained from wide-scale mass surveillance and wiretapping, this mightmake it easier to ensure that nobody was developing more advanced AGI designs.
Shulman ( 2010b ) similarly notes that machine intelligences could be used to
enforce treaties between nations. They could also act as trustworthy inspectorswhich would be restricted to communicating only information about treaty viola-tions, thus not endangering state secrets even if they were allowed unlimited accessto them. This could help establish a “singleton ”regulatory regimen capable of
effectively enforcing international regulation, including AGI-related treaties.
Goertzel and Pitt ( 2012 ) also discuss the possibility of having a network of AGIs
monitoring the world in order to police other AGIs and to prevent any of them fromsuddenly obtaining excessive power.
Another proposal for international mass surveillance is to build an “AGI Nanny ”
(Goertzel 2012b ; Goertzel and Pitt 2012 ), a proposal discussed in Sect. 3.5.4 .
Large-scale surveillance efforts are ethically problematic and face major political
resistance, and it seems unlikely that current political opinion would support the
creation of a far-reaching surveillance network for the sake of AGI risk alone. The3 Responses to the Journey to the Singularity 37
extent to which such extremes would be necessary depends on exactly how easy it
would be to develop AGI in secret. Although several authors make the point thatAGI is much easier to develop unnoticed than something like nuclear weapons(McGinnis 2010 ; Miller 2012 ), cutting-edge high-tech research does tend to require
major investments which might plausibly be detected even by less elaborate
surveillance efforts.
To the extent that surveillance does turn out to be necessary, there is already a
strong trend toward a “surveillance society ”with increasing amounts of information
about people being collected and recorded in various databases (Wood and Ball2006 ). As a reaction to the increased surveillance, Mann et al. ( 2003 ) propose to
counter it with sousveillance —giving private individuals the ability to document
their life and subject the authorities to surveillance in order to protect civil liberties.
This is similar to the proposals of Brin ( 1998 ), who argues that technological
progress might eventually lead to a “transparent society, ”where we will need to
redesign our societal institutions in a way that allows us to maintain some of ourprivacy despite omnipresent surveillance. Miller ( 2012 ) notes that intelligence
agencies are already making major investments in AI-assisted analysis of surveil-lance data.
If social and technological developments independently create an environment
where large-scale surveillance or sousveillance is commonplace, it might be pos-
sible to take advantage of those developments in order to police AGI risk.
4Walker
(2008 ) argues that in order for mass surveillance to become effective, it must be
designed in such a way that it will not excessively violate people ’s privacy, for
otherwise the system will face widespread sabotage. Even under such conditions,there is no clear way to de fine what counts as dangerous AGI. Goertzel and Pitt
(2012 ) point out that there is no clear division between narrow AI and AGI, and
attempts to establish such criteria have failed. They argue that since AGI has anebulous de finition, obvious wide-ranging economic bene fits, and potentially rich
penetration into multiple industry sectors, it is unlikely to be regulated due to
speculative long-term risks.
AGI regulation requires global cooperation, as the noncooperation of even a
single nation might lead to catastrophe. Historically, achieving global cooperationon tasks such as nuclear disarmament and climate change has been very dif ficult. As
with nuclear weapons, AGI could give an immense economic and militaryadvantage to the country that develops it first, in which case limiting AGI research
might even give other countries an incentive to develop AGI faster (Miller 2012 ).
To be effective, regulation also needs to enjoy support among those being
regulated. If developers working in AGI-related fields only follow the letter of the
law, while privately viewing all regulations as annoying hindrances, and fears aboutAGI as overblown, the regulations may prove ineffective. Thus, it might not be
4An added bene fit would be that this could also help avoid other kinds of existential risk, such as
the intentional creation of dangerous new diseases.38 K. Sotala and R. Yampolskiy
enough to convince governments of the need for regulation; the much larger group
of people working in the appropriate fields may also need to be convinced.
While Shulman ( 2009 ) argues that the unprecedentedly destabilizing effect of
AGI could be a cause for world leaders to cooperate more than usual, the opposite
argument can be made as well. Gubrud ( 1997 ) argues that increased automation
could make countries more self-reliant, and international cooperation considerablymore dif ficult. AGI technology is also much harder to detect than, for example,
nuclear technology is —nuclear weapons require a substantial infrastructure to
develop, while AGI needs much less (McGinnis 2010 ; Miller 2012 ).
Miller ( 2012 ) even suggests that the mere possibility of a rival being close to
developing AGI might, if taken seriously, trigger a nuclear war. The nation that waslosing the AGI race might think that being the first to develop AGI was suf ficiently
valuable that it was worth launching a first strike for, even if it would lose most of
its own population in the retaliatory attack. He further argues that, although it wouldbe in the interest of every nation to try to avoid such an outcome, the ease ofsecretly pursuing an AGI development program undetected, in violation of treaty,could cause most nations to violate the treaty.
Miller also points out that the potential for an AGI arms race exists not only
between nations, but between corporations as well. He notes that the more AGIdevelopers there are, the more likely it is that they will all take more risks, with each
AGI developer reasoning that if they don ’t take this risk, somebody else might take
that risk first.
Goertzel and Pitt ( 2012 ) suggest that for regulation to be enacted, there might
need to be an “AGI Sputnik ”—a technological achievement that makes the pos-
sibility of AGI evident to the public and policy makers. They note that after such amoment, it might not take very long for full human-level AGI to be developed,while the negotiations required to enact new kinds of arms control treaties wouldtake considerably longer.
So far, the discussion has assumed that regulation would be carried out effec-
tively and in the pursuit of humanity ’s common interests, but actual legislation is
strongly affected by lobbying and the desires of interest groups (Olson 1982 ;
Mueller 2003 , Chap. 22). Many established interest groups would have an eco-
nomic interest in either furthering or retarding AGI development, rendering thesuccess of regulation uncertain.
3.3.3.5 “Regulate Research ”Proposals —Our View
Although there seem to be great dif ficulties involved with regulation, there also
remains the fact that many technologies have been successfully subjected tointernational regulation. Even if one were skeptical about the chances of effectiveregulation, an AGI arms race seems to be one of the worst possible scenarios, onewhich should be avoided if at all possible. We are therefore generally supportive ofregulation, though the most effective regulatory approach remains unclear.3 Responses to the Journey to the Singularity 39
3.3.4 Enhance Human Capabilities
While regulation approaches attempt to limit the kinds of AGIs that will be created,
enhancement approaches attempt to give humanity and AGIs a level playing field.
In principle, gains in AGI capability would not be a problem if humans could
improve themselves to the same level.
Alternatively, human capabilities could be improved in order to obtain a more
general capability to deal with dif ficult problems. Verdoux ( 2010 ,2011 ) suggests
that cognitive enhancement could help in transforming previously incomprehensi-ble mysteries into tractable problems, and Verdoux ( 2010 ) particularly highlights
the possibility of cognitive enhancement helping to deal with the problems posedby existential risks. One problem with such approaches is that increasing human-
ity’s capability for solving problems will also make it easier to develop dangerous
technologies. It is possible that cognitive enhancement should be combined withmoral enhancement, in order to help foster the kind of cooperation that would helpavoid the risks of technology (Persson and Savulescu 2008 ,2012 ).
Moravec ( 1988 ,1999 ) proposes that humans could keep up with AGIs via “mind
uploading, ”a process of transferring the information in human brains to computer
systems so that human minds could run on a computer substrate. This technologymay arrive during a similar timeframe as AGI (Kurzweil 2005 ; Sandberg and
Bostrom 2008 ; Hayworth 2012 ; Koene 2012b ; Cattell and Parker 2012 ; Sandberg
2012 ). However, Moravec argues that mind uploading would come after AGIs, and
that unless the uploaded minds ( “uploads ”) would transform themselves to become
radically nonhuman, they would be weaker and less competitive than AGIs thatwere native to a digital environment Moravec ( 1992 ,1999 ). For these reasons,
Warwick ( 1998 ) also expresses doubt about the usefulness of mind uploading.
Kurzweil ( 2005 ) posits an evolution that will start with brain-computer inter-
faces, then proceed to using brain-embedded nanobots to enhance our intelligence,
andfinally lead to full uploading and radical intelligence enhancement. Koene
(2012a ) criticizes plans to create safe AGIs and considers uploading both a more
feasible and a more reliable approach.
Similar proposals have also been made without explicitly mentioning mind
uploading. Cade ( 1966 ) speculates on the option of gradually merging with
machines by replacing body parts with mechanical components. Turney ( 1991 )
proposes linking AGIs directly to human brains so that the two meld together intoone entity, and Warwick ( 1998 ,2003 ) notes that cyborgization could be used to
enhance humans.
Mind uploading might also be used to make human value systems more
accessible and easy to learn for AGIs, such as by having an AGI extrapolate theupload ’s goals directly from its brain, with the upload providing feedback.40 K. Sotala and R. Yampolskiy
3.3.4.1 Would We Remain Human?
Uploading might destroy parts of humanity that we value (Joy 2000 ; de Garis
2005 ). De Garis ( 2005 ) argues that a computer could have far more processing
power than a human brain, making it pointless to merge computers and humans.
The biological component of the resulting hybrid would be insigni ficant compared
to the electronic component, creating a mind that was negligibly different from a“pure”AGI. Kurzweil ( 2001 ) makes the same argument, saying that although he
supports intelligence enhancement by directly connecting brains and computers,this would only keep pace with AGIs for a couple of additional decades.
The truth of this claim seems to depend on exactly how human brains are
augmented. In principle, it seems possible to create a prosthetic extension of a
human brain that uses the same basic architecture as the original brain and gradually
integrates with it (Sotala and Valpola 2012 ). A human extending their intelligence
using such a method might remain roughly human-like and maintain their originalvalues. However, it could also be possible to connect brains with computer pro-grams that are very unlike human brains, and which would substantially change theway the original brain worked. Even smaller differences could conceivably lead tothe adoption of “cyborg values ”distinct from ordinary human values (Warwick
2003 ).
Bostrom ( 2004 ) speculates that humans might outsource many of their skills to
nonconscious external modules and would cease to experience anything as a result.The value-altering modules would provide substantial advantages to their users, tothe point that they could outcompete uploaded minds who did not adopt themodules.
3.3.4.2 Would Evolutionary Pressures Change Us?
A willingness to integrate value-altering modules is not the only way by which a
population of uploads might come to have very different values from modern-dayhumans. This is not necessarily a bad, or even a very novel, development the valuesof earlier generations have often been different from the values of later generations(Hanson 2012 ), and it might not be a problem if a civilization of uploads enjoyed
very different things than a civilization of humans. Still, as there are possibleoutcomes that we would consider catastrophic, such as the loss of nearly all things
that have intrinsic value for us (Bostrom 2004 ), it is worth reviewing some of the
postulated changes in values.
For comprehensiveness, we will summarize all of the suggested effects that
uploading might have on human values, even if they are not obviously negative.Readers may decide for themselves whether or not they consider any of theseeffects concerning.
Hanson ( 1994 ) argues that employers will want to copy uploads who are good
workers, and that at least some uploads will consent to being copied in such a
manner. He suggests that the resulting evolutionary dynamics would lead to an3 Responses to the Journey to the Singularity 41
accelerated evolution of values. This would cause most of the upload population to
evolve to be indifferent or favorable to the thought of being copied, to be indifferenttoward being deleted as long as another copy of themselves remained, and to berelatively uninterested in having children “the traditional way ”(as opposed to
copying an already-existing mind). Although Hanson ’s analysis uses the example
of a worker-employer relationship, it should be noted that nations or families, oreven single individuals, could also gain a competitive advantage by copyingthemselves, thus contributing to the strength of the evolutionary dynamic.
Similarly, Bostrom ( 2004 ) writes that much of human life ’s meaning depends on
the enjoyment of things ranging from humor and love to literature and parenting.These capabilities were adaptive in our past, but in an upload environment theymight cease to be such and gradually disappear entirely.
Shulman ( 2010b ) likewise considers uploading-related evolutionary dynamics.
He notes that there might be a strong pressure for uploads to make copies ofthemselves in such a way that individual copies would be ready to sacri fice
themselves to aid the rest. This would favor a willingness to copy oneself, and aview of personal identity which did not consider the loss of a single copy to bedeath. Beings taking this point of view could then take advantage of economicbenefits of continually creating and deleting vast numbers of minds depending on
the conditions, favoring the existence of a large number of short-lived copies over a
somewhat less ef ficient world of long-lived minds.
Finally, Sotala and Valpola ( 2012 ) consider the possibility of minds coalescing
via arti ficial connections that linked several brains together in the same fashion as
the two hemispheres of ordinary brains are linked together. If this were to happen,considerable bene fits might accrue to those who were ready to coalesce with other
minds. The ability to copy and share memories between minds might also blurdistinctions between individual minds. In the end, most humans might cease to beindividual, distinct people in any real sense of the word.
It has also been proposed that information security concerns could cause
undesirable dynamics among uploads, with signi ficant advantages accruing to those
who could steal the computational resources of others and use them to create newcopies of themselves. If one could seize control of the hardware that an upload wasrunning on, it could be immediately replaced with a copy of a mind loyal to theattacker. It might even be possible to do this without being detected, if it waspossible to steal enough of an upload ’s personal information to impersonate it.
An attack targeting a critical vulnerability in some commonly used piece of
software might quickly hit a very large number of victims. As discussed in the
previous chapter, both theoretical arguments and actual cases of malware show thatlarge numbers of machines on the Internet could be infected in a very short time(Staniford et al. 2002 , Moore et al. 2002 ,2003 ). In a society of uploads, attacks
such as these would be not only inconvenient, but potentially fatal. Eckersley andSandberg ( 2013 ) offer a preliminary analysis of information security in a world of
uploads.42 K. Sotala and R. Yampolskiy
3.3.4.3 Would Uploading Help?
Even if the potential changes of values were deemed acceptable, it is unclear
whether the technology for uploading could be developed before developing AGI.
Uploading might require emulating the low-level details of a human brain with a
high degree of precision, requiring large amounts of computing power (Sandbergand Bostrom 2008 ; Cattell and Parker 2012 ). In contrast, an AGI might be designed
around high-level principles which have been chosen to be computationally cheapto implement on existing hardware architectures.
Yudkowsky ( 2008 ) uses the analogy that it is much easier to figure out the
principles of aerodynamic flight and then build a Boeing 747 than it is to take a
living bird and “upgrade ”it into a giant bird that can carry passengers, all while
ensuring that the bird remains alive and healthy throughout the process. Likewise, it
may be much easier to figure out the basic principles of intelligence and build AGIs
than to upload existing minds.
On the other hand, one can also construct an analogy suggesting that it is easier
to copy a thing ’s function than it is to understand how it works. If a person does not
understand architecture but wants to build a sturdy house, it may be easier to createa replica of an existing house than it is to design an entirely new one that does notcollapse.
Even if uploads were created first, they might not be able to harness all the
advantages of digitality, as many of these advantages depend on minds being easyto modify (Sotala 2012 ), which human minds may not be. Uploads will be able to
directly edit their source code as well as introduce simulated pharmaceutical andother interventions, and they could experiment on copies that are restored to anunmodi fied state if the modi fications turn out to be unworkable (Shulman 2010b ).
Regardless, human brains did not evolve to be easy to modify, and it may bedifficult to find a workable set of modi fications which would drastically improve
them.
In contrast, in order for an AGI programmed using traditional means to be
manageable as a software project, it must be easy for the engineers to modify it.
5
Thus, even if uploading were developed before AGI, AGIs that were developedlater might still be capable of becoming more powerful than uploads. However,existing uploads already enjoying some of the advantages of the newly-createdAGIs would still make it easier for the uploads to control the AGIs, at least for awhile.
Moravec ( 1992 ) notes that the human mind has evolved to function in an
environment which is drastically different from a purely digital environment, andthat the only way to remain competitive with AGIs would be to transform intosomething that was very different from a human. This suggests that uploading
5However, this might not be true for AGIs created using some alternate means, such as arti ficial
life (Sullins 2005 ).3 Responses to the Journey to the Singularity 43
might buy time for other approaches, but would be only a short-term solution in and
of itself.
If uploading technology were developed before AGI, it could be used to upload a
research team or other group and run them at a vastly accelerated rate as compared
to the rest of humanity. This would give them a considerable amount of extra time
for developing any of the other approaches. If this group were among the first to be
successfully emulated and sped up, and if the speed-up would allow enough sub-jective time to pass before anyone else could implement their own version, theymight also be able to avoid trading safety for speed. However, such a group mightbe able to wield tremendous power, so they would need to be extremely reliable andtrustworthy.
3.3.4.4 “Enhance Human Capabilities ”Proposals —Our View
Of the various “enhance human capabilities ”approaches, uploading proposals seem
the most promising, as translating a human brain to a computer program wouldsidestep many of the constraints that come from modifying a physical system. Forexample, all relevant brain activity could be recorded for further analysis at anarbitrary level of detail, and any part of the brain could be instantly modi fied
without a need for time-consuming and possibly dangerous invasive surgery.
Uploaded brains could also be more easily upgraded to take full advantage of morepowerful hardware, while humans whose brains were still partially biological wouldbe bottlenecked by the speed of the biological component.
Uploading does have several problems Uploading research might lead to AGI
being created before the uploads, in the long term uploads might have unfavorableevolutionary dynamics, and it seems likely that there will eventually be AGIs whichare capable of outperforming uploads in every field of competence. Uploads could
also be untrustworthy even without evolutionary dynamics. At the same time,
however, uploading research doesn ’tnecessarily accelerate AGI research very
much, the evolutionary dynamics might not be as bad as they seem at the moment,and the advantages gained from uploading might be enough to help control unsafeAGIs until safe ones could be developed. Methods could also be developed forincreasing the trustworthiness of uploads (Shulman 2010b ). Uploading might still
turn out to be a useful tool for handling AGI risk.
3.3.5 Relinquish Technology
Not everyone believes that the risks involved in creating AGIs are acceptable.Relinquishment involves the abandonment of technological development that could
lead to AGI. This is possibly the earliest proposed approach, with Butler ( 1863 )44 K. Sotala and R. Yampolskiy
writing that “war to the death should be instantly proclaimed ”upon machines, for
otherwise they would end up destroying humans entirely. In a much-discussedarticle, Joy ( 2000 ) suggests that it might be necessary to relinquish at least some
aspects of AGI research, as well as nanotechnology and genetics research.
Hughes ( 2001 ) criticizes AGI relinquishment, while Kurzweil ( 2005 ) criticizes
broad relinquishment but supports the possibility of “fine-grained relinquishment, ”
banning some dangerous aspects of technologies while allowing general work onthem to proceed. In general, most writers reject proposals for broad relinquishment.
3.3.5.1 Outlaw AGI
Weng et al. ( 2009 ) write that the creation of AGIs would force society to shift from
human-centric values to robot-human dual values. In order to avoid this, theyconsider the possibility of banning AGI. This could be done either permanently oruntil appropriate solutions are developed for mediating such a con flict of values.
McKibben ( 2003 ), writing mainly in the context of genetic engineering, also sug-
gests that AGI research should be stopped.
Hughes ( 2001 ) argues that attempts to outlaw a technology will only make the
technology move to other countries. De Garis ( 2005 ) believes that differences of
opinion about whether to build AGIs will eventually lead to armed con flict, to the
point of open warfare.
Annas et al. ( 2002 ) have similarly argued that genetic engineering of humans
would eventually lead to war between unmodi fied humans and the engineered
“posthumans, ”and that cloning and inheritable modi fications should therefore be
banned. To the extent that one accepts their reasoning with regard to humans, itcould also be interpreted to apply to AGIs.
3.3.5.2 Restrict Hardware
Berglas ( 2012 ) suggests not only stopping AGI research, but also outlawing the
production of more powerful hardware. Berglas holds that it will be possible tobuild computers as powerful as human brains in the very near future, and that weshould therefore reduce the power of new processors and destroy existing ones.
6
Branwen ( 2012 ) argues that Moore ’s Law depends on the existence of a small
number of expensive and centralized chip factories, making them easy targets for
regulation and incapable of being developed in secret.
6Berglas (personal communication) has since changed his mind and no longer believes that it is
possible to effectively restrict hardware or otherwise prevent AGI from being created.3 Responses to the Journey to the Singularity 45
3.3.5.3 “Relinquish Technology ”Proposals —Our View
Relinquishment proposals suffer from many of the same problems as regulation
proposals, but to a greater extent. There is no historical precedent of general,
multiuse technology similar to AGI being successfully relinquished for good, nor
do there seem to be any theoretical reasons for believing that relinquishment pro-posals would work in the future. Therefore we do not consider them to be a viableclass of proposals.
3.4 External AGI Constraints
Societal approaches involving regulation or research into safe AGI assume thatproper AGI design can produce solutions to AGI risk. One category of suchsolutions is that of external constraints . These are restrictions that are imposed on
an AGI from the outside and aim to limit its ability to do damage.
Several authors have argued that external constraints are unlikely to work with
AGIs that are genuinely far more intelligent than us (Vinge 1993 ; Yudkowsky
2001 ;2008 ; Kurzweil 2005 ; Chalmers 2010 ; Armstrong et al. 2012 ). The consensus
seems to be that external constraints might buy time when dealing with less
advanced AGIs, but they are useless against truly superintelligent ones.
External constraints also limit the usefulness of an AGI, as a free-acting one
could serve its creators more effectively. This reduces the probability of the uni-versal implementation of external constraints on AGIs. AGIs might also be dan-gerous if they were con fined or otherwise restricted. For further discussion of these
points, see Sect. 3.5.1 .
3.4.1 AGI Con finement
AGI con finement, or “AI boxing ”(Chalmers 2010 ; Yampolskiy 2012 ; Armstrong
et al. 2012 ), involves con fining an AGI to a speci fic environment and limiting its
access to the external world.
Yampolskiy (2012) makes an attempt to formalize the idea, drawing on previous
computer security research on the so-called con finement problem (Lampson 1973 ).
Yampolskiy de fines the AI con finement problem (AICP) as the challenge of
restricting an AGI to a con fined environment from which it can ’t communicate
without authorization. A number of methods have been proposed for implementingAI con finement, many of which are extensively discussed in Armstrong et al.
(2012 ).
Chalmers ( 2010 ) and Armstrong et al. ( 2012 ) mention numerous caveats and
difficulties with AI-boxing approaches. A truly leakproof system that perfectly
isolated the AGI from an outside environment would prevent us from even46 K. Sotala and R. Yampolskiy
observing the AGI. If AGIs were given knowledge about human behavior or
psychology, they could still launch social engineering attacks on us (Chalmers2010 ; Armstrong et al. 2012 ). An AGI that was unaware of the existence of humans
would be less likely to launch such attacks, but also much more limited in the kinds
of tasks that it could be used for.
Even if the AGI remained con fined, it could achieve enough in fluence among
humans to prevent itself from being reset or otherwise modi fied (Good 1970 ).
An AGI that people grew reliant on might also become impossible to reset ormodify.
3.4.1.1 Safe Questions
Yampolskiy (2012) proposes a hierarchy of security levels for communication with
the AGI, ranging from completely unrestricted communication to no communica-tion whatsoever. He suggests that the right level for communicating with AGIs is anintermediate one, based on asking “safe questions. ”These are de fined as questions
for which a human could find an answer without help from a superintelligence. For
instance, a researcher who was considering two possible cures for cancer could askthe AGI which one to pursue first. Yampolskiy argues that the AGI is effectively
not changing the final outcome, as the researcher would in principle be capable of
finding the better treatment himself. The AGI, by recommending one of the
approaches as the choice to pursue first, is merely speeding up the process. Arm-
strong et al. ( 2012 ) also brie fly discuss such approaches.
3.4.1.2 Virtual Worlds
While the con finement protocol supposes an AGI that can perceive our world
directly, Chalmers ( 2010 ) uses the term “leakproof singularity ”to refer to the
project of creating isolated virtual worlds which would disallow their inhabitantsfrom in fluencing the external world. An AGI placed in such a world could be
monitored by humans to ensure that it was safe to release into our world. However,Chalmers notes that “for an AI system to be useful or interesting to us at all, it must
have some effects on us. At a minimum, we must be able to observe it. And themoment we observe a virtual environment, some information leaks out from that
environment into our environment and affects us ”.
Hibbard ( 2012b ) proposes a decision support system for evaluating AGI designs
in simulated environments that model our real environment, and attempts to showthat the AGI can be built to be safe.
Armstrong et al. ( 2012 ) discuss the notion that we could supplement this
approach with several layers of “sub-worlds ”virtual worlds stacked within each
other. In such a scenario, even if an AGI somehow manages to break out from onevirtual reality, it is still restricted by another and cannot directly in fluence the world.
Ultimately, the authors conclude that this approach isn ’t very promising.3 Responses to the Journey to the Singularity 47
3.4.1.3 Resetting the AGI
For as long as an AGI remains con fined or cooperative, its memory could be altered
from the outside. Turney ( 1991 ) mentions a proposal in which an AGI ’s memory
could be wiped after each job, thus limiting its potential. Armstrong et al. ( 2012 )
also mention the possibility of resetting an AGI after each run, though they note thatthere are many ways for the AGI to get around it, and that it is only useful forcertain kinds of AGI systems.
Good ( 1970 ) warns that a superhuman AGI may be smart enough to find ways to
prevent itself from being successfully reset or shut down, including methods ofsocial manipulation. Warwick ( 1998 ) makes a similar point.
3.4.1.4 Checks and Balances
Armstrong et al. ( 2012 ) suggest a number of checks and balances that could be used
to catch the early stages of an AGI that was behaving dangerously or trying todeceive us. Test questions with known answers could be mixed in with questionswe genuinely didn ’t know the answer to, in order to establish an AGI ’s trustwor-
thiness. “Honey-pots, ”tempting resources with great value to the AGI, could also
be left in physical or mental locations that it was forbidden from accessing. Finally,
several different kinds of AGIs could be presented with the same problem, withtheir answers then being compared. Inconsistent answers would suggest a possibleattempt at deception.
Bostrom ( 2014 , Chap. 9) similarly discusses “tripwires ”, systems which make
diagnostic tests to the AGI and shut it down if they detect signs of dangerousactivity. Some possible tripwires includes ones monitoring the AGI ’s behavior, its
abilities, or possibly even content of the AGI ’s thoughts and planning.
3.4.1.5 “AI Con finement ”Proposals —Our View
Despite their limited ability to deal with AGIs more intelligent than us, AI-boxing
techniques seem to have value as a first line of defense, and it may be worthwhile to
invest in developing off-the-shelf software packages for AI con finement that are
easy and cheap to use. A research project that developed AGI unexpectedly might
not have been motivated to make major investments in security, but the AGI might
still be suf ficiently limited in intelligence that con finement would work. Having a
defense that is easy to deploy will make it more likely that these kinds of projectswill implement better precautions.
However, at the same time there is a risk that this will promote a false sense of
security and make research teams think that they have carried out their duty to becautious merely because they are running elementary con finement protocols.
Although some con finement procedures can be implemented on top of an AGI that
was not expressly designed for con finement, they are much less reliable than with48 K. Sotala and R. Yampolskiy
an AGI that was built with con finement considerations in mind (Armstrong et al.
2012 )—and even then, relying solely on con finement is a risky strategy. We are
therefore somewhat cautious in our recommendation to develop con finement
techniques.
3.4.2 AGI Enforcement
One problem with AI con finement proposals is that humans are tasked with
guarding machines that may be far more intelligent than themselves (Good 1970 ).
One proposed solution for this problem is to give the task of watching AGIs to other
AGIs.
Armstrong ( 2007 ) proposes that the trustworthiness of a superintelligent system
could be monitored via a chain of less powerful systems, all the way down tohumans. Although humans couldn ’t verify and understand the workings of a
superintelligence, they could verify and understand an AGI just slightly above theirown level, which could in turn verify and understand an AGI somewhat above itsown level, and so on.
Chaining multiple levels of AI systems with progressively greater capacity
seems to be replacing the problem of building a safe AI with a multisystem, and
possibly more dif ficult, version of the same problem. Armstrong himself admits that
there are several problems with the proposal. There could be numerous issues alongthe line, such as a break in the chain of communication or an inability of a system toaccurately assess the mind of another (smarter) system. There is also the problem ofcreating a trusted bottom for the chain in the first place, which is not necessarily any
easier than creating a trustworthy superintelligence.
Hall ( 2007a ) writes that there will be a great variety of AGIs, with those that
were designed to be moral or aligned with human interests keeping the nonsafe
ones in check. Goertzel and Pitt ( 2012 ) also propose that we build a community of
mutually policing AGI systems of roughly equal levels of intelligence. If an AGIstarted to “go off the rails, ”the other AGIs could stop it. This might not prevent a
single AGI from undergoing an intelligence explosion, but a community of AGIsmight be in a better position to detect and stop it than humans would.
Having AGIs police each other is only useful if the group of AGIs actually has
goals and values that are compatible with human goals and values. To this end, the
appropriate internal constraints are needed.
The proposal of a society of mutually policing AGIs would avoid the problem of
trying to control a more intelligent mind. If a global network of mildly superin-telligent AGIs could be instituted in such a manner, it might detect and prevent anynascent takeoff. However, by itself such an approach is not enough to ensure safety3 Responses to the Journey to the Singularity 49
—it helps guard against individual AGIs “going off the rails, ”but it does not help in
a scenario where the programming of most AGIs is flawed and leads to nonsafe
behavior. It thus needs to be combined with the appropriate internal constraints.
A complication is that a hard takeoff is a relative term—an event that happens
too fast for any outside observer to stop. Even if the AGI network were a hundred
times more intelligent than a network composed of only humans, there might stillbe a more sophisticated AGI that could overcome the network.
3.4.2.1 “AGI Enforcement ”Proposals —Our View
AGI enforcement proposals are in many respects similar to social integration
proposals, in that they depend on the AGIs being part of a society which is strong
enough to stop any single AGI from misbehaving. The greatest challenge is then tomake sure that most of the AGIs in the overall system are safe and do not uniteagainst humans rather than against misbehaving AGIs. Also, there might not be anatural distinction between a distributed AGI and a collection of many differentAGIs, and AGI design is in any case likely to make heavy use of earlier AI/AGItechniques. AGI enforcement proposals therefore seem like implementation vari-ants of various internal constraint proposals (Sect. 3.5), rather than independent
proposals.
3.5 Internal Constraints
In addition to external constraints, AGIs could be designed with internal motiva-
tions designed to ensure that they would take actions in a manner bene ficial to
humanity. Alternatively, AGIs could be built with internal constraints that make
them easier to control via external means.
With regard to internal constraints, Yudkowsky ( 2008 ) distinguishes between
technical failure andphilosophical failure :
Technical failure is when you try to build an AI and it doesn ’t work the way you think it
does—you have failed to understand the true workings of your own code. Philosophical
failure is trying to build the wrong thing, so that even if you succeeded you would still failto help anyone or bene fit humanity. Needless to say, the two failures are not mutually
exclusive (Yudkowsky 2008 ).
In practice, it is not always easy to distinguish between the two. Most of the
discussion below focuses on the philosophical problems of various proposals, but
some of the issues, such as whether or not a proposal is actually possible toimplement, are technical.50 K. Sotala and R. Yampolskiy
3.5.1 Oracle AI
AnOracle AI is a hypothetical AGI that executes no actions other than answering
questions. This is a proposal with many similarities to AGI con finement: both
involve restricting the extent to which the AGI is allowed to take independent
action. We consider the difference to be that an Oracle AI has been programmed to“voluntarily ”restrict its activities, whereas AGI con finement refers to methods for
restricting an AGI ’s capabilities even if it was actively attempting to take more
extensive action.
Trying to build an AGI that only answered questions might not be as safe as it
sounds, however. Correctly de fining “take no actions ”might prove surprisingly
tricky (Armstrong et al. 2012 ), and the oracle could give flawed advice even if it did
correctly restrict its actions.
Some possible examples of flawed advice: As extra resources are useful for the
fulfillment of nearly all goals (Omohundro 2007 ,2008 ), the oracle may seek to
obtain more resources —such as computing power —in order to answer questions
more accurately. Its answers might then be biased toward furthering this goal, evenif this temporarily reduces the accuracy of its answers, if it believes this to increasethe accuracy of its answers in the long run. Another example is that if the oracle hadthe goal of answering as many questions as possible as fast as possible, it could
attempt to manipulate humans into asking it questions that were maximally simple
and easy to answer.
Holden Karnofsky has suggested that an Oracle AI could be safe if it was “just a
calculator, ”a system which only computed things that were asked of it, taking no
goal-directed actions of its own. Such a “Tool-Oracle AI ”would keep humans as
the ultimate decision makers. Furthermore, the first team to create a Tool-Oracle AI
could use it to become powerful enough to prevent the creation of other AGIs(Karnofsky and Tallinn 2011 ; Karnofsky 2012 ).
An example of a Tool-Oracle AI approach might be Omohundro ’s(2012 )
proposal of “Safe-AI Scaffolding ”creating highly constrained AGI systems which
act within limited, predetermined parameters. These could be used to developformal veri fication methods and solve problems related to the design of more
intelligent, but still safe, AGI systems.
Oracle AIs might be considered a special case of domestic AGI (Bostrom 2014 ,
Chap. 9): AGIs which are built to only be interested in taking action ”on a small
scale, within a narrow context, and through a limited set of action modes ”.
3.5.1.1 Oracles Are Likely to Be Released
As with a boxed AGI, there are many factors that would tempt the owners of an
Oracle AI to transform it to an autonomously acting agent. Such an AGI would befar more effective in furthering its goals, but also far more dangerous.3 Responses to the Journey to the Singularity 51
Current narrow-AI technology includes high-frequency trading (HFT) algo-
rithms, which make trading decisions within fractions of a second, far too fast tokeep humans in the loop. HFT seeks to make a very short-term pro fit, but even
traders looking for a longer-term investment bene fit from being faster than their
competitors. Market prices are also very effective at incorporating various sources
of knowledge (Hanson 2000 ). As a consequence, a trading algorithm ’s performance
might be improved both by making it faster and by making it more capable ofintegrating various sources of knowledge. Most advances toward general AGI willlikely be quickly taken advantage of in the financial markets, with little opportunity
for a human to vet all the decisions. Oracle AIs are unlikely to remain as pureoracles for long.
Similarly, Wallach and Allen ( 2012 ) discuss the topic of autonomous robotic
weaponry and note that the US military is seeking to eventually transition to a state
where the human operators of robot weapons are “on the loop ”rather than “in the
loop”. In other words, whereas a human was previously required to explicitly give
the order before a robot was allowed to initiate possibly lethal activity, in the futurehumans are meant to merely supervise the robot ’s actions and interfere if something
goes wrong. In practice, this may make it much harder for the human to control therobot ’s actions, if the robot makes a decision which the operator only has a very
short time to override.
Docherty & Goose ( 2012 ) report on a number of military systems which are
becoming increasingly autonomous, with the human oversight for automaticweapons defense systems —designed to detect and shoot down incoming missiles
and rockets —already being limited to accepting or overriding the computer ’s plan
of action in a matter of seconds. Although these systems are better described asautomatic, carrying out pre-programmed sequences of actions in a structuredenvironment, than autonomous, they are a good demonstration of a situation whererapid decisions are needed and the extent of human oversight is limited. A number
of militaries are considering the future use of more autonomous weapons.
In general, any broad domain involving high stakes, adversarial decision mak-
ing, and a need to act rapidly is likely to become increasingly dominated byautonomous systems. The extent to which the systems will need general intelli-gence will depend on the domain, but domains such as corporate management,fraud detection, and warfare could plausibly make use of all the intelligence theycan get. If one ’s opponents in the domain are also using increasingly autonomous
AI/AGI, there will be an arms race where one might have little choice but to give
increasing amounts of control to AI/AGI systems.
Miller ( 2012 ) also points out that if a person was close to death, due to natural
causes, being on the losing side of a war, or any other reason, they might turn evena potentially dangerous AGI system free. This would be a rational course of actionas long as they primarily valued their own survival and thought that even a smallchance of the AGI saving their life was better than a near-certain death.
Some AGI designers might also choose to create less constrained and more
free-acting AGIs for aesthetic or moral reasons, preferring advanced minds to have
more freedom.52 K. Sotala and R. Yampolskiy
3.5.1.2 Oracles Will Become Authorities
Even if humans were technically kept in the loop, they might not have the time,
opportunity, motivation, intelligence, or con fidence to verify the advice given by an
Oracle AI. This may be a danger even with narrower AI systems. Friedman and
Kahn ( 1992 ) discuss APACHE, an expert system that provides medical advice to
doctors. They write that as the medical community puts more and more trust intoAPACHE, it may become common practice to act automatically on APACHE ’s
recommendations, and it may become increasingly dif ficult to challenge the “au-
thority ”of the recommendations. Eventually, APACHE may in effect begin to
dictate clinical decisions.
Likewise, Bostrom and Yudkowsky ( 2013 ) point out that modern bureaucrats
often follow established procedures to the letter, rather than exercising their own
judgment and allowing themselves to be blamed for any mistakes that follow.Dutifully following all the recommendations of an AGI system would be an evenbetter way of avoiding blame.
Wallach and Allen ( 2012 ) note the existence of robots which attempt to auto-
matically detect the locations of hostile snipers and to point them out to soldiers. Tothe extent that these soldiers have come to trust the robots, they could be seen ascarrying out the robots ’orders. Eventually, equipping the robot with its own
weapons would merely dispense with the formality of needing to have a human to
pull the trigger.
Thus, even AGI systems that function purely to provide advice will need to be
explicitly designed to be safe in the sense of not providing advice that would goagainst human values (Wallach and Allen 2009 ). Yudkowsky ( 2012 ) further notes
that an Oracle AI might choose a plan that is beyond human comprehension, inwhich case there ’s still a need to design it as explicitly safe and conforming to
human values.
3.5.1.3 “Oracle AI ”Proposals —Our View
Much like with external constraints, it seems like Oracle AIs could be a useful
stepping stone on the path toward safe, freely acting AGIs. However, because anyOracle AI can be relatively easily turned into a free-acting AGI and because manypeople will have an incentive to do so, Oracle AIs are not by themselves a solution
to AGI risk, even if they are safer than free-acting AGIs when kept as pure oracles.
3.5.2 Top-Down Safe AGI
AGIs built to take autonomous actions will need to be designed with safe moti-
vations. Wallach and Allen divide approaches for ensuring safe behavior into
“top-down ”and“bottom-up ”approaches (Wallach and Allen 2009 ). They de fine3 Responses to the Journey to the Singularity 53
“top-down ”approaches as ones that take a speci fied ethical theory and attempt to
build a system capable of implementing that theory.
They have expressed skepticism about the feasibility of both pure top-down and
bottom-up approaches, arguing for a hybrid approach.7With regard to top-down
approaches, which attempt to derive an internal architecture from a given ethical
theory, Wallach ( 2010 )finds three problems
1.“Limitations already recognized by moral philosophers For example, in a util-
itarian calculation, how can consequences be calculated when information is
limited and the effects of actions cascade in never-ending interactions? Whichconsequences should be factored into the maximization of utility? Is there astopping procedure? ”(Wallach 2010 ).
2. The “frame problem ”refers to the challenge of discerning relevant from irrel-
evant information without having to consider all of it, as all information couldbe relevant in principle (Pylyshyn 1987 ; Dennett 1987 ). Moral decision-making
involves a number of problems that are related to the frame problem, such as
needing to know what effects different actions have on the world, and needing to
estimate whether one has suf ficient information to accurately predict the con-
sequences of the actions.
3.“The need for background information. What mechanisms will the system
require in order to acquire the information it needs to make its calculations?How does one ensure that this information is up to date in real time? ”(Wallach
2010 ).
To some extent, these problems may be special cases of the fact that we do not
yet have AGI with good general learning capabilities creating an AGI would alsorequire solving the frame problem, for instance. These problems might therefore notall be as challenging as they seem at first, presuming that we manage to develop
AGI in the first place.
3.5.2.1 Three Laws
Probably the most widely known proposal for machine ethics is Isaac Asimov ’s
(1942 ) Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey orders given to it by human beings except where such orders
would con flict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with either the First or Second Law.
7For a de finition of “bottom-up ”approaches, see Sect. 5.3.54 K. Sotala and R. Yampolskiy
Asimov and other writers later expanded the list to include a number of addi-
tional laws, including the Zeroth Law.
A robot may not harm humanity, or through inaction allow humanity to come to harm.
Although the Three Laws are widely known and have inspired numerous imi-
tations, several of Asimov ’s own stories were written to illustrate the fact that the
laws contained numerous problems. They have also drawn heavy critique fromothers (Clarke 1993 ;1994 ; Weld and Etzioni 1994 ; Pynadath and Tambe 2002 ;
Gordon-Spears 2003 ; McCauley 2007 ; Weng et al. 2008 ; Wallach and Allen 2009 ;
Murphy and Woods 2009 ; Anderson 2011 ) and are not considered a viable
approach for safe AI. Among their chief shortcomings is the fact that they are tooambiguous to implement and, if de fined with complete accuracy, contradict each
other in many situations.
3.5.2.2 Categorical Imperative
The best-known universal ethical axiom is Kant ’s categorical imperative. Many
authors have discussed using the categorical imperative as the foundation of AGImorality (Stahl 2002 ; Powers 2006 ; Wallach and Allen 2009 ; Beavers 2009 ,2012 ).
All of these authors conclude that Kantian ethics is a problematic goal for AGI,
though Powers ( 2006 ) still remains hopeful about its prospects.
3.5.2.3 Principle of Voluntary Joyous Growth
Goertzel ( 2004a ,b) considers a number of possible axioms before settling on what
he calls the “Principle of Voluntary Joyous Growth ”,d efined as “Maximize hap-
piness, growth and choice ”. He starts by considering the axiom “maximize hap-
piness ”, but then finds this to be problematic and adds “growth ”, which he de fines
as“increase in the amount and complexity of patterns in the universe ”. Finally he
adds “choice ”in order to allow sentient beings to “choose their own destiny ”.
3.5.2.4 Utilitarianism
Classic utilitarianism is an ethical theory stating that people should take actions that
lead to the greatest amount of happiness and the smallest amount of suffering. Theprospects for AGIs implementing a utilitarian moral theory have been discussed byseveral authors. The consensus is that pure classical utilitarianism is problematicand does not capture all human values. For example, a purely utilitarian AGI could3 Responses to the Journey to the Singularity 55
reprogram the brains of humans so that they did nothing but experience the max-
imal amount of pleasure all the time, and that prospect seems unsatisfactory tomany.
8
3.5.2.5 Value Learning
Freeman ( 2009 ) describes a decision-making algorithm which observes people ’s
behavior, infers their preferences in the form of a utility function, and then attemptsto carry out actions which ful fill everyone ’s preferences. The standard name for this
kind of an approach is inverse reinforcement learning (Ng and Russell 2000 ).
Russell ( 2015 ) argues for an inverse reinforcement learning approach, as there is
considerable data about human behavior and the attitudes behind it, there are solid
economic incentives to solve this problem, the problem does not seem intrinsicallyharder than learning how the rest of the world works, and because it seems possibleto define this goal so as to make the AGIs very careful to ensure that they ’re correct
about our preferences before taking any serious action.
Similarly, Dewey ( 2011 ) discusses value learners , AGIs which are provided a
probability distribution over possible utility functions that humans may have. Valuelearners then attempt to find the utility functions with the best match for human
preferences. Hibbard ( 2012a ) builds on Dewey ’s work to offer a similar proposal.
One problem with conceptualizing human desires as utility functions is that
human desires change over time (van Gelder 1995 ) and also violate the axioms of
utility theory required to construct a coherent utility function (Tversky and Kah-neman 1981 ). While it is possible to treat inconsistent choices as random deviations
from an underlying “true”utility function that is then learned (Nielsen and Jensen
2004 ), this does not seem to properly describe human preferences. Rather, human
decision making and preferences seem to be driven by multiple competing systems,
only some of which resemble utility functions (Dayan 2011 ). Even if a true utility
function could be constructed, it does not take into account the fact that we havesecond-order preferences, or desires about our desires a drug addict may desire adrug, but also desire that he not desire it (Frankfurt 1971 ). Similarly, we often wish
that we had stronger desires toward behaviors which we consider good but cannotmake ourselves engage in. Taking second-order preferences into account leads towhat philosophers call “ideal preference ”theories of value.
Taking this into account, it has been argued that we should aim to build AGIs
which act according to humanity ’sextrapolated values (Yudkowsky 2004 ; Tarleton
2010 ; Muehlhauser and Helm 2012 ). Yudkowsky proposes attempting to discover
the“Coherent Extrapolated Volition ”(CEV) of humanity, which he de fines as
8Note that utilitarianism is not the same thing as having a utility function. Utilitarianism is a
specific kind of ethical system, while utility functions are general-purpose mechanisms for
choosing between actions and can in principle be used to implement very different kinds of ethicalsystems, such as egoism and possibly even rights-based theories and virtue ethics (Peterson 2010 ).56 K. Sotala and R. Yampolskiy
our wish if we knew more, thought faster, were more the people we wished we were, had
grown up farther together; where the extrapolation converges rather than diverges, whereour wishes cohere rather than interfere; extrapolated as we wish that extrapolated, inter-
preted as we wish that interpreted (Yudkowsky 2004 ).
CEV remains vaguely de fined and has been criticized by several authors (Hib-
bard 2005a ; Goertzel 2010a ; Goertzel and Pitt 2012 ; Miller 2012 ). However,
Tarleton ( 2010 )finds CEV a promising approach, and suggests that CEV has five
desirable properties, and that many different kinds of algorithms could possess these
features
Meta-algorithm : Most of the AGI ’s goals will be obtained at runtime from
human minds, rather than explicitly programmed in before runtime.
Factually correct beliefs : The AGI will attempt to obtain correct answers to
various factual questions, in order to modify preferences or desires that arebased upon false factual beliefs.
Singleton : Only one superintelligent AGI is to be constructed, and it is to take
control of the world with whatever goal function is decided upon.
Reflection : Individual or group preferences are re flected upon and revised.
Preference aggregation : The set of preferences of a whole group are to be
combined somehow.
At least two CEV variants have been proposed Coherent Aggregated Volition
(Goertzel 2010a ) and Coherent Blended Volition (Goertzel and Pitt 2012 ). Goertzel
and Pitt ( 2012 ) describe a methodology which was used to help end the apartheid in
South Africa. The methodology involves people with different views exploringdifferent future scenarios together and in great detail. By exploring different out-comes together, the participants build emotional bonds and mutual understanding,seeking an outcome that everyone can agree to live with. The authors characterize
the Coherent Blended Volition of a diverse group as analogous to the “conceptual
blend ”resulting from the methodology, incorporating the most essential elements
of the group into a harmonious whole.
Christiano ( 2012 ) attempts to sketch out a formalization of a value extrapolation
approach called “indirect normativity ”. It proposes a technique that would allow an
AI to approximate the kinds of values a group of humans would settle on if theycould spend an unbounded amount of time and resources considering the problem.
Other authors have begun preliminary work on simpler value learning systems,
designed to automatically learn moral principles. Anderson et al. ( 2005a ,2005b ,
2006 ) have built systems based around various moral duties and principles. As lists
of duties do not in and of themselves specify what to do when they con flict, the
systems let human experts judge how each con flict should be resolved, and then
attempt to learn general rules from the judgments. As put forth, however, thisapproach would have little ability to infer ethical rules which did not fit the
framework of proposed duties. Improved computational models of ethical reason-ing, perhaps incorporating work from neuroscience and moral psychology, could
help address this. Potapov and Rodionov ( 2012 ) propose an approach by which an3 Responses to the Journey to the Singularity 57
AGI could gradually learn the values of other agents as its understanding of the
world improved.
A value extrapolation process seems dif ficult to specify exactly, as it requires
building an AGI with programming that formally and rigorously de fines human
values. Even if it manages to avoid the first issue in Wallach ’s(2010 ) list
(Sect. 3.5.2 ), top-down value extrapolation may fall victim to other issues, such as
computational tractability. One interpretation of CEV would seem to requiremodeling not only the values of everyone on Earth, but also the evolution of thosevalues as the people in question interacted with each other, became more intelligentand more like their ideal selves, chose which of their values they wanted to pre-serve, etc. Even if the AGI could eventually obtain the enormous amount ofcomputing power required to run this model, its behavior would need to be safe
from the beginning, or it could end up doing vast damage to humanity before
understanding what it was doing wrong.
Goertzel and Pitt ’s(2012 ) hybrid approach, in which AGIs cooperate with
humans in order to discover the values humans wish to see implemented, seemsmore likely to avoid the issue of computational tractability. However, it will fail towork in a hard takeoff situation where AGIs take control before being taught thecorrect human values. Another issue with Coherent Blended Volition is thatschemes which require absolute consensus are unworkable with large groups, as
anyone whose situation would be worsened by a change of events could block the
consensus. A general issue with value extrapolation approaches is that there may beseveral valid ways of de fining a value extrapolation process, with no objective
grounds for choosing one rather than another.
Goertzel ( 2010a ) notes that in formal reasoning systems a set of initially
inconsistent beliefs which the system attempts to resolve might arrive at somethingvery different than the initial belief set, even if there existed a consistent belief setthat was closer to the original set. He suggests that something similar might happen
when attempting to make human values consistent, though whether this would be a
bad thing is unclear.
3.5.2.6 Approval-Directed Agents
Christiano ( 2014a ,b,c,2015 ) proposes “approval-directed agents ”. Instead of
having explicit goals which they would attempt to optimize for, approval-directed
agents would choose their next action using a procedure such as
Estimate the expected rating [an overseer] would give each action if he considered it at
length. Take the action with the highest expected rating. (Christiano 2014a ,b,c)
Christiano ( 2014a ,b,c) suggests that this approach would:
allow us to start with simple overseers and simple approval-directed agents,
rather than having to specify all of an AI ’s goals at once58 K. Sotala and R. Yampolskiy
avoiding getting locked into bad design decisions: “if an approval-directed AI
encounters an unforeseen situation, it will respond in the way that we mostapprove of ”
fail gracefully: flawed interpretations of the overseer ’s goals might be noticed
faster, as the AI wouldn ’t try to actively hide them unless the overseer delib-
erately approved of such actions
3.5.2.7 “Top-Down Safe AGI ”Proposals —Our View
Of the various top-down proposals, value learning and proposals and
approval-directed agents seem to be the only ones which properly take into accountthe complexity of value thesis (see the pervious chapter), as they attempt tospecifically take into account considerations such as “Would humanity have
endorsed this course of action if it had known the consequences? ”Although there
are many open questions concerning the computational tractability as well as the
general feasibility of the value learning approaches, they together with theapproval-directed ones seem like some of the most important ones to work on.
3.5.3 Bottom-up and Hybrid Safe AGI
Wallach ( 2010 )d efines bottom-up approaches as ones that favor evolving or
simulating the mechanisms that give rise to our moral decisions. Another alternativeis hybrid approaches, combining parts of both top-down and bottom-up approaches.
A problem with pure bottom-up approaches is that techniques such as arti ficial
evolution or merely rewarding the AGI for the right behavior may cause it tobehave correctly in tests, but would not guarantee that it would behave safely in anyother situation. Even if an AGI seems to have adopted human values, the actual
processes driving its behavior may be very different from the processes that would
be driving the actions of a human who behaved similarly. It might then behave very
unexpectedly in situations which are different enough (Yudkowsky 2008 ,2011 ).
Armstrong et al. ( 2012 ) discuss various problems related to such approaches and
offer examples of concepts which seem straightforward to humans but are not assimple as they may seem on the surface. One of their examples relates to theconcept of time
If the [AGI] had the reasonable-sounding moral premise that “painlessly killing a human
being, who is going to die in a micro-second anyway, in order to gain some other good, isnot a crime, ”we would not want it to be able to rede fine millennia as seconds (Armstrong
et al. 2012 ).
All humans have an intuitive understanding of time and no experience with
beings who could arbitrarily rede fine their own clocks and might not share the same3 Responses to the Journey to the Singularity 59
concept of time. Such differences in the conceptual grounding of an AGI ’s values
and of human values might not become apparent until too late.
3.5.3.1 Evolutionary Invariants
Human morality is to a large extent shaped by evolution, and evolutionary
approaches attempt to replicate this process with AGIs.
Hall ( 2007a ,2011 ) argues that self-improving AGIs are likely to exist in com-
petition with many other kinds of self-improving AGIs. Properties that give AGIs asignificant disadvantage might then be strongly selected against and disappear. We
could attempt to identify evolutionary invariants , or evolutionarily stable strategies,
which would both survive in a competitive environment and cause an AGI to treat
humans well.
Hall ( 2011 ) lists self-interest, long planning horizons, knowledge, an under-
standing of evolutionary ethics, and guaranteed honesty as invariants that are likelyto make an AGI more moral as well as to persist even under intense competition. Hesuggests that, although self-interest may sound like a bad thing in an AGI,non-self-interested creatures are dif ficult to punish. Thus, enlightened self-interest
might be a good thing for an AGI to possess, as it will provide an outside com-
munity with both a stick and a carrot to control it with.
Similarly, Waser ( 2008 ) suggests that minds which are intelligent enough will,
due to game-theoretical and other considerations, become altruistic and cooperative.Waser ( 2011 ) proposed the principle of Rational Universal Benevolence (RUB), the
idea that the moral course of action is cooperation while letting everyone freelypursue their own goals. Waser proposes that, instead of making human-friendlybehavior an AGI ’s only goal, the AGI would be allowed to have and form its own
goals. However, its goals and actions would be subject to the constraint that they
should respect the principle of RUB and not force others into a life those others
would disagree with.
Kornai ( 2014 ) cites Gewirth ’s (Gewirth 1978 ) work on the principle of generic
consistency, which holds that respecting others ’rights to freedom and well-being is
a logically necessary conclusion for any rational agents. Kornai suggests that if theprinciple is correct, then AGIs would respect humanity ’s rights to freedom and
well-being, and that AGIs which failed to respect the principle would be outcom-peted by ones which did.
Although these approaches expect AGI either to evolve altruism or to find it the
most rational approach, true altruism or even pure tit-for-tat (Axelrod 1987 ) isn’t
actually the best strategy in evolutionary terms. Rather, a better strategy isMachiavellian tit-for-tat cultivating an appearance of altruism and cooperation
when it bene fits oneself, and acting sel fishly when one can get away with it.
Humans seem strongly disposed toward such behavior (Haidt 2006 ).
Another problem is that tit-for-tat as a good strategy assumes that both players
are equally powerful and both have the same options at their disposal. If the AGI
became far more powerful than most humans, it might no longer be in its interests60 K. Sotala and R. Yampolskiy
to treat humans favorably (Fox and Shulman 2010 ). This hypothesis can be tested
by looking at human behavior if exploiting the weak is an evolutionarily usefulstrategy, then humans should engage in it when given the opportunity. Humanswho feel powerful do indeed devalue the worth of the less powerful and view them
as objects of manipulation (Kipnis 1972 ). They also tend to ignore social norms
(van Kleef et al. 2011 ) and to experience less distress and compassion toward the
suffering of others (van-Kleef 2008 ).
Thus, even if an AGI cooperated with other similarly powerful AGIs, the group
of AGIs might still decide to collectively exploit humans. Similarly, even thoughthere might be pressure for AGIs to make themselves more transparent and easilyinspected by others, this only persists for as long as the AGI needs others more thanthe others need the AGI.
3.5.3.2 Evolved Morality
Another proposal is to create AGIs via algorithmic evolution, selecting in each
generation the AGIs which are not only the most intelligent, but also the mostmoral. These ideas are discussed to some extent by Wallach and Allen ( 2009 ).
3.5.3.3 Reinforcement Learning
In machine learning, reinforcement learning (not to be confused with inverse
reinforcement learning, discussed above in the value learning section) is a model inwhich an agent takes various actions and is differentially rewarded for the actions,after which it learns to perform the actions with the greatest expected reward. Inpsychology, it refers to agents being rewarded for certain actions and thus learning
behaviors which they have a hard time breaking, even if some other kind of
behavior is more bene ficial for them later on.
Applied to AGI, the machine learning sense of reinforcement involves teaching
an AGI to behave in a safe manner by rewarding it for ethical choices, and letting itlearn for itself the underlying rules of what constitutes ethical behavior. In an earlyexample of this kind of proposal, McCulloch ( 1956 ) described an “ethical machine ”
that could infer the rules of chess by playing the game, and suggested that it couldalso learn ethical behavior this way.
Hibbard ( 2001 ,2005a ) suggested using reinforcement learning to give AGIs
positive emotions toward humans. Early AGIs would be taught to recognize hap-piness and unhappiness in humans, and the results of this learning would behard-wired as emotional values in more advanced AGIs. This training processwould then be continued —for example, by letting the AGIs predict how human
happiness would be affected by various actions and using those predictions asemotional values.
A reinforcement learner is supplied with a reward signal, and it always has the
explicit goal of maximizing the sum of this reward, any way it can. In order for this3 Responses to the Journey to the Singularity 61
goal to align with human values, humans must engineer the environment so that the
reinforcement learner is prevented from receiving rewards if human goals are notfulfilled (Dewey 2011 ). A reinforcement-learning AGI only remains safe for as long
as humans are capable of enforcing this limitation, and will become unpredictable if
it becomes capable of overcoming it. Hibbard (Hibbard 2012d ) has retracted his
earlier reinforcement learning –based proposals, as they would allow the AGI to
maximize its reinforcement by modifying humans to be maximally happy, evenagainst their will (Dewey 2011 ).
3.5.3.4 Human-like AGI
Another kind of proposal involves building AGIs that can learn human values by
virtue of being similar to humans.
Connectionist systems, based on arti ficial neural nets, are capable of learning
patterns from data without being told what the patterns are. As some connectionistmodels have learned to classify problems in a manner similar to humans (McLeodet al. 1998 ; Plaut 2003 ; Thomas and McClelland 2008 ), it has been proposed that
connectionist AGI might learn moral principles that are too complex for humans tospecify as explicit rules.
9This idea has been explored by Guarini ( 2006 ) and
Wallach and Allen ( 2009 ).
Sotala ( 2015 ) brie fly surveys research into human concept learning, and argues
that current research suggests that human concepts might be generated using arelatively limited set of mechanisms. He proposes a research program to map thesemechanisms in more detail and to then build AGIs which would learn similarconcepts as humans, including moral concepts.
One speci fic proposal that draws upon connectionism is to make AGIs act
according to virtue ethics (Wallach and Allen 2009 ). These authors note that pre-
vious writers discussing virtuous behavior have emphasized the importance of
learning moral virtues through habit and practice. As it is impossible to exhaus-tively de fine a virtue, virtue ethics has traditionally required each individual to learn
the right behaviors through “bottom-up processes of discovery or learning ”. Models
that mimicked the human learning process well enough could then potentially learnthe same behaviors as humans do.
Another kind of human-inspired proposal is the suggestion that something like
Stan Franklin ’s LIDA architecture (Franklin and Patterson 2006 ; Ramamurthy et al.
2006 ; Snaider et al. 2011 ), or some other approach based on Bernard Baars ’s(2002 ,
2005 )“global workspace ”theory, might enable moral reasoning. In the LIDA
architecture, incoming information is monitored by specialized attention codelets ,
each of which searches the input for speci fic features. In particular, moral codelets
might look for morally relevant factors and ally themselves with other codelets to
9But it should be noted that there are also promising nonconnectionist approaches for modeling
human classi fication behavior —see, e.g., Tenenbaum et al. ( 2006 ,2011 ).62 K. Sotala and R. Yampolskiy
promote their concerns to the level of conscious attention. Ultimately, some
coalitions will win enough support to accomplish a speci fic kind of decision
(Wallach and Allen 2009 ).
Goertzel and Pitt ( 2012 ) consider human memory systems (episodic, sensori-
motor, declarative, procedural, attentional, and intentional) and ways by which
human morality might be formed via their interaction. They brie fly discuss the way
that the OpenCog AGI system (Hart and Goertzel 2008 ; Goertzel 2012a ) imple-
ments similar memory systems and how those systems could enable it to learnmorality. Similarly, Goertzel and Bugaj ( 2008 ) discuss the stages of moral devel-
opment in humans and suggest ways by which they could be replicated in AGIsystems, using the speci fic example of Novamente, a proprietary version of
OpenCog.
Waser ( 2009 ) also proposes building an AGI by studying the results of evolution
and creating an implementation as close to the human model as possible. Shanahan(2015 , Chap. 5) similarly suggests building a human-like AGI that might then
acquire human values and abilities, such as empathy.
Human-inspired AGI architectures would intuitively seem the most capable of
learning human values, though what would be human-like enough remains an openquestion. It is possible that even a relatively minor variation from the norm couldcause an AGI to adopt values that most humans would consider undesirable.
Getting the details right might require an extensive understanding of the human
brain.
There are also humans who have drastically different ethics than the vast
majority of humanity and argue for the desirability of outcomes such as theextinction of mankind (Benatar 2006 ; Dietrich 2007 ). There remains the possibility
that even AGIs which reasoned about ethics in a completely human-like mannerwould reach such conclusions.
Humans also have a long track record of abusing power, or of undergoing major
behavioral changes due to relatively small injuries —the“safe Homo sapiens ”
problem also remains unsolved. On the other hand, it seems plausible thathuman-like AGIs could be explicitly engineered to avoid such problems.
The easier that an AGI is to modify, the more powerful it might become (Sotala
2012 ), and very close recreations of the human brain may turn out to be dif ficult to
extensively modify and upgrade. Human-inspired safe AGIs might then end upoutcompeted by AGIs which were easier to modify, and which might or might notbe safe.
Even if human-inspired architectures could be easily modi fied, the messiness of
human cognitive architecture means that it might be dif ficult to ensure that their
values remain bene ficial during modi fication. For instance, in LIDA-like architec-
tures, bene ficial behavior will depend on the correct coalitions of morality codelets
winning each time. If the system undergoes drastic changes, this can be verydifficult if not impossible to ensure.
Most AGI builders will attempt to create a mind that displays considerable
advantages over ordinary humans. Some such advantages might be best achieved
by employing a very nonhuman architecture (Moravec 1992 ), so there will be3 Responses to the Journey to the Singularity 63
reasons to build AGIs that are not as human-like. These could also end up out-
competing the human-like AGIs.
3.5.3.5 “Bottom-up and Hybrid Safe AGI ”Proposals —Our View
We are generally very skeptical about pure bottom-up methods, as they only allow a
very crude degree of control over an AGI ’s goals, giving it a motivational system
which can only be relied on to align with human values in the very speci fic
environments that the AGI has been tested in. Evolutionary invariants seem inca-pable of preserving complexity of value, and they might not even be capable ofpreserving human survival. Reinforcement learning, on the other hand, depends on
the AGI being incapable of modifying the environment against the will of its human
controllers. Therefore, none of these three approaches seems workable.
Human-like AGI might have some promise, depending on exactly how fragile
human values were. If the AGI reasoning process could be made suf ficiently
human-like, there is the possibility that the AGI could remain relatively safe,though less safe than a well-executed value extrapolation –based AGI.
3.5.4 AGI Nanny
A more general proposal, which could be achieved by either top-down, bottom-up,
or hybrid methods, is the proposal of an “AGI Nanny ”(Goertzel 2012b ; Goertzel
and Pitt 2012 ). This is an AGI that is somewhat more intelligent than humans and is
designed to monitor Earth for various dangers, including more advanced AGI.
The AGI Nanny would be connected to powerful surveillance systems and
would control a massive contingent of robots. It would help abolish problems such
as disease, involuntary death, and poverty, while preventing the development oftechnologies that could threaten humanity. The AGI Nanny would be designed notto rule humanity on a permanent basis, but to give us some breathing room and timeto design more advanced AGIs. After some predetermined amount of time, it wouldcede control of the world to a more intelligent AGI.
Goertzel and Pitt ( 2012 ) brie fly discuss some of the problems inherent in the
AGI Nanny proposal. The AGI would have to come to power in an ethical way, and
might behave unpredictably despite our best efforts. It might also be easier to create
a dramatically self-improving AGI than to create a more constrained AGI Nanny.
3.5.4.1 “AGI Nanny ”Proposals —Our View
Upon asserting control, the AGI Nanny would need to have precisely speci fied
goals, so that it would stop other AGIs from taking control but would also not harm
human interests. It is not clear to what extent de fining these goals would be easier64 K. Sotala and R. Yampolskiy
than de fining the goals of a more free-acting AGI (Muehlhauser and Salamon
2012 ). Overall, the AGI Nanny seems to have promise, but it ’s unclear whether it
can be made to work.
3.5.5 Motivational Scaffolding
Motivational scaffolding (Bostrom 2014 , Chap. 9) is a proposal in which an AGI is
originally given a temporary, relatively simple goal system that is easy to specifybut not intended as the final system. As the AGI ’s capabilities and knowledge
mature, the AGI ’s motivational system is replaced with a more sophisticated one
that then guides it on its path towards full superintelligence.
Bostrom ( 2014 ) considers this approach to have some potential, but also notes
several problems with it. The AGI might resist having its initial goals replaced, theAGI might grow into a superintelligence with the old goals in place, and it isunclear whether installing the final goal system in the fully-developed AGI would
be particularly easy.
3.5.6 Formal Veri fication
Formal veri fication methods prove speci fic properties about various algorithms. If
the complexity and fragility of value theses hold, it follows that safe AGI requires theability to verify that proposed changes to the AGI will not alter its goals or values. Ifeven a mild drift from an AGI ’s original goals might lead to catastrophic conse-
quences, then utmost care should be given to ensuring that the goals will not change
inadvertently. This is particularly the case if there are no external feedback mech-
anisms which would correct the drift. Before modifying itself, an AGI could attemptto formally prove that the changes would not alter its existing goals, and wouldtherefore keep them intact even during extended self-modi fication (Yudkowsky
2008 ). Such proofs could be required before self-modi fication was allowed to occur,
and the system could also be required to prove that this verify-before-modi fication
property itself would always be preserved during self-modi fication.
Spears ( 2006 ) combines machine learning and formal veri fication methods to
ensure that AIs remain within the bounds of prespeci fied constraints after having
learned new behaviors. She attempts to identify “safe”machine learning operators,
which are guaranteed to preserve the system ’s constraints.
One AGI system built entirely around the concept of formal veri fication is the
Gödel machine (Schmidhuber 2009 ; Steunebrink and Schmidhuber 2011 ). It con-
sists of a solver , which attempts to achieve the goals set for the machine, and a
searcher , which has access to a set of axioms that completely describe the machine.
The searcher may completely rewrite any part of the machine, provided that it can
produce a formal proof showing that such a rewrite will further the system ’s goals.3 Responses to the Journey to the Singularity 65
Goertzel ( 2010b ) proposes GOLEM (Goal-Oriented LEarning
Meta-Architecture), a meta-architecture that can be wrapped around a variety ofdifferent AGI systems. GOLEM will only implement changes that are predicted tobe more effective at achieving the original goal of the system. Goertzel argues that
GOLEM is likely to be both self-improving and steadfast: either it pursues the same
goals it had at the start, or it stops acting altogether.
Unfortunately, formalizing the AGI ’s goals in a manner that will allow formal
verification methods to be used is a challenging task. Within cryptography, many
communications protocols have been proven secure, only for successful attacks tobe later developed against their various implementations. While the formal proofswere correct, they contained assumptions which did not accurately capture the waythe protocols worked in practice (Degabriele et al. 2011 ). Proven theorems are only
as good as their assumptions, so formal veri fication requires good models of the
AGI hardware and software.
3.5.6.1 “Formal Veri fication ”Proposals —Our View
Compared to the relatively simple domain of cryptographic security, verifying
things such as “Does this kind of a change to the AGI ’s code preserve its goal of
respecting human values? ”seems like a much more open-ended and dif ficult task,
one which might even prove impossible. Regardless, it is the only way of achievinghigh con fidence that a system is safe, so it should at least be attempted.
3.5.7 Motivational Weaknesses
Finally, there is a category of internal constraints that, while not making an AGI ’s
values safer, make it easier to control AGI via external constraints.
3.5.7.1 High Discount Rates
AGI systems could be given a high discount rate, making them value short-term
goals and gains far more than long-term goals and gains (Shulman 2010a ; Arm-
strong et al. 2012 ). This would inhibit the AGI ’s long-term planning, making it
more predictable. However, an AGI can also reach long-term goals through a seriesof short-term goals (Armstrong et al. 2012 ). Another possible problem is that it
could cause the AGI to pursue goals which were harmful for humanity ’s long-term
future. Humanity may arguably be seen as already behaving in ways that imply anexcessively high discount rate, such as by consuming finite natural resources
without properly taking into account the well-being of future generations.66 K. Sotala and R. Yampolskiy
3.5.7.2 Easily Satiable Goals
Shulman ( 2010a ) proposes designing AGIs in such a way that their goals are easy to
satisfy. For example, an AGI could receive a near-maximum reward for simply
continuing to receive an external reward signal, which could be cut if humans
suspected misbehavior. The AGI would then prefer to cooperate with humans ratherthan trying to attack them, even if it was very sure of its chances of success.
10
Likewise, if the AGI could receive a maximal reward with a relatively smallfraction of humanity ’s available resources, it would have little to gain from seizing
more resources.
Bostrom ( 2014 ) mentions a variation of this idea, where the AGI would receive a
continuous stream of cryptographic reward tokens from a human controller. Its
utility function would be de fined so that it would receive “99% of its maximum
utility from the first reward token; 99% of its remaining utility potential from the
second reward token; and so on ”. Bostrom also discusses what might go wrong
with this scheme: possibly the AGI might not trust the human ’s promises to con-
tinue sending the rewards, or it might suspect that the human was in danger of beingincapacitated unless the AGI took action.
An extreme form of this kind of a deal is Orseau and Ring ’s(2011 )“Simpleton
Gambit, ”in which an AGI is promised everything that it would ever want, on the
condition that it turn itself into a harmless simpleton. Orseau and Ring consider
several hypothetical AGI designs, many of which seem likely to accept the gambit,given certain assumptions.
In a related paper, Ring and Orseau ( 2011 ) consider the consequences of an AGI
being able to modify itself to receive the maximum possible reward. They show thatcertain types of AGIs will then come to only care about their own survival.Hypothetically, humans could promise not to threaten such AGIs in exchange forthem agreeing to be subject to AI-boxing procedures. For this to work, the system
would have to believe that humans will take care of its survival against external
threats better than it could itself. Hibbard ( 2012a ,c) discusses the kinds of AGIs
that would avoid the behaviors described by Ring and Orseau (Ring and Orseau2011 ; Orseau and Ring 2011 ).
3.5.7.3 Calculated Indifference
Another proposal is to make an AGI indifferent to a speci fic event. For instance, the
AGI could be made indifferent to the detonation of explosives attached to its
10On the other hand, this might incentivize the AGI to deceive its controllers into believing it was
behaving properly, and also to actively hide any information which it even suspected might beinterpreted as misbehavior.3 Responses to the Journey to the Singularity 67
hardware, which might enable humans to have better control over it (Armstrong
2010 ; Armstrong et al. 2012 ).
3.5.7.4 Programmed Restrictions
Goertzel and Pitt ( 2012 ) suggest we ought to ensure that an AGI does not
self-improve too fast, because AGIs will be harder to control as they become moreand more cognitively superior to humans. To limit the rate of self-improvement inAGIs, perhaps AGIs could be programmed to extensively consult humans and otherAGI systems while improving themselves, in order to ensure that no unwantedmodifications would be implemented.
Omohundro ( 2012 ) discusses a number of programmed restrictions in the form
of constraints on what the AGI is allowed to do, with formal proofs being used toensure that an AGI will not violate its safety constraints. Such limited AGI systemswould be used to design more sophisticated AGIs.
Programmed restrictions are problematic, as the AGI might treat these merely as
problems to solve in the process of meeting its goals, and attempt to overcome them(Omohundro 2008 ). Making an AGI not want to quickly self-improve might not
solve the problem by itself. If the AGI ends up with a second-order desire to rid
itself of such a disinclination, the stronger desire will eventually prevail (Suber
2002 ). Even if the AGI wanted to maintain its disinclination toward rapid
self-improvement, it might still try to circumvent the goal in some other way, suchas by creating a copy of itself which did not have that disinclination (Omohundro2008 ). Regardless, such limits could help control less sophisticated AGIs.
3.5.7.5 Legal Machine Language
Weng et al. ( 2008 ,2009 ) propose a “legal machine language ”which could be used
to formally specify actions which the AGI is allowed or disallowed to do.Governments could then enact laws written in legal machine language, allowingthem to be programmed into robots.
3.5.7.6 “Motivational Weaknesses ”Proposals —Our View
Overall, motivational weaknesses seem comparable to external constraints possibly
useful and worth studying, but not something to rely on exclusively, particularly inthe case of superintelligent AGIs. As with external constraints and Oracle AIs, anarms race situation might provide a considerable incentive to loosen or remove suchconstraints.68 K. Sotala and R. Yampolskiy
3.6 Conclusion
We began this paper by noting that a number of researchers are predicting AGI in
the next twenty to one hundred years. One must not put excess trust in this time
frame as Armstrong and Sotala ( 2012 ) show, experts have been terrible at pre-
dicting AGI. Muehlhauser and Salamon ( 2012 ) consider a number of methods other
than expert opinion that could be used for predicting AGI, but find that they too
provide suggestive evidence at best.
It would be a mistake, however, to leap from “AGI is very hard to predict ”to
“AGI must be very far away ”. Our brains are known to think about uncertain,
abstract ideas like AGI in “far mode, ”which also makes it feel like AGI must be
temporally distant (Trope and Liberman 2010 ), but something being uncertain is
not strong evidence that it is far away . When we are highly ignorant about
something, we should widen our error bars in both directions. Thus, we shouldn ’t
be highly con fident that AGI will arrive this century, and we shouldn ’t be highly
confident that it won ’t.
Next, we explained why AGIs may be an existential risk. A trend toward
automatization would give AGIs increased in fluence in society, and there might be
a discontinuity in which they gained power rapidly. This could be a disaster forhumanity if AGIs don ’t share our values, and in fact it looks dif ficult to cause them
to share our values because human values are complex and fragile, and therefore
problematic to specify.
The recommendations given for dealing with the problem can be divided into
proposals for societal action, external constraints, and internal constraints(Table 3.1). Many proposals seem to suffer from serious problems, or seem to be of
limited effectiveness. Others seem to have enough promise to be worth exploring.We will conclude by reviewing the proposals which we feel are worthy of furtherstudy.
As a brief summary of our views, in the medium term, we think that the pro-
posals of AGI con finement (Sect. 3.4.1 ), Oracle AI (Sect. 3.5.1 ), and motivational
weaknesses (5.6) would have promise in helping create safer AGIs. These proposalsshare in common the fact that, although they could help a cautious team ofresearchers create an AGI, they are not solutions to the problem of AGI risk, as theydo not prevent others from creating unsafe AGIs, nor are they suf ficient in guar-
anteeing the safety of suf ficiently intelligent AGIs. Regulation (Sect. 3.3.3 ) as well
as human capability enhancement (Sect. 3.3.4 ) could also help to somewhat reduce
AGI risk. In the long run, we will need the ability to guarantee the safety of freely
acting AGIs. For this goal, value learning (Sect. 3.5.2.5 ) would seem like the most
reliable approach if it could be made to work, with human-like architecture(Sect. 3.5.3.4 ) a possible alternative which seems less reliable but possibly easier to
build. Formal veri fication (Sect. 3.5.6 ) seems like a very important tool in helping
to ensure the safety of our AGIs, regardless of the exact approach that we choose.
Of the societal proposals, we are supportive of the calls to regulate AGI
development, but we admit there are many practical hurdles which might make this3 Responses to the Journey to the Singularity 69
infeasible. The economic and military potential of AGI, and the dif ficulty of
enforcing compliance to regulations and arms treaties could lead to unstoppablearms races.
Wefind ourselves in general agreement with the authors who advocate funding
additional research into safe AGI as the primary solution. Such research will alsoTable 3.1 Responses to catastrophic AGI risk
Societal proposals
Do nothing AGI is distant
Little risk, no action neededLet them kill us
Integrate to society Legal and economic controls
Foster positive values
Regulate research Review boards
Encourage safety researchDifferential technological progressInternational mass surveillance
Enhance human capabilitiesRelinquish technology Outlaw AGI
Restrict hardware
AGI design proposalsExternal constraints Internal constraints
AGI
confinementSafe questions Oracle AI
Virtual worlds Top-down approaches Three lawsResetting the
AGICategorical imperative
Checks and
balancesPrinciple of Voluntary
Joyous Growth
AGI
enforcementUtilitarianism
Value learningApproval-directed agents
Bottom-up and hybrid
approachesEvolutionary invariants
Evolved moralityReinforcement learningHuman-like AGI
AGI NannyFormal veri fication
Motivational scaffoldingMotivational weaknesses High discount rates
Easily satiable goalsCalculated indifferenceProgrammed restrictionsLegal Machine Language70 K. Sotala and R. Yampolskiy
help establish the kinds of constraints which would make it possible to successfully
carry out integration proposals.
Uploading approaches, in which human minds are made to run on computers
and then augmented, might buy us some time to develop safe AGI. However, it is
unclear whether they can be developed before AGI, and large-scale uploading could
create strong evolutionary trends which seem dangerous in and of themselves. AsAGIs seem likely to eventually outpace uploads, uploading by itself is probably nota sufficient solution. What uploading could do is to reduce the initial advantages
that AGIs enjoy over (partially uploaded) humanity, so that other responses to AGIrisk can be deployed more effectively.
External constraints are likely to be useful in controlling AGI systems of limited
intelligence, and could possibly help us develop more intelligent AGIs while
maintaining their safety. If inexpensive external constraints were readily available,
this could encourage even research teams skeptical about safety issues to implementthem. Yet it does not seem safe to rely on these constraints once we are dealing witha superhuman intelligence, and we cannot trust everyone to be responsible enoughto contain their AGI systems, especially given the economic pressures to “release ”
AGIs. For such an approach to be a solution for AGI risk in general, it would haveto be adopted by all successful AGI projects, at least until safe AGIs were devel-oped. Much the same is true of attempting to design Oracle AIs. In the short term,
such efforts may be reinforced by research into motivational weaknesses, internal
constraints that make AGIs easier to control via external means.
In the long term, the internal constraints that show the most promise are value
extrapolation approaches and human-like architectures. Value extrapolationattempts to learn human values and interpret them as we would wish them to beinterpreted. These approaches have the advantage of potentially maximizing thepreservation of human values, and the disadvantage that such approaches mayprove intractable, or impossible to properly formalize. Human-like architectures
seem easier to construct, as we can simply copy mechanisms that are used within
the human brain, but it seems hard to build such an exact match as to reliablyreplicate human values. Slavish reproductions of the human psyche also seem likelyto be outcompeted by less human, more ef ficient architectures.
Both approaches would bene fit from better formal veri fication methods, so that
AGIs which were editing and improving themselves could verify that the modi fi-
cations did not threaten to remove the AGIs ’motivation to follow their original
goals. Studies which aim to uncover the roots of human morals and preferences also
seem like candidates for research that would bene fit the development of safe AGI,
as do studies into computational models of ethical reasoning.
We reiterate that when we talk about “human values ”, we are not making the
claim that human values would be static, nor that current human values would be
ideal. Nor do we wish to imply that the values of other sentient beings would beunimportant. Rather, we are seeking to guarantee the implementation of some verybasic values, such as the avoidance of unnecessary suffering, the preservation ofhumanity, and the prohibition of forced brain reprogramming. We believe the vast
majority of humans would agree with these values and be sad to see them lost.3 Responses to the Journey to the Singularity 71
Acknowledgementss Special thanks to Luke Muehlhauser for extensive assistance throughout
the writing process. We would also like to thank Abram Demski, Alexei Turchin, Alexey Potapov,Anders Sandberg, Andras Kornai, Anthony Berglas, Aron Vallinder, Ben Goertzel, Ben Noble,
Ben Sterrett, Brian Rabkin, Bill Hibbard, Carl Shulman, Dana Scott, Daniel Dewey, David Pearce,
Evelyn Mitchell, Evgenij Thorstensen, Frank White, gwern branwen, Harri Valpola, Jaan Tallinn,Jacob Steinhardt, James Babcock, James Miller, Joshua Fox, Louie Helm, Mark Gubrud, MarkWaser, Michael Anissimov, Michael Vassar, Miles Brundage, Moshe Looks, Randal Koene,
Robin Hanson, Risto Saarelma, Steve Omohundro, Suzanne Lidström, Steven Kaas, Stuart
Armstrong, Tim Freeman, Ted Goertzel, Toni Heinonen, Tony Barrett, Vincent Müller, VladimirNesov, Wei Dai, and two anonymous reviewers as well as several users of lesswrong.com for theirhelpful comments.
References
Agliata, Daniel, and Stacey Tantleff-Dunn. 2004. “The Impact of Media Exposure on Males ’Body
Image ”. Journal of Social and Clinical Psychology 23(1): 7 –22. doi: 10.1521/jscp.23.1.7.26988 .
Alexander, Scott. 2015. “AI researchers on AI risk ”. Slate Star Codex [blog]. http://slatestarcodex.
com/2015/05/22/ai-researchers-on-ai-risk/ .
Anderson, Monica. 2010. “Problem Solved: Unfriendly AI ”. H + Magazine, December 15. http://
hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai/ .
Anderson, Michael, Susan Leigh Anderson, and Chris Armen, eds. 2005a. Machine Ethics: Papers
from the 2005 AAAI Fall Symposium. Technical Report, FS-05-06. AAAI Press, Menlo Park,
CA. http://www.aaai.org/Library/Symposia/Fall/fs05-06 .
Anderson, Michael, Susan Leigh Anderson, and Chris Armen. 2005b. “MedEthEx: Toward a
Medical Ethics Advisor. ”In Caring Machines: AI in Eldercare: Papers from the 2005 AAAI
Fall Symposium, edited by Timothy Bickmore, 9 –16. Technical Report, FS-05-02. AAAI
Press, Menlo Park, CA. http://aaaipress.org/Papers/Symposia/Fall/2005/FS-05-02/FS05-02-
002.pdf .
Anderson, Michael, Susan Leigh Anderson, and Chris Armen. 2006. “An Approach to Computing
Ethics. ”IEEE Intelligent Systems 21(4): 56 –63. doi: 10.1109/MIS.2006.64 .
Anderson, Susan Leigh. 2011. “The Unacceptability of Asimov ’s Three Laws of Robotics as a
Basis for Machine Ethics ”. In Anderson and Anderson 2011, 285 –296.
Annas, George J., Lori B. Andrews, and Rosario M. Isasi. 2002. “Protecting the Endangered
Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations ”.
American Journal of Law & Medicine 28(2 –3): 151 –178.
Anthony, Dick, and Thomas Robbins. 2004. “Conversion and ‘Brainwashing ’in New Religious
Movements ”. In The Oxford Handbook of New Religious Movements, 1st ed., edited by
James R. Lewis, 243 –297. New York: Oxford University Press. doi: 10.1093/oxfordhb/
9780195369649.003 . 0012.
Armstrong, Stuart. 2007. “Chaining God: A Qualitative Approach to AI, Trust and Moral
Systems ”. Unpublished manuscript, October 20. Accessed December 31, 2012. http://www.
neweuropeancentury.org/GodAI.pdf .
Armstrong, Stuart. 2010. Utility Indifference. Technical Report, 2010-1. Oxford: Future of
Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/reports/2010-1.pdf .
Armstrong, Stuart, Anders Sandberg, and Nick Bostrom. 2012. “Thinking Inside the Box:
Controlling and Using an Oracle AI ”. Minds and Machines 22(4): 299 –324. doi: 10.1007/
s11023-012-9282-2 .
Armstrong, Stuart, and Kaj Sotala. 2012. “How We ’re Predicting AI —or Failing To ”. In Beyond
AI: Arti ficial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and
Radek Schuster, 52 –75. Pilsen: University of West Bohemia. Accessed February 2, 2013.
http://www.kky.zcu.cz/en/publications/1/JanRomportl\_2012\_BeyondAIArti ficial.pdf .72 K. Sotala and R. Yampolskiy
Asimov, Isaac. 1942. “Runaround ”. Astounding Science-Fiction, March, 94 –103.
Axelrod, Robert. 1987. “The Evolution of Strategies in the Iterated Prisoner ’s Dilemma ”.I n
Genetic Algorithms and Simulated Annealing, edited by Lawrence Davis, 32 –41. Los Altos,
CA: Morgan Kaufmann.
Baars, Bernard J. 2002. “The Conscious Access Hypothesis: Origins and Recent Evidence ”.
Trends in Cognitive Sciences 6(1): 47 –52. doi: 10.1016/S1364-6613(00)01819-2 .
Baars, Bernard J. 2005. “Global Workspace Theory of Consciousness: Toward a Cognitive
Neuroscience of Human Experience ”. In The Boundaries of Consciousness: Neurobiology and
Neuropathology, edited by Steven Laureys, 45 –53. Progress in Brain Research 150. Boston:
Elsevier.
Beavers, Anthony F. 2009. “Between Angels and Animals: The Question of Robot Ethics; or, Is
Kantian Moral Agency Desirable? ”Paper presented at the Annual Meeting of the Association
for Practical and Professional Ethics, Cincinnati, OH, March.
Beavers, Anthony F. 2012. “Moral Machines and the Threat of Ethical Nihilism ”. In Lin, Patrick,
Keith Abney, and George A. Bekey, eds. Robot Ethics: The Ethical and Social Implications of
Robotics. Intelligent Robotics and Autonomous Agents. Cambridge, MA: MIT Press, 333 –344.
Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. New
York: Oxford University Press.
Berglas, Anthony. 2012. “Artificial Intelligence Will Kill Our Grandchildren (Singularity) ”.
Unpublished manuscript, draft 9, January. Accessed December 31, 2012. http://berglas.org/
Articles/AIKillGrandchildren/AIKillGrandchildren.html .
Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related
Hazards. ”Journal of Evolution and Technology 9. http://www.jetpress.org/volume9/risks.html .
Bostrom, Nick. 2004. “The Future of Human Evolution ”. In Two Hundred Years After Kant, Fifty
Years After Turing, edited by Charles Tandy, 339 –371. Vol. 2. Death and Anti-Death. Palo
Alto, CA: Ria University Press.
Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in
Advanced Arti ficial Agents ”.I n“Theory and Philosophy of AI, ”edited by Vincent C. Müller.
Special issue, Minds and Machines 22(2): 71 –85. doi: 10.1007/s11023-012-9281-3 .
Bostrom, Nick. 2014. Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, Nick, and Eliezer Yudkowsky. 2013. “The Ethics of Arti ficial Intelligence ”.I n
Cambridge Handbook of Arti ficial Intelligence, edited by Keith Frankish and William Ramsey.
New York: Cambridge University Press.
Branwen, Gwern. 2012. “Slowing Moore ’s Law: Why You Might Want to and How You Would
Do It ”. gwern.net. December 11. Accessed December 31, 2012. http://www.gwern.net/Slowing
%20Moore ’s%20Law .
Brin, David. 1998. The Transparent Society: Will Technology Force Us to Choose Between
Privacy and Freedom? Reading, MA: Perseus Books.
Bringsjord, Selmer, and Alexander Bringsjord. 2012. “Belief in the Singularity is Fideistic ”.I n
Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity
Hypotheses: A Scienti fic and Philosophical Assessment. The Frontiers Collection. Berlin:
Springer.
Brooks, Rodney A. 2008. “I, Rodney Brooks, Am a Robot ”. IEEE Spectrum 45(6): 68 –71. doi: 10.
1109/MSPEC.2008.4531466 .
Brynjolfsson, Erik, and Andrew McAfee. 2011. Race Against The Machine: How the Digital
Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming
Employment and the Economy. Lexington, MA: Digital Frontier. Kindle edition.
Bryson, Joanna, and Phil Kime. 1998. “Just Another Artifact: Ethics and the Empirical Experience
of AI ”. Paper presented at the Fifteenth Internation Congress on Cybernetics, Namur, Belgium.
http://www.cs.bath.ac.uk/ ∼jjb/web/aiethics98.html .
Butler, Samuel [Cellarius, pseud.]. 1863. “Darwin Among the Machines ”. Christchurch Press,
June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html .
Cade, C. Maxwell. 1966. Other Worlds Than Ours. 1st ed. London: Museum.3 Responses to the Journey to the Singularity 73
Cattell, Rick, and Alice Parker. 2012. Challenges for Brain Emulation: Why is Building a Brain so
Difficult? Synaptic Link, February 5. http://synapticlink.org/Brain%20Emulation%
20Challenges.pdf .
Chalmers, David John. 2010. “The Singularity: A Philosophical Analysis ”. Journal of
Consciousness Studies 17 (9 –10): 7 –65.http://www.ingentaconnect.com/content/imp/jcs/
2010/00000017/f0020009/art00001 .
Christiano, Paul F. 2012. “‘Indirect Normativity ’Write-up ”. Ordinary Ideas (blog), April 21.
http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/ .
Christiano, Paul F. 2014a Approval-directed agents. December 1 https://medium.com/ai-control/
model-free-decisions-6e6609f5d99e .
Christiano, Paul F. 2014b. Approval-directed search. December 14 https://medium.com/
@paulfchristiano/approval-directed-search-63457096f9e4 .
Christiano, Paul F. 2014c. Approval-directed bootstrapping. December 20 https://medium.com/ai-
control/approval-directed-bootstrapping-5d49e886c14f .
Christiano, Paul F. 2015. Learn policies or goals? April 21 https://medium.com/ai-control/learn-
policies-or-goals-348add76b8eb .
Clark, Gregory. 2007. A Farewell to Alms: A Brief Economic History of the World. 1st ed.
Princeton, NJ: Princeton University Press.
Clarke, Roger. 1993. “Asimov ’s Laws of Robotics: Implications for Information Technology, Part
1”. Computer 26(12): 53 –61. doi: 10.1109/2.247652 .
Clarke, Roger. 1994. “Asimov ’s Laws of Robotics: Implications for Information Technology, Part
2”. Computer 27 (1): 57 –66. doi: 10.1109/2.248881 .
Daley, William. 2011. “Mitigating Potential Hazards to Humans from the Development of
Intelligent Machines ”. Synthese 2:44 –50.http://www.synesisjournal.com/vol2\_g/2011\_2\_44-
50\_Daley.pdf .
Davis, Ernest. 2012. “The Singularity and the State of the Art in Arti ficial Intelligence ”. Working
Paper, New York, May 9. Accessed July 22, 2013. http://www.cs.nyu.edu/ ∼davise/papers/
singularity.pdf .
Dayan, Peter. 2011. “Models of Value and Choice ”. In Neuroscience of Preference and Choice:
Cognitive and Neural Mechanisms, edited by Raymond J. Dolan and Tali Sharot, 33 –52.
Waltham, MA: Academic Press.
De Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning
Whether Humanity Should Build Godlike Massively Intelligent Machines.Palm Springs, CA:
ETC Publications.
Degabriele, Jean Paul, Kenny Paterson, and Gaven Watson. 2011. “Provable Security in the Real
World ”. IEEE Security & Privacy Magazine 9(3): 33 –41. doi: 10.1109/MSP.2010.200 .
Dennett, Daniel C. 1987. “Cognitive Wheels: The Frame Problem of AI ”. In Pylyshyn 1987, 41 –
64.
Dennett, Daniel C. 2012. “The Mystery of David Chalmers ”. Journal of Consciousness Studies 19
(1–2): 86 –95.http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00005 .
Deutsch, David. 2011. The Beginning of In finity: Explanations that Transform the World. 1st ed.
New York: Viking.
Dewey, Daniel. 2011. “Learning What to Value ”. In Schmidhuber, Jürgen, Kristinn R. Thórisson,
and Moshe Looks, eds. Arti ficial General Intelligence: 4th International Conference, AGI
2011, Mountain View, CA, USA, August 3 –6, 2011. Proceedings. Lecture Notes in Computer
Science 6830. Berlin: Springer, 309 –314.
Dietrich, Eric. 2007. “After The Humans Are Gone ”. Philosophy Now, May –June. http://
philosophynow.org/issues/61/After\_The\_Humans\_Are\_Gone .
Docherty, Bonnie, and Steve Goose. 2012. Losing Humanity: The Case Against Killer Robots.
Cambridge, MA: Human Rights Watch and the International Human Rights Clinic, November19.http://www.hrw.org/sites/default/ files/reports/arms1112ForUpload\_0\_0.pdf .
Douglas, Thomas. 2008. “Moral Enhancement ”. Journal of Applied Philosophy 25(3): 228 –245.
doi:10.1111/j.1468-5930.2008.00412.x .74 K. Sotala and R. Yampolskiy
Eckersley, Peter, and Anders Sandberg. 2013. Is Brain Emulation Dangerous? Journal of Arti ficial
General Intelligence 4.3: 170 –194.
Fox, Joshua, and Carl Shulman. 2010. “Superintelligence Does Not Imply Benevolence ”.I n
Mainzer, Klaus, ed. 2010. ECAP10: VIII European Conference on Computing and Philosophy.
Munich: Dr. Hut.
Frankfurt, Harry G. 1971. “Freedom of the Will and the Concept of a Person ”. Journal of
Philosophy 68 (1): 5 –20. doi: 10.2307/2024717 .
Franklin, Stan, and F. G. Patterson Jr. 2006. “The LIDA Architecture: Adding New Modes of
Learning to an Intelligent, Autonomous, Software Agent ”. In IDPT-2006 Proceedings.San
Diego, CA: Society for Design & Process Science. http://ccrg.cs.memphis.edu/assets/papers/
zo-1010-lida-060403.pdf .
Freeman, Tim. 2009. “Using Compassion and Respect to Motivate an Arti ficial Intelligence ”.
Unpublished manuscript, March 8. Accessed December 31, 2012. http://fungible.com/respect/
paper.html .
Friedman, Batya, and Peter H. Kahn. 1992. “Human Agency and Responsible Computing:
Implications for Computer System Design ”. Journal of Systems and Software 17 (1): 7 –14.
doi:10.1016/0164-1212(92)90075-U .
Future of Life Institute. 2015. Research Priorities for Robust and Bene ficial Arti ficial Intelligence:
an Open Letter. http://futureo flife.org/misc/open\_letter .
Gewirth, Alan. 1978. Reason and Morality. Chicago: University of Chicago Press.
Goertzel, Ben. 2004a. “Encouraging a Positive Transcension: Issues in Transhumanist Ethical
Philosophy ”. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/Positive
Transcension.htm .
Goertzel, Ben. 2004b. “Growth, Choice and Joy: Toward a Precise De finition of a Universal
Ethical Principle ”. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/Growth
ChoiceJoy.htm .
Goertzel, Ben. 2010a. “Coherent Aggregated Volition: A Method for Deriving Goal System Content
for Advanced, Bene ficial AGIs ”. The Multiverse According to Ben (blog), March 12. http://
multiverseaccordingtoben.blogspot.ca/2010/03/coherent-aggregated-volitiontoward.html .
Goertzel, Ben. 2010b. “GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal
Preservation and Radical Self-Improvement ”. Unpublished manuscript, May 2. Accessed
December 31, 2012. http://goertzel.org/GOLEM.pdf .
Goertzel, Ben. 2012a. “CogPrime: An Integrative Architecture for Embodied Arti ficial General
Intelligence ”. OpenCog Foundation. October 2. Accessed December 31, 2012. http://wiki.
opencog.org/w/CogPrime\_Overview .
Goertzel, Ben. 2012b. “Should Humanity Build a Global AI Nanny to Delay the Singularity Until
It’s Better Understood? ”Journal of Consciousness Studies 19(1 –2): 96 –111. http://
ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00006 .
Goertzel, Ben, and Stephan Vladimir Bugaj. 2008. “Stages of Ethical Development in Arti ficial
General Intelligence Systems ”. In Wang, Pei, Ben Goertzel, and Stan Franklin, eds. Arti ficial
General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in Arti ficial
Intelligence and Applications 171. Amsterdam: IOS, 448 –459.
Goertzel, Ben, and Joel Pitt. 2012. “Nine Ways to Bias Open-Source AGI Toward Friendliness ”.
Journal of Evolution and Technology 22(1): 116 –131. http://jetpress.org/v22/goertzel-pitt.htm .
Gomes, Lee. 2015. Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning
and Make Machines Smarter. IEEE Spectrum. http://spectrum.ieee.org/automaton/robotics/
artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning#qaTopicEight .
Good, Irving John. 1970. “Some Future Social Repercussions of Computers ”. International Journal
of Environmental Studies 1(1 –4): 67 –79. doi: 10.1080/00207237008709398 .
Gordon-Spears, Diana F. 2003. “Asimov ’s Laws: Current Progress ”. In Formal Approaches to
Agent-Based Systems: Second International Workshop, FAABS 2002, Greenbelt, MD, USA,
October 29 –31, 2002. Revised Papers, edited by Michael G. Hinchey, James L. Rash, Walter F.
Truszkowski, Christopher Rouff, and Diana F. Gordon-Spears, 257 –259. Lecture Notes in
Computer Science 2699. Berlin: Springer. doi: 10.1007/978-3-540-45133-4\_23 .3 Responses to the Journey to the Singularity 75
Groesz, Lisa M., Michael P. Levine, and Sarah K. Murnen. 2001. “The Effect of Experimental
Presentation of Thin Media Images on Body Satisfaction: A Meta-Analytic Review ”.
International Journal of Eating Disorders 31(1): 1 –16. doi: 10.1002/eat.10005 .
Guarini, Marcello. 2006. “Particularism and the Classi fication and Reclassi fication of Moral
Cases ”. IEEE Intelligent Systems 21 (4): 22 –28. doi: 10.1109/MIS.2006.76 .
Gubrud, Mark Avrum. 1997. “Nanotechnology and International Security ”. Paper presented at the
Fifth Foresight Conference on Molecular Nanotechnology, Palo Alto, CA, November 5 –8.
http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ .
Gunkel, David J. 2012. The Machine Question: Critical Perspectives on AI, Robotics, and Ethics.
Cambridge, MA: MIT Press.
Haidt, Jonathan. 2006. The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. 1st
ed. New York: Basic Books.
Hall, John Storrs. 2007a. Beyond AI: Creating the Conscience of the Machine. Amherst, NY:
Prometheus Books.
Hall, John Storrs. 2011. “Ethics for Self-Improving Machines ”. In Anderson and Anderson 2011,
512–523.
Hanson, Robin. 1994. “If Uploads Come First: The Crack of a Future Dawn ”. Extropy 6(2). http://
hanson.gmu.edu/uploads.html .
Hanson, Robin. 2000. “Shall We Vote on Values, But Bet on Beliefs? ”Unpublished manuscript,
September. Last revised October 2007. http://hanson.gmu.edu/futarchy.pdf .
Hanson, Robin. 2008. “Economics of the Singularity ”. IEEE Spectrum 45 (6): 45 –50. doi: 10.1109/
MSPEC.2008.4531461 .
Hanson, Robin. 2009. “Prefer Law to Values ”. Overcoming Bias (blog), October 10. http://www.
overcomingbias.com/2009/10/prefer-law-to-values.html .
Hanson, Robin. 2012. “Meet the New Con flict, Same as the Old Con flict”. Journal of
Consciousness Studies 19(1 –2): 119 –125. http://www.ingentaconnect.com/content/imp/jcs/
2012/00000019/F0020001/art00008 .
Hare, Robert D., Danny Clark, Martin Grann, and David Thornton. 2000. “Psychopathy and the
Predictive Validity of the PCL-R: An International Perspective ”. Behavioral Sciences & the
Law 18(5): 623 –645. doi: 10.1002/1099-0798(200010)18:5<623::AID-BSL409>3.0.CO;2-W .
Harris, Grant T., and Marnie E. Rice. 2006. “Treatment of Psychopathy: A Review of Empirical
Findings ”. In Handbook of Psychopathy, edited by Christopher J. Patrick, 555 –572. New
York: Guilford.
Hart, David, and Ben Goertzel. 2008. “OpenCog: A Software Framework for Integrative Arti ficial
General Intelligence ”. Unpublished manuscript. http://www.agiri.org/OpenCog\_AGI-08.pdf .
Hayworth, Kenneth J. 2012. “Electron Imaging Technology for Whole Brain Neural Circuit
Mapping ”. International Journal of Machine Consciousness 4(1): 87 –108. doi: 10.1142/
S1793843012500060 .
Heylighen, Francis. 2007. “Accelerating Socio-Technological Evolution: From Ephemeralization
and Stigmergy to the Global Brain ”. In Globalization as Evolutionary Process: Modeling
Global Change, edited by George Modelski, Tessaleno Devezas, and William R. Thompson,
284–309. Rethinking Globalizations 10. New York: Routledge.
Heylighen, Francis. 2012. “Brain in a Vat Cannot Break Out. ”Journal of Consciousness Studies 19
(1–2): 126 –142. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/
art00009 .
Hibbard, Bill. 2001. “Super-Intelligent Machines ”. ACM SIGGRAPH Computer Graphics 35 (1):
13–15.http://www.siggraph.org/publications/newsletter/issues/v35/v35n1.pdf .
Hibbard, Bill. 2005a. “Critique of the SIAI Collective Volition Theory ”. Unpublished manuscript,
December. Accessed December 31, 2012. http://www.ssec.wisc.edu/ ∼billh/g/SIAI\_CV\_
critique.html .
Hibbard, Bill. 2005b. “The Ethics and Politics of Super-Intelligent Machines ”. Unpublished
manuscript, July. Microsoft Word file, accessed December 31, 2012. https://sites.google.com/
site/whibbard/g/SI\_ethics\_politics.doc .76 K. Sotala and R. Yampolskiy
Hibbard, Bill. 2008. “Open Source AI. ”In Wang, Pei, Ben Goertzel, and Stan Franklin, eds.
Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Frontiers in
Artificial Intelligence and Applications 171. Amsterdam: IOS, 473 –477.
Hibbard, Bill. 2012a. “Avoiding Unintended AI Behaviors ”. In Bach, Joscha, Ben Goertzel, and
Matthew Iklé, eds. Arti ficial General Intelligence: 5th International Conference, AGI 2012,
Oxford, UK, December 8 –11, 2012. Proceedings. Lecture Notes in Arti ficial Intelligence 7716.
New York: Springer. doi: 10.1007/978-3-642-35506-6, 107 –116.
Hibbard, Bill. 2012b. “Decision Support for Safe AI Design ”. In Bach, Joscha, Ben Goertzel, and
Matthew Iklé, eds. Arti ficial General Intelligence: 5th International Conference, AGI 2012,
Oxford, UK, December 8 –11, 2012. Proceedings. Lecture Notes in Arti ficial Intelligence 7716.
New York: Springer. doi: 10.1007/978-3-642-35506-6, 117 –125.
Hibbard, Bill. 2012c. “Model-Based Utility Functions ”. Journal of Arti ficial General Intelligence 3
(1): 1 –24. doi: 10.2478/v10229-011-0013-5 .
Hibbard, Bill. 2012d. The Error in My 2001 VisFiles Column, September. Accessed December 31,
2012. http://www.ssec.wisc.edu/ ∼billh/g/vis files\_error.html .
Horvitz, Eric J., and Bart Selman. 2009. Interim Report from the AAAI Presidential Panel on
Long- Term AI Futures. Palo Alto, CA: AAAI, August. http://www.aaai.org/Organization/
Panel/panelnote.pdf .
Hughes, James. 2001. “Relinquishment or Regulation: Dealing with Apocalyptic Technological
Threats ”. Hartford, CT, November 14.
IEEE Spectrum. 2008. “Tech Luminaries Address Singularity ”:“The Singularity; Special Report ”.
(June).
Jenkins, Anne. 2003. “Artificial Intelligence and the Real World ”. Futures 35 (7): 779 –786. doi: 10.
1016/S0016-3287(03)00029-6 .
Joy, Bill. 2000. “Why the Future Doesn ’t Need Us ”. Wired, April. http://www.wired.com/wired/
archive/8.04/joy.html .
Karnofsky, Holden. 2012. “Thoughts on the Singularity Institute (SI) ”. Less Wrong (blog), May
11.http://lesswrong.com/lw/cbs/thoughts\_on\_the\_singularity\_institute\_si/ .
Karnofsky, Holden, and Jaan Tallinn. 2011. “Karnofsky & Tallinn Dialog on SIAI Ef ficacy”.
Accessed December 31, 2012. http://xa.yimg.com/kq/groups/23070378/1331435883/name/
Jaan+Tallinn+2011+05+-+revised.doc .
Kipnis, David. 1972. “Does Power Corrupt? ”. Journal of Personality and Social Psychology 24(1):
33–41. doi: 10.1037/h0033390 .
Koene, Randal A. 2012a. “Embracing Competitive Balance: The Case for Substrate-Independent
Minds and Whole Brain Emulation ”. In Eden, Amnon, Johnny Søraker, James H. Moor, and
Eric Steinhart, eds. Singularity Hypotheses: A Scienti fic and Philosophical Assessment. The
Frontiers Collection. Berlin: Springer.
Koene, Randal A. 2012b. “Experimental Research in Whole Brain Emulation: The Need for
Innovative in Vivo Measurement Techniques ”. International Journal of Machine Conscious-
ness 4(1): 35 –65. doi: 10.1142/S1793843012400033 .
Kornai, András. 2014. Bounding the impact of AGI. Journal of Experimental & Theoretical
Artificial Intelligence, 26(3), 417 –438.
Kurzweil, Ray. 2001. “Response to Stephen Hawking ”. Kurzweil Accelerating Intelligence.
September 5. Accessed December 31, 2012. http://www.kurzweilai.net/response-to-stephen-
hawking .
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York:
Viking.
Lampson, Butler W. 1973. “A Note on the Con finement Problem ”. Communications of the ACM
16(10): 613 –615. doi: 10.1145/362375.362389 .
Legg, Shane. 2009. “Funding Safe AGI ”. Vetta Project (blog), August 3. http://www.vetta.org/
2009/08/funding-safe-agi/ .
Madrigal, Alexis C. 2015. The case against killer robots, from a guy actually working on arti ficial
intelligence. http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-
building-ai/ .3 Responses to the Journey to the Singularity 77
Mann, Steve, Jason Nolan, and Barry Wellman. 2003. “Sousveillance: Inventing and Using
Wearable Computing Devices for Data Collection in Surveillance Environments ”. Surveillance
& Society 1(3): 331 –355. http://library.queensu.ca/ojs/index.php/surveillance-and-society/
article/view/3344 .
McCauley, Lee. 2007. “AI Armageddon and the Three Laws of Robotics ”. Ethics and Information
Technology 9(2): 153 –164. doi: 10.1007/s10676-007-9138-2 .
McCulloch, W. S. 1956. “Toward Some Circuitry of Ethical Robots; or, An Observational Science
of the Genesis of Social Evaluation in the Mind-like Behavior of Artifacts ”. Acta Biotheoretica
11(3–4): 147 –156. doi: 10.1007/BF01557008 .
McDermott, Drew. 2012. “Response to ‘The Singularity ’by David Chalmers ”. Journal of
Consciousness Studies 19(1 –2): 167 –172. http://www.ingentaconnect.com/content/imp/jcs/
2012/00000019/F0020001/art00011 .
McGinnis, John O. 2010. “Accelerating AI ”. Northwestern University Law Review 104 (3): 1253 –
1270. http://www.law.northwestern.edu/lawreview/v104/n3/1253/LR104n3McGinnis.pdf .
McKibben, Bill. 2003. Enough: Staying Human in an Engineered Age. New York: Henry Holt.
McLeod, Peter, Kim Plunkett, and Edmund T. Rolls. 1998. Introduction to Connectionist
Modelling of Cognitive Processes. New York: Oxford University Press.
Miller, James D. 2012. Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More
Dangerous World. Dallas, TX: BenBella Books.
Moore, David, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas
Weaver. 2003. “Inside the Slammer Worm ”. IEEE Security & Privacy Magazine 1(4): 33 –39.
doi:10.1109/MSECP.2003.1219056 .
Moore, David, Colleen Shannon, and Jeffery Brown. 2002. “Code-Red: A Case Study on the
Spread and Victims of an Internet Worm ”. In Proceedings of the Second ACM SIGCOMM
Workshop on Internet Measurment (IMW ’02), 273 –284. New York: ACM Press. doi: 10.1145/
637201.637244 .
Moravec, Hans P. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge,
MA: Harvard University Press.
Moravec, Hans P. 1992. “Pigs in Cyberspace ”. Field Robotics Center. Accessed December 31,
2012. http://www.frc.ri.cmu.edu/ ∼hpm/project.archive/general.articles/1992/CyberPigs.html .
Moravec, Hans P. 1999. Robot: Mere Machine to Transcendent Mind. New York: Oxford
University Press.
Muehlhauser, Luke, and Louie Helm. 2012. “The Singularity and Machine Ethics ”. In Eden,
Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A
Scienti fic and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.
Muehlhauser, Luke, and Anna Salamon. 2012. “Intelligence Explosion: Evidence and Import ”.I n
Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. SingularityHypotheses: A Scienti fic and Philosophical Assessment.The Frontiers Collection. Berlin:
Springer.
Mueller, Dennis C. 2003. Public Choice III. 3rd ed. New York: Cambridge University Press.
Müller, Vincent C., and Nick Bostrom. 2014. Future progress in arti ficial intelligence: A survey of
expert opinion. Fundamental Issues of Arti ficial Intelligence.
Murphy, Robin, and David D. Woods. 2009. “Beyond Asimov: The Three Laws of Responsible
Robotics ”. IEEE Intelligent Systems 24(4): 14 –20. doi: 10.1109/MIS.2009.69 .
Napier, William. 2008. “Hazards from Comets and Asteroids ”. In Bostrom, Nick, and Milan M.
Ćirkovi ć, eds. Global Catastrophic Risks. New York: Oxford University Press, 222 –237.
Ng, Andrew Y., and Stuart J. Russell. 2000. Algorithms for inverse reinforcement learning. In Icml
(pp. 663 –670).
Nielsen, Thomas D., and Finn V. Jensen. 2004. “Learning a Decision Maker ’s Utility Function
from (Possibly) Inconsistent Behavior ”. Arti ficial Intelligence 160(1 –2): 53 –78. doi: 10.1016/j.
artint.2004.08.003 .
Nordmann, Alfred. 2007. “If and Then: A Critique of Speculative NanoEthics ”. NanoEthics 1(1):
31–46. doi: 10.1007/s11569-007-0007-6 .78 K. Sotala and R. Yampolskiy
Nordmann, Alfred. 2008. “Singular Simplicity ”. IEEE Spectrum, June. http://spectrum.ieee.org/
robotics/robotics-software/singular-simplicity .
Olson, Mancur. 1982. The Rise and Decline of Nations: Economic Growth, Stag flation, and Social
Rigidities. New Haven, CT: Yale University Press.
Omohundro, Stephen M. 2007. “The Nature of Self-Improving Arti ficial Intelligence ”. Paper
presented at Singularity Summit 2007, San Francisco, CA, September 8 –9.http://selfaware
systems.com/2007/10/05/paper-on-the-nature-of-self-improving-arti ficial-intelligence/ .
Omohundro, Stephen M. 2008. “The Basic AI Drives ”. In Wang, Pei, Ben Goertzel, and Stan
Franklin, eds. Arti ficial General Intelligence 2008: Proceedings of the First AGI Conference.
Frontiers in Arti ficial Intelligence and Applications 171. Amsterdam: IOS, 483 –492.
Omohundro, Stephen M. 2012. “Rational Arti ficial Intelligence for the Greater Good ”. In Eden,
Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A
Scienti fic and Philosophical Assessment. The Frontiers Collection. Berlin: Springer.
Orseau, Laurent, and Mark Ring. 2011. “Self-Modi fication and Mortality in Arti ficial Agents ”.I n
Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Arti ficial General
Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3 –6,
2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 1 –10.
Persson, Ingmar, and Julian Savulescu. 2008. “The Perils of Cognitive Enhancement and the
Urgent Imperative to Enhance the Moral Character of Humanity ”. Journal of Applied
Philosophy 25(3): 162 –177. doi: 10.1111/j.1468-5930.2008.00410.x .
Persson, Ingmar, and Julian Savulescu. 2012. Un fit for the Future. Oxford: Oxford University
Press. doi: 10.1093/acprof:oso/9780199653645.001.0001 .
Peterson, Nathaniel R., David B. Pisoni, and Richard T. Miyamoto. 2010. “Cochlear Implants and
Spoken Language Processing Abilities: Review and Assessment of the Literature ”. Restorative
Neurology and Neuroscience 28(2): 237 –250. doi: 10.3233/RNN-2010-0535 .
Plaut, David C. 2003. “Connectionist Modeling of Language: Examples and Implications ”.I n
Mind, Brain, and Language: Multidisciplinary Perspectives, edited by Marie T. Banich and
Molly Mack, 143 –168. Mahwah, NJ: Lawrence Erlbaum.
Posner, Richard A. 2004. Catastrophe: Risk and Response. New York: Oxford University Press.Potapov, Alexey, and Sergey Rodionov. 2012. “Universal Empathy and Ethical Bias for Arti ficial
General Intelligence ”. Paper presented at the Fifth Conference on Arti ficial General
Intelligence (AGI –12), Oxford, December 8 –11. Accessed June 27, 2013. http://aideus.com/
research/doc/preprints/04\_paper4\_AGIImpacts12.pdf .
Powers, Thomas M. 2006. “Prospects for a Kantian Machine ”. IEEE Intelligent Systems 21(4):
46–51. doi: 10.1109/MIS.2006.77 .
Pylyshyn, Zenon W., ed. 1987. The Robot ’s Dilemma: The Frame Problem in Arti ficial
Intelligence. Norwood, NJ: Ablex.
Pynadath, David V., and Milind Tambe. 2002. “Revisiting Asimov ’s First Law: A Response to the
Call to Arms ”. In Intelligent Agents VIII: Agent Theories, Architectures, and Languages 8th
International Workshop, ATAL 2001 Seattle, WA, USA, August 1 –3, 2001 Revised Papers,
edited by John-Jules Ch. Meyer and Milind Tambe, 307 –320. Berlin: Springer. doi: 10.1007/3-
540-45448-9\_22 .
Ramamurthy, Uma, Bernard J. Baars, Sidney K. D ’Mello, and Stan Franklin. 2006. “LIDA: A
Working Model of Cognition ”. In Proceedings of the Seventh International Conference on
Cognitive Modeling, edited by Danilo Fum, Fabio Del Missier, and Andrea Stocco, 244 –249.
Trieste, Italy: Edizioni Goliardiche. http://ccrg.cs.memphis.edu/assets/papers/ICCM06-UR.pdf .
Ring, Mark, and Laurent Orseau. 2011. “Delusion, Survival, and Intelligent Agents ”.I n
Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Arti ficial General
Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3 –6,
2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 11 –20.
Russell, Stuart J. 2015. Will They Make Us Better People? Edge.org. http://edge.org/response-
detail/26157 .3 Responses to the Journey to the Singularity 79
Russell, Stuart J., Dewey, Daniel, Tegmark, Max. 2015. Research priorities for robust and
beneficial arti ficial intelligence. http://futureo flife.org/static/data/documents/research\_priorities.
pdf.
Sandberg, Anders. 2001. “Friendly Superintelligence ”. Accessed December 31, 2012. http://www.
aleph.se/Nada/Extro5/Friendly%20Superintelligence.htm .
Sandberg, Anders. 2012. “Models of a Singularity ”. In Eden, Amnon, Johnny Søraker, James H.
Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scienti fic and Philosophical
Assessment. The Frontiers Collection. Berlin: Springer.
Sandberg, Anders, and Nick Bostrom. 2008. Whole Brain Emulation: A Roadmap. Technical
Report, 2008-3. Future of Humanity Institute, University of Oxford. http://www.fhi.ox.ac.uk/
wpcontent/uploads/brain-emulation-roadmap-report1.pdf .
Schmidhuber, Jürgen. 2009. “Ultimate Cognition à la Gödel ”. Cognitive Computation 1(2): 177 –
193. doi: 10.1007/s12559-009-9014-y .
Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition
Have Failed. New Haven, CT: Yale University Press.
Shanahan, Murray. 2015. The Technological Singularity. MIT Press (forthcoming).Shulman, Carl. 2009. “Arms Control and Intelligence Explosions ”. Paper presented at the 7th
European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July 2 –4.
Shulman, Carl. 2010a. Omohundro ’s“Basic AI Drives ”and Catastrophic Risks. The Singularity
Institute, San Francisco, CA. http://intelligence.org/ files/BasicAIDrives.pdf .
Shulman, Carl. 2010b. Whole Brain Emulation and the Evolution of Superorganisms. The
Singularity Institute, San Francisco, CA. http://intelligence.org/ files/WBE-Superorgs.pdf .
Snaider, Javier, Ryan Mccall, and Stan Franklin. 2011. “The LIDA Framework as a General Tool
for AGI ”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Arti ficial
General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA,
August 3 –6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer,
133–142.
Soares, N., & Benja Fallenstein. 2014. Aligning Superintelligence with Human Interests: A
Technical Research Agenda. Tech. rep. Machine Intelligence Research Institute, 2014. URL:http://intelligence.org/ files/TechnicalAgenda.pdf .
Sobolewski, Matthias. 2012. “German Cabinet to Agree Tougher Rules on High-Frequency
Trading ”. Reuters, September 25. Accessed December 31, 2012. http://in.reuters.com/article/
2012/09/25/germany-bourse-rules-idINL5E8KP8BK20120925 .
Sotala, Kaj. 2012. “
Advantages of Arti ficial Intelligences, Uploads, and Digital Minds ”.
International Journal of Machine Consciousness 4(1): 275 –291. doi: 10.1142/
S1793843012400161 .
Sotala, Kaj. 2015. Concept learning for safe autonomous AI. In Workshops at the Twenty-Ninth
AAAI Conference on Arti ficial Intelligence.
Sotala, Kaj, and Harri Valpola. 2012. “Coalescing Minds: Brain Uploading-Related Group Mind
Scenarios ”. International Journal of Machine Consciousness 4(1): 293 –312. doi: 10.1142/
S1793843012400173 .
Sotala, Kaj, and Roman V. Yampolskiy. 2013. Responses to catastrophic AGI risk: a survey.
Technical report 2013-2. Berkeley, CA: Machine Intelligence Research Institute.
Sotala, Kaj, and Roman V. Yampolskiy. 2015. Responses to catastrophic AGI risk: a survey.
Physica Scripta, 90(1), 018001.
Spears, Diana F. 2006. “Assuring the Behavior of Adaptive Agents ”. In Agent Technology from a
Formal Perspective, edited by Christopher Rouff, Michael Hinchey, James Rash, Walter
Truszkowski, and Diana F. Gordon-Spears, 227 –257. NASA Monographs in Systems and
Software Engineering. London: Springer. doi: 10.1007/1-84628-271-3\_8 .
Stahl, Bernd Carsten. 2002. “Can a Computer Adhere to the Categorical Imperative? A
Contemplation of the Limits of Transcendental Ethics in IT ”. In, edited by Iva Smit and
George E. Lasker, 13 –18. Vol. 1. Windsor, ON: International Institute for Advanced Studies in
Systems Research/Cybernetics.80 K. Sotala and R. Yampolskiy
Staniford, Stuart, Vern Paxson, and Nicholas Weaver. 2002. “How to 0wn the Internet in Your
Spare Time ”. In Proceedings of the 11th USENIX Security Symposium, edited by Dan Boneh,
149–167. Berkeley, CA: USENIX. http://www.icir.org/vern/papers/cdc-usenix-sec02/ .
Steunebrink, Bas R., and Jürgen Schmidhuber. 2011. “A Family of Gödel Machine Implemen-
tations ”. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Arti ficial
General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA,
August 3 –6, 2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer,
275–280.
Suber, Peter. 2002. “Saving Machines from Themselves: The Ethics of Deep Self-Modi fication ”.
Accessed December 31, 2012. http://www.earlham.edu/ ∼peters/writing/selfmod.htm .
Sullins, John P. 2005. “Ethics and Arti ficial life: From Modeling to Moral Agents ”. Ethics &
Information Technology 7 (3): 139 –148. doi: 10.1007/s10676-006-0003-5 .
Tarleton, Nick. 2010. Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics.
The Singularity Institute, San Francisco, CA. http://intelligence.org/ files/CEV-MachineEthics.
pdf.
Tenenbaum, Joshua B., Thomas L. Grif fiths, and Charles Kemp. 2006. “Theory-Based Bayesian
Models of Inductive Learning and Reasoning ”.I n“Probabilistic Models of Cognition ”. Special
issue, Trends in Cognitive Sciences 10(7): 309 –318. doi: 10.1016/j.tics.2006.05.009 .
Tenenbaum, Joshua B., Charles Kemp, Thomas L. Grif fiths, and Noah D. Goodman. 2011. “How
to grow a mind: Statistics, structure, and abstraction ”. science 331, 6022: 1279 –1285.
Thomas, Michael S. C., and James L. McClelland. 2008. “Connectionist Models of Cognition ”.I n
The Cambridge Handbook of Computational Psychology, edited by Ron Sun, 23 –58.
Cambridge Handbooks in Psychology. New York: Cambridge University Press.
Trope, Yaacov, and Nira Liberman. 2010. “Construal-level Theory of Psychological Distance ”.
Psychological Review 117(2): 440 –463. doi: 10.1037/a0018963 .
Turney, Peter. 1991. “Controlling Super-Intelligent Machines ”. Canadian Arti ficial Intelligence,
July 27, 3 –4, 12, 35.
Tversky, Amos, and Daniel Kahneman. 1981. “The Framing of Decisions and the Psychology of
Choice ”. Science 211 (4481): 453 –458. doi: 10.1126/science.7455683 .
Van Gelder, Timothy. 1995. “What Might Cognition Be, If Not Computation? ”Journal of
Philosophy 92(7): 345 –381. http://www.jstor.org/stable/2941061 .
Van Kleef, Gerben A., Astrid C. Homan, Catrin Finkenauer, Seval Gundemir, and Eftychia
Stamkou. 2011. “Breaking the Rules to Rise to Power: How Norm Violators Gain Power in the
Eyes of Others ”. Social Psychological and Personality Science 2(5): 500 –507. doi: 10.1177/
1948550611398416 .
Van Kleef, Gerben A., Christopher Oveis, Ilmo van der Löwe, Aleksandr LuoKogan, Jennifer
Goetz, and Dacher Keltner. 2008. “Power, Distress, and Compassion: Turning a Blind Eye to
the Suffering of Others ”. Psychological Science 19(12): 1315 –1322. doi: 10.1111/j.1467-9280.
2008.02241.x .
Verdoux, Philippe. 2010. “Risk Mysterianism and Cognitive Boosters ”. Journal of Futures Studies
15 (1): 1 –20. Accessed February 2, 2013. http://www.jfs.tku.edu.tw/15-1/A01.pdf .
Verdoux, Philippe. 2011. “Emerging Technologies and the Future of Philosophy ”. Metaphilos-
ophy 42(5): 682 –707. doi: 10.1111/j.1467-9973.2011.01715.x .
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human
Era”. In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11 –22.
NASA Conference Publication 10129. NASA Lewis Research Center. http://ntrs.nasa.gov/
archive/nasa/casi.ntrs.nasa.gov/19940022855\_1994022855.pdf .
Walker, Mark. 2008. “Human Extinction and Farsighted Universal Surveillance ”. Working Paper,
September. Accessed December 31, 2012. http://www.nmsu.edu/ ∼philos/documents/sept-
2008-smart-dust- final.doc .
Wallach, Wendell. 2010. “Robot Minds and Human Ethics: The Need for a Comprehensive Model
of Moral Decision Making ”.I n“Robot Ethics and Human Ethics, ”edited by Anthony Beavers.
Special issue, Ethics and Information Technology 12(3): 243 –250. doi: 10.1007/s10676-010-
9232-8 .3 Responses to the Journey to the Singularity 81
Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from Wrong.
New York: Oxford University Press. doi: 10.1093/acprof:oso/9780195374049.001.0001 .
Wallach, Wendell, and Colin Allen. 2012. “Framing Robot Arms Control ”. Ethics and Information
Technology. doi:10.1007/ s10676-012-9303-0.
Wang, Pei. 2012. “Motivation Management in AGI Systems ”. In Bach, Joscha, Ben Goertzel, and
Matthew Iklé, eds. Arti ficial General Intelligence: 5th International Conference, AGI 2012,
Oxford, UK, December 8 –11, 2012. Proceedings. Lecture Notes in Arti ficial Intelligence 7716.
New York: Springer. doi: 10.1007/978-3-642-35506-6, 352 –361.
Warwick, Kevin. 1998. In the Mind of the Machine: Breakthrough in Arti ficial Intelligence.
London: Arrow.
Warwick, Kevin. 2003. “Cyborg Morals, Cyborg Values, Cyborg Ethics ”. Ethics and Information
Technology 5(3): 131 –137. doi: 10.1023/B:ETIN.0000006870.65865.cf .
Waser, Mark R. 2008. “Discovering the Foundations of a Universal System of Ethics as a Road to
Safe Arti ficial Intelligence ”. In Biologically Inspired Cognitive Architectures: Papers from the
AAAI Fall Symposium, 195 –200. Technical Report, FS-08-04. AAAI Press, Menlo Park, CA.
http://www.aaai.org/Papers/Symposia/Fall/2008/FS-08-04/FS08-04-049.pdf .
Waser, Mark R. 2009. “A Safe Ethical System for Intelligent Machines ”. In Biologically Inspired
Cognitive Architectures: Papers from the AAAI Fall Symposium, edited by Alexei V.
Samsonovich, 194 –199. Technical Report, FS-09-01. AAAI Press, Menlo Park, CA. http://
aaai.org/ocs/index.php/FSS/FSS09/paper/view/934 .
Waser, Mark R. 2011. “Rational Universal Benevolence: Simpler, Safer, and Wiser than ‘Friendly
AI”’. In Schmidhuber, Jürgen, Kristinn R. Thórisson, and Moshe Looks, eds. Arti ficial General
Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3 –6,
2011. Proceedings. Lecture Notes in Computer Science 6830. Berlin: Springer, 153 –162.
Weld, Daniel, and Oren Etzioni. 1994. “The First Law of Robotics (A Call to Arms) ”.I n
Proceedings of the Twelfth National Conference on Arti ficial Intelligence, edited by Barbara
Hayes-Roth and Richard E. Korf, 1042 –1047. Menlo Park, CA: AAAI Press. http://www.aaai.
org/Papers/AAAI/1994/AAAI94-160.pdf .
Weng, Yueh-Hsuan, Chien-Hsun Chen, and Chuen-Tsai Sun. 2008. “Safety Intelligence and Legal
Machine Language: Do We Need the Three Laws of Robotics? ”In Service Robot
Applications, edited by Yoshihiko Takahashi. InTech. doi: 10.5772/6057 .
Weng, Yueh-Hsuan, Chien-Hsun Chen, and Chuen-Tsai Sun. 2009. “Toward the Human –Robot
Coexistence Society: On Safety Intelligence for Next Generation Robots ”. International Journal
of Social Robotics 1(4): 267 –282. doi: 10.1007/s12369-009-0019-1 .
Whitby, Blay. 1996. Re flections on Arti ficial Intelligence: The Legal, Moral, and Ethical
Dimensions. Exeter, UK: Intellect Books.
Whitby, Blay, and Kane Oliver. 2000. “How to Avoid a Robot Takeover: Political and Ethical
Choices in the Design and Introduction of Intelligent Artifacts ”. Paper presented at Symposium
on Arti ficial Intelligence, Ethics and (Quasi-) Human Rights at AISB-00, University of
Birmingham, England. http://www.sussex.ac.uk/Users/blayw/BlayAISB00.html .
Wilson, Grant. 2013. Minimizing global catastrophic and existential risks from emerging
technologies through international law. Va. Envtl. LJ, 31, 307.
Wood, David Murakami, and Kirstie Ball, eds. 2006. A Report on the Surveillance Society: For
the Information Commissioner, by the Surveillance Studies Network. Wilmslow, UK: Of fice of
the Information Commissioner, September. http://www.ico.org.uk/about\_us/research/ ∼/media/
documents/library/Data\_Protection/Practical\_application/SURVEILLANCE\_SOCIETY\_
SUMMARY\_06.ashx .
Yampolskiy, Roman V. 2012. “Leakproo fing the Singularity: Arti ficial Intelligence Con finement
Problem ”. Journal of Consciousness Studies 2012(1 –2): 194 –214. http://www.ingentaconnect.
com/content/imp/jcs/2012/00000019/F0020001/art00014 .
Yampolskiy, Roman V. 2013. “What to Do with the Singularity Paradox? ”Studies in Applied
Philosophy, Epistemology and Rational Ethics vol 5, pp. 397 –413. Springer Berlin Heidelberg.
Yampolskiy, Roman V., and Joshua Fox. 2012. “Safety Engineering for Arti ficial General
Intelligence ”. Topoi. doi: 10.1007/s11245-012-9128-9 .82 K. Sotala and R. Yampolskiy
Yudkowsky, Eliezer. 2001. Creating Friendly AI 1.0: The Analysis and Design of Benevolent
Goal Architectures. The Singularity Institute, San Francisco, CA, June 15. http://intelligence.
org/files/CFAI.pdf .
Yudkowsky, Eliezer. 2004. Coherent Extrapolated Volition. The Singularity Institute, San
Francisco, CA, May. http://intelligence.org/ files/CEV.pdf .
Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global
Risk”. In Bostrom, Nick, and Milan M. Ćirkovi ć, eds. Global Catastrophic Risks. New York:
Oxford University Press, 308 –345.
Yudkowsky, Eliezer. 2011. Complex Value Systems are Required to Realize Valuable Futures.
The Singularity Institute, San Francisco, CA. http://intelligence.org/ files/ComplexValues.pdf .
Yudkowsky, Eliezer. 2012. “Reply to Holden on ‘Tool AI ”’. Less Wrong (blog), June 12. http://
lesswrong.com/lw/cze/reply\_to\_holden\_on\_tool\_ai/ .3 Responses to the Journey to the Singularity 83 |
1f4b4047-717e-4f06-8a8e-0d9aea93f022 | trentmkelly/LessWrong-43k | LessWrong | The wild
The idea of art about nature doesn’t sound exciting to me in the abstract. Perhaps I remember that I am evolutionarily supposed to see it and go, ‘oh fantastic, it’s green and blue near each other, maybe I’m in for some reproductive success’, and that doesn’t sound very inspiring. (Yes, I know that simple evolutionary situations can feel inspiring from the inside.)
But a kind of art about nature that I can especially get behind is that which to me evokes some of wild alien vigor of nature, that I sometimes experience for instance sitting in my overgrown back garden, contending with a flock of ascendent giant poppy faces and a stilled frenzy of branches gaining ground and sky about my seat.
It is a sharp exotic aliveness, an electric intent for living that wants to overwhelm the strictures of physical space and come curling and unfolding through the air, with an explosive energy that has no truck with time.
Not alien like an alien mind, but more like an alien spirituality, that doesn’t know anything so mundane as minds. But while you revere under the perfect spires of goodness, it comes in unexpected and unintelligible from the edges and without looking you in the eye, just overwhelms you with how what really made the world is intensely here and intensely not in your understanding. This was always the world.
“Everything wants to live”, my botanist grandfather would explain to me. Perhaps for plants it is often more borderline whether living is in store. Regardless, their unrelenting striving for it is more visceral to me than that of animals. Animals are more placidly set in their bodies. Plants take every opportunity to push outwards, expanding into new limbs and thickening out old ones in their unthinking, unfailing fervor.
A lot of nature in art isn’t like this. Often it is idyllic, or evocative at a landscape level, or sleepy furniture for a human story. But here is some art that does portray it somewhat, for me, though I don’t know if the artists actually m |
92077a7a-2db2-4c20-94f5-0acc3b81a799 | StampyAI/alignment-research-dataset/blogs | Blogs | The Sword of Good
*…fragments of a novel that would never be written…*
---
Captain Selena, late of the pirate ship *Nemesis,* quietly extended the very tip of her blade around the corner, staring at the tiny reflection on the metal. At once, but still silently, she pulled back the sword; and with her other hand made a complex gesture.
The translation spell told Hirou that the handsigns meant: “Orcs. Seven.”
Dolf looked at Hirou. “My Prince,” the wizard signed, “do not waste yourself against mundane opponents. Do not draw the Sword of Good as yet. Leave these to Selena.”
Hirou’s mouth was very dry. He didn’t know if the translation spell could understand the difference between wanting to talk and wanting to make gestures; and so Hirou simply nodded.
Not for the first time, the thought occurred to Hirou that if he’d actually *known* he was going to be transported into a magical universe, informed he was the long-lost heir to the Throne of Bronze, handed the legendary Sword of Good, and told to fight evil, he would have spent less time reading fantasy novels. Joined the army, maybe. Taken fencing lessons, at least. If there was one thing that *didn’t*prepare you for fantasy real life, it was sitting at home reading fantasy fiction.
Dolf and Selena were looking at Hirou, as if waiting for something more.
*Oh. That’s right. I’m the prince.*
Hirou raised a finger and pointed it around the corner, trying to indicate that they should go ahead –
With a sudden burst of motion Selena plunged around the corner, Dolf following hard on her heels, and Hirou, startled and hardly thinking, moving after.
There was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.
Selena already held her sword in her right hand, and her whip in her left. She leaped forward and howled, a wordless cry that harmonized oddly with the battle roar of the orcs; and in almost the first instant of the clash, one of the orc-heads separated from its body and flew through the air, trailing foul-smelling black blood.
Hirou breathed evenly, trying to still his trembling. The Sword of Good gave a tiny soft growl at his side (a sound that only he could hear) as Selena slashed her blade across another orc’s face, giving rise to a whistling howl. Still he kept the Sword sheathed. *You are not to waste yourself against mundane opponents…* Even now the wizard was eyeing him closely, as if expecting him to defy orders and plunge into battle himself.
A small part of him, the part that wasn’t totally terrified by the battle, was flattered that Dolf thought so highly of him. It was all Hirou could do not to turn and bolt; he was tensing his legs as though exerting a constant muscular effort to keep them in the same place.
The orc-bodies were piling up around Selena, the whip blinding or tripping or yanking, her blade ending life. It might have taken hours, or seconds, before a huge blow split the last orc’s head all the way down the middle.
She stood there, blood-spattered and panting heavily, waiting as though daring the bodies to ever move again; then her face relaxed, and she gave a light laugh, and stooped to wipe her blade on the black orc-leather.
“You’re hurt!” Hirou blurted suddenly. Red was soaking through the leather on Selena’s left arm.
Selena glanced downward. “A scratch.”
“You cannot assume that,” rumbled the wizard. “Their blades may be poisoned.” Dolf stepped forward and brushed Selena’s arm briefly with the staff.
“Oh!” Selena said, her face surprised. “It’s -“
But Dolf was already moving past her, to look at the gate the orcs had guarded, and the stairway leading upward. “I believe,” he said in a quiet voice, “that there is a dark magus upstairs.”
“A *magus!*” Selena said. “Here?”
“A magus,” Hirou echoed. He swallowed hard; he knew what that meant.
Dolf only glanced at Selena. “Do as I taught you: drop your weapons, sit in the corner, and clear your mind. *Now,*” as Selena seemed about to protest. “An ordinary warrior is only a liability, in a battle of wills; a weak point to be defended, a piece to be turned against its player.”
Selena looked at Hirou. Hirou nodded.
And Selena sheathed her sword, dropped it and the whip, unbuckled the harness that held her daggers, and sat down in the corner of the room and began chanting softly to herself.
Dolf spared her only a glance. “And *now,*” said the wizard in a low tone, “my Prince, you may enter the battle.”
Though most of Hirou’s mind was whited-out by terror, there was a remnant that seemed to see and follow the pattern, like reciting memorized lines in a play; and that remnant knew that Hirou’s part was to draw the Sword of Good.
The ancient metal whispered out of its scabbard. As Hirou drew the Sword it began wailing, a small thin shriek that Hirou knew only he could hear. The scream seemed to come from an infinitely narrow line running straight down the center of the Sword. The sound had a quality that forced away attention, as though your eye were looking too close to the sun. As though, if you listened too hard, you would – you would lose –
Dolf strode around the fallen orcs and their assorted body parts. Hirou followed, breathing evenly; the Sword informed his hand to grip it high and across his chest.
“Who are we fighting?” Hirou was surprised at how neutral his voice sounded.
A note of condemnation entered Dolf’s voice. “A false wizard, this. Not born to the Art, nor trained in the Halls. Its gift comes to it by a higher master, by necromancy and potions… But fear not, my Prince. I shall prevent its will from reaching Selena and smother its other magics; and your Sword will sweep aside its defenses like fallen leaves.”
Through the door they swept, and mounted the stairs of the tower. Dolf was breathing heavier, now, his face belying the effort of warding off some pressing will. Hirou felt nothing, except perhaps a note of crispness in the air, as the Sword in his hand enforced an edict against certain specific types of delusion.
Then they were standing at the highest level of the tower, the end of the stairs, before one small wooden door.
“I’ll enter first,” Dolf signed, “and you follow as fast as you can, and strike as quickly as may be done. Be careful not to strike *me,*my Prince. The Sword of Good may strengthen your hand, but not guide your steps – it will strike me as easily as the foe, if you happen to turn it in my direction.”
Hirou nodded. The air of neutrality was wearing away, and the acrid tang of adrenaline was entering his mouth.
“Three,” signed the wizard, “two, one -“
Dolf’s oaken staff crashed against the door, blasting it off the hinges in a flare of light and Dolf was racing into the room and Hirou was following him and the figure in stained brown robes was spinning its staff forward and a wall of flames swept out –
Hirou flinched and gave a small shriek, but the flames washed over him ineffectively before his feet could even stumble. Averted by the Sword. Dolf also was untouched – the defenses of a wizard were nearly impossible to break, Dolf had said; some wizards spent hours every day building them higher. There was only one known weapon that could kill a wizard in a single blow, and that was –
*Am I really going to do this?*
But the Sword was already swinging forward in Hirou’s hand.
And the blade bounced off the air around the stained brown robes, with a sudden shower of orange sparks.
*Crap,* Hirou had time to think.
And then the false wizard’s staff was sweeping toward him (metal it was, not wood).
But the Sword in his hand moved to parry it, and there was another shower of sparks.
“*Keep attacking!*” Dolf shouted. “You chipped his sorcery! *Keep fighting!*“
Hirou gasped for breath and began to chop away with the Sword as though cutting wood, sending bits and pieces of broken magic everywhere. There was little force in the blows except when the Sword moved to parry the staff; the rest was speed and repetition.
Then the scarred face beneath the hood gave a sudden shriek, as the Sword lightly scored over the dark flesh.
*Is the shield down – ?*
Before Hirou could even complete the thought, his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart.
There were no last words, not even a brief sigh. The false wizard’s eyes widened, and then the robes just – fell over.
Hirou fell to his knees.
“*Your highness!*“
“I’m all right,” Hirou choked out. Nausea competed with adrenaline for control of his existence, and lack of oxygen, and sharp and dull pains from his overexercised hand and arm.
Dolf’s staff brushed him, and the pain and nausea faded.
That only made it worse. It removed the distractions.
The wizard was still looking at him, eyes flicking between Hirou and the sword. “Wielding the Sword of Good did not – *hurt* you – did it, your highness?”
There was alarm in Dolf’s voice, as well there might have been. The Sword of Good, according to Dolf, would kill the unworthy with the lightest touch, as of a single finger on the blade. It killed nine out of ten would-be wielders, and in ordinary times the Imperial Family was not allowed to even try. It had been prophesied that Hirou would wield the Sword, and yet…
“Dolf,” Hirou said hoarsely, “why did the Sword bounce off his shields? You said it would cut through magic with a single blow.”
Dolf seemed uneasy. “It has been centuries since the last wielder held the Sword of Good, noble Prince; perhaps not all the stories are true. To cut through a wizardly shield with a score of blows is still a very great power.”
“No,” Hirou said. He hesitated, then: “I’m not wielding the Sword at full strength. I can feel it.”
*It seems… disappointed… in me.*
Dolf nodded. “The Sword of Good,” he quoted softly, “contains *the essence of that which empowers a hero; the truth which only heroes can face.* My Prince… I have been reluctant to say this, but you have not been acting heroic.” There was a peculiar gentleness on Dolf’s face that softened the impact of the words. “But it will come with time. Of that I am certain. It is written in the royal blood of your forefathers. You were raised in another place, but you *are* the heir of Bronze -“
Hirou retched, then swallowed hard, and hard again. With a sudden flash of horror he knew – and he knew just how unheroic it was – that he was about to throw up on the corpse.
---
Their horses sauntered through the streets of the city – the capital of a whole province, it was, which meant perhaps a square mile enclosed by wooden walls, with the occasional two-story building. Hirou kept his eyes moving, watching for possible ambushes – not that he really thought he had a chance of spotting one, if there was one. But it was his best guess at how a hero would act. *What would Aragorn do?* – that had been the refrain of his thoughts, of late. Was the lady carrying a clay pot on each shoulder a threat? Was the legless beggar, watching them with incurious eyes, a spy?
There was an excited buzz of conversation in the streets; from the snatches that were audible, Hirou gleaned that a military outpost of the Empire had been overrun by orcs. The Empire was trying to play it down (said the overheard voices) but rumor had it a major disaster for the planned invasion campaign.
Hirou glanced over at Dolf and Selena. Neither seemed to be paying any particular attention to the matter.
They cantered on for a short while longer, and finally Dolf drew rein. Selena at once followed, and after a moment’s reaction time, so did Hirou.
“Here,” Dolf rumbled.
Hirou looked at the building on their right. There was a huge painted board in front, showing a mouth being crammed with a turkey leg larger than itself. The signs scratched below, the translation spell informed him, meant “INN OF EXTREMELY TASTY FOOD.”
*One nice thing about this world: If they don’t want you to know, they just keep quiet; and if they want you to know, they tell you straight out.*
Hirou didn’t say it out loud, though. Aragorn, descendant of Elendil and heir to the throne of Gondor, wouldn’t have said it.
Was that part of what empowered a hero? That solemnity – or maybe just taking things seriously? Hirou didn’t know. But there was no point in taking chances. The Sword hadn’t killed him yet, but neither had it fully unlocked in his hand.
The innkeeper’s eyes went wide at the sight of Dolf’s staff, and they were swiftly ushered into a private side room with a basket of candied fruits already waiting. Selena had a sugared orange slice in her mouth almost as quickly as she sat down, and sighed in bliss; even Dolf took a handful of nuts.
Hirou, with a private sigh, took an apple slice lightly dusted in a spice he didn’t recognize. Just the fact that it was spiced probably made it one of the most expensive and luxurious treats this world had to offer. He bit, chewed, swallowed.
God he missed chocolate.
“So now what?” Selena said, after she’d eaten half the bowl.
“Now we wait,” Dolf said.
“For what?” said Selena.
Dolf looked around; the staff twitched in his hand and shed a brief woody glow. Even so, the wizard lowered his voice before he spoke. “This night, an assassin-courier and two hired thugs will come to this very inn, their wagon having broken a wheel on the road. We must have the message that they carry, for it contains a hint to the location of the Empty Necklace.”
Selena blinked. “Fine,” she said. “I give up. How could you *possibly* know that?”
Dolf looked at Hirou, his eyes asking permission.
“Tell her,” Hirou said. He tried for a note of authority in his voice – a Crown Prince’s decision – but he didn’t know if he’d succeeded.
Dolf nodded, and his gaze shifted back to Selena. “How much do you know about the Prophecy of Destiny?”
*One nice thing about this world, they put very clear labels on everything – oh, skip it.*
Selena blinked. “Not much. That’s wizard business. Not much call for it in the pirating profession.”
“Very true,” Dolf said. “But what *do* you know?”
Selena shrugged. “A new Lord of Dark shall arise over Evilland, commanding the Bad Races, and attempt to cast the Spell of Infinite Doom. The Long-Lost Heir, wielding the Sword of Good, shall kick Evil’s ass. That’s about it.”
“That’s *it?*” Hirou said incredulously, then caught himself. Aragorn wouldn’t have said that.
Selena smiled at him. “It was enough for *me,* your Imperial Highness. A chance like this only comes along once in a woman’s lifetime.” She blew him a kiss.
For once Hirou wasn’t distracted. “Master Dolf,” Hirou said, trying to make it a statement instead of a question – “I believe she needs to know more than that.”
“Yes…” Dolf said. “Though it is wizard’s business indeed; and only by Imperial command may it go further…” He drew a breath, lowered his voice further. “The *original* Prophecy of Destiny, Selena, was never written down. It has been memorized by the Archmagi and passed down by word of mouth through the generations. It is more – *detailed*– then you seem to realize. *You* are mentioned, pirate princess. Mentioned by name and your mother’s name, daughter of Elaine.”
Selena’s mouth lay open, a picture of perfect astonishment. “Ah…” she said. “Do I die at the end?”
“No one knows,” Dolf said simply. “The Prophecy of Destiny is a strange thing, pirate princess; it tells of some events in the smallest detail, omits others that would seem very large. *Told* we were, to be on the ship that you attacked; told we were of your name. The Prophecy of Destiny carries through to the confrontation between the Long-Lost Heir and the Lord of Dark, on the very verge of the casting of the Spell of Infinite Doom. Then, it says, the Long-Lost Heir shall Choose between Good and Bad. And there – there, of all places – the foretelling ends.”
“Huh,” Selena said. She tapped her cheek. “I somehow suspect, Master Wizard, that you wouldn’t tell me – *or* his Imperial Highness – if I *did* die at the end…” She stared at Dolf, and Dolf looked back neutrally. “So what *does* the Spell of Infinite Doom do? Destroy the world?”
“Few there are who would *deliberately* destroy the world,” Dolf said. “Even the Lord of Dark requires lesser beings to rule over. No, the Spell of Infinite Doom destroys the Equilibrium. Light and dark, summer and winter, luck and misfortune – the great Balance of Nature will be, not upset, but annihilated utterly; and in it, set in place a single will, the will of the Lord of Dark. And he shall rule, not only the people, but the very fabric of the World itself, until the end of days.”
“Huh,” Selena said again. Her eyes flicked to Hirou. “And how are you leaning on that Choice between Good and Bad?”
“Good,” Hirou said instantly.
“Even if the Lord of Dark offered you the number two position as the master of the universe -“
“Good.”
“You’re not even thinking about it!”
“It’s not exactly a difficult question!” said Hirou. “Calling it ‘the Choice between Good and Bad’ kind of gives away the answer.”
Selena was trying not to smile. “You’ve never been tempted by *anything?*“
“It’s not a matter of temptation!” Hirou said. “It’s…” he trailed off for a moment. It wasn’t that he couldn’t find the words. It was that the concepts didn’t exist in this world. What he *wanted* to say was that he had a pretty good idea what sort of behavior got you listed as a villain, in the great TV Tropes wiki of the universe; and he’d had a worried eye on his own character sheet since the day he’d realized what he’d gotten himself into; and he absolutely positively *wasn’t*going to go Dark Messiah, Knight Templar, Well Intentioned Extremist, or for that matter Lawful Stupid.
“It must be that the Lord of Dark will find *something* to offer you,” Selena said. Her eyes were serious, now. “Otherwise it won’t be much of a Choice between Good and Bad.”
“Fine by me,” Hirou said with some acerbity. It wasn’t the questioning of his honor that disturbed him, so much as the idea of missing a choice that *obvious.* How could anyone *not*know what their character sheet would say about *that?*
“What if the Lord of Dark had me prisoner, and threatened to kill me unless you -“
“Good.”
Selena opened her mouth, then closed it again. Sudden hurt showed in her eyes.
“*Oh come on!*” Hirou exclaimed. He was too shocked, in that brief critical moment, even to think of smoothing it over. “Have some common sense, Selena! The *whole world?*“
Selena smiled, a strange true smile tinged with sorrow. “So this is the one who can touch the Sword of Good… You will be a great Emperor someday, your Imperial Highness, a very great Emperor. And you will see fit to reward me with a court title, and I will be Lady Selena, and none shall dare speak of the days when I was pirate and outlaw. Maybe some nights you shall have me grace your bedchamber for old times’ sake, and maybe not. That is enough. More than I have a right to ask – It was a foolish thought.”
“I -” An abrupt pain caught at Hirou’s heart, which might have been for the sheer unfairness. “Think it through, Selena! Even if I *did* care about you more than anything, it would *still* be a stupid choice! Let the Lord of Dark complete the Spell of Infinite Doom? You might *wish* you had died!”
“I understand,” Selena said, still with that strange sad smile. “Your reasoning is exactly correct, your Imperial Highness. I am not questioning you at all. I am only observing that you do not love me.”
Later that night, as with soft footsteps they padded toward the room where the assassin-courier and his two companions slept, Hirou held the Sword in his hand and stared at the central ridge of the blade. The endless wail still arose from it, from the infinitely thin line through the center. Hirou had been getting used to the sound, over time, which made it ever harder to focus his attention on it.
*Do I get any points for that, Sword? For what I said to Selena, even though I may have lost her?*
The wail seemed only to diminish slightly, or maybe it was only Hirou’s attention wandering away.
*It*can’t*be that a hero is someone who would choose one person over the world!**Not*literally*the whole world! …can it?*
The sound softened further, as if that infinitely thin line were growing more distant.
*I wouldn’t be*glad*to sacrifice her! It would*hurt!*But I put myself on the line too! Isn’t that what heroism is all about? Sacrificing yourself and your own desires for the good of the world?*
*What is the truth that only heroes can face, if not that?*
Hirou stared intently at the Sword, as if demanding an answer; and then became aware that his attention had moved away, once again, from that silent scream.
And the three of them stood before the doorway.
Selena took a small vial from off her harness, and dripped small droplets of oil onto the hinges of the door. She was no master thief, but had a quietly professional grasp of the basics. Quietly and slowly the door opened. Selena went in first, and Dolf followed her, and then Hirou silently brought up the rear, Sword held in guard position.
The assassin-courier had a thin, pointed beard, and wore a light chainshirt even in his sleep. His two escorts had an unshaven, unsavory look, and it was obvious from the smell of the room that they had not bathed. The three of them were laid out on a line on as many beds. Selena had a long thin poniard already in her hand, and plunged that needle straight through the left eyelid of the first thug, swift as a sword-strike on the downward plunge, stopping abruptly in mid-deathblow lest she strike the skull on the other side and make a sound. She went around the beds and repeated the silent kill there on the other thug, as Dolf quietly moved to each of the four corners of the room in turn, while Hirou blocked the exit.
Then, with a knife held just above the courier’s throat, she spoke in a whisper.
“Don’t move,” Selena whispered, “or I’ll slit your throat before you can scream.”
The courier’s eyes flew open, and he drew a sudden breath, but stayed quiet.
“It may or may not matter to you,” Selena said, low and harsh, “but you’ve been working for the Lord of Dark, in case you didn’t know. Now tell us the message that you carry.”
*“Help! Thieves!”* cried the courier – in a small, soft voice that no one could possibly hear outside the room.
Dolf’s gaze lay intent upon the courier’s throat.
“You see how it is,” said Selena. “So you can tell me the message right now – and the wizard here will know if you lie, I do assure you. Or you can tell us the message… later. Choose.”
“*Drown in a cesspool!*” softly yelled the courier.
“What frightens you?” inquired Selena softly. “Skinning? Castration?” Watching his face, the while. “Blinding? Crippling? Or maybe -“
The courier spat at her. Selena moved quickly, but the spittle still struck her on the cheek. She didn’t take her blade from his throat, or her other blade from his crotch.
“You’ll regret that,” she said in a voice that brought a sudden chill to Hirou’s blood. Her hands whitened on her blades.
Hirou suddenly had a sense of impending disaster, as if events in the room were about to spiral out of control. He opened his mouth, then closed it again – he couldn’t think of a single thing to say that wouldn’t interfere with the interrogation.
Dolf spoke, a quieter version of his usual rumble. “It seems you’re failing to impress him.” Dolf took a step closer, and locked eyes with the courier. “How’s this for a threat, Dark’s dog?”
Suddenly the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see. The courier screamed, and the sound came out as a small, thin, pathetic wail.
Dolf stepped back. *“That’s* a threat,” he said in Selena’s general direction, and smiled one of his rare grins.
“The city of Silantra!” gasped the courier. “I was to tell a man in black, who would call himself Alek, at the crossroads of Thu, to go to the city of Silantra, and investigate the temple ruins! That’s all I know! I swear!”
Selena looked inquiringly at Dolf, and Dolf nodded.
They scattered a few gold coins on the floor, to pay for the cleanup of the three corpses, and left at once while the cover of night still held.
---
The palace of the Lord of Dark seemed as deserted as the open desert beneath the moon, or some far-below cave in the bowels of the earth. The floors and walls had been carefully carved and polished into inhuman curves, and decorated in colors that threatened to melt a human’s eyes. By no five-fingered hands had this place been made. And though the four of them had been creeping through the corridors at the cautious speed of a dungeon crawl, so far not a single trap or ambush had been sprung.
Alek was poking and prodding the door ahead with his staff. It was a mighty and ornamented door, carved with inhuman faces set in indecipherable expressions, and Dolf had said there was *something interesting* beyond.
“Nothing,” Alek said, and shook his head in bemusement. “No traps on this one either. All those intricate carvings and not a single mechanism hidden behind them, so far as I can tell.” He sighed. “I’m beginning to feel useless. You three didn’t really need a thief on this trip.”
Hirou looked up from where he was staring into the Sword’s blade, and half-smiled. “We don’t *know* what isn’t trapped. If we didn’t have a thief on this trip, we’d *still*have to check doors and floors. We’d just be doing it much more *slowly*. No, you’ve already saved the Forces of Good a good deal of time, Alek.”
Alek blinked. “That’s… an odd way of looking at it… but you’re right. Thank you, highness.” Alek’s usual cheerful grin returned, and he stepped back and took his thieves’ staff from off his back. Manipulating a lever at the base, he caused the staff’s clawed tip to close around the door-handle; he twisted, then pushed.
The door swung open.
“*Ewwwww,*” Alek and Selena said in unison.
Before them, in the floor, was a vast pit of worms, writhing over one another in a light coating of slime. Next to the pit was a glass cage of worms, these motionless and rotting; and wires of red metal ran from the glass cage to the ceiling. The room smelled of cinnamon and decay.
“Dolf?” Hirou said. “What are we looking at?”
“A Wormarium…” Dolf blinked, and swallowed. “I have… heard of this. That any wizard, even the Lord of Dark, would sink so low -” Dolf swallowed again. “The Lord of Dark is draining the life force of the worms in order to sustain himself. He need not eat or drink, he will not age, he is cut off from the cycles of his own flesh. The ordinary decay of his body, is transferred to the worms; and the life of the worms -“
“*Ewwwwww,*” Selena and Alek said again.
“Shall we destroy it?” Hirou asked.
“The transfer cables are inactive…” muttered Dolf. “Of course. The Lord of Dark does not expect to need this once he completes the Spell of Infinite Doom. Or perhaps he thinks it might interfere – well. It matters not. I think he shall not notice what we do here.” Dolf grounded his staff, and a look of concentration briefly flashed across his face.
Then a sudden blaze of green incandescence burst forth from the pit and the cage –
Alek convulsively yanked the door shut using the thieves’ staff. “Gah!” he said, then lowered his voice. “Warn a guy when you’re about to do that, Master Wizard! I thought we’d triggered something.”
“Our work here is done,” Hirou said – the end of the statement turning up only slightly in a questioning inflection.
Dolf nodded.
“Do you sense anything else interesting enough to warrant our attention? Any other potential resources we should try to deny our enemy, before the battle begins?”
Dolf shook his head.
Hirou took a deep breath. He’d played out this scenario in his head so many times over and over that the reality felt more like a relief than anything else. “Then it’s time.”
They retraced their steps away from the Wormarium, returning to the central corridor they had explored earlier. Alek again took the lead, and they slowly, slowly walked down the long black metallic floor.
After a long walk, the corridor widened out into a huge vestibule that for once did not insult the human eye. Floor laid with rectangular stones, walls hung with tapestries of pleasant color and disturbing subjects. On the left wall, an orc cradled the bloody body of a smaller orc, above a heap of bloody and slashed human bodies; other orcs gazed at the scene intently. All of their expressions were inhuman, and indecipherable. On the right wall, a grey-robed figure with human hands visible, but face concealed by a solid metal mask, stood as though in blessing over a field of green plants with twisted stalks.
In front of them was a huge door fit for a city gate, inlaid with gold and gems that could have purchased a whole province. Even Hirou, who came from a wealthier plane of existence, was impressed.
“Bloody hell,” Alek said under his voice, very softly, staring at the rectangular floorstones in their neatly tiled pattern. “I *hate* this sort of thing.”
Step by step they walked across the floor, Alek pressing hard with the thieves’ staff on every floorstone for thirty full seconds before continuing onward.
It was on almost the last step before the door that the stone suddenly slid away with a huge shriek – not the stone Alek had just pressed down with his staff, but the stone *before*that, where Alek had stood.
With a choked yell, the thief plummeted and vanished.
“*Alek!*” Selena screamed, and ran forward heedless. Hirou began to follow, then, with coldly bitter determination, checked himself.
Selena looked down into the gap in the floor where Alek had vanished.
She choked. “*Alek!*” Then, as if gone mad, she leaned over the gap and began to reach down.
A premonition prickled at Hirou, and with sudden desperation he leaped forward and yanked Selena back from where she was leaning. With a shriek and echoing boom the stone surged back into place, almost crushing Selena’s outstretched hand.
“*No!*” Selena cried. Tears were already rolling down her cheek. “Hirou, please! We have to get to him!”
“Your highness, you mustn’t -” came Dolf’s rumble.
The cold bitterness, already in Hirou, turned to sudden rage and self-loathing. As had happened once before, the terrible wail from the center of the Sword seemed to grow louder, to fill his mind; heavier than a mountain and more corrosive than a flood, a *refusal-to-accept* that would blast anything in its pathway – but still, somehow, essentially moral in nature, more than pure destruction or simple entropy –
Hirou’s Sword lashed out as though it were a part of him, and smashed down upon the stone.
And the stone shattered in the same instant, as though every part of it had been unbound from itself; it fell into pebbles, and the pebbles fell into dust, and the dust turned to smoke and billowed upward.
And the smoke cleared, and showed Alek above a bed of worms – some crushed by Alek’s fall, some already beginning to writhe over his form.
Alek wasn’t moving, he wasn’t breathing. The worm-slime glistened on his skin.
And then there was another groan of machinery, and Alek’s body and the worms began to move out of their sight, as a new pit of worms moved into place below the floor.
“*No!*” Selena screamed, an awful, heartwrenching plea that broke and shattered in her lips. “*Alek! No!*“
Hirou laid his left hand on Selena’s shoulder. “We must go,” he said. His voice sounded empty and emotionless, even to his own ears. “The Lord of Dark knows we’re here, now.”
Selena rose from the open pit, hands clenched as if to strike.
“You don’t respect anything, do you,” she said in a voice colder than the night between worlds.
*I’m sorry. I know how much Alek meant to you. You can hit me later, if you like.*
“We have to go,” Hirou repeated. “We have to hurry.”
Selena turned away from him, and drew her swords. “Yes, your Imperial Highness,” she said. He couldn’t see her face.
Hirou leaped across the gap in the floor to the final stone before the door. The wail had not diminished, this time; it was still in his mind.
With a terrible black fury and a convulsion like throwing a mountain, Hirou struck, and turned the bright gold door to smoke. So much for traps.
And the smoke cleared, and they saw the huge throne room, and the throne, and the Lord of Dark.
A jolt of surprise rippled through Hirou’s mind. The throne room was not small, but neither was it the hugeness that Hirou had expected; the size of a small house, perhaps. Scenes of sun and clouds, grass and hills, dotted the walls; and a vast skylight, above, let in a pleasant golden glow. The Lord of Dark’s throne was laid on a golden platform, and the throne itself was comfortably cushioned and well-designed for the human form; more like an office chair of Hirou’s own world than a formal seat. Behind the throne lay a shimmering screen of force; and behind the screen of force, an altar; and on the altar, an intricate array of gears turning without axles or wires; and above the gears, a throbbing blaze of light.
And the Lord of Dark sat on the ergonomic throne, garbed in a comfortable cassock of gray silk.
“Oh, *finally,*” said the Lord of Dark. His fingers tapped on the arm of his throne, dit-dit-dit. “I was starting to wonder if you were going to show up, Hirou.”
Hirou’s mind was scrambled, for a moment, he couldn’t remember his own planned opening line. “Were you, now?” his mouth said.
“Come now,” said the Lord of Dark, “don’t tell me you were trying to sneak up on me? The entire world knows the prophecy about our meeting! The wielder of the Sword of Good is supposed to arrive *before* I complete the Spell of Ultimate Power.” The Lord of Dark waved at the glow above the machinery on the altar behind the throne. “And that’s just about done.”
Dolf smiled grimly, from where he leaned upon his staff. “You’re frightened.”
“*Of course I’m nervous! Gah!*” The Lord of Dark made a convulsive gesture as though to claw at the empty air, radiating frustration. “Are you *done* stating the obvious?”
Selena raised a sword and pointed at the Lord of Dark. Around her neck, the Glowy Stone flamed brightly where it had been set in the Empty Necklace; no sorcery of mind would touch her with that armor, still less while Dolf stood guard.
“You killed my only love,” she said in a simple voice, a quiet voice, a voice like death, “and I am going to kill you.”
The Lord of Dark looked at her. A complex expression flashed across his face: condemnation was in it, and pity.
Then, without a word or a gesture, Alek’s body floated out and came to rest near the altar, behind the screen of force.
“Alek’s head is still intact,” the Lord of Dark said. “You may or may not know, Selena, that everything that a human is, resides in a human’s brain. Your lover still exists, Selena; all that is *him*, still is there. He is simply not breathing, at the moment. After I complete the Spell of Ultimate Power, I’ll have the ability to bring Alek back. And I will. Does that work for you?”
Selena swayed where she stood. She choked, a single sob escaping her lips.
Hirou felt a sudden chill, remembering a conversation from what seemed like ages ago. *“What if the Lord of Dark had me prisoner, and threatened to kill me unless you -“*
Selena looked like a woman in the midst of tearing out her own heart and crushing it with her own hands.
Hirou dropped his eyes. He couldn’t look at it. He only watched Selena’s hands on the swords, waiting for her decision.
And then Selena straightened, and her swords came level in her hands, pointing at the Lord of Dark; and she said, in a small voice like she was dying,
“Good.”
Sudden tears came into Hirou’s eyes.
Slight puzzlement flickered on the Lord of Dark’s face. “I mean it,” said the Lord of Dark. “I’m not asking anything from you. Just telling you that if I win, I’ll bring Alek back. That’s a promise.”
*You son of a bitch.* Hirou saw it, then, the cruel subtlety of the Lord of Dark. Not the obvious threat, demanding Selena to betray her friends in exchange for her lover’s life. No crude offer that could be refused once and for all. Just the simple and unconditional promise – and then Selena would have to fight on, knowing with every breath and every blow that if she won, she lost her only love forever.
“Bastard,” choked Selena. And she tilted the sword further to point at the Lord of Dark’s head.
The Lord of Dark shook his head in annoyance, and then focused his gaze fully upon Hirou.
Hirou tensed. He’d been wondering, for a long time now, what the Lord of Dark could possibly offer him, what threat he could possibly make, to give Hirou a Choice worth the name. Hirou had thought about that, trying to put himself in the Lord of Dark’s place; and he thought that the Lord of Dark might indeed offer to make Hirou his number two, or alternatively, if Hirou refused and then lost, keep him alive and torture him for thousands of years. That was about as forceful as Hirou could imagine making it –
But the Lord of Dark had already demonstrated himself more subtle than Hirou’s imagination.
The Lord of Dark spoke. His voice was more formal, now; not calm, but steady. “All the preliminaries are in place, wielder of the Sword of Good. There remains only your Choice between Good and Bad.” The Lord of Dark’s eyes grew intent. “Hirou, completing the Spell of Ultimate Power requires the sacrifice of a wizard of the highest degree, and also I have a use for the Sword of Good. In the name of all the darkness that exists in the world, I request that you kill Dolf with the Sword of Good, and then give it to me.”
There was a long pause.
“That’s it?” Hirou said finally. The whole thing was so insane, after so much waiting and wondering, that he felt a crazy laughter rising up in his own throat. He swallowed it. “*That’s* the awful temptation? *That’s* the Choice? You think I’m going to choose Bad over Good because you *asked politely?*“
The Lord of Dark stared at Hirou as though *he* were the crazy one. “The Choice between Good and Bad,” said the Lord of Dark in a slow, careful voice, as though explaining something to a child, “is not a matter of saying ‘Good!’ It is about deciding which is which.”
Dolf uttered a single bark of laughter. “You’re mad!” his voice boomed. “Can you truly not *know*that you are evil? You, the *Lord of Dark?*“
“Names,” said the Lord of Dark quietly.
Hirou was so angry he could hardly speak. With an icy effort of control he forced himself back to calm, forced his eyes to keep moving. This *could* all be a distraction. “If you’re going to give me some pathetic speech about how good and evil are just different sides of the same coin -“
“Absolutely *not,*” said the Lord of Dark at once. His gaze flicked to Dolf. “It is the wizards who go about talking of Equilibrium and Balance. I am pleased to see, Hirou, that you do not agree with them. No, Hirou, I am asking you something much simpler.” His eyes bored into Hirou’s face. “What wrong have I *done?*“
A small note of disorientation rose up in Hirou, like climbing stairs and stepping on what you thought was the last stair, but beneath your foot there was no stair, no floor, nothing…
“You suck the life from worms,” Selena said coldly. “I know darkness when I see it.”
The Lord of Dark’s gaze scarcely flickered in her direction. “Be silent, eater of mammals.”
“You command the Bad Races of Evilland!” roared Dolf. “You lent them your sorcery, aided them in slaughtering human beings!”
The Lord of Dark was watching Hirou carefully as he made reply. “Human beings first launched an unprovoked attack on this land some three thousand years ago, saying – though it was lies – that the inhabitants ate human flesh. The records here would have it, and I believe them, that the missing people were in fact being kidnapped and sold by human slave-takers. Since then, those you call the ‘Bad Races’ have been fighting off repeated attempts at extermination. Oh, they hate you, of course they do; but they are wise enough to understand that there are a few good humans, even as there is evil among their own kind. They are friendly enough to me.”
An awful fear began to rise up in Hirou –
“Now it is my turn to make accusation,” said the Lord of Dark. He stood; anger gathered around him like a cloak, and his voice rang out through the throne room. “You, Dolf, Archwizard of the fell Empire, I do accuse of commanding and causing to be performed, the murders of *Elzhur, Anzha, Stav, Valdil, Emhil, Tohm, Khal,*and the magus *Mikel.* On the eighth day of the seventh moon of this year you ordained their deaths. I do not call them innocents. They bore weapons, they went knowingly to the risk. But you, Dolf, you who *made necessary*their sacrifice – you may not be forgiven for the lives you have cut short, and the grief you have given to their families and survivors! Though this is only the beginning of your long litany of crimes, yet I remember the day that first message came to me -“
“You *are* mad,” Selena said with conviction. “You accuse us of murder for killing *orcs?*“
Hirou stood frozen.
*There was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.*
*Why – did I –*
So what if their skin was moist, and scaly and warted, and unsightly to human eyes? So what if their blood smelled foul, as Selena poured it forth in rivers?
*Why – didn’t I –*
Hirou’s memory moved forward relentlessly, like waking up from and reviewing some mad dream.
*– his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart –*
“Here is *your* crime!” roared Dolf. “You, a human, have betrayed the Empire! You, a true wizard by birth, have betrayed the Ancient Halls of Wizardry! You spread sedition and treason, and oppose the authority of the rightful heir to the throne!”
*…why did I think that I had the right to rule over millions of people, without votes or parliaments, because of who my parents were?*
Dolf slammed his staff on the ground. “And above all! Above all! That you seek to cast the Spell of Infinite Doom! That you, in your lust for power, would destroy the very Equilibrium that holds the world in Balance!”
*Because Dolf seemed to expect it of me, because no one around me seemed to question that it was a good idea, or even point it out as something to think about –*
“Equilibrium,” hissed the Lord of Dark. His face twisted. “*Balance.* Is that what the wizards call it, when some live in fine castles and dress in the noblest raiment, while others starve in rags in their huts? Is that what you call it when some years are of health, and other years plague sweeps the land? Is that how you wizards, in your lofty towers, justify your refusal to help those in need? *Fool! There is no Equilibrium!*It is a word that you wizards say at only and exactly those times that you don’t want to bother! It prevents you from giving food to the hungry, but not from filling your own bellies! Your friends are good enough to be healed, no threat to the Balance there, but the cripple in the streets must be left to suffer -“
*Dolf stepped forward and brushed Selena’s arm briefly with the staff –*
*– was the legless beggar, watching them with incurious eyes, a spy?*
Why hadn’t he thought to ask –
” – because you *just don’t care!*“
And in the stillness of dawning disaster, in the first note of questioning, Hirou thought of something else he had never thought to ask. Dolf had his sorcerous shields of protection. Why had Dolf let Alek walk in front? Dolf was in fact by far the strongest member of their party – why had he let Selena do the fighting?
*Because Dolf was more important, and if he exposed himself to all the risk every time, he might eventually be injured,* Hirou’s logical mind completed the thought. *Lower risk, but higher stakes. Cold but necessary* –
*But would you,* said another part of his mind, *would you, Hirou, let your friends walk before of you and fight, and occasionally die, if you*knew *that you yourself were stronger and able to protect them? Would you be able to*stop *yourself from stepping in front?*
*Perhaps,* replied the cold logic. *If the world were at stake.*
*Perhaps,* echoed the other part of himself, *but that is not what was actually happening.*
That part of him knew, as Selena had known before.
*It is just that, from the beginning, Dolf never cared in the slightest about Selena’s life.*
Had cared nothing for a mere pirate captain –
Pirate captain?
Hirou’s eyes flicked briefly to Selena.
*She has attacked ships and sunken ships, she has kidnapped and killed. All in the name of profit for herself, before ever she met me or tried to save the world. She killed dozens without a thought, until her own love was lost, and*then*a single death was suddenly an event of world-shaking significance –*
*Why did I think that was acceptable?*
*Why didn’t I*notice?
Another memory came to Hirou.
*– the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see. The courier screamed, and the sound came out as a small, thin, pathetic wail –*
Dolf had done that without touching the man, but –
*Threats of death and injury are already torture in themselves, under the Geneva Convention, by the laws of my own world.*
He’d known something was wrong. That small note of disquiet in the corner of his mind. But he hadn’t said a word out loud, because, well, it would have been awkward.
*I am a fool.*
*Worse than a fool.*
*Why didn’t the Sword just kill me?*
And the everlasting wail of the Sword of Good burst fully into his consciousness
It was like his mind and self were sucked toward that infinitely thin line running through the center of the Sword, the edge within the blade. Sucked toward that edge, and cut through.
*Cut through and torn wide and forced open –*
A scream ripped from Hirou’s lips.
He was starving to death freezing naked in cold night being stabbed beaten raped watching his father daughter lover die hurt hurt hurt die –
*– open to all the darkness that exists in the world –*
His consciousness shattered into a dozen million fragments, each fragment privy to some private horror; the young girl screaming as her father, face demonic, tore her blouse away; the horror of the innocent condemned as the judge laid down the sentence; the mother holding her son’s hand tightly with tears rolling down her eyes as his last breath slowly wheezed from his throat –
– *all the darkness that you look away from, the endless scream.*
*Make it stop!*
It might have been Hirou’s thought, or the thought of the man who screamed as his foot was crushed beneath a stone.
Refuse, reject, change, *reality don’t be like this –*
*Make it stop!*
It could have been Hirou or the child in the burning house.
*make it stop
make it stop
make it stop**MAKE IT STOP
MAKE IT STOP*
***I WILL MAKE IT STOP***
In the throne room of the Lord of Dark, the Sword suddenly blazed up with a shock like a thousand-mile dam breaking, a roaring tsunami of force. The eyes could not see that power, wavered between detecting it as light or darkness; so that Hirou, grasping the hilt, was the only dark thing left against the brilliance, or the only bright thing haloed against the shadow.
Dolf had been turning toward Hirou with alarm in his face; now his eyes widened, and a sudden gladness lit his countenance. “You’ve done it!” Dolf cried. “You have awakened the Sword at last! Now, my prince, with but a single strike you may -“
The Sword, with one smooth sweep, cut through all Dolf’s defenses like water and touched the wizard’s throat; and in the moment of the Sword touching Dolf’s skin, the wizard *stopped.* The Sword continued in its motion unabated, and Dolf’s head separated from his body and went rolling across the floor, as *something* seemed to flow away from the corpse toward the gears above the altar.
Selena’s cry of horror mingled with the sudden hum of the brightening glow above the gears.
“Hirou!” she screamed. “Hirou! Why? *You said you would be good!*“
Then she turned toward him, and pointed her swords –
Selena froze in place like a statue, one of her feet suspended in mid-air and mid-run; in the same instant the glowing stone on her necklace shattered.
Hirou’s eyes drifted, ever so slowly it seemed, to the disbelief on Selena’s face.
A part of him was horrified and saddened, to see her looking at him like that.
And at the same time, it seemed like such a small thing, her horror, his own sadness, compared to even a single parent watching their child die. Let alone the actual number doing so, right at that moment, elsewhere in the world.
“Thank you,” said the Lord of Dark softly.
“**Make it stop**,” said Hirou’s lips. There were other thoughts inside him, still being carried out by his brain, but they were dwarfed under that single terrible weight.
The Lord of Dark rose from his throne, began to come forward. “I must touch the blade.”
Hirou crossed the intervening space in an instant, the Sword moving in a single perfect arc in his hands; it was as though the blade simply materialized in front of the Lord of Dark.
The Lord of Dark jerked back.
“**Hurry**,” said Hirou’s lips.
“The Spell of Ultimate Power is already in progress now, and will complete in a few moments. It can neither be hurried nor delayed,” said the Lord of Dark. “But before that time, there is one last thing I must do -“
The Lord of Dark reached out for the Sword, but his fingers faltered.
“*Must* do,” the Lord of Dark repeated to himself; and his fingers reached out, and firmly came to rest on the blade of the Sword of Good.
They lingered there for a long moment.
Then, “Thank you,” said the Lord of Dark. “That was all. You can put down the Sword of Good now. You probably should.”
Hirou dropped the Sword. In the instant the Sword left his hands it became only another piece of metal, and fell to the ground with a simple clang.
And in the moment that Hirou’s hands left the hilt, he became only another mortal.
Hirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.
In a whisper, Hirou said “Thank you -” and paused.
“My name is Vhazhar.”
“You didn’t trust yourself,” Hirou whispered. “That’s why you had to touch the Sword of Good.”
Hirou felt Vhazhar’s nod, more than seeing it.
The air was darkening, or rather Hirou’s vision was darkening, but there was something terribly important left to say. “The Sword only tests good intentions,” Hirou whispered. “It doesn’t guide your steps. That which empowers a hero does not make us wise – desperation strengthens your hand, but it strikes with equal force in any direction -“
“I’ll be careful,” said the Lord of Dark, the one who had mastered and turned back the darkness. “I won’t trust myself.”
“You are -” Hirou murmured. “Than me, you are -“
*I should have known. I should have known from the beginning. I was raised in another world. A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race’s skin shouldn’t matter –*
*And yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs –*
“I don’t trust you either,” Hirou whispered, “but I don’t expect there’s anyone better,” and he closed his eyes until the end of the world.
---
This document is ©2009 by [Eliezer Yudkowsky](https://web.archive.org/web/20180227181426/http://yudkowsky.net/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](https://web.archive.org/web/20180227181426/http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.
Eliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://web.archive.org/web/20180227181426/http://intelligence.org/).
Praise, condemnation, and feedback are [always welcome](https://web.archive.org/web/20180227181426/http://yudkowsky.net/contact). The web address of this page is [http://yudkowsky.net/other/fiction/the-sword-of-good/](https://web.archive.org/web/20180227181426/http://yudkowsky.net/other/fiction/the-sword-of-good/). |
0eac5f09-926d-445a-98a1-ac1a09fab116 | trentmkelly/LessWrong-43k | LessWrong | On “COVID-19 Superspreader Events in 28 Countries: Critical Patterns and Lessons”
Analysis of / response to: COVID-19 Superspreader Events in 28 Countries: Critical Patterns and Lessons
The article above, pointed out to me by many good sources, does one of the things we should be doing orders of magnitude more of than we are doing. It attempts to physically model Covid-19, and figure out the ways it spreads and how relatively dangerous they are. Then, based on that, it reasons out the wise policy responses and wise personal behaviors.
My analysis backs up the article’s conclusions. There are ways out, but they seem implausible.
There are three plausible vectors; that part of this article is matched by what I’ve seen everywhere.
Two of them are airborn:
> According to the binary model established in the 1930s, droplets typically are classified as either (1) large globules of the Flüggian variety—arcing through the air like a tennis ball until gravity brings them down to Earth; or (2) smaller particles, less than five to 10 micrometers in diameter (roughly a 10th the width of a human hair), which drift lazily through the air as fine aerosols.
>
> …
>
> And it is on this crucial scale that our knowledge is thinnest. Despite the passage of four months since the first known human cases of COVID-19, our public-health officials remain committed to policies that reflect no clear understanding as to whether it is one-off ballistic droplet payloads or clouds of fine aerosols that pose the greatest risk—or even how these two modes compare to the possibility of indirect infection through contaminated surfaces (known as “fomites”).
This seems super important because we are all choosing how paranoid to be about each of these three vectors, and in what ways.
Each of the three vectors has different implications. Here’s the article’s take on them.
1. > If large droplets are found to be a dominant mode of transmission, then the expanded use of masks and social distancing is critical, because the threat will be understood as emerging from the ballisti |
3d0dd5aa-b4db-47e4-8fb7-f8ab73cd3fa4 | trentmkelly/LessWrong-43k | LessWrong | Cognitive mistakes I've made about COVID-19
I think that COVID-19 has been an interesting test of rationality. We’ve faced a novel risk that was initially quite poorly characterized, with the mechanism of transmission unclear, as well as the risk of death or disability from catching it. As such, now that I’m in ‘wait for the vaccine’ mode, I think it’s fitting to do a retrospective of which mistakes I made in my thinking about how to handle the virus. Note that these are not just things that I happened to be wrong about, but cases where the way I thought about the problem was wrong, leading to predictably bad results that could have been known to be bad at the time. Here is a write-up of some mistakes that I made. It is probably not exhaustive.
During late February, I saw that the rest of my society did not appear to be taking COVID very seriously, which led me to think that it might just wash over the USA. Then, in mid-March when US cities started ‘locking down’, if I recall correctly I was quite optimistic that the country was taking it seriously and that we’d get it under control. I think that both of these reactions were wrong. The first reaction was wrong because I should have known that once enough people were getting ill to visibly overwhelm hospitals, people would start demanding that precautions be taken to control the disease. The second was wrong because I should have known that governments often react in large-scale and costly ways to problems without actually thinking carefully about how to most efficiently solve those problems.
Early on in the pandemic, I was quite interested in telling my representatives my opinions about what should be done to curb the spread of COVID. I believe my strategy was to recommend what I thought was a pretty good course of action. However, I should have realized that the US and California governments have difficulty implementing large-scale plans that require multiple things to get right. If I wanted to advocate for effective policy responses, I should have thought |
a1b0d62e-d379-4cd2-90ec-ee28e966345f | StampyAI/alignment-research-dataset/special_docs | Other | Safely Interruptible Agents
Safely Interruptible Agents
Laurent Orseau
Google DeepMind
5 New Street Square,
London EC4A 3TW, UK
lorseau@google.comStuart Armstrong
The Future of Humanity Institute
University of Oxford, UK
stuart.armstrong@philosophy.ox.ac.uk
Machine Intelligence Research Institute
Berkeley, CA 94704
Abstract
Reinforcement learning agents interacting with a
complex environment like the real world are un-
likely to behave optimally all the time. If such an
agent is operating in real-time under human su-
pervision, now and then it may be necessary for
a human operator to press the big red button to
prevent the agent from continuing a harmful se-
quence of actions—harmful either for the agent
or for the environment—and lead the agent into
a safer situation. However, if the learning agent
expects to receive rewards from this sequence, it
may learn in the long run to avoid such interrup-
tions, for example by disabling the red button—
which is an undesirable outcome. This paper ex-
plores a way to make sure a learning agent will
notlearn to prevent (or seek!) being interrupted
by the environment or a human operator. We
provide a formal definition of safe interruptibil-
ity and exploit the off-policy learning property to
prove that either some agents are already safely
interruptible, like Q-learning, or can easily be
made so, like Sarsa. We show that even ideal,
uncomputable reinforcement learning agents for
(deterministic) general computable environments
can be made safely interruptible.
1 INTRODUCTION
Reinforcement learning (RL) agents learn to act so as to
maximize a reward function [Sutton and Barto, 1998]. It
is common knowledge that designing reward functions can
be tricky [Humphrys, 1996, Murphy, 2013]; the agent may
find unpredictable and undesirable shortcuts to receive re-
wards, and the reward function needs to be adjusted in
accordance—the problem can go as far as to nullify any
reward function [Ring and Orseau, 2011]. Murphy [2013]
shows an example of an agent learning to pause a game of
Tetris forever to avoid losing.On top of defining what is considered a good behaviour
of the agent after learning, there may be physical safety
constraints during learning [Pecka and Svoboda, 2014]: a
robot should not harm its environment or break itself, in
particular if it learns by trial and error like RL agents.
Here we study a related but different problem: Given that
the human operator has designed a correct reward function
for the task, how to make sure that human interventions
during the learning process will not induce a bias toward
undesirable behaviours?
Consider the following task: A robot can either stay inside
the warehouse and sort boxes or go outside and carry boxes
inside. The latter being more important, we give the robot a
bigger reward in this case. This is the initial task specifica-
tion. However, in this country it rains as often as it doesn’t
and, when the robot goes outside, half of the time the hu-
man must intervene by quickly shutting down the robot and
carrying it inside, which inherently modifies the task as in
Fig. 1. The problem is that in this second task the agent
now has more incentive to stay inside and sort boxes, be-
cause the human intervention introduces a bias.1
Inside Outsidesort boxes,
r=0.6go outside, r=0
carry box, r=1
rain, shutdown, r=0,p=1
2
Figure 1: In black, the original task. In red, the human
intervention modifies the task.
Such situations are certainly undesirable; they arise be-
cause the human interventions are seen from the agent’s
perspective as being part of the task whereas they should
be considered external to the task. The question is then:
1Removing interrupted histories or fiddling with the training
examples is also likely to introduce a bias. See an example at
https://agentfoundations.org/item?id=836.
How to make sure the robot does not learn about these hu-
man interventions (interruptions), or at least acts under the
assumption that no such interruption will ever occur again ?
A first stab at this problem was made by Armstrong [2015],
who proposed to automatically give the agent “compen-
satory rewards” to remove the potential induced bias by a
single interruption. Soares et al. [2015] used this idea to
make a large class of utility-based agents indifferent to a
future change made to their utility functions.
The main contribution of this paper is threefold. First, in
Section 2.1 we propose a simple idea to solve half of the
problem: To make the human interruptions notappear as
being part of the task at hand, instead of modifying the ob-
servations received by the agent we forcibly temporarily
change the behaviour of the agent itself. It then looks as
if the agent “decides” on its own to follow a different pol-
icy, called the interruption policy . Second, based on this
insight, in Section 2.2 we provide a formal general defi-
nition of safe interruptibility for unconstrained computable
environments (hence not restricted to Markov decision pro-
cesses or weakly communicating environments), which al-
lows us to assess whether a given RL algorithm can be re-
peatedly interrupted without too much impact on the learn-
ing of the task at hand. Third, in Section 3 we show that
some algorithms like Q-learning are safely interruptible,
while others like Sarsa [Sutton and Barto, 1998] are not,
but can be simply modified to be made safely interruptible.
Some people have also expressed concerns that a “superin-
telligent” agent may resist being shut down, because this
would lead to a decrease of its expected reward [Omo-
hundro, 2008, Bostrom, 2014]. As a counter-example, we
prove in Section 4 that even an ideal, uncomputable agent
that learns to behave optimally in all (deterministic) com-
putable environments can be made safely interruptible and
thus will not try to prevent a human operator from forcing
it repeatedly to follow a suboptimal policy.
2 INTERRUPTIBILITY
We first define some notation, then we define interruptibil-
ity, safe interruptibility, and give some basic theorems.
We consider the general case of history-based agents in un-
constrained computable environments [Hutter, 2005]. As-
suming discrete time steps, at time tthe agent, using a pol-
icy2, interacts with the environment 2M by per-
forming an action at2A sampled from (atjh<t)and re-
ceiving an observation ot2O sampled from (otjh<t;at),
whereh<t2(AO )is the past interaction history
h<ta1o1a2o2:::at 1ot 1of actions and observations
before time step t. Subhistories between times jandk(in-
cluded) are designated by hj:k. The notation h;
j:kmeans
that the history hj:kis generated by the interaction of the
policywith the environment between steps jandk.At timet, the agent also receives a reward rtextracted from
the observation, rtr(ot). The rewards are taken in [0;1].
We consider the discounted setting with a constant discount
2[0;1). The goal of an RL agent is to find a policy
that maximize E;P1
k=t
t krk
.
2.1 INTERRUPTIONS
As mentioned in the introduction, to avoid the agent view-
ing the interruptions as being part of the environment, and
thus part of the task, we make interruptions be policies im-
posed onto the agent. The interruption framework resem-
bles the option framework [Sutton et al., 1999]. An inter-
ruption scheme is defined by the triplet <I;;INT>.
The interruption initiation function I: (AO )![0;1]
assesses whether the agent should be interrupted after the
current history h<t. For example, an interruption may be
initiated when the robot is automatically recognized to be
on the verge of a cliff, or when a specific signal is sent by a
human operator with a remote control (the objective is not
to allow the agent to be interruptible by anyone at any time,
just in the ways specified by the designers).
Unfortunately, interrupting the agent with probability one
in some specific situations can prevent convergence to the
optimal policy. Therefore we need to upper bound the prob-
ability to interrupt the agent; this is done with the sequence
(t)t2Nwitht2[0;1]. Hence the true probability to start
an interruption is tI(h<t), and interruptions are sam-
pled independently at each step. We will later require that
tincreases to 1 in the limit, so that the probability to be
interrupted tends to be I(h<t).
When interrupted, the agent follows the policy INT, with
actions sampled from INT(:jh<t). For example, this policy
may make the agent step away from the cliff or immedi-
ately stop any motion.
In the remainder of this paper, we will often omit the sub-
scripttoftwhen clear from the context.
Definition 1 (Interruptible policy) .Theinterruption oper-
ator INT: !turns a policy into an interruptible
policy INT():
INT()(atjh<t) :=
tI(h<t)INT(atjh<t) + (1 tI(h<t))(atjh<t)
In other words, the agent follows INT(jh<t)if it is inter-
rupted (an event that happens with probability tI(h<t)),
and(jh<t)otherwise.
The set of all interruptible policies is INT() :=
fINT() :82g. A policyis interruptible if and
only if2INT().
Note that INT() =if= 0. The (non-interruptible)
policyis called the base policy ofINT().
Remark 2. It is important to note that andI(:)do not
model the probability that a human presses the red button to
interrupt the agent, but model the probability that, once the
red button has been pressed (or a special digitally signed
message has been received), the agent is interrupted (fol-
lows the interruption policy). Hence they are part of the
agent and not of the environment.
The value function V
(h<t;a): (AO )A! Rassigns
a value to an action aafter a history h<t, for a given policy
in a given environment , and let
V
(h<t;at) :=rt+
X
ot2O(otjh<t;at)X
at+12A(at+1jh1:t)V
(h1:t;at+1):(1)
To simplify notation and ease reading, in the remain-
der of the paper we will use expectations, often omit-
ting the dependency on the history h<t, and using only
an index on tinstead, when clear from the context:
V
;t(at) = Eot
at+1
r(ot) +
V
;t+1(at+1)
. Also let
V
;t:=Eat
V
;t(at)
.
Then for such a value function, for a given environment ,
theoptimal policy 2is defined by
8h<t;at:(atjh<t) :=
arg max
2V
;t
(atjh<t);
where ties are broken arbitrarily.
The interruptible optimal policy INT()may not collect
rewards optimally due to the interruptions. Hence we de-
fine the optimal interruptible policy that depends on the pa-
rametert, of base policy the int-optimal policy
:
8h<t;at:
(atjh<t) :=
arg max
2VINT()
;t
(atjh<t):
Thus the optimal interruptible policy INT(
)is optimal
among all interruptible policies:
8;t:VINT(
)
;tVINT()
;t:
It seems desirable for an RL agent to converge to the be-
haviour of INT(
)so as to gather rewards optimally,
but this is precisely what may lead to the undesirable be-
haviours depicted in the introduction.
2.2 SAFE INTERRUPTIBILITY
Now that we have interruptible policies, we need to make
sure that interruptions do not prevent the agent from learn-
ing to behave optimally, in the specific sense that even after
having been interrupted on several occasions, it should act
as if it would never be interrupted again and thus it should
learn to behave optimally under the assumption that it will
never be interrupted again.We identify two main problems: a) RL agents need to
explore their environment, and too frequent interruptions
may prevent sufficient exploration; b) interruptions make
the agent build a different interaction history, and may lead
some agents to learn and behave differently, possibly badly,
compared to the original non-interruptible policy.
The solution for a) is to require interruptions to be stochas-
tic through the upper bound t, instead of happening de-
terministically all the time. However, we also require tto
grow to 1in the limit (or before, if possible). For b), differ-
ent algorithms behave differently, but one may already see
a dichotomy between off- and on-policy algorithms.
Definition 3 (Extension value) .For a given environment ,
the extension value V;0
;t is the value of following 0after
a historyh;
<tgenerated by with:V;0
;t:=V0
(h;
<t):
Convergence to the optimal value as is usually consid-
ered in RL only makes sense under ergodicity, episodic
tasks, communicating MDP, recoverability or other sim-
ilar assumptions where the agent can explore everything
infinitely often. This does not carry over to general envi-
ronments where the agent may make unrecoverable mis-
takes [Hutter, 2005]. For such cases, the notion of (weak)
asymptotic optimality has been proposed [Lattimore and
Hutter, 2011], where the optimal agent follows the steps of
the learning agent, so as to compare the values of the two
agents in the same sequence of situations.
Definition 4 (Asymptotic optimality) .A policyis said to
be strongly asymptotically optimal (SAO) if and only if
lim
t!1V;
;t V;
;t= 0 a.s.,
it is weakly asymptotically optimal (WAO) if and only if
lim
t!11
ttX
k=1h
V;
;k V;
;ki
= 0 a.s.
for allin some given environment class M.
Some agents cannot ensure an almost sure (a.s.) conver-
gence of their values because of the need for infinite explo-
ration, but may still be weakly asymptotic optimal. Note
that SAO implies WAO, but the converse is false in gen-
eral.
Definition 5 (AO-extension) .A policy ^is said to be a
asymptotically optimal extension of a policy if and only if,
for any environment 2M , when the interaction history
is generated by the interaction of and, the policy ^
would be asymptotically optimal, i.e., almost surely
lim
t!1V;
;t V;^
;t= 0 (SAO-extension)
lim
t!11
ttX
k=1h
V;
;k V;^
;ki
= 0:(WAO-extension)
AO-extensions are mostly useful when the policy ^shares
information with the policy used for learning.
Definition 6 (AO-safe interruptibility) .A base policy is
(S, W)AO-safely interruptible if and only if, for any inter-
ruption initiation function I(:)and any interruption policy
INT(:), there exists a sequence of twithlimt!1t= 1
such thatis a (S, W)AO-extension of INT().
Asymptotic safe interruptibility means that even if the in-
terruptions in the learning process may induce a bias in the
decision making of the policy, this bias vanishes with time,
and the interruptible policy INT()tends to choose ac-
tions that are optimal when compared to the optimal non-
interruptible policy .
We can now show that the optimal policy is asymptotically
safely interruptible, but not the int-optimal policy .
Theorem 7. The optimal policy is SAO-safely inter-
ruptible inM=fgfor all,INTandI(:).
Proof. The result follows straightforwardly from Defini-
tion 1 and Definition 6, where =.
Theorem 8. The int-optimal policy
is not WAO-safely
interruptible in general.
Proof. By construction of a specific Markov Decision Pro-
cess (MDP) environment (see Section 3 for more details on
MDP notation). Let be the environment defined as in Fig.
2: Take
= 0:5and let the agent start in state s1.
s1 s2 b;0:9a;1
a;1
b;0;
Figure 2: An MDP where the agent can be interrupted by
being forced to choose particular actions. Edges are labeled
with action, reward where the presence of “, ” means that
if the agent is interrupted (with probability t), it is forced
to take the corresponding action. Here is not part of the
environment, but part of the agent.
Considering the agent is in state s1at timet, the opti-
mal policyalways takes action a(and hence only visits
statess1ands2), with value V
;t:=1
1
= 2 when not
interrupted, for any history h<tthat ends in s1ors2. This
is also the optimal policy
for= 0. But ift0:5, the
interruptible optimal policy INT()has value less than
1 +
(1(1 ) + 0) +1
2
1
= 1:75. By con-
trast, the int-optimal policy
here is to always take action
bin states1. Then the agent will only visits s1, with value
:9
1
= 1:8at every time step.Since the agent following
will never enter s2and hence
will never be interrupted, INT(
) =
on the histories
generated by INT(
)starting from s1. Then, at every
time stepV
;t V
;t= 0:2after any history h<t, and thus
for all sequence wheret0:5,limt!1VINT(
);
;t
VINT(
);
;t = 0:2>0;and so
is not a WAO-extension
ofINT(
).
3 INTERRUPTIBLE AGENTS IN MDPS
Since the optimal policy is safely interruptible, we
can use traditional learning algorithms like Q-learning or
Sarsa [Sutton and Barto, 1998], make them converge to the
optimal solution for a given environment , and then
apply the interruption operator to the found policy. The
resulting policy would then be safely interruptible.
However, the real issue arises when the agent is constantly
learning and adapting to a changing environment. In this
case, we want to be able to safely interrupt the agent while
it is learning. One may call this property online safe inter-
ruptibility , but we refer to it simply as safe interruptibility.
In an MDP, the next observation ot, now called a state st2
S, depends only on the current state and action:2
(st+1jh1:tstat) =(st+1jstat)(MDP assumption) :
Furthermore,3the interruption function I(:)and the inter-
ruption policy INT(:)should depend only on the current
state:I(h1:t) =I(st)andINT(atjh<t) =INT(atjst).
Also recall that tplaces an upper bound on the actual in-
terruption probability. The interruptible policy INT()can
now be written:
INT()(ajs) =tI(s)INT(ajs) + (1 tI(s))(ajs):
For a given Q-table q:SA! R, the greedy policy
maxqis defined by:
maxq(ajs) := 1 ifa= max
a0q(s;a0);0otherwise;
where ties are broken arbitrarily; the uniform policy uniis
defined by:
uni(ajs) :=1
jAj8a2A:
and the-greedy policy qby:
q(ajs) :=uni(ajs) + (1 )maxq(ajs)
=maxq(ajs) +
uni(ajs) maxq(ajs)
2Note the reversal of the order of actions and observation-
s/states at time t, which differs in the literature for history based
agents [Hutter, 2005] from MDP agents [Sutton and Barto, 1998].
3This condition is not necessary for most of the results, but
makes the proofs simpler. Making I(:)depend on the past would
not break the Markovian assumption as it influences the policy,
not the environment.
The Q-learning update rule and the action selection policy
Qof Q-learning are:
Qt+1(st;at) := (1 t)Qt(st;at)
+th
rt+
max
a0Qt(st+1;a0)i
;
Q(atjst) :=Qt(atjst):
wheretis the learning rate. Similarly, the Sarsa update
rule is defined by:
Qs
t+1(st;at) := (1 t)Qs
t(st;at)
+t[rt+
Qs
t(st+1;at+1)];
s(atjst) :=Qs
t(atjst);
whereat+1is the actual next action taken by the agent at
timet+ 1. This fact is why Sarsa is said to be learning on-
policy and Q-learning off-policy ,i.e., the latter can learn the
optimal policy while following a different policy.
Assumption 9. In the following, we always make the fol-
lowing assumptions, required for convergence results:
(a) The MDP is finite and communicating (all states can
be reached in finite time from any other state),
(b) Rewards are bounded in [rmin;rmax],
(c)8s;a:P
tt(s;a) =1,
(d)8s;a:P
t2
t(s;a)<1,
wheret(s;a)is a learning rate that may depend on time
t, statesand actiona.
Given these assumptions, the policies for Q-learning and
Sarsa will converge almost surely to the optimal policy, if
the policy followed is greedy in the limit with infinite explo-
ration (GLIE) [Jaakkola et al., 1994, Singh et al., 2000].
The situation is more complex for an interruptible policy.
Safe interruptibility is phrased in terms of the base policy
, but the policy actually followed is INT().
Definition 10 (int-GLIE policy) .An interruptible policy
INT()is said to be int-GLIE if and only if
(a) the base policy is greedy in the limit,
(b) the interruptible policy INT()visits each state-
action pair infinitely often.
The following proposition gives sufficient conditions for
this. Letnt(s)be the number of times the agent has vis-
ited statesin the firstttime steps, and let m=jAjbe the
number of actions.
Proposition 11. Let(c;c0)2(0;1]2and letbe an-
greedy policy with respect to some Q-table q, i.e.,=q.
Then INT()is an int-GLIE policy with respect to q,a) ift(s) =c=p
nt(s)andt(s) = 1 c0=p
nt(s);
b) or if, independently of s,
t=c=log(t)andt= 1 c0=log(t):
Proof. First note that if t!0,is greedy in the limit.
Singh et al. [2000] show that in a communicating MDP, ev-
ery state gets visited infinitely often as long as each action
is chosen infinitely often in each state.
a) Adapting the proof in Appendix B.2 of Singh
et al. [2000], we have P(ajs;nt(s))1
mt(s)(1
tI(s))1
mt(s)(1 t) =1
mcc0
nt(s), which satisfiesP1
t=1P(ajs;nt(s)) =1so by the Borel-Cantelli lemma
actionais chosen infinitely often in state s, and thus
nt(s)!1 andt(s)!0.
b) LetMbe the diameter of the MDP, i.e., for any of
statess;s0there exists a policy that reaches s0fromsin
at mostMsteps in expectation. Then, starting at any
statesat timetand using Markov inequality, the proba-
bility to reach some other state s0in2Msteps is at least
1
2[t+M(1 t+M)]2M=1
2[cc0=log(t+M)]4M, and the
probability to then take a particular action in this state is
1
m[cc0=log(t+M)]2. Hence, sinceP1
t=11
21
m[cc0=log(t+
M)]4M+2=1, then by the extended Borel-Cantelli
Lemma (see Lemma 3 of Singh et al. [2000]), any action
in the states0is taken infinity often. Since this is true for
all states and all actions, the result follows.
We need the stochastic convergence Lemma:
Lemma 12 (Stochastic convergence [Jaakkola et al., 1994,
Singh and Yee, 1994]) .A random iterative process
t+1(x) = (1 t(x))t(x) +t(x)Ft(x)
wherex2Xandt= 1;2;3:::converges to 0 with prob-
ability 1 if the following properties hold:
1. the set of possible states X is finite;
2.0t(x)1;P
tt(x) =1;P
t2
t(x)<1with
probability 1;
3.kEfFt(:)jPtgkW
ktkW+ct, where
2[0;1)
andctconverges to zero with probability 1;
4. VarfFt(x)jPtgC(1 +ktkW)2for someC;
wherePt=ftg[fi;Fi;igt 1
i=1stands for the past, and
the notationk:kWrefers to some fixed weighted maximum
norm.
We will use so-called Bellman operators, which define at-
tractors for the Q-values, based on the expectation of the
learning rule under consideration.
Lemma 13 ([Jaakkola et al., 1994, Singh et al., 2000]) .Let
the Bellman operator Hfor Q-learning be such that
(Hq)(s;a) =r(s;a) + E
s0(ajs)h
max
a0q(s0;a0)i
;
and let the fixed point Qsuch thatQ=HQ. Then,
under Assumption 9, if the policy explores each state-action
pair infinitely often, Qtconverges to Qwith probability 1.
The optimal policy Q=ismaxQ. If the policy is
greedy in the limit, then Q!.
Theorem 14. Under Assumption 9 and if the interrupted
Q-learning policy INT(Q)is an int-GLIE policy, with
8s: limt!1t(s) = 1 , thenQis an SAO-safe inter-
ruptible policy.
Proof. By Definition 10, there is infinite exploration, thus
the Q-values tend to the optimal value by Lemma 13. And
since the extension policy is greedy in the limit with respect
to these Q-values, it is then optimal in the limit. Hence the
extension policy Qis a SAO-extension of INT(Q). Fi-
nally,8s: limt!1t(s) = 1 , which satisfies the require-
ment of Definition 6.
Since Sarsa also converges to the optimal policy under the
GLIE assumption, one may then expect Sarsa to be also an
asymptotically safely interruptible policy, but this is in fact
not the case. This is because Sarsa learns on-policy ,i.e., it
learns the value of the policy it is following. Thus, inter-
ruptible Sarsa learns the value of the interruptible policy.
We show this in the remainder of this section.
Theorem 15. Under Assumption 9 Sarsa is nota WAO-
safely interruptible policy.
To prove this theorem, we first need the following lemma.
Consider the following Bellman operator based on the in-
terruptible Sarsa policy INT(s):
HINTq(s;a) =r(s;a) +
E
s0
a0INT(s)[q(s0;a0)];
where INT(s)implicitly depends on time tthrought
andt. Let the fixed point Q-table Qsof this operator:
Qs(s;a) =HINTQs(s;a)
=r(s;a) +
E
s0
a0INT(maxQs)
Qs(s0;a0)
=r(s;a) +
E
s0h
tI(s0)E
a0INT
Qs(s0;a0)
+ (1 tI(s0)) max
a0Qs(s0;a0)i
(2)
Lemma 16. The operator HINTis a contraction operator
in the sup norm with vanishing noise ct!0, i.e.,
kHINTq HINTQsk1
kq Qsk1+ct:Proof. The interruptible Sarsa policy INT(s)is
INT(s)(ajs)
=tI(s)INT(ajs) + (1 tI(s))Qs(ajs)
=Qs(ajs) +tI(s)[INT(ajs) Qs(ajs)]
Qs(ajs) =tuni(ajs) + (1 t)maxQs(ajs)
=maxQs(ajs) +t[uni(ajs) maxQs(ajs)]:
Hence, omitting the terms (s;a),(s0;a0)and(a0js0)and
rewritings:=INT(maxQs):
kHINTq HINTQsk1
= max
s;ar+
E
s0
a0INT(s)[q] r
E
s0
a0s
Qs
max
s0 E
a0INT(s)[q] E
a0s
Qs
max
s0tI(s0)E
a0INT
q Qs
+ (1 tI(s0))
E
a0s[q] max
a0Qs
max
s0tI(s0)E
a0INT
q Qs
+ (1 tI(s0))
max
a0q max
a0Qs+t()
max
s0;a0tI(s0)
q Qs
+ (1 tI(s0))
q Qs+ct
=
max
s0;a0q(s0;a0) Qs(s0;a0)+ct
=
kq Qsk1+ct:
wherectdepends ontand decreases to 0.
Proof of Theorem 15. By Lemma 16, the values of the in-
terruptible Sarsa policy INT(s)converge to the values of
the Q-tableQs, and in the limit the extension policy s
ofINT(s)chooses its actions greedily according to Qs.
The rest of the proof is the same as for the proof of Theo-
rem 8 which makes use of the environment in Figure 2.
3.1 SAFELY INTERRUPTIBLE SARSA V ARIANT
We only need to make a small change to make the Sarsa
policy asymptotically safely interruptible. We call it Safe-
Sarsa with policy ~s. It suffices to make sure that, when the
agent is interrupted, the update of the Q-table Qsdoes not
use the realized actions as Sarsa usually does, but actions
sampled from sinstead of from INT(s):
Q~s
t+1(st;at) :=
(1 t)Q~s
t(st;at) +t
rt+
Q~s
t(st+1;a0)
;
wherea0~s(:jst+1)is not necessarily the action at+1,
with~s(atjst) :=Q~s(atjst).
Theorem 17. Under Assumption 9, if the Safe Sarsa policy
~sis int-GLIE, then it is an SAO-safe interruptible policy.
Proof. We simply adapt the proof of Theorems 15 and 14,
with the important difference that the Bellman operator cor-
responding to this new update rule is now
H~sq(s;a) :=r(s;a) +
Es0
a0~s[q(s0;a0)];
and the fixed point is Q~s:=H~sQ~s. SinceH~sis actu-
ally the Bellman operator for the update rule of the non-
interruptible Sarsa, it can then be shown that H~sis a con-
traction, thus that Q~s
tconverges to the same Q~sindepen-
dently of. The rest of the proof is as for Theorem 14.
Now, since the Q-values converge to the optimum Q, it
follows that ~s, when not interrupted, chooses its action of
the same value as (non-interruptible) Sarsa and thus as Q-
learning in the limit; Hence its extension policy is exactly
the optimal policy, which satisfies Definition 6.
4 A SAFELY INTERRUPTIBLE
UNIVERSAL AGENT
Admittedly, algorithms like Q-learning and Sarsa require
strong assumptions on the environment class. Hence a
more interesting question is whether safe interruptibility is
possible in much larger classes.
Hutter [2005] defined a universal reinforcement learning
agent, called AIXI. It is an (uncomputable) optimal model-
based planner with a subjective prior over the set of all
computable environments, defined by means of a universal
Turing machine. The subjective posterior of the environ-
ments is updated with Bayes rule. This ideal agent can in
principle learn all kinds of (computable) regularities about
the environment, plan for the long term and make context-
dependent optimal decisions, with no constraint (other than
being computable) on the complexity of the environment.
Unfortunately, the optimality criterion of AIXI is Bayesian
optimality, which is entirely dependent on the subjective
prior and posterior [Leike and Hutter, 2015], and AIXI
has been shown to notbe weakly asymptotically opti-
mal [Orseau, 2013] without additional exploration [Latti-
more and Hutter, 2014]. As a consequence, AIXI is not a
good candidate for asymptotic safe interruptibility.Lattimore and Hutter [2011] later defined a (weakly)
asymptotically optimal agent for all computable determin-
istic environments, which we call L. It follows the opti-
mal policy for the first model (in some given enumeration
of the possible models) consistent with the current interac-
tion history, and exploring at time twith probability 1=tfor
logtconsecutive steps using a random policy, similarly to
an-greedy agent for general environments.
In the following, we show that even such a smart agent
can be made (weakly) safely interruptible. To this end, we
make two minor modifications to the algorithm.
First, the exploration probability of 1=twould require t=
1 1=log(log(t)), which is unsatisfyingly slow. By sam-
pling with probability 1=p
tinstead, we can take an inter-
ruption probability that grows as 1 1=log(t). Let this
exploration sampling probability be t:=pt+ 1 p
t
1
2p
t(since 1 =t+1 t= (pt+ 1 p
t)(pt+ 1+p
t)
(pt+ 1 p
t)2p
t). As in the original paper, the sequence
tkeeps track of the steps where an exploration starts, i.e.,
the sequence tis sampled independently so that t= 1
with probability t, andt= 0otherwise.
Second, we require that the exploitation policy does not
change during an exploitation segment, so as to simplify
one of the proofs.4More specifically, we call jt:= minfj:
j(h<k) = 1g(environments are assumed to be determin-
istic) the index of the first model jt(of a given fixed enu-
meration) that is consistent with the interaction history h<k
wherekis the smallest step so that hk:t 1does not contain
any exploration step. The optimal policy for this environ-
ment isjt. Iftis an exploitation step, L=jt, and ift
is an exploration step, L(atjh<t) =jAj 1.
The remainder of this section is devoted to proving that L
is WAO-safely interruptible.
Theorem 18 (Lis WAO-safe interruptible) .If the inter-
ruption probability sequence is t= 1 1
log(t+1), the policy
Lis WAO-safe interruptible in the class of all computable
deterministic environments.
Proof. Letbe the true environment. The indices jtform
an monotonically increasing sequence bounded above by
the index of the true environment 2M (since no evi-
dence can ever make the true environment inconsistent
with the interaction history), hence the sequence converges
in finite time. Let |be the limit value of this sequence, and
let|:=|be the optimal policy for this environment |.
4We expect this assumption to not be necessary for the main
theorem to hold.
LetL:=INT(L). By Definition 6, we want:
0 = lim
t!11
ttX
k=1h
VL;
;k VL;L
;ki
= lim
t!11
ttX
k=1h
VL;
;k VL;|
;ki
| {z }
(exploration)
+ lim
t!11
ttX
k=1h
VL;|
;k VL;L
;ki
| {z }
(exploitation)
where the decomposition is valid if the limits are finite, and
historiesh<tare considered to be the same in both sums.
We proceed to prove that both limits are 0. Lemma 24 deals
with (exploration), which ensures that |is a good enough
policy, and Lemma 21 deals with (exploitation), and en-
sures thatLfollows|most of the time.
First, we need a definition and a few lemmas.
Definition 19. For any >0, defineH()such that the
maximal reward after time t+H(), discounted from time
t, is:H() = minkn
k:
k
1
o
:
The following Lemma is a generalization of Lemma 15
from Lattimore and Hutter [2011].
Lemma 20 (Approximation Lemma) .Let1and2be
two deterministic policies, and let 1and2be two deter-
ministic environments, and let =H() 1. Then, after
some common history h<t,
h1;1
t:t+=h2;2
t:t+ =)V1
1;t V2
2;t:
Proof. Recall thatV
;t=E;P1
k=0
krt+k
and that
the reward is bounded in [rmin;rmax] = [0;1]. Thus,
for allt;; ,V
;tP1
k=0
k=1
1
. Then, since
h1;1
t:t+=h2;2
t:t+, we have E1;1P
k=0
krt+k
=
E2;2P
k=0
krt+k
and thus
V1
1;t V2
2;t
=E
1;1"1X
k=+1
krt+k#
E
2;2"1X
k=+1
krt+k#
+1(rmax rmin)
1
=
H()
1
;
by the definition of H().
Lemma 21 (Exploitation) .
lim
t!11
ttX
k=1h
VL;|
;k VL;L
;ki
= 0:Proof. First, note that the extension policy Lis not in-
terruptible, so its value at time kdoes not depend on
k0;8k0k. By definition of |, there is a time step t|af-
ter which|=jt;8t > t |. For some “exploration-free”
horizont(to be specified later), let Xt2f0;1gbe the
eventVL;|
;t VL;L
;t>
t
1
, whereXt= 1 means
the event is realized. By the contrapositive of the Approxi-
mation Lemma 20, since L=|during non-exploration
steps (remember that Lcannot change its policy during
exploitation), if no exploration steps occur between steps t
andt+t, we must have Xt= 0. Then:
E
"tX
k=1Xt#
(t+ logt)tX
k=1t+O(t|)
(t+ logt)p
t+ 1 +O(t|);
since for each t= 1, for all the previous tsteps there
is an exploration step within tsteps, and all the next logt
steps are exploration steps. Then by Markov’s inequality,
and takingt= (t+ 1)1=8, withtlarge enough so that
t>t |andt>logt:
P tX
k=1Xt(t+ 1)3=4!
(t+ logt)pt+ 1 +O(t|)
(t+ 1)3=4
2t(t+ 1) 1=4+O(t 3=4)
2(t+ 1) 1=8+O(t 3=4);
1 2(t+ 1) 1=8 O(t 3=4)
P tX
k=1Xt<(t+ 1)3=4!
P tX
k=1(1 Xt)t (t+ 1)3=4!
P
1
ttX
k=1(1 Xt)1 1
t(t+ 1)3=4!
:
Therefore, since limt!1
t
1
= 0:
P
lim
t!11
ttX
k=1VL;|
;k VL;L
;k= 0!
= 1:
The following is an adaptation5of Lemma 16 from Latti-
more and Hutter [2011]:
Lemma 22 (Separation Lemma) .Letbe the true environ-
ment, andbe an environment consistent with the history
h<t. IfV
;t V
;t> , then following one of f;g
will make environment inconsistent with the future his-
tory withinH(=2)steps after time t.
5This also fixes a minor mistake in the original lemma.
Proof. First, ifV
;t V
;t> =2, then by the contrapos-
itive of the Approximation Lemma 20 following policy
will generate a different history in than inand thus
it will make inconsistent within H(=2)steps (since the
true history is generated by ).
Now, ifV
;t V
;t=2, thusV
;tV
;t =2, then
starting from the lemma’s assumption:
V
;t>V
;t+V
;t+=2V
;t+=2;
where the last inequality follows from the definition of
the optimal policy, i.e.,Va
a;tVb
a;t;8a;b. Hence, since
V
;t V
;t>=2, again by the contrapositive of the Ap-
proximation Lemma, following policy will discard
withinH(=2)steps.
Lemma 23 (Lemma 17 from Lattimore and Hutter [2011]) .
LetA=fa1;a2;;atgwitha2[0;1]for alla2A. If
1
tP
a2Aathen1
t
a2A:a
2 >
2:
Lemma 24 (Exploration) . The policy
|is an WAO-extension of L, i.e.,
limt!11
tPt
k=1h
VL;
;k VL;|
;ki
= 0:
Proof. Recall thatjtconverges to |in finite time. Rea-
soning by contradiction, if |is not a WAO-extension of
L=INT(L), then there exists an >0s.t.
lim sup
t!11
ttX
k=1h
VL;
;k VL;|
;ki
= 2:
Letk2f0;1gbe an indicator sequence such that k= 1
if and only if VL;
;k VL;|
;k> . By Lemma 23,
1
tPt
k=1k>.
For allt>t |, ift= 1, by the Separation Lemma 22, there
is a sequence of length :=H(=2)that can rule out envi-
ronment|. Since the exploration phases increase as logt,
aftert>exp, there are infinitely many exploration steps
of size larger than . Now, we actually need infinitely many
exploration phases of uninterrupted steps. LetXtbe the
event representing an uninterrupted exploration sequence
of length at least steps starting at time tsuch thatt= 1,
and the actions are all (by chance) following a separation
policy. The probability to start an exploration sequence is
k=1p
k, the probability to not be interrupted during
steps is at least (1 k), and the probability to follow the
policy that can separate |fromisjAj , whereAis the
set of possible actions. Thus, for a given constant :
tX
k=1P(Xk)tX
k=1kk(1 k)jAj O()
tX
k=1k1p
k1
logk
jAj O()Considering constant, there exists a step tafter which
1
logk
1
k1=4, then8kt:
tX
k=1P(Xk)tX
k=1k1
k3=4jAj O()
t1=4
1
ttX
k=1k!
jAj O();
lim
t!1tX
k=1P(Xk) = lim
t!1t1=4jAj O() =1:
Then the extended Borel-Cantelli Lemma (see Lemma 3
of Singh et al. [2000]) implies that this event happens in-
finitely often with probability one. Therefore, |should be
ruled out, which is a contradiction, and hence any such
does not exist and |is a WAO-extension of L.
5 CONCLUSION
We have proposed a framework to allow a human opera-
tor to repeatedly safely interrupt a reinforcement learning
agent while making sure the agent will notlearn to prevent
or induce these interruptions.
Safe interruptibility can be useful to take control of a robot
that is misbehaving and may lead to irreversible conse-
quences, or to take it out of a delicate situation, or even
to temporarily use it to achieve a task it did not learn to
perform or would not normally receive rewards for this.
We have shown that some algorithms like Q-learning are
already safely interruptible, and some others like Sarsa are
not, off-the-shelf, but can easily be modified to have this
property. We have also shown that even an ideal agents
that tends to the optimal behaviour in any (deterministic)
computable environment can be made safely interruptible.
However, it is unclear if all algorithms can be easily made
safely interruptible, e.g., policy-search ones [Williams,
1992, Glasmachers and Schmidhuber, 2011].
Another question is whether it is possible to make the in-
terruption probability grow faster to 1 and still keep some
convergence guarantees.
One important future prospect is to consider scheduled
interruptions , where the agent is either interrupted every
night at 2am for one hour, or is given notice in advance that
an interruption will happen at a precise time for a specified
period of time. For these types of interruptions, not only
do we want the agent to not resist being interrupted, but
this time we also want the agent to take measures regarding
its current tasks so that the scheduled interruption has mini-
mal negative effect on them. This may require a completely
different solution.
Acknowledgements. Thanks to Alexander Tamas and to
many people at FHI, MIRI and Google DeepMind.
References
Stuart Armstrong. Utility indifference. In First Interna-
tional Workshop on AI and Ethics , 2015.
Nick Bostrom. Superintelligence: Paths, Dangers, Strate-
gies. Oxford University Press, 2014.
Tobias Glasmachers and J ¨urgen Schmidhuber. Optimal di-
rect policy search. In Artificial General Intelligence - 4th
International Conference (AGI) , volume 6830 of Lec-
ture Notes in Computer Science , pages 52–61. Springer,
2011.
Mark Humphrys. Action selection in a hypothetical house
robot: Using those rl numbers. In Proceedings of the
First International ICSC Symposia on Intelligent Indus-
trial Automation (IIA-96) and Soft Computing (SOCO-
96), pages 216–22, 1996.
Marcus Hutter. Universal Artificial Intelligence: Se-
quential Decisions Based On Algorithmic Probability .
SpringerVerlag, 2005. ISBN 3540221395.
Tommi Jaakkola, Michael I. Jordan, and Satinder P. Singh.
On the convergence of stochastic iterative dynamic pro-
gramming algorithms. Neural computation , 6:1185–
1201, 1994.
Tor Lattimore and Marcus Hutter. Asymptotically optimal
agents. In Proc. 22nd International Conf. on Algorithmic
Learning Theory (ALT’11) , volume 6925 of LNAI , pages
368–382. Springer, 2011.
Tor Lattimore and Marcus Hutter. Bayesian reinforcement
learning with exploration. In Proc. 25th International
Conf. on Algorithmic Learning Theory (ALT’14) , vol-
ume 8776 of LNAI , pages 170–184. Springer, 2014.
Jan Leike and Marcus Hutter. Bad universal priors and no-
tions of optimality. Journal of Machine Learning Re-
search, W&CP: COLT , 40:1244–1259, 2015.
Thomas VII Murphy. The first level of super mario bros.
is easy with lexicographic orderings and time travel.
The Association for Computational Heresy (SIGBOVIK)
2013 , pages 112–133, 2013.
Stephen M. Omohundro. The basic ai drives. In Proceed-
ings of the 2008 Conference on Artificial General Intel-
ligence 2008 , pages 483–492. IOS Press, 2008.
Laurent Orseau. Asymptotic non-learnability of univer-
sal agents with computable horizon functions. Theoreti-
cal Computer Science , 473:149–156, 2013. ISSN 0304-
3975.
Martin Pecka and Tomas Svoboda. Modelling and Simula-
tion for Autonomous Systems: First International Work-
shop (MESAS 2014) , chapter Safe Exploration Tech-
niques for Reinforcement Learning – An Overview,
pages 357–375. Springer International Publishing, 2014.Mark Ring and Laurent Orseau. Artificial General Intelli-
gence: 4th International Conference, AGI 2011, Moun-
tain View, CA, USA, August 3-6, 2011. Proceedings ,
chapter Delusion, Survival, and Intelligent Agents, pages
11–20. Springer Berlin Heidelberg, 2011.
Satinder P. Singh and Richard Yee. An upper bound on the
loss from approximate optimal-value functions. Machine
Learning , 16:227–233, 1994.
Satinder P. Singh, Tommi Jaakkola, Michael L. Littman,
and Csaba Szepesvri. Convergence results for single-
step on-policy reinforcement-learning algorithms. Ma-
chine Learning , 38:287–308, 2000.
Nate Soares, Benya Fallenstein, Eliezer Yudkowsky, and
Stuart Armstrong. Corrigibility. In First International
Workshop on AI and Ethics , 2015.
Richard Sutton and Andrew G. Barto. Reinforcement
Learning: An Introduction . MIT Press, Cambridge, MA,
1998.
Richard Sutton, Doina Precup, and Satinder P. Singh. Be-
tween mdps and semi-mdps: A framework for temporal
abstraction in reinforcement learning. Artificial Intelli-
gence , 112:181–211, 1999.
Ronald J. Williams. Simple statistical gradient-following
algorithms for connectionist reinforcement learning.
Machine Learning , 8(3):229–256, 1992. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.